[SECTION: Purpose] Companies around the world are currently going through periods of great change: some are becoming increasingly global, and many are wondering how the changing economy will impact their business. These changes are determining what the enterprise of the future will look like and are calling for organizations of all sizes to demonstrate agility and create an infrastructure that responds to an ever-changing business environment. In 2008, IBM launched a Global CEO survey which questioned over 1,000 CEOs and found that 83 percent of respondents were expecting substantial change over the next three years, up from 65 percent in 2006 (IBM Corporation, 2008a). Although substantially more CEOs were expecting change, the most significant finding was that only 61 percent of the same group said their organization had coped well with change in the past, a mere 4 percentage-point increase from 2006. This disparity between expected change and the perceived ability to manage it is called the "Change gap". At the global level, the change gap was 22 percent; in the UK it was even higher, at 30 percent, with UK CEOs holding similar expectations of change but citing less evidence of past change success. The big question is why business leaders feel they are struggling to keep up with change. Businesses are now more global, and therefore more complicated, than they have ever been. There are more areas of the organization to improve and more ways to improve them. Companies are facing new challenges to their success and must develop smarter ways to overcome them. To investigate how companies can keep up with the changing business environment, IBM Corporation (2008b) launched the Making Change Work study, which surveyed over 1,500 change practitioners, project managers and project leaders across the world, asking them how they actually "make change work". Respondents were drawn from a mix of organization sizes and industries.
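The change gap described above is a simple difference between two survey percentages; a minimal sketch of the arithmetic, using only the global figures quoted in the text (the function and variable names are our own, for illustration):

```python
def change_gap(expecting_change_pct, coped_well_pct):
    """Gap, in percentage points, between CEOs expecting substantial change
    and those who felt their organization coped well with past change."""
    return expecting_change_pct - coped_well_pct

# Global figures from the 2008 IBM Global CEO survey quoted above.
global_gap = change_gap(83, 61)
print(f"Global change gap: {global_gap} percentage points")  # 22
```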
In the UK there were over 200 survey participants, 40 percent of them from the public sector, which has been characterized by an ambitious transformation agenda across many agencies in recent years. The study found that successfully executing projects is a major issue for companies: on average, only 41 percent of projects were considered successful in meeting project objectives within planned time, budget and quality constraints, while the remaining 59 percent missed at least one objective or failed entirely. The problem is even more acute in the UK, where only 31 percent of projects were successful. Interestingly, the detailed analysis found that achieving project success does not hinge primarily on technology; instead, success depends largely on people. This set the base of the study findings, which focused on identifying the key, targeted interventions required for successful change. The study identified four important focus areas that were highly correlated with project success and help to close the change gap:
1. real insights, real actions;
2. solid methods, solid benefits;
3. better skills, better change; and
4. right investment, right impact.
These four change-related focus areas are represented graphically as four facets of a Change Diamond. When collective action is taken on all facets of the diamond, the change gap closes (see Figure 1). Globally, the top 20 percent of respondents - the "Change Masters" - achieve success in 80 percent of their projects by focusing on all four facets of the diamond, double the success rate of the average organization (see Figure 2). In the UK, although Change Masters do not spend more than the average on change, by focusing on three of the four facets of the diamond - insights, methods and skills - they achieve project success rates of more than twice the UK average. In contrast, the Change Novices consistently under-utilize all four facets, and their low success rates speak for themselves.
So the UK Change Masters do not necessarily spend more, but they do carefully apportion their spend across the other three facets and utilize these to great effect (see Figure 3). Change Masters cite greater levels of project success than the average when they focus on the individual facets, but when they combine the facets, UK Change Masters attain a 65 percent project success rate, an additional 30 percent increase attributable to synergy. This suggests that in the UK, the impact of focusing on all facets in concert is even more keenly felt than at the global level (19 percent). In summary:
* The change gap is acute in the UK and can no longer be addressed by the unstructured and ad hoc approaches to change management that have typified the past;
* In the UK, change practitioners are more successful when focusing on and combining the four facets of the Change Diamond;
* In the UK, Change Masters target their spend carefully, and they achieve project success rates of more than twice the average. For them, the impact of added insight, structured method and improved change skills is keenly felt, just as it is globally;
* The study provides fresh data points from a new source: over 200 project managers and change managers in the UK - those who confront project realities every day;
* Much can be learnt from the small number of Change Masters who have achieved significantly greater success by "working the diamond".
In the UK, although we invest more in change management than at the global level, it is not this that makes the difference. Instead, it is experienced change managers applying the facets of the diamond together in a comprehensive set of planned interventions that has the most significant impact. To discover how to prepare your business for change, benchmark your organization against our maturity model (see Figure 4) to find out whether you are a Change Novice or a Change Master, then take the necessary steps towards becoming an effective Change Master.
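The "more than twice the UK average" claim reduces to two figures quoted above: a 65 percent success rate for UK Change Masters combining the facets, against a 31 percent UK average. A quick sanity check of that ratio (variable names are our own):

```python
uk_average_success = 31  # percent of UK projects meeting all objectives
uk_masters_success = 65  # percent for UK Change Masters combining the facets

ratio = uk_masters_success / uk_average_success
gap_in_points = uk_masters_success - uk_average_success
print(f"UK Change Masters succeed {ratio:.2f}x as often "
      f"({gap_in_points} percentage points more) than the UK average")
```

The ratio comes out at roughly 2.1, which is consistent with the "more than twice" wording in the study.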
Utilize all facets of the Change Diamond in combination to make change work.
Figure 1
Figure 2 The focus of the change masters pays dividends
Figure 3 The collective effect of the four facets
Figure 4
- To discover the important, practical factors that Make Change Work.
[SECTION: Method]
- Global survey conducted through online and face-to-face interviews. Statistical analysis using correlation and cross-tabulation.
[SECTION: Findings]
- The study identified four areas correlated with project success: real insights, solid methods, better skills, right investment. We call this the Change Diamond. Combined effectively, they make change work better.
[SECTION: Value]
- Change practitioners can more effectively deploy their resources to deliver maximum change impact through targeted interventions. Change practitioners can focus on the Change Diamond facets to enable better change.
[SECTION: Purpose] Part one of this paper presented Waight and Stewart's conceptual model (Figure 1) on valuing the adult learner in e-learning within corporate settings. The purpose of this paper is to present how four e-learning teams in four companies are valuing the adult learner in e-learning. First, the methodology that guided the case studies is presented. Second, the four case studies are shared. Third, a discussion of the four case studies and their relationship with the conceptual model is provided. Lastly, conclusions are drawn based on the case studies and discussion. The conceptual model reflects companies where support for e-learning via supportive leadership, learning cultures, technology infrastructure, and finance is apparent. In addition, the model indicates analyses such as needs assessment, learner, work, work setting, and content analyses as viable processes that, if implemented well, can assist in providing the adult learner targeted and meaningful learning opportunities. The model shows that antecedents start the value process, but knowledge and skills concerning return on investment, learning theories, technology, and creativity add to the value for the adult learner. The model further indicates that when championing factors, antecedents, and moderators are adhered to, outcomes such as engagement, learning, and transfer are possible.
Research design
The method used in this research was a case study. Yin (2003) stated that a case study is an empirical inquiry that investigates a contemporary phenomenon within its real-life context. The purpose of the study was to investigate how the adult learner is valued within the e-learning environment. The following questions guided data collection:
1. What is the e-learning context in your organization?
2. How is the adult learner valued in the e-learning environment?
3.
What considerations must be addressed when valuing the adult learner in e-learning environments within corporate settings?
Sample
The sample for this study was nine e-learning designers whose Fortune 500 companies had had active e-learning initiatives for a minimum of four years. These companies represented the retail, insurance, oil and technology industries. Companies were selected through an informational telephone interview with e-learning designers. During the interview, the researchers explained the purpose of the study and asked whether the companies had an e-learning initiative. All e-learning designers within the four companies that were initially contacted had active e-learning initiatives and expressed their willingness to take part in the study. Overall, there were three e-learning team leaders and six instructional designers; the number of interviewees per company ranged between one and three.
Data collection
Semi-structured telephone interviews were conducted with e-learning representatives of each company. All participants received the study proposal, which included the interview questions, via e-mail. Before each telephone interview was conducted, the researchers asked for permission to record it. All participants agreed to be recorded, and all the interviews were transcribed.
Instrumentation
The interview guide included three major sections that paralleled the study's research questions: context, the adult learner in the e-learning environment, and e-learning considerations in the corporate world. The researchers tested the interview guide with two of the e-learning participants; these participants were included in the study. The researchers conducted a preliminary content analysis to ensure that the intent of the research questions was being targeted.
The pilot test revealed that all the questions were relevant.
Data analysis
Participants' descriptive responses to the interview questions were read and categorized by research question. Each research question contained a main premise, which the researcher used to reduce large amounts of data into a smaller number of analytic units, or themes. After analyzing each participant's response to each research question, the researcher color-coded the themes. Upon identifying the recurrent themes, the researcher created descriptive tables for each theme; in this way the researcher identified data that supported each theme. Upon completing the analysis, a case was written for each company. Cases were sent to the respective interviewees for their review and feedback; all cases received valuable feedback and were revised. The following are four case studies on how the adult learner is valued in e-learning within the corporate setting. These cases represent the efforts of e-learning teams within specific divisions of their companies. Of the four companies, only two agreed to have their names mentioned in the case study.
Case study one: insurance company
Context
A large personal lines insurer has been a successful implementer of e-learning. The development of e-learning began in 1995 with computer-based technology courses. These early courses were designed for agent training and led to the subsequent development of web-based courses. The learners tracked by the learning management system (LMS) include employees, contract agents, and their staffs. In 2003, more than 60,000 learners completed at least one course online. Typically, learners are positive about their e-learning experiences: the results of an annual enterprise-wide opinion survey indicated that 78 percent were satisfied or completely satisfied with their e-learning courses. Originally, the drive to introduce e-learning was precipitated by the geographic diversity of learners.
In addition to cost, the need and value for learning quickly became primary drivers of e-learning growth. Predominantly, e-learning is an asynchronous experience: less than 10 percent of web-based training is blended with instructor-led training. Presently, virtual classrooms are under consideration as a possible addition to the e-learning experience. The company currently tracks over 4,000 learning activities, including courses, assessments, workshops, and class registrations. There are approximately 700 enterprise-wide online courses. These include technology courses such as Microsoft Office; insurance product knowledge courses; strategy courses covering topics such as company history, how the company makes money, and corporate ethics; process courses such as billing and computer applications; and soft skills courses. The team charged with the creation of e-learning sits within the human resources function. This team delivers enterprise-based e-learning courses, while business units deliver specific business-related e-learning courses.
How are adult learners valued?
The adult learner is an important consideration for the e-learning designers. That consideration, however, sits within a realization of constraints and opportunities. First, the e-learning designers are aware that bandwidth constrains the type and application of animations and audio clips; consequently, their use of video clips in e-learning courses is minimal. Second, the e-learning designers are aware of the limits of their expertise and realize that they need additional talent for two- and three-dimensional simulations. Despite these constraints, the e-learning designers have also been able to take advantage of design and production opportunities. The use of standardized templates has improved how e-learning designers create courses, reducing the design and production time for a typical course from 40 to 15 days.
The e-learning designers recognized, however, that interviewing subject matter experts and writing scripts are the most time-consuming processes, and technology has offered only limited help with these parts of the course design and development process. Considering their constraints and opportunities, the e-learning designers identified a seven-dimension strategy to value the adult learner. The following describes the seven dimensions. Because the e-learning group mainly serves enterprise-wide e-learning needs, content selection is driven by a steering committee that keeps its finger on the pulse of learning needs that are strategic for individual, group, and organizational performance. The committee has representation from the finance, marketing, and HR functions, and it recently provided content advice on the economics of the business and on compliance policies. This steering committee assists the e-learning team in valuing the adult learner by advising on content that is relevant and meaningful to the operations of the business. With reference to content sequence, e-learning courses may follow an information-assessment sequence, an assessment-information sequence, or an individualized sequence. All three options give adult learners an opportunity to process information differently and to engage a preferred learning style. The information-assessment sequence requires learners to process information before completing the assessment, whereas the assessment-information sequence gives learners the opportunity to identify their knowledge gap and then target the specific knowledge they need. Instructional designers stated that the latter sequence works well with product-related courses. Adult learners also get the opportunity to structure their courses. Learners are able to decide how they want to see things - for example, learners may choose to be asked questions throughout the course or only at the beginning and/or the end.
Learners may also choose the sequence of their learning activities. To avoid information overload, most modules are 15 minutes or less. Adult learners also get an opportunity to make choices on content presentation. Typically, courses include text and audio portions and have graphics that enhance the content delivery. Transcripts of the audio portion are also made available for anyone with hearing disabilities and for those who choose not to listen to the audio. Video is rarely used because of bandwidth constraints, though a short video clip of the chairman introducing a new set of courses is not uncommon. Interaction focuses mainly on learner-content interaction. Adult learners get the chance to interact with content after every three or four screens, a design standard that applies to all courses. Interaction after three or four screens may include answering a question or completing a matching activity - for example, matching definitions with terms. Learner-content interaction also occurs through problem-solving scenarios: learners get the opportunity to diagnose a scenario and choose among several solutions. Lastly, learner-content interaction occurs through games. Value for the adult learner is also supported by design standards. In addition to enforcing interaction after three or four screens, the e-learning designers developed five sets of templates for their courses to standardize features and allow adult learners easy navigation. The choice of template depends on the content under design. Policy compliance courses, for example, use the templates with the office theme: the template background is typically a hallway, desks, or the cafeteria, and, graphically, people appear and ask questions in the context of an office. All templates also have navigation, a glossary, and an area where links to other web sites are uploaded.
All these components appear at the same locations on the screens in all five sets of templates. Assessment is another component through which the adult learner is valued. Pre- and post-assessment occurs in about 20 percent of the courses, with about 80 percent having only post-assessment. Assessment techniques may include true and false questions, matching and multiple choice, fill in the blanks, and problem-solving scenarios. Assessment plays a crucial role in learners' performance on the job, especially since the e-learning courses are tied to strategic organizational performance. If learners do not pass the test the first time, they retake the test, and parts of the course if necessary, to meet the passing criteria. Last year, for example, all learners selling insurance had to take the "do not call list" course. Every learner was tracked, and anyone who did not take or did not pass the test could not make unsolicited calls. Instant feedback, a component of the assessments, helps adult learners focus immediately on their knowledge gap and respond appropriately, such as by reviewing the module before retaking the assessment. Transfer of e-learning to the job has been tracked for only a few courses. The e-learning team, however, is making transfer of learning on the job one of their core competencies, and they are presently exploring various transfer assessment tools. Presently, assessing transfer occurs through a 30- and 90-day follow-up with the learner and the manager, which seeks to find out what learners have learned and applied on the job. Last year, transfer was tracked for 12 courses. Transfer is presently influenced by managers' involvement with the e-learning process. Transfer for the billing course was tracked by recording how learners answered billing questions and whether they had to transfer questions to a more experienced person.
Results showed that the transfer of questions to an expert had decreased, and managers were quite happy. In conclusion, the e-learning group is focused on providing enterprise-wide e-learning courses. Though bandwidth constraints and a lack of skills to develop high-end simulations are present, the team keeps the adult learner at the forefront of their design by ensuring that content is relevant and meaningful. Meaningfulness is achieved by working with the steering committee and subject matter experts to identify content and capture the best scenarios relevant to the organization. Meaningfulness is also built into the e-learning design through templates with themes relating to company infrastructure and culture. Content sequence and presentation give the learners locus of control, whereas learner-content interaction, standards, and assessments encourage learners to interact and process information in various ways. Enforcing interaction after three or four screens and limiting modules to 15 minutes or less help prevent information overload. Overall, this e-learning team has been proactive in valuing the adult learner in their e-learning design.

Case study two: energy services group, Halliburton

Context

Halliburton is one of the world's largest providers of products and services to the oil and gas industries, employing more than 100,000 people in over 120 countries. Halliburton's Energy Services Group consists of four business segments: drilling and formation evaluation, fluids, production optimization, and landmark and other energy services. The second group is the engineering and construction group, known as Kellogg Brown & Root. This mini-case focuses only on the Energy Services Group. For Halliburton's Energy Services Group, e-learning is a company-wide initiative introduced in 2000. The driving forces behind its initiation included a desire to reduce costs and increase access.
The travel cost for both learners and instructors inherent in instructor-led training was one cost reduction factor. Additionally, e-learning was viewed as a means to successfully reach learners worldwide. Most of the e-learning opportunities are offered asynchronously. Learning modules are available through the LMS and can be accessed by learners at any time. Additionally, some instructor-led courses have a blended format where learners may access materials to prepare for the course and/or may use e-learning assessment tools. In some cases synchronous approaches are used for collaborative and communication purposes: both learners and instructors can talk to each other and share documents and applications. In many cases e-learning can be accessed at any time. This, however, can be affected by technology-related constraints. Bandwidth issues are a constraint, especially in specific geographic areas such as West Africa. Additional access issues may be as simple as the presence of a computer and appropriate network connections. In some cases, employees in remote areas utilize a satellite office equipped with workstations. Access from home is possible for some employees, but it is complicated by home equipment, firewall, and security issues; currently, selected employees use virtual private networks (VPNs) to connect from their homes. The e-learning team hopes to increase access locations by distributing laptop computers and providing kiosks at specific locations. Halliburton's LMS includes courses developed internally as well as those purchased from outside vendors. About 1,000 courses developed by external entities and more than 140 (mostly technical) internally developed courses are available. Typically, the duration of a course is about one hour. Internally developed courses include technical content for subject matter such as drilling, mud, hardware, and exploration technology.
Additional non-technical content includes areas such as legal issues, business conduct, finance, procurement, human resources, and leadership. Purchases of externally created courses, in the areas of leadership and management, for example, are driven by business units' needs. A recent contract secured the availability of a catalogue of "soft skill" training offered in a number of formats, including simulations, online books, quick reference materials, and job aids. While Halliburton purchases online training from vendors, it does not sell internally developed e-learning courses. A team of five instructional designers, four multimedia specialists, and one technical publishing specialist manages the e-learning initiative. Together, they serve more than 35,000 employees of the Energy Services Group. For 2002, approximately 279,500 courses were completed, reflecting an average of about eight courses per employee. The e-learning initiative is part of the human resource development function and is termed Halliburton University.

How are adult learners valued?

The e-learning team at Halliburton has made great strides in designing e-learning courses that value the adult learner. These strides, however, reside within constraints and opportunities. The first challenge is response time to clients, given a small staff of about ten designers for approximately 35,000 employees. Designers said that their roles entail consulting with product service lines on their learning needs, meeting clients' requests, and reviewing possible third-party courses for compatibility with the LMS standards. While having a small staff affects response time, designers shared that the LMS has helped them deliver course content and track learners' access and performance. Designers also shared that business units' value for, and support of, e-learning has helped them value the adult learner.
In addition, Halliburton's initiative of identifying competencies for each job role has assisted the designers in identifying relevant content for targeted audiences. The e-learning team shared that they apply adult learning principles to their e-learning courses by adhering to the ADDIE model. This instructional design model comprises analysis, design, development, implementation, and evaluation. Specifically, the analysis phase can include performance problem analysis, needs assessment, and goal, work, learner, work setting, and content analyses. At present, the e-learning team is refining the instructional design process to foster consistency in the application of ISD among all team members. The e-learning team said that even though they are an in-house development team, internal client groups fund their design and development time, and this in itself underlines the need to give the client the best service for their money. Front-end analysis, and more inclusively the complete instructional design process, helps to value the adult learner by providing the most appropriate and relevant learning solution. The e-learning team employs front-end analysis procedures to understand the adult learners and their needs. Understanding the adult learners' needs helps the e-learning team identify parameters for design, development, and deployment. The front-end analysis may cover the technology infrastructure, the type of content, the level and type of interaction necessary, learner analysis, and cost and delivery options. Learner analysis, in particular, does not occur for every course or module because the designers are often tasked with delivering e-learning courses to the same target audiences. Technology analysis has proved to be very important, especially when dealing with adult learners worldwide. Bandwidth, for example, varies and has major implications for accessing e-learning courses.
The e-learning team said that they sometimes visit sites to get a good picture of technology limitations. The e-learning team also noted that work setting analysis has shown that the multicultural context of their organization communicates the need to have e-learning courses translated into various languages and to be sensitive to cultural differences. The team is presently translating some courses into Spanish and hopes to add other languages in the near future; they use cultural informants from the respective countries for advice on cultural issues. As a result of front-end analysis, the e-learning team may choose one of three levels of course design. The first level, the most basic, may include PowerPoint with audio and an assessment activity. In essence, level one has a linear structure where the user goes through a page-turn type of lesson. Level one is usually chosen when the client has an urgent need to get content to employees as quickly as possible. Level two, on the other hand, gives the learner control of the content sequence and presentation. Adult learners are given the chance to navigate through the module, select how they want the content presented, and choose the sequence of their activities. Text is presented in static and dynamic forms, giving adult learners opportunities to access certain links or use graphics, tables, or charts to reinforce concepts. Level two generally contains more in-depth assessment activities, which are tied to performance objectives; in contrast, level one does not necessarily state the performance objectives. Level three represents the most complex type of course design. In addition to having clearly stated performance objectives tied to assessments, this level might include video and audio clips and more complex animations that require the learner to interact and make decisions at certain points in the animations.
Level three may also include low-level simulations - for example, where the learner is given procedures to simulate the correct use of a piece of machinery. The simulation has to meet the specifications required in the real world. The e-learning team stated that on average they design level two and level three courses, and sometimes use a combination of the two. Designers shared that it could take somewhere between 325 and 425 hours to design a level two or level three course. They recently collaborated with the multimedia development team to identify timeframes for simple to complex animations and simulations, and for creating static graphics through to complex tool drawings. In the next six to twelve months, designers will be evaluating the projected timeframes for designing and developing level two and level three courses. Course selection is a process driven by product service lines. Product lines refer to the different aspects of the oil field service industry - for example, the logging and perforating service. Once the content area is identified, designers and subject matter experts examine content to ensure that it is relevant to the performance gap, the job, and the target audience. Thus, designers and subject matter experts conduct work analysis to ensure that they have real-life examples and scenarios. Designers shared that learners' characteristics, such as educational backgrounds, can also influence the selection of content. The e-learning team added that a technically based module whose target audience includes both high school and college-degreed individuals, for example, has to be designed well to engage all types of learners in the learning process. The e-learning team ensures that content presentation is varied to give the adult learners preferences on how to access content. The e-learning designers use audio, video, print, animations, simulations and, in some situations, case studies to present content.
Animations, in particular, serve to give adult learners visual representations of concepts. Modules that have graphics-intensive simulations are sometimes presented in a hybrid mode because of bandwidth limitations: learners receive their content on a CD and take their assessments through the LMS. Learners have the option of listening to or turning off the audio and of downloading a job aid, which can be a summary of the module. The use of dynamic text allows learners access to internal web pages relevant to the respective module. The designers use Flash, Dreamweaver, RoboDemo, and Microsoft Office to assist with content presentation. Eighty percent of the courses are asynchronous and are sometimes the forerunners of classroom learning. Thus, learners are primarily engaged in learner-content interactions. Learner-content interaction may occur by learners choosing their modules, choosing their content presentation options, and answering questions. However, learners may get the opportunity to interact with each other and with instructors via a virtual collaborative tool called Interwise. The tool engages learners and instructor in real time using audio, application sharing, a whiteboard, and a chat room. The instructor can give control to learners; learners in turn can raise their hands and interact with their peers. Assessments give adult learners feedback on their knowledge acquisition. Assessments may include multiple choice, fill in the blank, and matching. The e-learning team shared that while they would like to design dynamic assessments, the LMS does not presently support those capabilities. Assessments are designed to give instant feedback to the learner, especially since learners are expected to score 80 percent on most tests. If the 80 percent is not met the first time, learners have the option of retaking the test or returning to the module.
The LMS allows learners to take the test twice in one sitting; if the 80 percent is missed on the second try, the learners are automatically logged off and need to log in again if they want to retake the test a third time. The e-learning team shared that the order of test questions and answers may change every time the test is taken. The standards that guide course design comply with the LMS. Standards cover, for example, HTML, Flash, navigation structures, titles, fonts, sizes, colors, animations, and pop-ups. Keeping the adult learner in mind, the e-learning team also includes performance objectives, overviews, and formative and summative learning checks or assessments. Performance objectives are categorized using Bloom's taxonomy, reflecting simple to more complex learning outcomes. The e-learning team has also created storyboard screens to help standardize the course design process. Designers mentioned that storyboards allow everyone involved, including subject matter and multimedia experts, to review and revise instantly. This dynamic storyboard screen allows everyone involved to collaborate and to make changes to the course before it goes into development. Adult learners mostly take their e-learning courses at their desktops. Kiosks and labs are available in some locations. Field employees may access their courses from a truck using a laptop connected via satellite. Designers recognized that bandwidth issues influence where adult learners can access their courses. Transfer of e-learning resides with the learner and supervisor. The e-learning team presently sees their role as identifying and using instructionally sound methods and ensuring that the material is technically accurate.
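The retake rules this case describes (an 80 percent passing mark, two attempts per sitting before an automatic logoff, and question order reshuffled on every attempt) can be sketched as a small simulation. This is a minimal illustration of the described behavior, not Halliburton's LMS code; the function names and the `answer_fn` hook are hypothetical.

```python
import random

PASS_MARK = 0.80          # learners are expected to score 80 percent on most tests
ATTEMPTS_PER_SITTING = 2  # two tries per sitting before the LMS logs the learner off

def run_sitting(questions, answer_fn):
    """Simulate one test sitting under the described rules.

    The question order is reshuffled on every attempt, and `answer_fn(q)`
    (a hypothetical grading hook) returns True when the answer is correct.
    """
    for attempt in range(1, ATTEMPTS_PER_SITTING + 1):
        order = random.sample(list(questions), len(questions))  # reshuffle each attempt
        score = sum(answer_fn(q) for q in order) / len(order)
        if score >= PASS_MARK:
            return {"passed": True, "attempt": attempt, "score": score}
    # Second miss: the learner is logged off and must log in again for a third try.
    return {"passed": False, "attempt": ATTEMPTS_PER_SITTING, "score": score}
```

A sitting over ten questions where the learner answers nine correctly (0.9) passes on the first attempt, while a 40 percent score exhausts both attempts and returns `passed: False`.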
The e-learning team stated, however, that they plan to increase their efforts in addressing transfer of learning on the job and return on investment. In conclusion, the e-learning team at Halliburton is continuously looking for ways to serve the adult learner in the e-learning environment. The value for the adult learner resides in their creative application of the instructional design process and of the technology infrastructure that exists worldwide. The e-learning designers are focused on deploying sound learning opportunities that meet clients' and adult learners' needs worldwide.

Case study three: retail chain store

Context

This case describes an e-learning team effort that serves the stores and asset protection sector of a retail store chain, which operates approximately 260 stores in 14 states. The chain is one component of a larger retail empire, which operates more than three distinct retail formats. The retail store chain employs approximately 29,000 employees. The e-learning initiative began in 2000 and is located within the university of the retail store chain. The e-learning initiative is mainly directed at executive-level employees within the stores and asset protection sector. The primary reasons for the introduction of e-learning were to reach a geographically dispersed workforce within the United States and to create consistency, quality, and relevance in training design, development, delivery, and results. Asynchronous, synchronous, and blended formats are used to deliver training. Asynchronous learning focuses on computer-based modules, while synchronous learning utilizes virtual classrooms. Blended learning may include a mix of computer-based modules and virtual classroom modules, or face-to-face training with either computer-based or virtual classroom modules. A blended solution may have a team of employees taking a computer-based module at the same time and, upon completion, getting together to share what they have learned.
Approximately 30 modules have been developed, with about two or three new offerings created each quarter. While the retail store chain continues to expand its e-learning options, approximately 70 percent of the training continues to be offered face-to-face. The training and development team comprises four persons, two of whom focus solely on e-learning. With an e-learning staff of two, these individuals assume roles in project management, design, creation, execution, and evaluation. Typically, a computer-based module takes between 100 and 300 hours to design and develop; a virtual classroom module, on the other hand, requires about two months. While this e-learning initiative resides within the stores and asset protection sector of the retail store chain, it shares e-learning modules with other sectors and sometimes purchases modules from various vendors.

How are adult learners valued?

With e-learning representing approximately 30 percent of the training function and with the development team being relatively small (two persons), this case provides an overview of how the adult learner is valued within an emergent e-learning initiative. Front-end analysis is conducted to ensure that training is relevant, meaningful, and authentic. Performance problem analysis, for example, ensures that the performance gap is linked with training content. Once the training link is identified, e-learning designers analyze further to identify whether the training should be delivered through face-to-face training or e-learning. If e-learning is decided upon, further analysis occurs to decide whether the medium will be computer-based, virtual classroom, or a blended solution. Learner analysis also occurs to ensure that adult learners' characteristics are considered in the module design and development.
Given that the target audience is usually executives, e-learning designers may conduct mini learner analyses when introducing a new content category. Once the performance problem has been aligned with training, content selection is directed either through a partnership with senior managers and/or employees in the field or through a review of new processes or procedures. Subject matter experts focus on the target audience, the performance problem, and the respective environments to identify relevant content and the most appropriate learning activities for the selected medium of delivery. Content presentation occurs through text, graphics, video, and audio; most computer-based modules use a combination of these media. Blended solutions use more video than computer-based modules because of bandwidth issues. Designers use Flash, Fireworks, TrainerSoft, RoboDemo, and Authorware to present content. At present, adult learners cannot personalize the content presentation. The e-learning team is aware of this limitation and is exploring options to give learners more control over content presentation. With learning occurring asynchronously, synchronously, and in blended formats, adult learners can engage in learner-content, learner-learner, and learner-instructor interactions. Learner-content interaction occurs mainly with asynchronous e-learning modules. Learners usually interact via simulations, games, or quizzes - activities that can prompt learners to analyze, synthesize, and evaluate. Learner-content interaction also occurs while listening to the audio portion of a module or watching a video. E-learning designers ensure that adult learners have various types of interactive activities so that learners can engage in a preferred type of activity. Learner-learner and instructor-learner interactions occur via the virtual classroom. Virtual classroom modules are chosen primarily because they can meet the just-in-time needs of business units.
Virtual classrooms have live instructors who communicate with the learners using a radio talk show format. Learners are also able to interact with scenarios and questions posed by the instructor and, in turn, can use the chat space to communicate among themselves and with the instructors. With e-learning being an emergent initiative, the e-learning designers do not presently have an LMS. The designers, however, are in the process of developing an e-learning template with consistent module design features. The template will reflect the retail store chain's university campus and will have links to the various colleges - for example, the college of logistics, which will encompass all logistics-related modules. The template for each module, in turn, will include nine major sections: performance objectives, module overview, content presentation, assessment, journal tool, tip cards, glossary, help tool, and a library. The e-learning team shared that this template will give adult learners more opportunities to access their preferred learning activities and tools. In keeping with adult learning principles, in particular attention span, the e-learning designers are designing modules that take no more than 30 minutes to complete. Lastly, as the designers work towards developing their template, they are conducting usability testing for every module that is released, evaluating how employees navigate, access, and interact with the content. Designers said that listening to adult learners is important if they are to deliver a learner-centered e-learning experience. The journal, one of the module features, will be a tool that can assist the transfer of learning to the job. Designers expect that learners will use the journal to make personal notes about their learning - specifically, to write down what they would like to share with their peers and managers upon returning to the job.
Presently, the direct supervisor is responsible for ensuring that learning is transferred to the job. The just-in-time delivery of computer-based and virtual classroom modules assists learners in transferring their learning to the job because the modules are based on just-in-time needs. Assessments occur at the formative and summative levels. At the formative level, learners may be asked to answer questions as they proceed through the module. At the summative level, if assessments will not be tracked, learners may take a self-assessment in the form of a game such as Jeopardy or Who Wants to Be a Millionaire. If the summative assessment will be tracked, learners receive a reaction survey and an assessment that may include multiple choice, fill in the blanks, true or false, and matching questions. In the absence of a formal LMS, the e-learning designers contract database services to collect and access data from the reaction surveys and assessments. Learners usually receive instant feedback on assessments and can return to the assessment or module for clarification. Instant feedback is not given, however, when assessments are for certification; performance on certification assessments is communicated directly to the learner or to the respective managers or supervisors. Adult learners can access modules only at the stores. Every store has approximately three computers, one of which is dedicated to learning. Modules are delivered to the computers via an executable package. Employees, individually or in groups of two or three, can take a module at the same time. Introducing e-learning at the stores has been challenging because of the customer-driven environment.
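The feedback routing this case describes - instant feedback for ordinary tracked assessments, but certification results reported directly to the learner or the responsible manager - amounts to a simple branching rule. The sketch below is illustrative only, assuming hypothetical names; it is not the retail chain's contracted database service.

```python
def route_feedback(assessment_type, score, learner, manager=None):
    """Route an assessment result per the described policy (illustrative).

    Ordinary assessments return instant feedback to the learner, who may
    revisit the assessment or module; certification results are instead
    communicated to the learner or the respective manager/supervisor.
    """
    if assessment_type == "certification":
        recipient = manager if manager is not None else learner
        return {"instant": False, "notify": recipient, "score": score}
    # Non-certification: learner sees the result immediately.
    return {"instant": True, "notify": learner, "score": score}
```

For example, a tracked module quiz is returned instantly to the learner, while a certification score with a manager on record is routed to that manager instead.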
The e-learning team shared, however, that they are presently exploring how to make learning seamless at the store level; one possibility could be the introduction of kiosks. In conclusion, though the e-learning team at the retail store chain faces human resource, time, money, and bandwidth challenges, they are highly motivated and confident that their e-learning initiative will only improve in serving the adult learner. The designers keenly anticipate the completion of the template because it will serve to decrease their design and development time. In addition, the e-learning designers are hoping to cross-train the other two members of the training and development team on e-learning processes such as meeting with clients, analyzing the performance problem, setting objectives, and laying out initial content using the instructional design process. This type of expertise could allow the existing designers to focus on coding and development. Although emergent, the e-learning initiative at the retail store chain is progressing in its mission to value the adult learner.

Case study four: HP services workforce development, HP

Context

Hewlett Packard (HP) is a technology solutions provider to consumers, businesses, and institutions globally. The company's offerings span IT infrastructure, personal computing and access devices, global services, and imaging and printing for consumers, enterprises, and small and medium businesses. With approximately 140,000 employees worldwide, HP serves more than one billion customers in more than 160 countries. While organizationally HP accomplishes this by focusing on multiple business organizations, this case examines only the HP Services Group (HPS), which consists of approximately 60,000 employees. It is recognized that e-learning initiatives exist in multiple organizations and groups within HP; they are beyond the scope of this case.
Within HPS, a designated organization exists to develop, design, and deliver learning solutions to this business unit. The name of this organization is HPS Workforce Development (HPS WD). The e-learning initiative within HPS began in 1999. The HPS WD team has created and monitored approximately 400,000 learning incidents in a recent six-month period. Approximately 85 percent of training is delivered electronically. Reasons cited for the introduction of e-learning into the group include cost efficiency; accessibility to learners, especially remote learners and those on customer sites; and time efficiency, in that learners in the field can train where they are without having to travel to a classroom setting. Both synchronous and asynchronous formats are used. Self-paced modules, both those created "in-house" and those purchased from vendors, are available to learners for access at a time and place that meets their scheduling needs. Required company training, such as business conduct, is an example of content that is delivered by asynchronous, self-paced methods. Virtual classroom environments and virtual labs can be accessed remotely for synchronous and/or asynchronous experiences. Virtual labs have been used successfully for technical training. HPS e-learning courses are organized into portfolios. The main portfolio contains 1,800 self-paced and instructor-based courses. An additional 200 courses bring the total to 2,000. Portfolios include business conduct, technical, legal, and professional skills such as consulting and project management. A virtual team comprising curriculum developers, learners, delivery professionals, and HP businesses works together and interfaces with product divisions to prioritize course offerings and to develop content.
Specifically, the development team is a global organization of about 80 program managers plus developers and designers (about 140 people in total) whose task it is to decide format, create content, develop courses, and evaluate feedback for updating courses. Increasingly, the outsourcing of course creation is being explored and utilized. As of 1 May 2004, most HPS employees had taken multiple e-learning courses. In general, while HPS WD does share courses across HP, it does not sell its learning packages to outside entities, with the exception of customer services training solutions. HPS WD employs a business performance-consulting model. To help learning consultants within HPS apply the business performance model, the corporate workforce development team at HP introduced an electronic performance support system to assist teams in becoming aware of similarities among performance issues and the types of training that they provide. As a result of this awareness, teams are combining their strengths and discovering synergies in their training efforts. How are adult learners valued? The adult learner is an important factor in the design of e-training within HPS. E-training includes web-based courses, virtual classrooms, webinars, webcasts, and remote labs. In addition to e-training, the e-learning designers within HPS WD also incorporate e-learning, which includes all knowledge management/sharing types of learning activities. For the purpose of this case, e-training refers to all e-learning courses, while e-learning refers to all knowledge management/sharing types of activities. Both e-training and e-learning give adult learners synchronous and asynchronous learning opportunities. To meet adult learners' needs, the e-learning designers within HPS WD conduct a front-end analysis on several parameters to assist with design and development choices. Cost and time are primary factors, as they affect the type of learning solution that can be developed.
Size of audience also influences delivery options. Location of audience is critical, because a worldwide audience will have different implications for design and delivery than a specific audience within one country. Lastly, equipment constraints in the target audience's locations are considered in design and development decisions. Once all the constraints are taken into account, the e-learning team ensures that the adult learner receives the best learning solution. Content selection for e-training resides within a business performance consulting model. This model tightens the relevance and meaningfulness of content because training is provided based on short- and long-term performance needs. This performance approach to content selection gives adult learners learning opportunities that are tied to individual, group, and organizational performance. In addition to identifying performance-driven content, adult learners have the opportunity to select which components of a training solution are appropriate for them. For example, sales, support, and service employees may take different components within the same training solution. Having a technology-enabled environment gives the e-learning team the capability to present content using visuals, audio, and video. Content presentation usually represents a mix of highly interactive solutions that give adult learners the opportunity to choose their preferred medium for content presentation. In some cases, especially with customer training solutions, e-training courses may have a prompter that supports several languages. The language prompter gives adult learners an additional opportunity to choose their preferred language of instruction. Again, because of the strong technology infrastructure, learning activities within the web-based courses are varied. Simulations, games, knowledge checks, and quizzes are some activities that are frequently used in web-based courses.
The purpose of using varied learning activities is to give adult learners an opportunity to listen, interact, and play. Combining different learning strategies also helps the e-learning designers target a broader audience. Business games, for example, are presently used in very specific and strategic areas, because they are a significant investment to build. Business games can teach how to pursue and close business opportunities with major clients by simulating various scenarios for the learners. These games are played by teams of employees whose members have specific roles. It must be mentioned, however, that developing these highly interactive courses takes time, and that not all courses are developed with the same level of interactivity. In essence, e-learning designers must find the balance between what they can do in the ideal world and what they can do in the practical world. While web-based courses mainly focus on learner-content interaction, HP uses other mediums to enhance learner-learner and learner-instructor interactions. Virtual classrooms are used for small groups of people to maximize instructor-learner interaction. In the virtual classroom, learners can raise their hand if they want to ask a question or if they want to comment on or answer a question. In addition, the virtual classroom has a group chat feature, which can be categorized as private or public. Learners can engage in learner-learner or instructor-learner interaction via the chat space. The virtual classroom allows participants to share applications, use a whiteboard, or poll learners' reactions to different questions or issues. Virtual classrooms can be used to poll learners' satisfaction and measure their level of knowledge acquisition. Learners can draw, paint, and, on the more humorous side, even throw tomatoes if they did not like the instructor or give their instructor apples if they enjoyed the learning experience.
Webinars and webcasts, on the other hand, are used for large audiences, but very little interaction between the instructor and the audience occurs; learners listen only. In addition to virtual classrooms, webinars, and webcasts, HP is also using remote virtual labs that are sometimes instructor driven, sometimes self-directed, or are used for practice or application of learning. The blending of e-training and e-learning gives adult learners more opportunities for interaction and learning. The HPS WD group has a robust knowledge management strategy that fosters e-learning through different forms of peer-to-peer learning, one form being communities of practice. Commercially available collaborative tools such as netmeeting, e-rooms, and instant messenger help to support peer-to-peer learning. HPS WD has a formal program called "Professions", which is a structured manner of organizing communities. Through communities of practice, the e-learning designers organize best-practice training sessions, which are sometimes delivered using virtual classrooms. Another form of peer-to-peer learning is online mentoring and coaching. Formerly, coaching was offered at the executive level, but presently there is informal coaching at all levels of the organization. Discussion forums, another form of peer-to-peer learning, are also used extensively. Lastly, the formal technical career path is a corporate-wide program that aims to offer individual contributors a virtual environment where they are valued and where they can strengthen their technological skills. To facilitate peer-to-peer learning, adult learners have access to knowledge management systems that are used to create, share, and reuse information regularly.
Documentation of white papers, what HPS WD calls knowledge briefs, is a common practice among the group members. To reduce frustration and to help meet preferred learning styles among adult learners, the e-learning designers are creating development roadmaps for employees that include both virtual and face-to-face learning opportunities. Combining virtual and face-to-face learning opportunities prevents HP from becoming a purely virtual training environment and gives adult learners a chance to network during face-to-face classes while meeting their learning needs anytime and anywhere. Most adult learners around the world use their desktops as the primary location for taking their e-training courses. Some employees in the United States take the courses from their homes, as many are telecommuters. E-learning designers are starting to create learning rooms in major offices around the world to give learners an opportunity to isolate themselves from day-to-day business pressures in order to concentrate on their learning. Pre- and post-test assessments are frequently used in the e-training courses, especially since several of the training solutions are formal certification tracks. In reference to Kirkpatrick's four levels (reaction, learning, transfer, and results), the e-learning designers mostly apply levels one and two. Level one, reaction, occurs via the customer satisfaction survey. Level two, learning, occurs through final tests or knowledge checks. Final tests and knowledge checks may include open-ended questions, multiple-choice questions, and true or false questions, for example. Levels three (transfer) and four (results) are addressed on a case-by-case basis because they are major undertakings and can be very costly. To help with learner-assessment interaction, e-learning designers are starting to use interactive software that can create drag-and-drop boxes and select-an-area-on-a-picture activities, for example.
As with high-end interactivity courses, however, such use depends on cost and time constraints on design, development, and deployment. In conclusion, the e-learning team within HPS WD is challenged to reach adult learners worldwide via e-training and e-learning. The team of designers is pushing ahead with the help of a wide variety of technology tools. The designers, though, are quick to point out that front-end analyses identify which of the available technologies can be relevant for the design, development, and deployment of a given learning solution. Nevertheless, these e-learning designers are doing their best to value the adult learner within the e-training and e-learning environments. E-learning is a valuable training and development solution for many companies. Unlike academic environments (Johnson and Aragon, 2003), very little is known about how the adult learner is valued in e-learning within corporate settings. This study explored the Waight and Stewart conceptual model, which posits that valuing the adult learner in e-learning within corporate settings depends on the interdependence of championing factors, antecedents, and moderators for the achievement of engagement, learning, and transfer. Table I provides an overview of how these e-learning teams played out in terms of championing factors, antecedents, moderators, and outcomes. The comparative analysis was made on the incidence of occurrence and not on the extent of occurrence. More research needs to be conducted to identify the extent of occurrence among all the factors. The comparative analysis showed that all e-learning teams had leadership, a learning culture, technology infrastructure, and finance championing their efforts. The analysis also showed that all e-learning teams employed all five antecedents - needs assessment, learner, work, work setting, and content analyses - to help provide the most meaningful learning experience to the adult learner.
In reference to moderators, the e-learning teams reported no incidence of return on investment. Incidence of the use of learning theories, technology skills, and creativity, however, was reported. Lastly, all e-learning teams cited incidents of learner engagement and learning. Only one e-learning team referred to transfer. While there were varying degrees of occurrence among the championing factors, antecedents, moderators, and outcomes, this study provides basic confirmation that the adult learner is valued in e-learning within corporate settings. Recognizing that there were opportunities and constraints for each e-learning team, the four cases provide good insight into the energy and creativity that e-learning teams are employing to create sound learning experiences for adult learners. Further research is needed to establish the extent of implementation among the individual factors of the conceptual model (Figure 1). In addition, verification of this conceptual model is needed with more companies. Opportunities exist to explore how this conceptual model is being employed worldwide. Overall, these four case studies reveal that adult learners are valued in e-learning within the four corporate settings. It can be said that these e-learning teams are progressively improving to provide the adult learner the best e-learning experience. These case studies also show that e-learning teams have strong competencies in instructional design, learning theories, and technology, and that they are operating within companies that support their efforts in creating instructionally sound e-learning experiences. Figure 1 Valuing the adult learner in e-learning within corporate settings. Table I Comparative analysis of the conceptual model and four case studies
[SECTION: Method] Part one of this paper presented the Waight and Stewart conceptual model (Figure 1) on valuing the adult learner in e-learning within corporate settings. The purpose of this paper is to present how four e-learning teams in four companies are valuing the adult learner in e-learning. First, the methodology that guided the case studies is presented. Second, the four case studies are shared. Third, a discussion of the four case studies and their relationship with the conceptual model is provided. Lastly, conclusions are drawn based on the case studies and discussion. The conceptual model reflects companies where support for e-learning via supportive leadership, learning cultures, technology infrastructure, and finance is apparent. In addition, the model indicates analyses such as needs assessment and learner, work, work setting, and content analyses as viable processes that, if implemented well, can assist in providing the adult learner with targeted and meaningful learning opportunities. The model shows, however, that antecedents start the value process, but knowledge and skills on return on investment, learning theories, technology, and creativity add to the value for the adult learner. The model further indicates that when championing factors, antecedents, and moderators are adhered to, outcomes such as engagement, learning, and transfer are possible. Research design The method used in this research was a case study. Yin (2003) stated that a case study is an empirical inquiry that investigates a contemporary phenomenon within its real-life context. The purpose of the study was to investigate how the adult learner is valued within the e-learning environment. The following questions guided data collection: 1. What is the e-learning context in your organization? 2. How is the adult learner valued in the e-learning environment? 3.
What considerations must be addressed when valuing the adult learner in e-learning environments within corporate settings? Sample The sample for this study was nine e-learning designers whose Fortune 500 companies had active e-learning initiatives for a minimum of four years. These companies represented the retail, insurance, oil, and technology industries. Selection of companies occurred through an informational telephone interview with e-learning designers. During the interview, the researchers explained the purpose of the study and asked whether the companies had an e-learning initiative. All e-learning designers within the four companies that were initially contacted had active e-learning initiatives and expressed their willingness to take part in the study. Overall, there were three e-learning team leaders and six instructional designers. The number of interviewees per company ranged between one and three. Data collection Semi-structured telephone interviews were conducted with e-learning representatives of each company. All participants received the study proposal, which included the interview questions, via e-mail. Before the telephone interviews were conducted, the researchers asked for permission to record them. All participants agreed to be recorded. All the interviews were transcribed. Instrumentation The interview guide included three major sections that paralleled the study's research questions: context, the adult learner in the e-learning environment, and e-learning considerations in the corporate world. The researchers tested the interview guide with two of the e-learning participants. These participants were included in the study. The researchers conducted a preliminary content analysis to ensure that the research questions' intent was being targeted.
The pilot test revealed that all the questions were relevant. Data analysis Participants' descriptive responses to the interview questions were read and categorized by research question. Each research question contained a main premise, which was used by the researcher to reduce large amounts of data into a smaller number of analytic units or themes. After analyzing each participant's response for each research question, the researcher color-coded the themes. Upon identifying the recurrent themes, the researcher created descriptive tables for each theme. In this way the researcher identified data that supported each theme. Upon completing the analysis, cases were written for each company. Cases were sent to the respective interviewees for their review and feedback. Valuable feedback was received on all cases, and the cases were revised. The following are four case studies on how the adult learner is valued in e-learning within the corporate setting. These cases represent the efforts of e-learning teams within specific divisions of their companies. Of the four companies, only two agreed to have their names mentioned in the case study. Case study one: insurance company Context A large personal lines insurer has been a successful implementer of e-learning. The development of e-learning began in 1995 with computer-based technology courses. These early courses were designed for agent training and led to the subsequent development of web-based courses. The learners tracked by the learning management system (LMS) include employees, contract agents, and their staffs. In 2003, more than 60,000 learners completed at least one course online. Typically, learners are positive about their e-learning experiences. The results of an annual enterprise-wide opinion survey indicated that 78 percent were satisfied or completely satisfied with their e-learning courses. Originally, the drive to introduce e-learning was precipitated by the geographic diversity of learners.
In addition to cost, the need and value for learning quickly became primary drivers of e-learning growth. Predominately, e-learning is an asynchronous experience. Less than 10 percent of web-based training is blended with instructor-led training. Presently, virtual classrooms are under consideration for possible addition to the e-learning experience. This company currently tracks over 4,000 learning activities, including courses, assessments, workshops, and class registrations. There are approximately 700 enterprise-wide online courses. These include technology courses such as Microsoft Office; insurance product knowledge courses; strategy courses covering topics such as company history, how the company makes money, and corporate ethics; process courses such as billing and computer applications; and soft skills courses. The team charged with the creation of e-learning sits within the human resources function. This team delivers enterprise-based e-learning courses, while business units deliver specific business-related e-learning courses. How are adult learners valued? The adult learner is an important consideration for the e-learning designers. The adult learner consideration, however, sits within a realization of constraints and opportunities. First, e-learning designers are aware that bandwidth constrains the type and the application of animations and audio clips. Consequently, their application of video clips in e-learning courses is minimal. Second, e-learning designers are aware of the limits of their expertise and realize that they need additional talent for two- and three-dimensional simulations. Despite these constraints, however, the e-learning designers have also been able to take advantage of design and production opportunities. The use of standardized templates has improved how e-learning designers create courses. This has reduced the design and production time for a typical course from 40 to 15 days.
The e-learning designers recognized, however, that interviewing subject matter experts and writing scripts are the processes that take the most time, and there has been only limited help from technology for these parts of the course design and development process. Considering their constraints and opportunities, the e-learning designers identified a seven-dimension strategy to value the adult learner. The following describes the seven dimensions. Realizing that the e-learning group mainly serves enterprise-wide e-learning needs, content selection is driven by a steering committee that has its finger on the pulse of learning needs that are strategic for individual, group, and organizational performance. The committee has representation from the finance, marketing, and HR functions. This committee recently provided content advice on the economics of the business and on compliance policies. The steering committee assists the e-learning team in valuing the adult learner by advising on content that is relevant and meaningful to the operations of the business. In reference to content sequence, e-learning courses may reflect an information-assessment, an assessment-information, or an individualized sequence option. All three sequences give adult learners an opportunity to process information differently and engage in a preferred learning style. The information-assessment sequence requires learners to process information before completing the assessment. In addition, the assessment-information sequence gives learners the opportunity to identify their knowledge gap and then target the specific knowledge that they may need. Instructional designers stated that the latter sequence works well with product-related courses. Adult learners also get the opportunity to structure their courses. Learners are able to decide how they want to see things - for example, learners may choose to be asked questions throughout the course or only at the beginning and/or at the end.
Learners may also choose the sequence of their learning activities. To avoid information overload, most modules are 15 minutes or less. Adult learners also get an opportunity to make choices on content presentation. Typically, courses include a text and audio portion and have graphics, which are used to enhance the content delivery. Transcripts of the audio portion are also made available for anyone with hearing disabilities and for those who choose not to listen to the audio. Video is rarely used because of bandwidth constraints, although a short video clip of the chairman introducing a new set of courses is not uncommon. Interaction focuses mainly on learner-content interaction. Adult learners get the chance to interact with content after every three or four screens, a design standard that applies to all courses. Interaction after three or four screens may include answering a question or completing a matching activity - for example, matching definitions with terms. Learner-content interaction also occurs through problem-solving scenarios. Learners get the opportunity to diagnose a scenario and choose among several solutions. Lastly, learner-content interaction also occurs through games. Value for the adult learner is supported by design standards. In addition to enforcing interaction after three or four screens, the e-learning designers developed five sets of templates for their courses to standardize features and allow adult learners easy navigation. Choice of template depends on the content under design. Policy compliance courses, for example, use the templates with the office theme. The template background is typically a hallway, desks, or the cafeteria. Graphically, people appear and ask questions in the context of an office. Overall, all templates also have navigation, a glossary, and an area where links to other web sites are uploaded.
All these components appear in the same locations on the screens in all five sets of templates. Assessment is another component via which the adult learner is valued. Pre- and post-assessment occurs in about 20 percent of the courses, with about 80 percent having only post-assessment. Assessment techniques may include true and false questions, matching and multiple choice, fill in the blanks, and problem-solving scenarios. Assessment plays a crucial role in learners' performance on the job, especially since the e-learning courses are tied to strategic organizational performance. If learners do not pass the test the first time, they retake the test, and parts of the course if necessary, to meet the passing criteria. Last year, for example, all learners selling insurance had to take the "do not call list" course. Every learner was tracked, and anyone who did not take or did not pass the test could not make unsolicited calls. Instant feedback, a component of the assessments, helps adult learners to focus immediately on their knowledge gap and respond appropriately, such as by reviewing the module before retaking the assessment. Transfer of e-learning to the job has been tracked with only a few courses. The e-learning team, however, is making transfer of learning on the job one of its core competencies and is presently exploring various transfer assessment tools. Presently, assessing transfer occurs through a 30- and 90-day follow-up with the learner and the manager, which seeks to find out what learners have learned and applied on the job. Last year, transfer was tracked for 12 courses. Transfer is presently influenced by managers' involvement with the e-learning process. Transfer for the billing course was tracked by recording how learners answered billing questions and whether they had to transfer questions to a more experienced person.
Results showed that the transfer of questions to an expert had decreased, and managers were quite happy. In conclusion, the e-learning group is focused on providing enterprise-wide e-learning courses. Though bandwidth constraints and a lack of skills to develop high-end simulation techniques are present, the team keeps the adult learner at the forefront of its design by ensuring that content is relevant and meaningful. Meaningfulness is achieved by working with the steering committee and subject matter experts to identify content and capture the best scenarios that are relevant to the organization. Meaningfulness is also built into the e-learning design by creating templates with themes relating to company infrastructure and culture. Content sequence and presentation give learners locus of control, whereas learner-content interaction, standards, and assessments encourage learners to interact and process information in various ways. Enforcing interaction after three or four screens and limiting modules to 15 minutes or less help prevent information overload. Overall, this e-learning team has been proactive in valuing the adult learner in its e-learning design. Case study two: energy services group, Halliburton Context Halliburton is one of the world's largest providers of products and services to the oil and gas industries. Halliburton employs more than 100,000 people in over 120 countries. Halliburton's Energy Services Group consists of four business segments: drilling and formation evaluation, fluids, production optimization, and landmark and other energy services. The second group is the engineering and construction group, known as Kellogg Brown & Root. This mini-case focuses only on the Energy Services Group. For Halliburton's Energy Services Group, e-learning is a company-wide initiative introduced in 2000. The driving forces influencing its initiation included a desire to reduce costs and increase access.
The travel cost for both learners and instructors inherent in instructor-led training was one cost reduction factor. Additionally, e-learning was viewed as a means to successfully reach learners worldwide. Most of the e-learning opportunities are offered asynchronously. Learning modules are available through the LMS and can be accessed by learners at any time. Additionally, some instructor-led courses have a blended format where learners may access materials to prepare for the course and/or may use e-learning assessment tools. In some cases, synchronous approaches are used for collaborative and communication purposes; both learners and instructors can talk to each other and share documents and applications. In many cases, e-learning can be accessed at any time. This, however, can be affected by technology-related constraints. Bandwidth-related issues are constraints, especially in specific geographic areas such as West Africa. Additional access issues may be as simple as the presence of a computer and appropriate network connections. In some cases, employees in remote areas utilize a satellite office equipped with workstations. Access from home is possible for some employees, but it is riddled with home equipment, firewall, and security issues. Currently, selected employees use virtual private networks (VPNs) to connect from their homes. The e-learning team hopes to increase access locations by distributing laptop computers and providing kiosks at specific locations. Halliburton's LMS includes courses developed internally as well as those purchased from outside vendors. About 1,000 courses developed by external entities and more than 140 (mostly technical) internally developed courses are available. Typically, the duration of a course is about one hour. Internally developed courses include technical content for subject matter such as drilling, mud, hardware, and exploration technology.
Additional non-technical content includes areas such as legal issues, business conduct, finance, procurement, human resources, and leadership. The purchase of externally created courses in areas such as leadership and management is driven by business units' needs. A recent contract secured the availability of a catalogue of "soft skill" training offered in a number of formats including simulations, online books, quick reference materials, and job aids. While Halliburton purchases online training from vendors, it does not sell internally developed e-learning courses.

A team of five instructional designers, four multimedia specialists, and one technical publishing specialist manages the e-learning initiative. Together, they serve more than 35,000 employees of the Energy Services Group. For 2002, approximately 279,500 courses were completed, reflecting an average of about eight courses per employee. The e-learning initiative is part of the human resource development function and is termed Halliburton University.

How are adult learners valued?

The e-learning team at Halliburton has made great strides to design e-learning courses that value the adult learner. These strides, however, reside within constraints and opportunities. The first challenge is response time to clients, given a small staff of about ten designers for approximately 35,000 employees. Designers said that their roles entail consulting with product service lines on their learning needs, meeting clients' requests, and reviewing possible third-party courses for compatibility with the LMS standards. While having a small staff affects response time, designers shared that the LMS has helped them deliver course content and track learners' access and performance. Designers shared that business units' value of and support for e-learning have helped them value the adult learner.
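The utilization figure above follows directly from the case's own numbers; a minimal check:

```python
# Sanity check of the reported 2002 utilization at Halliburton:
# 279,500 course completions across roughly 35,000 employees.
completions = 279_500
employees = 35_000

avg_per_employee = completions / employees
print(round(avg_per_employee, 1))  # 8.0, matching the "about eight" figure
```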
In addition, Halliburton's initiative of identifying competencies for each job role has assisted the designers in identifying relevant content for targeted audiences.

The e-learning team shared that they apply adult learning principles to the e-learning courses by adhering to the ADDIE model. The instructional design model, ADDIE, includes analysis, design, development, implementation, and evaluation. Specifically, analysis can include performance problem analysis, needs assessment, and goal, work, learner, work setting, and content analyses. At present, the e-learning team is refining the instructional design process to foster consistency in the application of ISD among all team members. The e-learning team said that even though they are an in-house development team, internal client groups fund their design and development time, and this in itself underlines the need to give the client the best service for their money. Front-end analysis, and more inclusively the complete instructional design process, helps to value the adult learner by providing the most appropriate and relevant learning solution.

The e-learning team employs front-end analysis procedures to understand the adult learners and their needs. Understanding the adult learners' needs helps the e-learning team in identifying parameters for design, development, and deployment. The front-end analysis may include technology infrastructure, type of content, level and type of interaction necessary, learner analysis, as well as cost and delivery options. Learner analysis, specifically, does not occur for every course or module because the designers are often tasked with delivering e-learning courses to the same target audiences. Technology analysis has proved to be very important, especially when dealing with adult learners worldwide. Bandwidth issues, for example, vary and have major implications for accessing e-learning courses.
The e-learning team said that they sometimes visit sites to get a good picture of technology limitations. The e-learning team also noted that the work setting analysis has shown that the multicultural context of their organization communicates the need to have e-learning courses translated into various languages and to be sensitive to cultural differences. The e-learning team said that they are presently translating some courses into Spanish and hope to add other languages in the near future. The e-learning team said that they use cultural informants from the respective countries for advice on cultural issues.

As a result of front-end analysis, the e-learning team may choose one of three levels of course design. The first level, the most basic, may include PowerPoint with audio and an assessment activity. In essence, level one has a linear structure where the user goes through a page-turn type of lesson. Level one is usually chosen when the client has an urgent need to get content to employees as quickly as possible. Level two, on the other hand, gives the learner control of the content sequence and presentation. Adult learners are given the chance to navigate through the module, select how they want the content presented, and choose the sequence of their activities. Text is presented in static and dynamic forms, giving adult learners opportunities to access certain links or use graphics, tables, or charts to reinforce concepts. Level two would generally contain more in-depth assessment activities, which are tied to performance objectives. In contrast, level one does not necessarily state the performance objectives. Level three represents the most complex type of course design. In addition to having clearly stated performance objectives tied to assessments, this level might include video and audio clips and more complex animations that require the learner to interact and make decisions at certain points in the animations.
Level three may also include low-level simulations, for example, where the learner is given procedures to simulate the correct use of a piece of machinery. The simulation has to meet specifications required in the real world.

The e-learning team stipulated that on average they design level two and level three courses, sometimes even using a combination of the two. Designers shared that it could take between 325 and 425 hours to design a level two or level three course. Designers shared that they recently collaborated with the multimedia development team to identify timeframes for simple to complex animations and simulations, and for creating static graphics to complex tool drawings. In the next six to twelve months, designers will be evaluating projected timeframes for designing and developing level two and level three courses.

Course selection is a process driven by product service lines. Product lines refer to the different aspects of the oil field service industry, for example, the logging and perforating service. Once the content area is identified, designers and subject matter experts examine content to ensure that it is relevant to the performance gap, the job, and the target audience. Thus, designers and subject matter experts conduct work analysis to ensure that they have real-life examples and scenarios. Designers shared that learners' characteristics, such as educational backgrounds, can also influence the selection of content. The e-learning team added that a technically based module whose target audience ranges from high school graduates to college-degreed individuals, for example, has to be designed well to engage all types of learners.

The e-learning team ensures that content presentation is varied to give adult learners preferences in how to access content. The e-learning designers use audio, video, print, animations, simulations, and in some situations case studies to present content.
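The three design levels described above can be summarized as a simple lookup table; a hypothetical sketch paraphrasing the case description (the structure and names are illustrative, not Halliburton's actual tooling):

```python
# Illustrative summary of Halliburton's three course-design levels.
# Feature lists paraphrase the case text; the dictionary itself is a sketch.
DESIGN_LEVELS = {
    1: {"structure": "linear page-turn",
        "features": ["slides with audio", "basic assessment"],
        "objectives_stated": False},
    2: {"structure": "learner-controlled sequence",
        "features": ["static and dynamic text", "graphics, tables, charts",
                     "in-depth assessments"],
        "objectives_stated": True},
    3: {"structure": "interactive",
        "features": ["video and audio clips", "complex animations",
                     "low-level simulations"],
        "objectives_stated": True},
}

def requires_objectives(level: int) -> bool:
    """Return whether the given design level states performance objectives."""
    return DESIGN_LEVELS[level]["objectives_stated"]

print(requires_objectives(1), requires_objectives(3))  # False True
```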
Animations, in particular, serve to give adult learners visual representations of concepts. Modules that have graphics-intensive simulations are sometimes presented in a hybrid mode because of bandwidth limitations. In a hybrid module, learners receive their content on a CD and take their assessments through the LMS. Learners have the option of listening to or turning off the audio and of downloading a job aid, which can be a summary of the module. The use of dynamic text allows learners access to internal web pages relevant to the respective module. The designers use Flash, Dreamweaver, RoboDemo, and Microsoft Office to assist with content presentation.

Eighty percent of the courses are asynchronous and are sometimes the forerunners to classroom learning. Thus, learners are primarily engaged in learner-content interactions. Learner-content interaction may occur through learners choosing their modules, choosing their content presentation options, and answering questions. However, learners may get the opportunity to interact with each other and with instructors via a virtual collaborative tool called Interwise. The tool engages learners and instructor in real time using audio, application sharing, a whiteboard, and a chat room. The instructor can give control to learners; learners in turn can raise their hands and interact with their peers.

Assessments give adult learners feedback on their knowledge acquisition. Assessments may include multiple choice, fill in the blank, and matching. The e-learning team shared that while they would like to design dynamic assessments, the LMS does not presently support those capabilities. Assessments are designed to give instant feedback to the learner, especially since learners are expected to score 80 percent on most tests. If the 80 percent is not met the first time, learners have the option of retaking the test or returning to the module.
The LMS allows learners to take the test twice in one sitting; if the 80 percent is missed on the second try, learners are automatically logged off and must log in again if they want to retake the test a third time. The e-learning team shared that the order of test questions and answers may change every time the test is taken.

The standards that guide course design comply with the LMS. Standards cover HTML, Flash, navigation structures, titles, fonts, sizes, colors, animations, and pop-ups, for example. Keeping the adult learner in mind, the e-learning team also includes performance objectives, overviews, and formative and summative learning checks or assessments. Performance objectives are categorized using Bloom's taxonomy, reflecting simple to more complex learning outcomes.

The e-learning team has also created storyboard screens to assist with standardizing the course design process. Designers mentioned that storyboards allow everyone involved, including subject matter and multimedia experts, to review and revise instantly. This dynamic storyboard screen allows everyone involved to collaborate and to make changes to the course before it goes into development.

Adult learners mostly take their e-learning courses at their desktops. Kiosks and labs are available in some locations. Field employees may access their courses from a truck using a laptop connected via satellite. Designers recognized that bandwidth issues influence where adult learners can access their courses.

Transfer of e-learning resides with the learner and supervisor. The e-learning team presently sees its role as identifying and using instructionally sound methods and ensuring that the material is technically accurate.
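The retake rules described above (an 80 percent pass mark, two attempts per sitting, automatic logout before a third attempt, and shuffled questions on each attempt) can be modeled in a few lines; a hypothetical sketch, not Halliburton's actual LMS logic:

```python
# Illustrative model of the LMS retake rules from the case. All names are
# assumptions for the sketch; only the rules themselves come from the text.
import random

PASS_MARK = 0.80
ATTEMPTS_PER_SITTING = 2

def run_sitting(scores):
    """Return (passed, logged_off) after at most two attempts in one sitting."""
    for score in scores[:ATTEMPTS_PER_SITTING]:
        if score >= PASS_MARK:
            return True, False
    return False, True  # second miss: learner is logged off

def shuffled_questions(questions, seed=None):
    """Present questions in a new order on each attempt."""
    order = questions[:]
    random.Random(seed).shuffle(order)
    return order

print(run_sitting([0.70, 0.75]))  # (False, True)
print(run_sitting([0.85]))        # (True, False)
```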
The team stated, however, that they plan to increase their efforts in addressing transfer of learning on the job and return on investment.

In conclusion, the e-learning team at Halliburton is continuously looking for ways to serve the adult learner in the e-learning environment. The value for the adult learner resides in their creative application of the instructional design process and of the technology infrastructure that exists worldwide. The e-learning designers are focused on deploying sound learning opportunities that meet clients' and adult learners' needs worldwide.

Case study three: retail chain store

Context

This case describes an e-learning team effort that serves the stores and asset protection sector of a retail store chain, which operates approximately 260 stores in 14 states. The chain is one component of a larger retail empire, which operates more than three distinct retail formats. This retail store chain employs approximately 29,000 employees.

The e-learning initiative began in 2000 and is located within the university of the retail store chain. The e-learning initiative is mainly directed at executive-level employees within the stores and asset protection sector. Primary reasons for the introduction of e-learning were to reach a geographically dispersed workforce within the United States and to create consistency, quality, and relevance in training design, development, delivery, and results.

Asynchronous, synchronous, and blended formats are used to deliver training. Asynchronous learning focuses on computer-based modules, while synchronous learning utilizes virtual classrooms. Blended learning may include a mix of computer-based and virtual classroom modules, or face-to-face training with either computer-based or virtual classroom modules. A blended solution may have a team of employees taking a computer-based module at the same time; upon completion, the team gets together to share what they have learned.
Approximately 30 modules have been developed, with about two or three new offerings created each quarter. While the retail store chain continues to expand its e-learning options, approximately 70 percent of training continues to be offered face-to-face.

The training and development team comprises four persons, two of whom focus solely on e-learning. With an e-learning staff of two, these individuals assume roles in project management, design, creation, execution, and evaluation. Typically, a computer-based module takes between 100 and 300 hours to design and develop; a virtual classroom module, on the other hand, requires about two months. While this e-learning initiative resides within the stores and asset protection sector of the retail store chain, it shares e-learning modules with other sectors and sometimes purchases modules from various vendors.

How are adult learners valued?

With e-learning representing approximately 30 percent of the training function and with the development team being relatively small (two persons), this case provides an overview of how the adult learner is valued within an emergent e-learning initiative.

Front-end analysis is conducted to ensure that training is relevant, meaningful, and authentic. Performance problem analysis, for example, ensures that the performance gap is linked with training content. Once the training link is identified, e-learning designers analyze further to determine whether the training should be delivered face-to-face or through e-learning.

If e-learning is decided upon, further analysis occurs to decide whether the medium will be computer-based, virtual classroom, or a blended solution. Learner analysis also occurs to ensure that adult learners' characteristics are considered in the module design and development.
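The two-stage decision described above, first face-to-face versus e-learning, then (for e-learning) the medium, can be sketched as a small decision function. The criteria names below are illustrative assumptions, not the retail chain's actual rules:

```python
# Hypothetical sketch of the delivery-medium decision from the case.
# The case names the options; the decision criteria here are assumed.
def choose_delivery(needs_hands_on: bool, dispersed_audience: bool,
                    just_in_time: bool) -> str:
    if needs_hands_on and not dispersed_audience:
        return "face-to-face"
    if just_in_time:
        return "virtual classroom"   # case: chosen for just-in-time needs
    if needs_hands_on:
        return "blended"             # mix of modules and live sessions
    return "computer-based module"

print(choose_delivery(False, True, False))  # computer-based module
```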
Given that the target audience is usually executives, e-learning designers may conduct mini learner analyses when introducing a new content category.

Once the performance problem has been aligned with training, content selection is directed either through a partnership with senior managers and/or employees in the field or through a review of new processes or procedures. Subject matter experts focus on the target audience, the performance problem, and the respective environments to identify relevant content and the most appropriate learning activities for the selected medium of delivery.

Content presentation occurs through text, graphics, video, and audio. Most computer-based modules use a combination of these media. Blended solutions use more video than computer-based modules because of bandwidth issues. Designers use Flash, Fireworks, TrainerSoft, RoboDemo, and Authorware to present content. At present, adult learners cannot personalize the content presentation. The e-learning team is aware of this limitation and is exploring options to give learners more control over content presentation.

With learning occurring asynchronously, synchronously, and in blended formats, adult learners can engage in learner-content, learner-learner, and learner-instructor interactions. Learner-content interaction occurs mainly with asynchronous e-learning modules. Learners usually interact via simulations, games, or quizzes. These types of activities can prompt learners to analyze, synthesize, and evaluate. Learner-content interaction also occurs while listening to the audio portion of a module or watching a video. E-learning designers ensure that adult learners have various types of interactive activities so that every learner can find a preferred type of activity. Learner-learner and instructor-learner interactions occur via the virtual classroom. Virtual classroom modules are chosen primarily because they can meet the just-in-time needs of business units.
Virtual classrooms have live instructors who communicate with the learners using a radio talk show format. Learners are also able to interact with scenarios and questions posed by the instructor. Adult learners, in turn, can use the chat space to communicate among themselves and with the instructors.

With e-learning being an emergent initiative, the e-learning designers do not presently have an LMS. The designers, however, are in the process of developing an e-learning template with consistent module design features. The template will reflect the retail store chain's university campus, with links to the various colleges, for example, the college of logistics, which will encompass all logistics-related modules. The template for each module, in turn, will include nine major sections: performance objectives, module overview, content presentation, assessment, journal tool, tip cards, glossary, help tool, and a library. The e-learning team shared that this template will give adult learners more opportunities to access their preferred learning activities and tools. In keeping with adult learning principles, specifically attention span, e-learning designers are designing modules that take no more than 30 minutes to complete. Lastly, as the designers work towards developing their template, they are conducting usability testing for every module that is released. Specifically, designers are evaluating how employees navigate, access, and interact with the content. Designers said that listening to adult learners is important if they are to deliver a learner-centered e-learning experience.

The journal, one of the module features, will be a tool that can assist the transfer of learning to the job. Designers expect that learners will use the journal to make personal notes about their learning, specifically to write what they would like to share with their peers and managers upon returning to the job.
Presently, the direct supervisor is responsible for ensuring that learning is transferred to the job. The just-in-time delivery of computer-based and virtual classroom modules also assists transfer, because the modules are based on just-in-time needs.

Assessments occur at the formative and summative levels. At the formative level, learners may be asked to answer questions as they proceed through the module. At the summative level, if assessments will not be tracked, learners may take a self-assessment in the form of a game such as Jeopardy or Who Wants to Be a Millionaire. If the summative assessment will be tracked, learners receive a reaction survey and an assessment that may include multiple choice, fill in the blank, true or false, and matching questions. In the absence of a formal LMS, the e-learning designers contract database services to collect and access data from the reaction surveys and assessments. Learners usually receive instant feedback on assessments and can return to the assessment or module for clarification. Instant feedback is not given, however, when assessments are for certification. Performance on certification assessments is communicated directly to the learner or to the respective managers or supervisors.

Adult learners can access modules only at the stores. Every store has approximately three computers; one computer is dedicated to learning. Modules are delivered to the computers via an executable package. Employees, individually or in groups of two or three, can take a module at the same time. Introducing e-learning at the stores has been challenging because of the customer-driven environment.
The e-learning team shared, however, that they are presently exploring how to make learning seamless at the store level; one possibility could be the introduction of kiosks.

In conclusion, though the e-learning team at the retail store chain faces human resource, time, money, and bandwidth challenges, they are highly motivated and confident that their e-learning initiative will only improve in serving the adult learner. The designers eagerly anticipate the completion of the template because it will decrease their design and development time. In addition, the e-learning designers hope to cross-train the other two members of the training and development team in e-learning processes such as meeting with clients, analyzing the performance problem, setting objectives, and producing the initial content layout using the instructional design process. This expertise would allow the existing designers to focus on coding and development. Although emergent, the e-learning initiative at the retail store chain is progressing in its mission to value the adult learner.

Case study four: HP services workforce development, HP

Context

Hewlett-Packard (HP) is a technology solutions provider to consumers, businesses, and institutions globally. The company's offerings span IT infrastructure, personal computing and access devices, global services, and imaging and printing for consumers, enterprises, and small and medium businesses. With approximately 140,000 employees worldwide, HP serves more than one billion customers in more than 160 countries. While organizationally HP accomplishes this by focusing on multiple business organizations, this case examines only the HP Services Group (HPS), which consists of approximately 60,000 employees. It is recognized that e-learning initiatives exist in multiple organizations and groups within HP; they are beyond the scope of this case.
Within HPS, a designated organization exists to develop, design, and deliver learning solutions to this business unit: HPS Workforce Development (HPS WD).

The e-learning initiative within HPS began in 1999. The HPS WD team has created and monitored approximately 400,000 learning incidents in a recent six-month period. Approximately 85 percent of training is delivered electronically. Reasons cited for the introduction of e-learning into the group include cost efficiency; accessibility to learners, especially remote learners and those on customer sites; and time efficiency, in that learners in the field can train where they are, without having to travel to a classroom setting.

Both synchronous and asynchronous formats are used. Self-paced modules, both those created in-house and those purchased from vendors, are available to learners for access at a time and place that meets their scheduling needs. Required company training, such as business conduct, is an example of content delivered by asynchronous, self-paced methods. Virtual classroom environments and virtual labs can be accessed remotely for synchronous and/or asynchronous experiences. Virtual labs have been used successfully for technical training.

HPS e-learning courses are organized into portfolios. The main portfolio contains 1,800 self-paced and instructor-based courses; an additional 200 courses bring the total to 2,000. Portfolios include business conduct, technical, legal, and professional skills such as consulting and project management. A virtual team comprising curriculum developers, learners, delivery professionals, and HP businesses works together and interfaces with product divisions to prioritize course offerings and to develop content.
Specifically, the development team is a global organization of about 80 program managers plus developers and designers (about 140 people in total) whose task is to decide format, create content, develop courses, and evaluate feedback for updating courses. Increasingly, the outsourcing of course creation is being explored and utilized. As of 1 May 2004, most HPS employees had taken multiple e-learning courses. In general, while HPS WD does share courses across HP, it does not sell its learning packages to outside entities, with the exception of customer services training solutions.

HPS WD employs a business performance-consulting model. To help learning consultants within HPS apply the business performance model, the corporate workforce development team at HP introduced an electronic performance support system to help teams become aware of similarities among performance issues and the types of training they provide. As a result of this awareness, teams are combining their strengths and discovering synergies in their training efforts.

How are adult learners valued?

The adult learner is an important factor in the design of e-training within HPS. E-training includes web-based courses, virtual classrooms, webinars, webcasts, and remote labs. In addition to e-training, the e-learning designers within HPS WD also incorporate e-learning, which includes all knowledge management/sharing types of learning activities. For the purpose of this case, e-training refers to all e-learning courses, while e-learning refers to all knowledge management/sharing activities. Both e-training and e-learning give adult learners synchronous and asynchronous learning opportunities.

To meet adult learners' needs, the e-learning designers within HPS WD conduct a front-end analysis on several parameters to assist with design and development choices. Cost and time are primary factors, as they affect the type of learning solution that can be developed.
Size of audience also influences delivery options. Location of audience is critical, because a worldwide audience has different implications for design and delivery than a specific audience within one country. Lastly, equipment constraints at the target audience's locations are considered in design and development decisions. Once all the constraints are taken into account, the e-learning team ensures that the adult learner receives the best learning solution.

Content selection for e-training resides within a business performance consulting model. This model tightens the relevance and meaningfulness of content because training is provided based on short- and long-term performance needs. This performance approach to content selection gives adult learners learning opportunities that are tied to individual, group, and organizational performance. In addition to identifying performance-driven content, adult learners have the opportunity to select which components of a training solution are appropriate for them. For example, sales, support, and service employees may take different components within the same training solution.

Having a technology-enabled environment gives the e-learning team the capability to present content using visuals, audio, and video. Content presentation usually represents a mix of highly interactive solutions that give the adult learner the opportunity to choose their preferred medium. In some cases, especially with customer training solutions, e-training courses may have a prompter that supports several languages, giving the adult learner an additional opportunity to choose their preferred language of instruction.

Again, because of the strong technology infrastructure, learning activities within web-based courses are varied. Simulations, games, knowledge checks, and quizzes are some activities frequently used in web-based courses.
The purpose of using varied learning activities is to give adult learners an opportunity to listen, interact, and play. Combining different learning strategies also helps the e-learning designers target a broader audience. Business games, for example, are presently used only in specific and strategic areas, because they are a significant investment to build. Business games can teach how to pursue and close an opportunity with major clients by simulating various scenarios for the learners. These games are played by teams of employees whose members have specific roles. It must be mentioned, however, that developing these highly interactive courses takes time, and that not all courses are developed with the same level of interactivity. In essence, e-learning designers must find the balance between what they can do in the ideal world and what they can do in the practical world.

While web-based courses mainly focus on learner-content interaction, HP uses other media to enhance learner-learner and learner-instructor interactions. Virtual classrooms are used for small groups of people to maximize instructor-learner interaction. In the virtual classroom, learners can raise their hands if they want to ask a question, comment, or answer a question. In addition, the virtual classroom has a group chat feature, which can be categorized as private or public.

Learners can engage in learner-learner or instructor-learner interaction via the chat space. The virtual classroom allows participants to share applications, use a whiteboard, or poll learners' reactions to different questions or issues. Virtual classrooms can also be used to poll learners' satisfaction and measure their level of knowledge acquisition. Learners can draw, paint, and, on the more humorous side, even throw tomatoes if they did not like the instructor or give their instructor apples if they enjoyed the learning experience.
Webinars and webcasts, on the other hand, are used for large audiences, but very little interaction between the instructor and the audience occurs; learners only listen. In addition to virtual classrooms, webinars, and webcasts, HP is also using remote virtual labs that are sometimes instructor-driven, sometimes self-directed, or used for practice or application of learning.

The blending of e-training and e-learning gives adult learners more opportunities for interaction and learning. The HPS WD group has a robust knowledge management strategy that fosters e-learning through different forms of peer-to-peer learning, one form being communities of practice. Commercially available collaborative tools such as NetMeeting, e-rooms, and instant messaging help support peer-to-peer learning. HPS WD has a formal program called "Professions", which is a structured manner of organizing communities. Through communities of practice, the e-learning designers organize best practice training sessions, which are sometimes delivered using virtual classrooms. Another form of peer-to-peer learning is online mentoring and coaching. Formerly, coaching was offered at the executive level, but presently there is informal coaching at all levels of the organization. Discussion forums, another form of peer-to-peer learning, are also used extensively. Lastly, the formal technical career path is a corporate-wide program that aims to offer individual contributors a virtual environment where they are valued and where they can strengthen their technological skills. To facilitate peer-to-peer learning, adult learners have access to knowledge management systems that are used to create, share, and reuse information regularly.
Documentation of white papers, what HPS WD calls knowledge briefs, is a common practice among the group members. To reduce frustration and to help meet preferred learning styles among adult learners, the e-learning designers are creating development roadmaps for employees that include both virtual and face-to-face learning opportunities. Combining virtual and face-to-face learning opportunities prevents HP from becoming a purely virtual training environment and gives adult learners a chance to network during face-to-face classes while meeting their learning needs anytime and anywhere. Most adult learners around the world use their desktops as the primary location for taking their e-training courses. Some employees in the United States take the courses from their homes, as many are telecommuters. E-learning designers are starting to create learning rooms in major offices around the world to give learners an opportunity to isolate themselves from day-to-day business pressures in order to concentrate on their learning. Pre- and post-test assessments are frequently used in the e-training courses, especially since several of the training solutions are formal certification tracks. In reference to Kirkpatrick's four levels - reaction, learning, transfer, and results - the e-learning designers mostly apply levels one and two. Level one, reaction, occurs via the customer satisfaction survey. Level two, learning, occurs through final tests or knowledge checks. Final tests and knowledge checks may include open-ended questions, multiple-choice questions, and true or false questions, for example. Levels three, transfer, and four, results, are done on a case-by-case basis because they are major undertakings and can be very costly. To help with learner-assessment interaction, e-learning designers are starting to use interactive software that can, for example, create drag-and-drop boxes or let learners select an area of a picture. 
As with high-end interactivity courses, however, this depends on cost and time constraints on design, development, and deployment. In conclusion, the e-learning team within HPS WD is challenged to reach adult learners worldwide via e-training and e-learning. The team of designers is pushing ahead with the help of a wide variety of technology tools. The designers, though, are quick to point out that front-end analyses identify which of the available technologies are relevant for the design, development, and deployment of a given learning solution. Nevertheless, these e-learning designers are doing their best to value the adult learner within the e-training and e-learning environments. E-learning is a valuable training and development solution for many companies. Unlike academic environments (Johnson and Aragon, 2003), very little is known about how the adult learner is valued in e-learning within corporate settings. This study explored the Waight and Stewart conceptual model, which posits that valuing the adult learner in e-learning within corporate settings depends on the interdependence of championing factors, antecedents, and moderators for the achievement of engagement, learning, and transfer. Table I provides an overview of how these e-learning teams played out in terms of championing factors, antecedents, moderators, and outcomes. The comparative analysis was made on the incidence of occurrence and not on the extent of occurrence. More research needs to be conducted to identify the extent of occurrence among all the factors. The comparative analysis showed that all e-learning teams had leadership, a learning culture, technology infrastructure, and finance championing their efforts. The analysis also showed that all e-learning teams employed all five antecedents - needs assessment, learner analysis, and work, work setting, and content analyses - to help provide the most meaningful learning experience to the adult learner. 
In reference to moderators, the e-learning teams shared no incidence of return on investment. Incidence of the use of learning theories, technology skills, and creativity, however, was provided. Lastly, all e-learning teams cited incidents of learner engagement and learning. Only one e-learning team referred to transfer. While there were varying degrees of occurrence among the championing factors, antecedents, moderators, and outcomes, this study provides basic confirmation that the adult learner is valued in e-learning within corporate settings. Realizing that there were opportunities and constraints for each e-learning team, the four cases provide good insight into the energy and creativity that e-learning teams are employing to create sound learning experiences for adult learners. Further research is needed to establish the extent of implementation among the individual factors of the conceptual model (Figure 1). In addition, verification of this conceptual model is needed with more companies. Opportunities exist to explore how this conceptual model is being employed worldwide. Overall, these four case studies reveal that adult learners are valued in e-learning within the four corporate settings. It can be said that these e-learning teams are progressively improving to provide the adult learner the best e-learning experience. These case studies also show that e-learning teams have strong competencies in instructional design, learning theories, and technology, and that they are operating within companies that support their efforts in creating instructionally sound e-learning experiences. Figure 1 Valuing the adult learner in e-learning within corporate settings. Table I Comparative analysis of the conceptual model and four case studies
- Case study methodology was used for this research. Four Fortune 500 companies that had active e-learning initiatives for a minimum of four years were selected. Data for the development of the four cases were collected via semi-structured telephone interviews. The questions that guided data collection and case development were: What is the e-learning context in your organization? How is the adult learner valued in the e-learning environment? What considerations must be addressed when valuing the adult learner in e-learning environments within corporate settings?
[SECTION: Findings] Part one of this paper presented the Waight and Stewart conceptual model (Figure 1) on valuing the adult learner in e-learning within corporate settings. The purpose of this paper is to present how four e-learning teams in four companies are valuing the adult learner in e-learning. First, the methodology that guided the case studies is presented. Second, the four case studies are shared. Third, a discussion of the four case studies and their relationship with the conceptual model is provided. Lastly, conclusions are drawn based on the case studies and discussion. The conceptual model reflects companies where support for e-learning via supportive leadership, learning cultures, technology infrastructure, and finance is apparent. In addition, the model indicates analyses such as needs assessment, learner, work, work setting, and content analyses as viable processes that, if implemented well, can assist in providing the adult learner targeted and meaningful learning opportunities. The model shows that antecedents start the value process, but knowledge and skills on return on investment, learning theories, technology, and creativity add to the value for the adult learner. The model further indicates that when championing factors, antecedents, and moderators are adhered to, outcomes such as engagement, learning, and transfer are possible. Research design The method used in this research was a case study. Yin (2003) stated that a case study is an empirical inquiry that investigates a contemporary phenomenon within its real-life context. The purpose of the study was to investigate how the adult learner is valued within the e-learning environment. The following questions guided data collection:
1. What is the e-learning context in your organization?
2. How is the adult learner valued in the e-learning environment?
3.
What considerations must be addressed when valuing the adult learner in e-learning environments within corporate settings? Sample The sample for this study was nine e-learning designers whose Fortune 500 companies had active e-learning initiatives for a minimum of four years. These companies represented the retail, insurance, oil, and technology industries. Selection of companies occurred through an informational telephone interview with e-learning designers. During the interview, the researchers explained the purpose of the study and asked if the companies had an e-learning initiative. All e-learning designers within the four companies that were initially contacted had active e-learning initiatives and expressed their willingness to partake in the study. Overall, there were three e-learning team leaders and six instructional designers. The number of interviewees per company ranged from one to three. Data collection Semi-structured telephone interviews were conducted with e-learning representatives of each company. All participants received the study proposal, which included the interview questions, via e-mail. Before the telephone interviews were conducted, the researchers asked for permission to record the interviews. All participants agreed to be recorded. All the interviews were transcribed. Instrumentation The interview guide included three major sections that paralleled the study's research questions: context, the adult learner in the e-learning environment, and e-learning considerations in the corporate world. The researchers tested the interview guide with two of the e-learning participants. These participants were included in the study. The researchers conducted a preliminary content analysis to ensure that the research questions' intent was being targeted. 
The pilot test revealed that all the questions were relevant. Data analysis Participants' descriptive responses to the interview questions were read and categorized by research question. Each research question contained a main premise, which was used by the researcher to reduce large amounts of data into a smaller number of analytic units or themes. After analyzing each participant's response to each research question, the researcher color-coded the themes. Upon identifying the recurrent themes, the researcher created descriptive tables for each theme. In this way the researcher identified data that supported each theme. Upon completing the analysis, cases were written for each company. Cases were sent to the respective interviewees for their review and feedback. All cases received valuable feedback and were revised. The following are four case studies on how the adult learner is valued in e-learning within the corporate setting. These cases represent the efforts of e-learning teams within specific divisions of their companies. Of the four companies, only two agreed to have their names mentioned in the case study. Case study one: insurance company Context A large personal lines insurer has been a successful implementer of e-learning. The development of e-learning began in 1995 with computer-based technology courses. These early courses were designed for agent training and led to the subsequent development of web-based courses. The learners tracked by the learning management system (LMS) include employees, contract agents, and their staffs. In 2003, more than 60,000 learners completed at least one course online. Typically, learners are positive about their e-learning experiences. The results of an annual enterprise-wide opinion survey indicated that 78 percent were satisfied or completely satisfied with their e-learning courses. Originally, the drive to introduce e-learning was precipitated by the geographic diversity of learners. 
In addition to cost, the need and value for learning quickly became primary drivers of e-learning growth. Predominantly, e-learning is an asynchronous experience. Less than 10 percent of web-based training is blended with instructor-led training. Presently, virtual classrooms are under consideration for possible addition to the e-learning experience. This company currently tracks over 4,000 learning activities, including courses, assessments, workshops, and class registrations. There are approximately 700 enterprise-wide online courses. These include technology courses such as Microsoft Office; insurance product knowledge courses; strategy courses covering topics such as company history, how the company makes money, and corporate ethics; process courses such as billing and computer applications; and soft skills courses. The team charged with the creation of e-learning is within the human resources function. This team delivers enterprise-based e-learning courses while business units deliver specific business-related e-learning courses. How are adult learners valued? The adult learner is an important consideration for the e-learning designers. The adult learner consideration, however, sits within a realization of constraints and opportunities. First, e-learning designers are aware that bandwidth constrains the type and application of animations and audio clips. Consequently, their application of video clips in their e-learning courses is minimal. Second, e-learning designers are aware of the limits of their expertise and realize that they need additional talent on two- and three-dimensional simulations. Despite these constraints, however, the e-learning designers have also been able to take advantage of design and production opportunities. The use of standardized templates has improved how e-learning designers create courses. This has reduced the design and production time for a typical course from 40 to 15 days. 
The e-learning designers recognized, however, that interviewing subject matter experts and writing scripts are the processes that take the most time, and there has only been limited help from technology for these parts of the course design and development process. Considering their constraints and opportunities, the e-learning designers identified a seven-dimension strategy to value the adult learner. The following describes the seven dimensions. Realizing that the e-learning group mainly serves enterprise-wide e-learning needs, content selection is driven by a steering committee that has the pulse on learning needs that are strategic for individual, group, and organizational performance. The committee has representation from the finance, marketing, and HR functions. This committee recently provided content advice on the economics of the business and on compliance policies. This steering committee assists the e-learning team in valuing the adult learner by advising on content that is relevant and meaningful to the operations of the business. In reference to content sequence, e-learning courses may follow an information-assessment, an assessment-information, or an individualized sequence. All three sequences give adult learners an opportunity to process information differently and engage in a preferred learning style. The information-assessment sequence has learners process information before completing the assessment. The assessment-information sequence, in turn, gives learners the opportunity to identify their knowledge gap and then target the specific knowledge that they may need. Instructional designers stated that the latter sequence works well with product-related courses. Adult learners also get the opportunity to structure their courses. Learners are able to decide how they want to see things - for example, learners may choose to be asked questions throughout the course or only at the beginning and/or at the end. 
Learners may also choose the sequence of their learning activities. To avoid information overload, most modules are usually 15 minutes or less. Adult learners also get an opportunity to make choices on content presentation. Typically, courses will include a text and audio portion and have graphics, which are used to enhance the content delivery. Transcripts of the audio portion are also made available for anyone with hearing disabilities and for those who choose not to listen to the audio. Video is rarely used because of bandwidth constraints, although a short video clip of the chairman introducing a new set of courses is not uncommon. Interaction focuses mainly on learner-content interaction. Adult learners get the chance to interact with content after every three or four screens, a design standard that applies to all courses. Interaction after three or four screens may include answering a question or completing a matching activity - for example, matching definitions with terms. Learner-content interaction also occurs through problem-solving scenarios. Learners get the opportunity to diagnose a scenario and choose among several solutions. Lastly, learner-content interaction also occurs through games. Value for the adult learner is supported by design standards. In addition to enforcing interaction after three or four screens, the e-learning designers developed five sets of templates for their courses to standardize features and allow adult learners easy navigation. Choice of template depends on the content under design. Policy compliance courses, for example, use the templates with the office theme. The template background is typically a hallway, desks, or the cafeteria. Graphically, people will appear and ask questions in the context of an office. Overall, all templates also have navigation, a glossary, and an area where links to other web sites are uploaded. 
All these components appear at the same locations on the screens in all five sets of templates. Assessment is another component via which the adult learner is valued. Pre- and post-assessment occurs in about 20 percent of the courses, with about 80 percent having only post-assessment. Assessment techniques may include true and false questions, matching, multiple choice, fill in the blanks, and problem-solving scenarios. Assessment plays a crucial role in learners' performance on the job, especially since the e-learning courses are tied to strategic organizational performance. If learners do not pass the test the first time, they retake the test, and parts of the course if necessary, to meet the passing criteria. Last year, for example, all learners selling insurance had to take the "do not call list" course. Every learner was tracked, and anyone who did not take or did not pass the test could not make unsolicited calls. Instant feedback, a component of the assessments, helps adult learners to immediately focus on their knowledge gap and respond appropriately, such as by reviewing the module before retaking the assessment. Transfer of e-learning to the job has only been tracked with a few courses. The e-learning team, however, is making transfer of learning on the job one of their core competencies, and they are presently exploring various transfer assessment tools. Presently, assessing transfer occurs through a 30- and 90-day follow-up with the learner and the manager, which seeks to find what learners have learned and applied on the job. Last year, transfer was tracked for 12 courses. Transfer is presently influenced by managers' involvement with the e-learning process. Transfer for the billing course was tracked by recording how learners answered billing questions and whether they had to transfer questions to a more experienced person. 
Results showed that the transfer of questions to an expert had decreased, and managers were quite happy. In conclusion, the e-learning group is focused on providing enterprise-wide e-learning courses. Though bandwidth constraints and a lack of skills to develop high-end simulation techniques are present, the team keeps the adult learner at the forefront of their design by ensuring that content is relevant and meaningful. Meaningfulness occurs by working with the steering committee and subject matter experts to identify content and capture the best scenarios that are relevant to the organization. Meaningfulness is also inserted into the e-learning design by creating templates that have themes relating to company infrastructure and culture. Content sequence and presentation give the learners locus of control, whereas learner-content interaction, standards, and assessments encourage learners to interact and process information in various ways. Enforcing interaction after three or four screens and limiting modules to 15 minutes or less help prevent information overload. Overall, this e-learning team has been proactive in valuing the adult learner in their e-learning design. Case study two: energy services group, Halliburton Context Halliburton is one of the world's largest providers of products and services to the oil and gas industries. Halliburton employs more than 100,000 people in over 120 countries. Halliburton's Energy Services Group consists of four business segments: drilling and formation evaluation, fluids, production optimization, and landmark and other energy services. The second group is the engineering and construction group, known as Kellogg Brown & Root. This mini-case focuses only on the Energy Services Group. For Halliburton's Energy Services Group, e-learning is a company-wide initiative introduced in 2000. The driving forces influencing its initiation included a desire to reduce costs and increase access. 
The travel cost for both learners and instructors inherent in instructor-led training was one cost reduction factor. Additionally, e-learning was viewed as a means to successfully reach learners worldwide. Most of the e-learning opportunities are offered asynchronously. Learning modules are available through the LMS and can be accessed by learners at any time. Additionally, some instructor-led courses have a blended format where learners may access materials to prepare for the course and/or may use e-learning assessment tools. In some cases synchronous approaches are used for collaborative and communication purposes. Both learners and instructors can talk to each other and share documents and applications. In many cases e-learning can be accessed at any time. This, however, can be affected by technology-related constraints. Bandwidth-related issues are constraints, especially in specific geographic areas such as West Africa. Additional access issues may be as simple as the presence of a computer and appropriate network connections. In some cases, employees in remote areas utilize a satellite office equipped with workstations. Access for employees from home is possible for some, but it is riddled with home equipment, firewall, and security issues. Currently, selected employees use virtual private networks (VPNs) to connect from their homes. The e-learning team hopes to increase access locations by distributing laptop computers and providing kiosks at specific locations. Halliburton's LMS includes courses developed internally as well as those purchased from outside vendors. About 1,000 courses developed by external entities and more than 140 (mostly technical) internally developed courses are available. Typically, the duration of a course is about one hour. Internally developed courses include technical content for subject matter such as drilling, mud, hardware, and exploration technology. 
Additional non-technical content includes areas such as legal issues, business conduct, finance, procurement, human resources, and leadership. Purchases of externally created courses in areas such as leadership and management are driven by business units' needs. A recent contract secured the availability of a catalogue of "soft skill" training offered in a number of formats, including simulations, online books, quick reference materials, and job aids. While Halliburton purchases online training from vendors, it does not sell internally developed e-learning courses. A team of five instructional designers, four multimedia specialists, and one technical publishing specialist manages the e-learning initiative. Together, they serve more than 35,000 employees of the Energy Services Group. For 2002, approximately 279,500 courses were completed, reflecting an average of about eight courses per employee. The e-learning initiative is a part of the human resource development function and is termed Halliburton University. How are adult learners valued? The e-learning team at Halliburton has made great strides to design e-learning courses that value the adult learner. These strides, however, reside within constraints and opportunities. The first challenge is response time to clients due to a small staff of about ten designers for approximately 35,000 employees. Designers said that their roles entail consulting with product service lines on their learning needs, meeting clients' requests, and reviewing possible third-party courses for compatibility with the LMS standards. While having a small staff affects response time, designers shared that the LMS has helped them to deliver course content and track learners' access and performance. Designers shared that business units' value and support for e-learning has helped them value the adult learner. 
In addition, Halliburton's initiative of identifying competencies for each job role has assisted the designers in identifying relevant content for targeted audiences. The e-learning team shared that they apply adult learning principles to the e-learning courses by adhering to the ADDIE model. The instructional design model, ADDIE, includes analysis, design, development, implementation, and evaluation. Specifically, analysis can include performance problem analysis, needs assessment, and goal, work, learner, work setting, and content analyses. At present, the e-learning team is refining the instructional design process to foster consistency in the application of ISD among all team members. The e-learning team said that even though they are an in-house development team, internal client groups fund their design and development time, and this in itself underlines the need to give the client the best service for their money. Front-end analysis, and even more inclusively the complete instructional design process, helps to value the adult learner by providing the most appropriate and relevant learning solution. The e-learning team employs front-end analysis procedures to understand the adult learners and their needs. Understanding the adult learners' needs helps the e-learning team in identifying parameters for design, development, and deployment. The front-end analysis may include technology infrastructure, type of content, level and type of interaction necessary, and learner analysis, as well as cost and delivery options. Learner analysis, in particular, does not occur for every course or module because the designers are often tasked with delivering e-learning courses to the same target audiences. Technology analysis has proved to be very important, especially when dealing with adult learners worldwide. Bandwidth issues, for example, vary and have major implications for accessing e-learning courses. 
The e-learning team said that they sometimes visit sites to get a good picture of technology limitations. The e-learning team also noted that the work setting analysis has shown that the multicultural context of their organization communicates the need to have e-learning courses translated into various languages and to be sensitive to cultural differences. The e-learning team said that they are presently translating some courses into Spanish and hope to add other languages in the near future. The e-learning team said that they use cultural informants from respective countries for advice on cultural issues. As a result of front-end analysis, the e-learning team may choose one of three levels of course design. The first level, the most basic, may include PowerPoint with audio and an assessment activity. In essence, level one has a linear structure where the user goes through a page-turn type of lesson. Level one is usually chosen when the client has an urgent need to get content to employees as quickly as possible. Level two, on the other hand, gives the learner control of the content sequence and presentation. Adult learners are given the chance to navigate through the module, select how they want the content presented, and choose the sequence of their activities. Text is presented in static and dynamic forms, giving adult learners opportunities to access certain links or use graphics, tables, or charts to reinforce concepts. Level two generally contains more in-depth assessment activities, which are tied to performance objectives. In contrast, level one does not necessarily state the performance objectives. Level three represents the most complex type of course design. In addition to having clearly stated performance objectives, which are tied to assessments, this level might include video and audio clips, and some more complex animations that require the learner to interact and make decisions at certain sections of the animations. 
Level three may also include some low-level simulations - for example, where the learner is given procedures to simulate the correct use of a piece of machinery. The simulation would have to meet specifications required in the real world. The e-learning team stipulated that on average they design level two and three type courses. Sometimes they might even use a combination of levels two and three. Designers shared that it could take somewhere between 325 and 425 hours to design level two and three type courses. Designers shared that they recently collaborated with the multimedia development team to identify timeframes for simple to complex animations and simulations, and for creating static graphics to complex tool drawings. In the next six to twelve months, designers will be evaluating projected timeframes for designing and developing level two and three type courses. Course selection is a process driven by product service lines. Product lines refer to the different aspects of the oil field service industry, for example, the logging and perforating service. Once the content area is identified, designers and subject matter experts examine content to ensure that it is relevant to the performance gap, the job, and the target audience. Thus, designers and subject matter experts conduct work analysis to ensure that they have real-life examples and scenarios. Designers shared that learner characteristics such as educational background can also influence the selection of content. The e-learning team added that a technically based module whose target audience includes both high school and college degreed individuals, for example, has to be designed well to engage all types of learners in the learning process. The e-learning team ensures that content presentation is varied to give adult learners preferences on how to access content. The e-learning designers use audio, video, print, animations, and simulations, and in some situations case studies, to present content. 
Animations, in particular, serve to give adult learners visual representations of concepts. Modules that have graphic-intensive simulations are sometimes presented in a hybrid mode because of bandwidth limitations. In the hybrid mode, learners receive their content on a CD and take their assessments through the LMS. Learners have the option of listening to or turning off the audio and of downloading a job aid, which can be a summary of the module. The use of dynamic text allows learners access to internal web pages that are relevant to the respective module. The designers use Flash, Dreamweaver, RoboDemo, and Microsoft Office to assist with content presentation. Eighty percent of the courses are asynchronous and are sometimes the forerunners to classroom learning. Thus, learners are primarily engaged in learner-content interactions. Learner-content interaction may occur by learners choosing their modules, choosing their content presentation options, and answering questions. However, learners may get the opportunity to interact with each other and with instructors via a virtual collaborative tool called Interwise. The tool engages learners and instructor in real time using audio, application sharing, a whiteboard, and a chat room. The instructor can give control to learners; learners in turn can raise their hands and interact with their peers. Assessments give adult learners feedback on their knowledge acquisition. Assessments may include multiple choice, fill in the blank, and matching. The e-learning team shared that while they would like to design dynamic assessments, the LMS does not presently support those capabilities. Assessments are designed to give instant feedback to the learner, especially since learners are expected to achieve 80 percent on most tests. If the 80 percent is not met the first time, learners have the option of retaking the test or returning to the module. 
The LMS allows learners to take the test twice in one sitting; if the 80 percent is missed on the second try, the learners are automatically logged off and need to log in again if they want to retake the test a third time. The e-learning team shared that the order of test questions and answers might change every time the test is taken.

The standards that guide course design comply with the LMS. Standards address HTML, Flash, navigation structures, titles, fonts, sizes, colors, animations, and pop-ups, for example. Keeping the adult learner in mind, the e-learning team also includes performance objectives, overviews, and formative and summative learning checks or assessments. Performance objectives are categorized using Bloom's taxonomy, reflecting simple to more complex learning outcomes.

The e-learning team has also created storyboard screens to help standardize the course design process. Designers mentioned that storyboards allow everyone involved, including subject matter and multimedia experts, to review and revise instantly. This dynamic storyboard screen allows everyone involved to collaborate and to make changes to the course before it goes into development.

Adult learners mostly take their e-learning courses at their desktops. Kiosks and labs are available in some locations. Field employees may access their courses from a truck using a laptop that may be connected via satellite. Designers recognized that bandwidth issues influence where adult learners can access their courses.

Transfer of e-learning resides with the learner and supervisor. The e-learning team presently sees its role as identifying and using instructionally sound instruction methods and ensuring that the material is technically accurate.
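The assessment retake policy described above (an 80 percent pass mark, two attempts per sitting, a forced logoff after the second failure, and reshuffled questions on each attempt) can be sketched as simple logic. This is an illustrative sketch only, not Halliburton's actual LMS code; all function names and data structures here are assumptions.

```python
import copy
import random

PASS_MARK = 80            # percent threshold reported by the e-learning team
ATTEMPTS_PER_SESSION = 2  # the LMS allows two tries per sitting

def build_test(questions):
    """Return a copy of the test with question order and answer-choice
    order shuffled, as the team said may happen on each attempt."""
    shuffled = copy.deepcopy(questions)
    random.shuffle(shuffled)
    for q in shuffled:
        random.shuffle(q["choices"])
    return shuffled

def session_outcome(attempt_scores):
    """Given percent scores for the attempts in one sitting, report the
    outcome: a pass, or a forced logoff after two failed attempts."""
    for n, score in enumerate(attempt_scores[:ATTEMPTS_PER_SESSION], start=1):
        if score >= PASS_MARK:
            return f"passed on attempt {n}"
    return "logged off; log in again for a further attempt"
```

For example, `session_outcome([70, 90])` reports a pass on the second attempt, while `session_outcome([70, 60])` reports the forced logoff that precedes a third try.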
The team stated, however, that it plans to increase its efforts in addressing transfer of learning on the job and return on investment.

In conclusion, the e-learning team at Halliburton is continuously looking for ways to serve the adult learner in the e-learning environment. The value for the adult learner resides in the team's creative use of the instructional design process and of the technology infrastructure that exists worldwide. The e-learning designers are focused on deploying sound learning opportunities that meet clients' and adult learners' needs worldwide.

Case study three: retail chain store

Context

This case describes an e-learning team effort that serves the stores and asset protection sector of a retail store chain, which operates approximately 260 stores in 14 states. The chain is one component of a larger retail empire, which operates more than three distinct retail formats. This retail store chain employs approximately 29,000 employees.

The e-learning initiative began in 2000 and is located within the university of the retail store chain. The e-learning initiative is mainly directed at the executive-level employees within the stores and asset protection sector. Primary reasons for the introduction of e-learning were to reach a geographically dispersed workforce within the United States and to create consistency, quality, and relevance in training design, development, delivery, and results.

Asynchronous, synchronous, and blended formats are used to deliver training. Asynchronous learning focuses on computer-based modules while synchronous learning utilizes virtual classrooms. Blended learning may include a mix of computer-based modules and virtual classroom modules, or face-to-face training with either computer-based modules or virtual classroom modules. A blended solution may have a team of employees taking a computer-based module all at the same time; upon completion, the team gets together to share what they have learned.
Approximately 30 modules have been developed, with about two or three new offerings being created each quarter. While the retail store chain continues to expand its e-learning options, approximately 70 percent of the training continues to be offered face-to-face.

The training and development team comprises four persons. Two members focus solely on e-learning. With an e-learning staff of two, these individuals assume roles in project management, design, creation, execution, and evaluation. Typically, a computer-based module takes between 100 and 300 hours to design and develop; a virtual classroom module, on the other hand, requires about two months. While this e-learning initiative resides within the stores and asset protection sector of the retail store chain, it shares e-learning modules with other sectors and sometimes purchases modules from various vendors.

How are adult learners valued?

With e-learning representing approximately 30 percent of the training function and with the development team being relatively small (two persons), this case provides an overview of how the adult learner is valued within an emergent e-learning initiative.

Front-end analysis is conducted to ensure that training is relevant, meaningful, and authentic. Performance problem analysis, for example, ensures that the performance gap is linked with training content. Once the training link is identified, e-learning designers analyze further to identify whether the training should be delivered through face-to-face training or e-learning.

If e-learning is decided upon, then further analysis occurs to decide whether the medium will be computer-based, virtual classroom, or a blended solution. Learner analysis also occurs to ensure that adult learners' characteristics are considered in the module design and development.
Given that the target audience is usually executives, e-learning designers may conduct mini learner analyses when introducing a new content category.

Once the performance problem has been aligned with training, content selection is directed either through a partnership with senior managers and/or employees in the field or through a review of new processes or procedures. Subject matter experts focus on the target audience, the performance problem, and the respective environments to identify relevant content and the most appropriate learning activities given the selected medium of delivery.

Content presentation occurs through text, graphics, video, and audio. Most computer-based modules use a combination of these mediums. Blended solutions use more video than computer-based modules because of bandwidth issues. Designers use Flash, Fireworks, Trainersoft, RoboDemo, and Authorware to present content. At present, adult learners cannot personalize the content presentation. The e-learning team is aware of this limitation and is exploring options to give learners more control of content presentation.

With learning occurring asynchronously, synchronously, and in blended formats, adult learners can engage in learner-content, learner-learner, and learner-instructor interactions. Learner-content interaction occurs mainly with asynchronous e-learning modules. Learners usually interact via simulations, games, or quizzes. These types of activities can require learners to analyze, synthesize, and evaluate. Learner-content interaction also occurs while listening to the audio portion of the module or watching a video. E-learning designers ensure that adult learners have various types of interactive activities because they want learners to be able to engage in a preferred type of activity. Learner-learner and instructor-learner interactions occur via the virtual classroom. Virtual classroom modules are chosen primarily because they can meet the just-in-time needs of business units.
Virtual classrooms have live instructors who communicate with the learners using a radio talk show format. Learners are also able to interact with scenarios and questions posed by the instructor. Adult learners, in turn, can use the chat space to communicate among themselves and with the instructors.

With e-learning being an emergent initiative, the e-learning designers do not presently have an LMS. The designers, however, are in the process of developing an e-learning template with consistent module design features. The template will reflect the retail store chain's university campus. The template will have links to the various colleges, for example, the college of logistics, which will encompass all logistics-related modules. The template for each module, on the other hand, will include nine major sections: performance objectives, module overview, content presentation, assessment, journal tool, tip cards, glossary, help tool, and a library. The e-learning team shared that this template will give adult learners more opportunities to access their preferred learning activities and tools. In keeping with adult learning principles, specifically attention span, e-learning designers are designing modules that take no more than 30 minutes to complete. Lastly, as the designers work towards developing their template, they are conducting usability testing for every module that is released. Specifically, designers are evaluating how employees navigate, access, and interact with the content. Designers said that listening to adult learners is important if they are to deliver a learner-centered e-learning experience.

The journal, one of the module features, will be a tool that can assist the transfer of learning to the job. Designers expect that learners will use the journal to make personal notes about their learning, specifically to write what they would like to share with their peers and managers upon returning to the job.
Presently, the direct supervisor is responsible for ensuring that learning is transferred to the job. The just-in-time delivery of computer-based and virtual classroom modules also assists learners in transferring their learning to the job because the modules are based on just-in-time needs.

Assessments occur at the formative and summative levels. At the formative level, learners may be asked to answer questions as they proceed through the module. At the summative level, if assessments will not be tracked, learners may take a self-assessment in the form of a game such as Jeopardy or Who Wants to Be a Millionaire. If the summative assessment will be tracked, learners receive a reaction survey and an assessment that may include multiple choice, fill in the blanks, true or false, and matching type questions. In the absence of a formal LMS, the e-learning designers contract database services to collect and access data from the reaction surveys and assessments. Learners usually receive instant feedback on assessments and can return to the assessment or module for clarification. Instant feedback is not given, however, when assessments are for certification. Performance on certification assessments is communicated directly to the learner or to the respective managers or supervisors.

Adult learners can access modules only at the stores. Every store has approximately three computers; one computer is dedicated to learning. Modules are delivered to the computers via an executable package. Employees, individually or in groups of two or three, can take a module at the same time. Introducing e-learning at the stores has been challenging because of the customer-driven environment.
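The summative-assessment routing described above (untracked self-assessment games, tracked surveys plus assessments, and certification results withheld from instant feedback) can be expressed as a small decision function. This is a hypothetical sketch based on the case description, not the retailer's actual system; the names and return values are assumptions.

```python
def summative_assessment(tracked: bool, certification: bool = False) -> dict:
    """Route a summative assessment per the retail chain case description."""
    if not tracked:
        # Untracked assessments are self-assessments delivered as games.
        return {"form": "game self-assessment",
                "examples": ["Jeopardy-style", "Millionaire-style"],
                "instant_feedback": True}
    if certification:
        # Certification results bypass instant feedback and are reported
        # directly to the learner or to managers/supervisors.
        return {"form": "reaction survey + assessment",
                "instant_feedback": False,
                "results_to": ["learner", "manager/supervisor"]}
    # Tracked, non-certification: contracted database services collect
    # the survey and assessment data; learners get instant feedback.
    return {"form": "reaction survey + assessment",
            "instant_feedback": True,
            "results_to": ["contracted database service"]}
```

A tracked certification assessment, for instance, returns `instant_feedback: False`, matching the case's note that certification performance is communicated afterwards rather than on screen.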
The e-learning team shared, however, that they are presently exploring how to make learning seamless at the store level; one possibility could be the introduction of kiosks.

In conclusion, though the e-learning team at the retail store chain faces human resource, time, money, and bandwidth challenges, they are highly motivated and confident that their e-learning initiative will only improve in serving the adult learner. The designers highly anticipate the completion of the template because it will serve to decrease their design and development time. In addition, the e-learning designers are hoping to cross-train the other two members of the training and development team on e-learning processes such as meeting with clients, analyzing the performance problem, setting objectives, and getting the initial content layout using the instructional design process. This type of expertise could allow the existing designers to focus on coding and development. Although emergent, the e-learning initiative at the retail store chain is progressing in its mission to value the adult learner.

Case study four: HP services workforce development, HP

Context

Hewlett Packard (HP) is a technology solutions provider to consumers, businesses, and institutions globally. The company's offerings span IT infrastructure, personal computing and access devices, global services, and imaging and printing for consumers, enterprises, and small and medium businesses. With approximately 140,000 employees worldwide, HP serves more than one billion customers in more than 160 countries. While organizationally HP accomplishes this by focusing on multiple business organizations, this case examines only the HP Services Group (HPS), which consists of approximately 60,000 employees. It is recognized that e-learning initiatives exist in multiple organizations and groups within HP; they are beyond the scope of this case.
Within HPS, a designated organization exists to develop, design, and deliver learning solutions to this business unit. The name of this organization is HPS Workforce Development (HPS WD).

The e-learning initiative within HPS began in 1999. The HPS WD team has created and monitored approximately 400,000 learning incidents in a recent six-month period. Approximately 85 percent of training is delivered electronically. Reasons cited for the introduction of e-learning into the group include cost efficiency; accessibility for learners, especially remote learners and those on customer sites; and time efficiency, in that learners in the field can train where they are, without having to travel to a classroom setting.

Both synchronous and asynchronous formats are used. Self-paced modules, both those created "in-house" and those purchased from vendors, are available to learners for access at a time and place that meets their scheduling needs. Required company training, such as business conduct, is an example of content that is delivered by asynchronous, self-paced methods. Virtual classroom environments and virtual labs can be accessed remotely for synchronous and/or asynchronous experiences. Virtual labs have successfully been used for technical training.

HPS e-learning courses are organized into portfolios. The main portfolio contains 1,800 self-paced and instructor-based courses. An additional 200 courses bring the total number of courses to 2,000. Portfolios include business conduct, technical, legal, and professional skills such as consulting and project management. A virtual team comprised of curriculum developers, learners, delivery professionals, and HP businesses works together and interfaces with product divisions to prioritize course offerings and to develop content.
Specifically, the development team is a global organization of about 80 program managers plus developers and designers (about 140 people in total) whose task it is to decide format, create content, develop courses, and evaluate feedback for the updating of courses. Increasingly, the outsourcing of course creation is being explored and utilized. As of 1 May 2004, most HPS employees had taken multiple e-learning courses. In general, while HPS WD does share courses across HP, it does not sell its learning packages to outside entities, with the exception of customer services training solutions.

HPS WD employs a business performance consulting model. To help learning consultants within HPS apply the business performance model, the corporate workforce development team at HP introduced an electronic performance support system to help teams become aware of similarities among performance issues and the types of training that they are providing. As a result of this awareness, teams are combining their strengths and discovering synergies in their training efforts.

How are adult learners valued?

The adult learner is an important factor in the design of e-training within HPS. E-training includes web-based courses, virtual classrooms, webinars, webcasts, and remote labs. In addition to e-training, the e-learning designers within HPS WD also incorporate e-learning, which includes all knowledge management/sharing types of learning activities. For the purpose of this case, e-training refers to all e-learning courses, while e-learning refers to all knowledge management/sharing types of activities. Both e-training and e-learning give adult learners synchronous and asynchronous learning opportunities.

To meet adult learners' needs, the e-learning designers within HPS WD conduct a front-end analysis on several parameters to assist with design and development choices. Cost and time are primary factors, as they affect the type of learning solution that can be developed.
Size of audience also influences delivery options. Location of audience is critical, because a worldwide audience has different implications for design and delivery than a specific audience within a country. Lastly, equipment constraints in the target audiences' locations are considered in design and development decisions. Once all the constraints are taken into account, the e-learning team ensures that the adult learner receives the best learning solution.

Content selection for e-training resides within a business performance consulting model. This model tightens the relevance and meaningfulness of content because training is provided based on short- and long-term performance needs. This performance approach to content selection gives adult learners learning opportunities that are tied to individual, group, and organizational performance. In addition to identifying performance-driven content, adult learners have the opportunity to select which components of the training solution are appropriate for them. For example, sales, support, and service employees may take different components within the same training solution.

Having a technology-enabled environment gives the e-learning team the capability to present content using visuals, audio, and video. Content presentation usually represents a mix of highly interactive solutions that give the adult learner the opportunity to choose their preferred medium for content presentation. In some cases, especially with customer training solutions, e-training courses may have a prompter that supports several languages. The language prompter gives the adult learner an additional opportunity to choose their preferred language of instruction.

Again, because of a strong technology infrastructure, learning activities within the web-based courses are varied. Simulations, games, knowledge checks, and quizzes are some activities that are frequently used in web-based courses.
The purpose of using varied learning activities is to give adult learners an opportunity to listen, interact, and play. Combining different learning strategies also helps the e-learning designers target a broader audience. Business games, for example, are presently used in very specific and strategic areas, because they are a significant investment to build. Business games can teach how to pursue and close major opportunities with major clients by simulating various scenarios for the learners. Teams of employees whose members have specific roles play these games. It must be mentioned, however, that developing these highly interactive courses takes time, and that not all courses are developed with the same level of interactivity. In essence, e-learning designers must find the balance between what they can do in the ideal world and what they can do in the practical world.

While web-based courses mainly focus on learner-content interaction, HP uses other mediums to enhance learner-learner and learner-instructor interactions. Virtual classrooms are used for small groups of people to maximize instructor-learner interaction. In the virtual classroom, learners can raise their hand if they want to ask a question or if they want to comment on or answer a question. In addition, the virtual classroom has a group chat feature, which can be categorized as private or public.

Learners can engage in learner-learner or instructor-learner interaction via the chat space. The virtual classroom allows learners to share applications, use a whiteboard, or poll learners' reactions to different questions or issues. Virtual classrooms can be used to poll learners' satisfaction and measure their level of knowledge acquisition. Learners can draw, paint, and, on the more humorous side, even throw tomatoes if they did not like the instructor or give their instructor apples if they enjoyed the learning experience.
Webinars and webcasts, on the other hand, are used for large audiences, but very little interaction between the instructor and the audience occurs; learners listen only. In addition to virtual classrooms, webinars, and webcasts, HP is also using remote virtual labs that are sometimes instructor driven, sometimes self-directed, or are used for practice or application of learning.

The blending of e-training and e-learning gives adult learners more opportunities for interaction and learning. The HPS WD group has a robust knowledge management strategy that fosters e-learning through different forms of peer-to-peer learning, one form being communities of practice. Commercially available collaborative tools such as NetMeeting, e-rooms, and instant messenger help to support peer-to-peer learning. HPS WD has a formal program called "Professions", which is a structured manner of organizing communities. Through communities of practice, the e-learning designers organize best practice training sessions, which are sometimes delivered using virtual classrooms. Another form of peer-to-peer learning is online mentoring and coaching. Formerly, coaching was offered at the executive level, but presently there is informal coaching at all levels of the organization. Discussion forums, another form of peer-to-peer learning, are also used extensively. Lastly, the formal technical career path is a corporate-wide program that aims to offer individual contributors a virtual environment where they are valued and where they can strengthen their technological skills. To facilitate peer-to-peer learning, adult learners have access to knowledge management systems that are used to create, share, and reuse information regularly.
Documentation of white papers, which HPS WD calls knowledge briefs, is a common practice among the group members.

To reduce frustration and to help meet preferred learning styles among adult learners, the e-learning designers are creating development roadmaps for employees that include both virtual and face-to-face learning opportunities. Combining virtual and face-to-face learning opportunities prevents HP from becoming a purely virtual training environment and gives the adult learner a chance to network during face-to-face classes while meeting their learning needs anytime and anywhere.

Most adult learners around the world use their desktops as their primary location for taking their e-training courses. Some employees in the United States take the courses from their homes, as many are telecommuters. E-learning designers are starting to create learning rooms in major offices around the world to give learners an opportunity to isolate themselves from day-to-day business pressures in order to concentrate on their learning.

Pre- and post-test assessments are frequently used in the e-training courses, especially since several of the training solutions are formal certification tracks. In reference to Kirkpatrick's four levels (reaction, learning, transfer, and results), e-learning designers mostly apply levels one and two. Level one, reaction, occurs via the customer satisfaction survey. Level two, learning, occurs through final tests or knowledge checks. Final tests and knowledge checks may include open-ended questions, multiple-choice questions, and true or false questions, for example. Level three, transfer, and level four, results, are done on a case-by-case basis because they are major undertakings and can be very costly.

To help with learner-assessment interaction, e-learning designers are starting to use interactive software that supports, for example, drag-and-drop boxes and selecting an area on a picture.
As with high-end interactivity courses, however, such features depend on cost and time constraints in design, development, and deployment.

In conclusion, the e-learning team within HPS WD is challenged to reach adult learners worldwide via e-training and e-learning. The team of designers is pushing ahead with the help of a wide variety of technology tools. The designers, though, are quick to point out that front-end analyses identify which of the available technologies are relevant for the design, development, and deployment of a given learning solution. Nevertheless, these e-learning designers are doing their best to value the adult learner within the e-training and e-learning environments.

E-learning is a valuable training and development solution for many companies. Unlike academic environments (Johnson and Aragon, 2003), very little is known about how the adult learner is valued in e-learning within corporate settings. This study explored the Waight and Stewart conceptual model, which posits that valuing the adult learner in e-learning within corporate settings depends on the interdependence of championing factors, antecedents, and moderators for the achievement of engagement, learning, and transfer. Table I provides an overview of how these e-learning teams played out in terms of championing factors, antecedents, moderators, and outcomes. The comparative analysis was made on the incidence of occurrence and not on the extent of occurrence. More research needs to be conducted to identify the extent of occurrence among all the factors.

The comparative analysis showed that all e-learning teams had leadership, a learning culture, technology infrastructure, and finance championing their efforts. The analysis also showed that all e-learning teams employed all five antecedents, which are needs assessment, learner analysis, work analysis, work setting analysis, and content analysis, to help provide the most meaningful learning experience to the adult learner.
In reference to moderators, the e-learning teams shared no incidence of return on investment. Incidence of the use of learning theories, technology skills, and creativity, however, was provided. Lastly, all e-learning teams cited incidences of learner engagement and learning. Only one e-learning team referred to transfer.

While there were varying degrees of occurrence among the championing factors, antecedents, moderators, and outcomes, this study provides basic confirmation that the adult learner is valued in e-learning within corporate settings. Realizing that there were opportunities and constraints for each e-learning team, the four cases provide good insight into the energy and creativity that e-learning teams are employing to create sound learning experiences for adult learners. Further research is needed to establish the extent of implementation among the individual factors of the conceptual model (Figure 1). In addition, verification of this conceptual model is needed with more companies. Opportunities exist to explore how this conceptual model is being employed worldwide. Overall, these four case studies reveal that adult learners are valued in e-learning within the four corporate settings. It can be said that these e-learning teams are progressively improving to provide the adult learner the best e-learning experience. These case studies also show that e-learning teams have strong competencies in instructional design, learning theories, and technology and that they are operating within companies that support their efforts in creating instructionally sound e-learning experiences.

Figure 1 Valuing the adult learner in e-learning within corporate settings

Table I Comparative analysis of the conceptual model and four case studies
- Four case studies emerged from data collection and revealed that adult learners are being valued and supported in corporate e-learning settings. A comparative analysis of the case studies with the Waight and Stewart conceptual model showed that the e-learning teams are complying with all factors with the exception of transfer and return on investment.
[SECTION: Value] Part one of this paper presented the Waight and Stewart conceptual model (Figure 1) on valuing the adult learner in e-learning within corporate settings. The purpose of this paper is to present how four e-learning teams in four companies are valuing the adult learner in e-learning. First, the methodology that guided the case studies is presented. Second, the four case studies are shared. Third, a discussion of the four case studies and their relationship with the conceptual model is provided. Lastly, conclusions are drawn based on the case studies and discussion.

The conceptual model reflects companies where support for e-learning via supportive leadership, learning cultures, technology infrastructure, and finance is apparent. In addition, the model indicates analyses such as needs assessment, learner, work, work setting, and content analyses as viable processes that, if implemented well, can assist in providing the adult learner targeted and meaningful learning opportunities.

The model shows that antecedents start the value process, but knowledge and skills in return on investment, learning theories, technology, and creativity add to the value for the adult learner. The model further indicates that when championing factors, antecedents, and moderators are adhered to, outcomes such as engagement, learning, and transfer are possible.

Research design

The method used in this research was a case study. Yin (2003) stated that a case study is an empirical inquiry that investigates a contemporary phenomenon within its real-life context. The purpose of the study was to investigate how the adult learner is valued within the e-learning environment. The following questions guided data collection:

1. What is the e-learning context in your organization?
2. How is the adult learner valued in the e-learning environment?
3.
What considerations must be addressed when valuing the adult learner in e-learning environments within corporate settings?

Sample

The sample for this study was nine e-learning designers whose Fortune 500 companies had had active e-learning initiatives for a minimum of four years. These companies represented the retail, insurance, oil, and technology industries. Selection of companies occurred through an informational telephone interview with e-learning designers. During the interview, the researchers explained the purpose of the study and asked whether the companies had an e-learning initiative. All e-learning designers within the four companies that were initially contacted had active e-learning initiatives and expressed their willingness to take part in the study. Overall, there were three e-learning team leaders and six instructional designers. The number of interviewees per company ranged from one to three.

Data collection

Semi-structured telephone interviews were conducted with e-learning representatives of each company. All participants received the study proposal, which included the interview questions, via e-mail. Before each telephone interview was conducted, the researchers asked for permission to record the interview. All participants agreed to be recorded. All the interviews were transcribed.

Instrumentation

The interview guide included three major sections that paralleled the study's research questions: context, the adult learner in the e-learning environment, and e-learning considerations in the corporate world. The researchers tested the interview guide with two of the e-learning participants. These participants were included in the study. The researchers conducted a preliminary content analysis to ensure that the research questions' intent was being targeted.
The pilot test revealed that all the questions were relevant.

Data analysis

Participants' descriptive responses to the interview questions were read and categorized by research question. Each research question contained a main premise, which the researcher used to reduce large amounts of data into a smaller number of analytic units or themes. After analyzing each participant's response for each research question, the researcher color-coded the themes. Upon identifying the recurrent themes, the researcher created descriptive tables for each theme and in this way identified data that supported each theme. Upon completing the analysis, cases were written for each company. Cases were sent to the respective interviewees for their review and feedback. All interviewees provided valuable feedback, and the cases were revised accordingly. The following are four case studies on how the adult learner is valued in e-learning within corporate settings. These cases represent the efforts of e-learning teams within specific divisions of their companies. Of the four companies, only two approved having their names mentioned in the case studies.

Case study one: insurance company

Context

A large personal lines insurer has been a successful implementer of e-learning. The development of e-learning began in 1995 with computer-based technology courses. These early courses were designed for agent training and led to the subsequent development of web-based courses. The learners tracked by the learning management system (LMS) include employees, contract agents, and their staffs. In 2003, more than 60,000 learners completed at least one course online. Typically, learners are positive about their e-learning experiences. The results of an annual enterprise-wide opinion survey indicated that 78 percent were satisfied or completely satisfied with their e-learning courses. Originally, the drive to introduce e-learning was precipitated by the geographic diversity of learners.
In addition to cost, the need for and value of learning quickly became primary drivers of e-learning growth. Predominantly, e-learning is an asynchronous experience. Less than 10 percent of web-based training is blended with instructor-led training. Presently, virtual classrooms are under consideration for possible addition to the e-learning experience.

This company currently tracks over 4,000 learning activities including courses, assessments, workshops, and class registrations. There are approximately 700 enterprise-wide online courses. These include technology courses such as Microsoft Office; insurance product knowledge courses; strategy courses covering topics such as company history, how the company makes money, and corporate ethics; process courses such as billing and computer applications; and soft skills courses. The team charged with the creation of e-learning sits within the human resources function. This team delivers enterprise-wide e-learning courses while business units deliver business-specific e-learning courses.

How are adult learners valued?

The adult learner is an important consideration for the e-learning designers. This consideration, however, sits within a realization of constraints and opportunities. First, e-learning designers are aware that bandwidth constrains the type and application of animations and audio clips. Consequently, their use of video clips in e-learning courses is minimal. Second, e-learning designers are aware of the limits of their expertise and realize that they need additional talent in two- and three-dimensional simulations.

Despite these constraints, the e-learning designers have also been able to take advantage of design and production opportunities. The use of standardized templates has improved how e-learning designers create courses, reducing the design and production time for a typical course from 40 to 15 days.
The e-learning designers recognized, however, that interviewing subject matter experts and writing scripts are the processes that take the most time, and there has been only limited help from technology for these parts of the course design and development process. Considering their constraints and opportunities, the e-learning designers identified a seven-dimension strategy to value the adult learner. The following describes the seven dimensions.

Realizing that the e-learning group mainly serves enterprise-wide e-learning needs, content selection is driven by a steering committee that keeps a pulse on learning needs that are strategic for individual, group, and organizational performance. The committee has representation from the finance, marketing, and HR functions and recently provided content advice on the economics of the business and compliance policies. This steering committee assists the e-learning team in valuing the adult learner by advising on content that is relevant and meaningful to the operations of the business.

In reference to content sequence, e-learning courses may follow an information-assessment sequence, an assessment-information sequence, or an individualized option. All three sequences give adult learners an opportunity to process information differently and engage a preferred learning style. The information-assessment sequence has learners process information before completing the assessment, whereas the assessment-information sequence gives learners the opportunity to identify their knowledge gap and then target the specific knowledge they may need. Instructional designers stated that the latter sequence works well with product-related courses.

Adult learners also get the opportunity to structure their courses. Learners are able to decide how they want to see things - for example, learners may choose to be asked questions throughout the course or only at the beginning and/or the end.
Learners may also choose the sequence of their learning activities. To avoid information overload, most modules are usually 15 minutes or less.

Adult learners also get an opportunity to make choices on content presentation. Typically, courses include text and audio portions and graphics that enhance the content delivery. Transcripts of the audio portion are also made available for anyone with hearing disabilities and for those who choose not to listen to the audio. Video is rarely used because of bandwidth constraints, although a short video clip of the chairman introducing a new set of courses is not uncommon.

Interaction focuses mainly on learner-content interaction. Adult learners get the chance to interact with content after every three or four screens, a design standard that applies to all courses. This interaction may include answering a question or completing a matching activity, such as matching definitions with terms. Learner-content interaction also occurs through problem-solving scenarios, in which learners diagnose a scenario and choose among several solutions, and through games.

Value for the adult learner is supported by design standards. In addition to enforcing interaction after three or four screens, the e-learning designers developed five sets of templates for their courses to standardize features and allow adult learners easy navigation. The choice of template depends on the content under design. Policy compliance courses, for example, use the templates with the office theme: the template background is typically a hallway, desks, or the cafeteria, and people appear graphically and ask questions in the context of an office. All templates also have navigation, a glossary, and an area where links to other web sites are posted.
All these components appear at the same locations on the screens in all five sets of templates.

Assessment is another component through which the adult learner is valued. Pre- and post-assessment occurs in about 20 percent of the courses, with about 80 percent having only post-assessment. Assessment techniques may include true-or-false, matching, multiple-choice, and fill-in-the-blank questions and problem-solving scenarios. Assessment plays a crucial role in learners' performance on the job, especially since the e-learning courses are tied to strategic organizational performance. If learners do not pass the test the first time, they retake the test, and parts of the course if necessary, to meet the passing criteria. Last year, for example, all learners selling insurance had to take the "do not call list" course. Every learner was tracked, and anyone who did not take or did not pass the test could not make unsolicited calls.

Instant feedback, a component of the assessments, helps adult learners immediately focus on their knowledge gaps and respond appropriately, for example by reviewing the module before retaking the assessment.

Transfer of e-learning to the job has been tracked for only a few courses. The e-learning team, however, is making transfer of learning on the job one of their core competencies and is presently exploring various transfer assessment tools. At present, assessing transfer occurs through a 30- and 90-day follow-up with the learner and the manager, which seeks to find out what learners have learned and applied on the job. Last year, transfer was tracked for 12 courses. Transfer is presently influenced by managers' involvement with the e-learning process. Transfer for the billing course was tracked by recording how learners answered billing questions and whether they had to transfer questions to a more experienced person.
Results showed that the number of questions transferred to an expert had decreased, and managers were quite happy.

In conclusion, the e-learning group is focused on providing enterprise-wide e-learning courses. Though bandwidth limitations and a lack of skills to develop high-end simulations are present, the team keeps the adult learner at the forefront of their design by ensuring that content is relevant and meaningful. Meaningfulness is achieved by working with the steering committee and subject matter experts to identify content and capture the best scenarios relevant to the organization. Meaningfulness is also built into the e-learning design through templates whose themes relate to company infrastructure and culture. Content sequence and presentation give learners a locus of control, whereas learner-content interaction, standards, and assessments encourage learners to interact with and process information in various ways. Enforcing interaction after three or four screens and limiting modules to 15 minutes or less help prevent information overload. Overall, this e-learning team has been proactive in valuing the adult learner in their e-learning design.

Case study two: energy services group, Halliburton

Context

Halliburton is one of the world's largest providers of products and services to the oil and gas industries. Halliburton employs more than 100,000 people in over 120 countries. Halliburton's Energy Services Group consists of four business segments: drilling and formation evaluation, fluids, production optimization, and landmark and other energy services. The second group is the engineering and construction group, known as Kellogg Brown & Root. This mini-case focuses only on the Energy Services Group.

For Halliburton's Energy Services Group, e-learning is a company-wide initiative introduced in 2000. The driving forces influencing its initiation included a desire to reduce costs and increase access.
The travel cost for both learners and instructors inherent in instructor-led training was one cost reduction factor. Additionally, e-learning was viewed as a means to successfully reach learners worldwide.

Most of the e-learning opportunities are offered asynchronously. Learning modules are available through the LMS and can be accessed by learners at any time. Additionally, some instructor-led courses have a blended format in which learners may access materials to prepare for the course and/or may use e-learning assessment tools. In some cases synchronous approaches are used for collaborative and communication purposes: learners and instructors can talk to each other and share documents and applications.

In many cases e-learning can be accessed at any time. This, however, can be affected by technology-related constraints. Bandwidth-related issues are a constraint, especially in specific geographic areas such as West Africa. Additional access issues may be as simple as the presence of a computer and appropriate network connections. In some cases, employees in remote areas use a satellite office equipped with workstations. Access from home is possible for some employees, but it is riddled with home equipment, firewall, and security issues. Currently, selected employees use virtual private networks (VPNs) to connect from their homes. The e-learning team hopes to increase access locations by distributing laptop computers and providing kiosks at specific locations.

Halliburton's LMS includes courses developed internally as well as those purchased from outside vendors. About 1,000 courses developed by external entities and more than 140 (mostly technical) internally developed courses are available. Typically, the duration of a course is about one hour. Internally developed courses include technical content for subject matter such as drilling, mud, hardware, and exploration technology.
Additional non-technical content includes areas such as legal issues, business conduct, finance, procurement, human resources, and leadership. Purchases of externally created courses in areas such as leadership and management are driven by business units' needs. A recent contract secured the availability of a catalogue of "soft skill" training offered in a number of formats including simulations, online books, quick reference materials, and job aids. While Halliburton purchases online training from vendors, it does not sell internally developed e-learning courses.

A team of five instructional designers, four multimedia specialists, and one technical publishing specialist manages the e-learning initiative. Together, they serve more than 35,000 employees of the Energy Services Group. For 2002, approximately 279,500 course completions were recorded, an average of about eight courses per employee. The e-learning initiative is part of the human resource development function and is termed Halliburton University.

How are adult learners valued?

The e-learning team at Halliburton has made great strides to design e-learning courses that value the adult learner. These strides, however, reside within constraints and opportunities. The first challenge is response time to clients, given a small staff of about ten designers for approximately 35,000 employees. Designers said that their roles entail consulting with product service lines on their learning needs, meeting clients' requests, and reviewing possible third-party courses for compatibility with the LMS standards. While having a small staff affects response time, designers shared that the LMS has helped them to deliver course content and track learners' access and performance. Designers shared that business units' valuing and support of e-learning has helped them value the adult learner.
In addition, Halliburton's initiative of identifying competencies for each job role has assisted the designers in identifying relevant content for targeted audiences.

The e-learning team shared that they apply adult learning principles to the e-learning courses by adhering to the ADDIE model. The instructional design model, ADDIE, includes analysis, design, development, implementation, and evaluation. Specifically, analysis can include performance problem analysis, needs assessment, and goal, work, learner, work setting, and content analyses. At present, the e-learning team is refining the instructional design process to foster consistency in the application of ISD among all team members. The e-learning team said that even though they are an in-house development team, internal client groups fund their design and development time, and this in itself underlines the need to give clients the best service for their money. Front-end analysis, and more inclusively the complete instructional design process, helps to value the adult learner by providing the most appropriate and relevant learning solution.

The e-learning team employs front-end analysis procedures to understand the adult learners and their needs. Understanding the adult learners' needs helps the e-learning team identify parameters for design, development, and deployment. The front-end analysis may cover technology infrastructure, type of content, level and type of interaction necessary, learner analysis, and cost and delivery options. Learner analysis, specifically, does not occur for every course or module because the designers are often tasked with delivering e-learning courses to the same target audiences. Technology analysis has proved to be very important, especially when dealing with adult learners worldwide. Bandwidth issues, for example, vary and have major implications for accessing e-learning courses.
The e-learning team said that they sometimes visit sites to get a good picture of technology limitations. The e-learning team also noted that work setting analysis has shown that the multicultural context of their organization communicates the need to have e-learning courses translated into various languages and to be sensitive to cultural differences. The e-learning team said that they are presently translating some courses into Spanish and hope to add other languages in the near future. The team uses cultural informants from the respective countries for advice on cultural issues.

As a result of front-end analysis, the e-learning team may choose one of three levels of course design. The first level, the most basic, may include PowerPoint with audio and an assessment activity. In essence, level one has a linear structure in which the user goes through a page-turn type of lesson. Level one is usually chosen when the client has an urgent need to get content to employees as quickly as possible. Level two, on the other hand, gives the learner control of the content sequence and presentation. Adult learners are given the chance to navigate through the module, select how they want the content presented, and choose the sequence of their activities. Text is presented in static and dynamic forms, giving adult learners opportunities to access certain links or use graphics, tables, or charts to reinforce concepts. Level two generally contains more in-depth assessment activities, which are tied to performance objectives. In contrast, level one does not necessarily state the performance objectives. Level three represents the most complex type of course design. In addition to having clearly stated performance objectives tied to assessments, this level might include video and audio clips and more complex animations that require the learner to interact and make decisions at certain points in the animations.
Level three may also include some low-level simulations, for example, where the learner is given procedures to simulate the correct use of a piece of machinery. The simulation has to meet specifications required in the real world.

The e-learning team stipulated that on average they design level two and three type courses, and sometimes a combination of levels two and three. Designers shared that it could take somewhere between 325 and 425 hours to design level two and three type courses. Designers shared that they recently collaborated with the multimedia development team to identify timeframes for simple to complex animations and simulations and for creating static graphics to complex tool drawings. In the next six to twelve months, designers will be evaluating projected timeframes for designing and developing level two and three type courses.

Course selection is a process driven by product service lines. Product lines refer to the different aspects of the oil field service industry, for example, the logging and perforating service. Once the content area is identified, designers and subject matter experts examine content to ensure that it is relevant to the performance gap, the job, and the target audience. Thus, designers and subject matter experts conduct work analysis to ensure that they have real-life examples and scenarios. Designers shared that learners' characteristics, such as educational background, can also influence the selection of content. The e-learning team added that a technically based module targeting both high-school- and college-educated individuals, for example, has to be designed well to engage all types of learners in the learning process.

The e-learning team ensures that content presentation is varied to give adult learners preferences on how to access content. The e-learning designers use audio, video, print, animations, simulations, and in some situations case studies to present content.
Animations, in particular, serve to give adult learners visual representations of concepts. Modules that have graphics-intensive simulations are sometimes presented in a hybrid format because of bandwidth limitations: learners receive their content on a CD and take their assessments through the LMS. Learners have the option of listening to or turning off the audio and of downloading a job aid, which can be a summary of the module. The use of dynamic text allows learners access to internal web pages that are relevant to the respective module. The designers use Flash, Dreamweaver, RoboDemo, and Microsoft Office to assist with content presentation.

Eighty percent of the courses are asynchronous and are sometimes forerunners to classroom learning. Thus, learners are primarily engaged in learner-content interactions, which may occur through learners choosing their modules, choosing their content presentation options, and answering questions. However, learners may get the opportunity to interact with each other and with instructors via a virtual collaboration tool called Interwise. The tool engages learners and instructor in real time using audio, application sharing, a whiteboard, and a chat room. The instructor can give control to learners; learners in turn can raise their hands and interact with their peers.

Assessments give adult learners feedback on their knowledge acquisition. Assessments may include multiple-choice, fill-in-the-blank, and matching questions. The e-learning team shared that while they would like to design dynamic assessments, the LMS does not presently support those capabilities. Assessments are designed to give instant feedback to the learner, especially since learners are expected to achieve a score of 80 percent on most tests. If the 80 percent is not met the first time, learners have the option of retaking the test or returning to the module.
The LMS allows learners to take the test twice in one sitting; if the 80 percent is missed on the second try, learners are automatically logged off and need to log in again if they want to retake the test a third time. The e-learning team shared that the order of test questions and answers might change every time the test is taken.

The standards that guide course design comply with the LMS. Standards cover, for example, HTML, Flash, navigation structures, titles, fonts, sizes, colors, animations, and pop-ups. Keeping the adult learner in mind, the e-learning team also includes performance objectives, overviews, and formative and summative learning checks or assessments. Performance objectives are categorized using Bloom's taxonomy, reflecting simple to more complex learning outcomes.

The e-learning team has also created storyboard screens to help standardize the course design process. Designers mentioned that storyboards allow everyone involved, including subject matter and multimedia experts, to review and revise instantly. This dynamic storyboard screen allows everyone involved to collaborate and to make changes to the course before it goes into development.

Adult learners mostly take their e-learning courses at their desktops. Kiosks and labs are available in some locations. Field employees may access their courses from a truck using a laptop that may be connected via satellite. Designers recognized that bandwidth issues influence where adult learners can access their courses.

Transfer of e-learning resides with the learner and supervisor. The e-learning team presently sees their role as identifying and using instructionally sound methods and ensuring that the material is technically accurate.
The team stated, however, that they plan to increase their efforts in addressing transfer of learning on the job and return on investment.

In conclusion, the e-learning team at Halliburton is continuously looking for ways to serve the adult learner in the e-learning environment. The value for the adult learner resides in their creative application of the instructional design process and of the technology infrastructure that exists worldwide. The e-learning designers are focused on deploying sound learning opportunities that meet clients' and adult learners' needs worldwide.

Case study three: retail chain store

Context

This case describes an e-learning team effort that serves the stores and asset protection sector of a retail store chain, which operates approximately 260 stores in 14 states. The chain is one component of a larger retail empire, which operates more than three distinct retail formats. This retail store chain employs approximately 29,000 employees.

The e-learning initiative began in 2000 and is located within the university of the retail store chain. The e-learning initiative is mainly directed at executive-level employees within the stores and asset protection sector. Primary reasons for the introduction of e-learning were to reach a geographically dispersed workforce within the United States and to create consistency, quality, and relevance in training design, development, delivery, and results.

Asynchronous, synchronous, and blended formats are used to deliver training. Asynchronous learning focuses on computer-based modules while synchronous learning utilizes virtual classrooms. Blended learning may include a mix of computer-based modules and virtual classroom modules, or face-to-face training with either computer-based modules or virtual classroom modules. A blended solution may have a team of employees take a computer-based module at the same time; upon completion, the team gets together to share what they have learned.
Approximately 30 modules have been developed, with about two or three new offerings created each quarter. While the retail store chain continues to expand its e-learning options, approximately 70 percent of the training continues to be offered face-to-face.

The training and development team comprises four people, two of whom focus solely on e-learning. With an e-learning staff of two, these individuals assume roles in project management, design, creation, execution, and evaluation. Typically, a computer-based module takes between 100 and 300 hours to design and develop; a virtual classroom module, on the other hand, requires about two months. While this e-learning initiative resides within the stores and asset protection sector of the retail store chain, it shares e-learning modules with other sectors and sometimes purchases modules from various vendors.

How are adult learners valued?

With e-learning representing approximately 30 percent of the training function and the development team being relatively small (two persons), this case provides an overview of how the adult learner is valued within an emergent e-learning initiative.

Front-end analysis is conducted to ensure that training is relevant, meaningful, and authentic. Performance problem analysis, for example, ensures that the performance gap is linked with training content. Once the training link is identified, e-learning designers analyze further to determine whether the training should be delivered face-to-face or through e-learning.

If e-learning is decided upon, further analysis occurs to decide whether the medium will be computer-based, virtual classroom, or a blended solution. Learner analysis also occurs to ensure that adult learners' characteristics are considered in the module design and development.
Given that the target audience is usually executives, e-learning designers may conduct mini learner analyses when introducing a new content category.

Once the performance problem has been aligned with training, content selection is directed either through a partnership with senior managers and/or employees in the field or through a review of new processes or procedures. Subject matter experts focus on the target audience, the performance problem, and the respective environments to identify relevant content and the most appropriate learning activities given the selected medium of delivery.

Content presentation occurs through text, graphics, video, and audio. Most computer-based modules use a combination of these media. Blended solutions use more video than computer-based modules because of bandwidth issues. Designers use Flash, Fireworks, TrainerSoft, RoboDemo, and Authorware to present content. At present, adult learners cannot personalize the content presentation. The e-learning team is aware of this limitation and is exploring options to give learners more control of content presentation.

With learning occurring asynchronously, synchronously, and in blended formats, adult learners can engage in learner-content, learner-learner, and learner-instructor interactions. Learner-content interaction occurs mainly with asynchronous e-learning modules. Learners usually interact via simulations, games, or quizzes; these types of activities can require learners to analyze, synthesize, and evaluate. Learner-content interaction also occurs while listening to the audio portion of a module or watching a video. E-learning designers ensure that adult learners have various types of interactive activities so that each learner can enjoy a preferred type of activity. Learner-learner and instructor-learner interactions occur via the virtual classroom. Virtual classroom modules are chosen primarily because they can meet the just-in-time needs of business units.
Virtual classrooms have live instructors who communicate with the learners using a radio talk-show format. Learners are also able to interact with scenarios and questions posed by the instructor. Adult learners, in turn, can use the chat space to communicate among themselves and with the instructors.

With e-learning being an emergent initiative, the e-learning designers do not presently have an LMS. The designers, however, are in the process of developing an e-learning template with consistent module design features. The template will reflect the retail store chain's university campus and will have links to the various colleges, for example, the college of logistics, which will encompass all logistics-related modules. The template for each module, in turn, will include nine major sections: performance objectives, module overview, content presentation, assessment, journal tool, tip cards, glossary, help tool, and a library. The e-learning team shared that this template will give adult learners more opportunities to access their preferred learning activities and tools. In keeping with adult learning principles, specifically attention span, e-learning designers are designing modules that take no more than 30 minutes to complete. Lastly, as the designers work toward developing their template, they are conducting usability testing for every module that is released; specifically, designers are evaluating how employees navigate, access, and interact with the content. Designers said that listening to adult learners is important if they are to deliver a learner-centered e-learning experience.

The journal, one of the module features, will be a tool that can assist the transfer of learning on the job. Designers expect that learners will use the journal to make personal notes about their learning, in particular to write down what they would like to share with their peers and managers upon returning to the job.
Presently, the direct supervisor is responsible for ensuring that learning is transferred on the job. The just-in-time delivery of computer-based and virtual classroom modules assists learners in transferring their learning on the job because the modules are based on just-in-time needs. Assessments occur at the formative and summative levels. At the formative level, learners may be asked to answer questions as they proceed through the module. At the summative level, if assessments will not be tracked, learners may take a self-assessment in the form of a game such as Jeopardy or Who Wants to Be a Millionaire. If the summative assessment will be tracked, learners receive a reaction survey and an assessment that may include multiple-choice, fill-in-the-blank, true-or-false, and matching questions. In the absence of a formal LMS, the e-learning designers contract database services to collect and access data from the reaction surveys and assessments. Learners usually receive instant feedback on assessments and can return to the assessment or module for clarification. Instant feedback is not given, however, when assessments are for certification. Performance on certification assessments is communicated directly to the learner or to the respective managers or supervisors. Adult learners can access modules only at the stores. Every store has approximately three computers; one computer is dedicated to learning. Modules are delivered to the computers via an executable package. Employees can take a module individually or in groups of two or three at the same time. Introducing e-learning at the stores has been challenging because of the customer-driven environment.
The e-learning team shared, however, that they are presently exploring how to make learning seamless at the store level; one possibility could be the introduction of kiosks. In conclusion, though the e-learning team at the retail store chain faces human resource, time, money, and bandwidth challenges, they are highly motivated and confident that their e-learning initiative will continue to improve in serving the adult learner. The designers eagerly anticipate the completion of the template because it will decrease their design and development time. In addition, e-learning designers are hoping to cross-train the other two members of the training and development team on e-learning processes such as meeting with clients, analyzing the performance problem, setting objectives, and producing the initial content layout using the instructional design process. This type of expertise could allow the existing designers to focus on coding and development. Although emergent, the e-learning initiative at the retail store chain is progressing in its mission to value the adult learner.

Case study four: HP Services Workforce Development, HP

Context. Hewlett-Packard (HP) is a technology solutions provider to consumers, businesses, and institutions globally. The company's offerings span IT infrastructure, personal computing and access devices, global services, and imaging and printing for consumers, enterprises, and small and medium businesses. With approximately 140,000 employees worldwide, HP serves more than one billion customers in more than 160 countries. While organizationally HP accomplishes this by focusing on multiple business organizations, this case examines only the HP Services Group (HPS), which consists of approximately 60,000 employees. It is recognized that e-learning initiatives exist in multiple organizations and groups within HP; they are beyond the scope of this case.
Within HPS, a designated organization exists to develop, design, and deliver learning solutions to this business unit: HPS Workforce Development (HPS WD). The e-learning initiative within HPS began in 1999. The HPS WD team created and monitored approximately 400,000 learning incidents in a recent six-month period. Approximately 85 percent of training is delivered electronically. Reasons cited for the introduction of e-learning into the group include cost efficiency; accessibility to learners, especially remote learners and those on customer sites; and time efficiency, in that learners in the field can train where they are, without having to travel to a classroom setting. Both synchronous and asynchronous formats are used. Self-paced modules, both those created "in-house" and those purchased from vendors, are available to learners for access at a time and place that meets their scheduling needs. Required company training, such as business conduct, is an example of content that is delivered by asynchronous, self-paced methods. Virtual classroom environments and virtual labs can be accessed remotely for synchronous and/or asynchronous experiences. Virtual labs have been used successfully for technical training. HPS e-learning courses are organized into portfolios. The main portfolio contains 1,800 self-paced and instructor-based courses; an additional 200 courses bring the total to 2,000. Portfolios include business conduct, technical, legal, and professional skills such as consulting and project management. A virtual team comprised of curriculum developers, learners, delivery professionals, and HP businesses works together and interfaces with product divisions to prioritize course offerings and to develop content.
Specifically, the development team is a global organization of about 80 program managers plus developers and designers (about 140 people in total) whose task is to decide format, create content, develop courses, and evaluate feedback for updating courses. Increasingly, the outsourcing of course creation is being explored and utilized. As of 1 May 2004, most HPS employees had taken multiple e-learning courses. In general, while HPS WD does share courses across HP, it does not sell its learning packages to outside entities, with the exception of customer services training solutions. HPS WD employs a business performance-consulting model. To help learning consultants within HPS apply the business performance model, the corporate workforce development team at HP introduced an electronic performance support system to help teams become aware of similarities among performance issues and the types of training they provide. As a result of this awareness, teams are combining their strengths and discovering synergies in their training efforts.

How are adult learners valued? The adult learner is an important factor in the design of e-training within HPS. E-training includes web-based courses, virtual classrooms, webinars, webcasts, and remote labs. In addition to e-training, the e-learning designers within HPS WD also incorporate e-learning, which includes all knowledge management/sharing types of learning activities. For the purpose of this case, e-training refers to all e-learning courses, while e-learning refers to all knowledge management/sharing types of activities. Both e-training and e-learning give adult learners synchronous and asynchronous learning opportunities. To meet adult learners' needs, the e-learning designers within HPS WD conduct a front-end analysis on several parameters to assist with design and development choices. Cost and time are primary factors, as they affect the type of learning solution that can be developed.
Size of audience also influences delivery options. Location of audience is critical, because a worldwide audience has different implications for design and delivery than a specific audience within a country. Lastly, equipment constraints in the target audiences' locations are considered in design and development decisions. Once all the constraints are taken into account, the e-learning team ensures that the adult learner receives the best learning solution. Content selection for e-training resides within a business performance consulting model. This model tightens the relevance and meaningfulness of content because training is provided based on short- and long-term performance needs. This performance approach to content selection gives adult learners learning opportunities that are tied to individual, group, and organizational performance. In addition to identifying performance-driven content, adult learners have the opportunity to select which components of a training solution are appropriate for them. For example, sales, support, and service employees may take different components within one training solution. Having a technology-enabled environment gives the e-learning team the capability to present content using visuals, audio, and video. Content presentation usually represents a mix of highly interactive solutions that give the adult learner the opportunity to choose their preferred medium for content presentation. In some cases, especially with customer training solutions, e-training courses may have a prompter that supports several languages. The language prompter gives the adult learner an additional opportunity to choose their preferred language of instruction. Again, because of a strong technology infrastructure, learning activities within the web-based courses are varied. Simulations, games, knowledge checks, and quizzes are some activities that are frequently used in web-based courses.
The purpose of using varied learning activities is to give adult learners an opportunity to listen, interact, and play. Combining different learning strategies also helps the e-learning designers target a broader audience. Business games, for example, are presently used in very specific and strategic areas, because they are a significant investment to build. Business games can teach how to pursue and close major opportunities with clients by simulating various scenarios for the learners. These games are played by teams of employees whose members have specific roles. It must be mentioned, however, that developing these highly interactive courses takes time, and that not all courses are developed with the same level of interactivity. In essence, e-learning designers must find the balance between what they can do in the ideal world and what they can do in the practical world. While web-based courses mainly focus on learner-content interaction, HP uses other media to enhance learner-learner and learner-instructor interactions. Virtual classrooms are used for small groups of people to maximize instructor-learner interaction. In the virtual classroom, learners can raise their hand if they want to ask a question, make a comment, or answer a question. In addition, the virtual classroom has a group chat feature, which can be set to private or public. Learners can engage in learner-learner or instructor-learner interaction via the chat space. The virtual classroom allows participants to share applications, use a whiteboard, or poll learners' reactions to different questions or issues. Virtual classrooms can be used to poll learners' satisfaction and measure their level of knowledge acquisition. Learners can draw, paint, and, on the more humorous side, even throw tomatoes if they did not like the instructor or give their instructor apples if they enjoyed the learning experience.
Webinars and webcasts, on the other hand, are used for large audiences, but very little interaction between the instructor and the audience occurs; learners only listen. In addition to virtual classrooms, webinars, and webcasts, HP is also using remote virtual labs that are sometimes instructor driven, sometimes self-directed, or used for practice or application of learning. The blending of e-training and e-learning gives adult learners more opportunities for interaction and learning. The HPS WD group has a robust knowledge management strategy that fosters e-learning through different forms of peer-to-peer learning, one form being communities of practice. Commercially available collaborative tools such as NetMeeting, eRoom, and instant messenger help to support peer-to-peer learning. HPS WD has a formal program called "Professions", which is a structured manner of organizing communities. Through communities of practice, the e-learning designers organize best practice training sessions, which are sometimes delivered using virtual classrooms. Another form of peer-to-peer learning is online mentoring and coaching. Formerly, coaching was offered at the executive level, but presently there is informal coaching at all levels of the organization. Discussion forums, another form of peer-to-peer learning, are also used extensively. Lastly, the formal technical career path is a corporate-wide program that aims to offer individual contributors a virtual environment where they are valued and where they can strengthen their technological skills. To facilitate peer-to-peer learning, adult learners have access to knowledge management systems that are used to create, share, and reuse information regularly.
Documentation of white papers, what HPS WD calls knowledge briefs, is a common practice among the group members. To reduce frustration and to help meet adult learners' preferred learning styles, the e-learning designers are creating development roadmaps for employees that include both virtual and face-to-face learning opportunities. Combining virtual and face-to-face learning opportunities prevents HP from becoming a purely virtual training environment and gives the adult learner a chance to network during face-to-face classes while meeting their learning needs anytime and anywhere. Most adult learners around the world use their desktops as the primary location for taking their e-training courses. Some employees in the United States take the courses from their homes, as many are telecommuters. E-learning designers are starting to create learning rooms in major offices around the world to give learners an opportunity to isolate themselves from day-to-day business pressures in order to concentrate on their learning. Pre- and post-test assessments are frequently used in the e-training courses, especially since several of the training solutions are formal certification tracks. In reference to Kirkpatrick's four levels (reaction, learning, transfer, and results), e-learning designers mostly apply levels one and two. Level one, reaction, occurs via the customer satisfaction survey. Level two, learning, occurs through final tests or knowledge checks. Final tests and knowledge checks could include open-ended, multiple-choice, and true-or-false questions, for example. Levels three (transfer) and four (results) are addressed on a case-by-case basis because they are major undertakings and can be very costly. To help with learner-assessment interaction, e-learning designers are starting to use interactive software that can create, for example, drag-and-drop boxes and clickable areas on a picture.
As with high-interactivity courses, however, such features depend on cost and time constraints on design, development, and deployment. In conclusion, the e-learning team within HPS WD is challenged to reach adult learners worldwide via e-training and e-learning. The team of designers is pushing ahead with the help of a wide variety of technology tools. The designers, though, are quick to point out that front-end analyses identify which of the available technologies are relevant for the design, development, and deployment of a given learning solution. Nevertheless, these e-learning designers are doing their best to value the adult learner within the e-training and e-learning environments.

E-learning is a valuable training and development solution for many companies. Unlike academic environments (Johnson and Aragon, 2003), very little is known about how the adult learner is valued in e-learning within corporate settings. This study explored the Waight and Stewart conceptual model, which posits that valuing the adult learner in e-learning within corporate settings depends on the interdependence of championing factors, antecedents, and moderators for the achievement of engagement, learning, and transfer. Table I provides an overview of how these e-learning teams played out in terms of championing factors, antecedents, moderators, and outcomes. The comparative analysis was made on the incidence of occurrence and not on the extent of occurrence. More research needs to be conducted to identify the extent of occurrence among all the factors. The comparative analysis showed that all e-learning teams had leadership, a learning culture, technology infrastructure, and finance championing their efforts. The analysis also showed that all e-learning teams employed all five antecedents, which are needs, learner, work, work setting, and content analyses, to help provide the most meaningful learning experience to the adult learner.
In reference to moderators, the e-learning teams shared no incidence of return on investment. Incidence of the use of learning theories, technology skills, and creativity, however, was provided. Lastly, all e-learning teams cited incidents of learner engagement and learning; only one e-learning team referred to transfer. While there were varying degrees of occurrence among the championing factors, antecedents, moderators, and outcomes, this study provides basic confirmation that the adult learner is valued in e-learning within corporate settings. Realizing that there were opportunities and constraints for each e-learning team, the four cases provide good insight into the energy and creativity that e-learning teams are employing to create sound learning experiences for adult learners. Further research is needed to establish the extent of implementation among the individual factors of the conceptual model (Figure 1). In addition, verification of this conceptual model is needed with more companies. Opportunities exist to explore how this conceptual model is being employed worldwide. Overall, these four case studies reveal that adult learners are valued in e-learning within the four corporate settings. It can be said that these e-learning teams are progressively improving to provide the adult learner the best e-learning experience. These case studies also show that e-learning teams have strong competencies in instructional design, learning theories, and technology, and that they are operating within companies that support their efforts in creating instructionally sound e-learning experiences.

Figure 1. Valuing the adult learner in e-learning within corporate settings.

Table I. Comparative analysis of the conceptual model and four case studies.
- A primary limitation inherent in this study is its inclusion of only four large corporations. Future investigation can extend understanding of how the adult learner is valued by researching more companies and their e-learning teams.
[SECTION: Purpose] Innovation is a powerful strategic tool not just for technology firms but also for service providers (Mothe and Nguyen, 2012; Ordanini and Parasuraman, 2011). Through innovation, service firms seek to differentiate themselves from their competitors, conquer new markets, or retain existing customers. However, service innovations are not patentable, so such innovative firms must find other ways to protect their ideas. One method is to deploy inter-organizational networks. Through inter-firm cooperation, these service providers can benefit from complementarities with their partners, achieve economies of scale (Calia et al., 2007), share the costs and risks associated with developing an innovation, and ultimately make it easier to exploit their competitive advantage (Dyer and Singh, 1998). Moreover, cooperation can create barriers to entry and make it difficult for others to imitate the innovation, because exactly reproducing a network of inter-organizational relationships designed to innovate is nearly impossible (Borgatti and Foster, 2003). Despite the potential benefits of, and issues related to, cooperation for service innovation development, innovation management research mainly focuses on technological innovation networks (Ethiraj et al., 2005; Gilsing and Nooteboom, 2006) and ignores the constellations of actors available for service innovation. Services remain the "poor relative" in the innovation management literature (Gallouj and Weinstein, 1997). Yet services demand specific insights, particularly because of their immateriality and interactivity, such that research findings from other industries do not always transfer to services (Sundbo, 1997). In response, this exploratory study seeks to highlight characteristics of service innovation networks and thereby answer two key research questions: RQ1. What are the primary characteristics of service innovation networks? RQ2.
Does the implementation of certain types of innovations require certain types of networks? We analyse innovations implemented by two French ski areas: Portes du Soleil and Paradiski. The winter sports tourism industry is especially relevant for addressing our research questions, because the many changes it has undergone in the past 15 years have required ski resorts to find new routes to innovation, including through collaborations with partners (Petrou and Daskalopoulou, 2013). In the next section, we review literature on innovation in services and provide a summary of predominant characteristics of inter-organizational relationships, which underlies our analysis framework. Next, we explain the importance of investigating winter sports tourism and detail the methodology we use for this study. Finally, we discuss the notable characteristics of innovation networks before concluding with some implications, limitations, and directions for further research. In addition to acknowledging the specificities of service innovation, we characterize inter-organizational networks according to four dimensions: nature of the relationship, regulation mode, architecture, and geographical scope.

Characteristics and types of service innovations

Service innovations present a unique character that is less tangible than product or industrial innovations, with many incremental and architectural versions (De Vries, 2006). Social or managerial innovations (Hamel, 2006) are not always visible outside the organization. To improve the identification of innovations in services, prior literature suggests several classifications, most of which rely on a single dimension, such as the element affected by the innovation (product, process, or organization; Belleflamme et al., 1986; Damanpour et al., 2009; Favre-Bonte et al., 2014; Hamdouch and Samuelides, 2001). Garcia and Calantone (2002) call this dimension the "new what".
Innovativeness (Birkinshaw et al., 2008; Favre-Bonte et al., 2014; Garcia and Calantone, 2002) represents a measure of the degree of "newness" of an innovation (e.g. highly innovative vs less innovative; new to the world vs new to the adopting unit vs new to the industry), combined with its risk level. A third classification considers the way the innovation is produced (e.g. with or without customer participation; Sundbo and Gallouj, 1998). For winter sports tourism services, it is often difficult to identify which resorts represent the source of innovations, because there are no formal intellectual property rights, and many firms might claim to have originated new concepts or services. Therefore, it is challenging to assess the degree of novelty of an innovation, and we focus instead on the elements affected by innovation, that is, the "new what". This dimension seems relatively more objective from an innovation perspective. For this focus on the elements affected by innovation, we use the service delivery system model (Langeard et al., 1981). Unlike a blueprint approach (Bitner et al., 2008), which includes time and various other operations, the service delivery system model focuses on the role of the client in interaction with the service company. Thus it enables us to go beyond a conventional product/process distinction, in that it separates process elements that are visible to clients from those that are not (Favre-Bonte et al., 2014). Specifically, it includes three main components: back office, front office, and output. The back-office component (i.e. internal organization) includes all traditional functions of a company that are invisible to the customer (e.g. marketing services, human resources, purchasing) and their operations (e.g. working methods, equipment, information systems).
The front-office component instead comprises all elements that are visible to clients and that make the service more tangible, such as the staff (employees with whom clients interact), physical evidence (equipment used by the staff or customers in service delivery, such as machines, robots, furniture, signage, or more generally the premises in which the service is delivered), and the customers themselves, who are more or less involved in service production (e.g. defining the problem, engaging in operational tasks) and can interact with other clients. Finally, the system delivers an output: the service offered to the customer. For this research, we focus on the main elements pertaining to an innovation and acknowledge that its deployment can affect different parts of the service, more or less simultaneously, with cascading effects (Barras, 1990; Damanpour and Evan, 1984; Fritsch and Meschede, 2001). However, our classification includes only those components that are most important to an innovation or represent the source of the innovation process (i.e. the component the firm seeks to improve through innovation). We thus consider three types of innovations: new offers, front-office innovations, and back-office innovations. Because developing innovations often requires closer cooperation among various partners to access new resources and skills (Stieglitz and Heine, 2007), we also consider the different forms that service innovation networks can take.

Heterogeneity of inter-organizational network forms

Inter-organizational networks provide a way for firms to achieve economies of scale (Powell, 1987) and access new resources and skills (Stieglitz and Heine, 2007). For this study, we define inter-organizational networks as sets of at least three organizations, linked by long-term exchange relationships and by a sense of belonging to a collective entity (Grandori and Soda, 1995).
The different forms of inter-organizational networks can be characterized according to four dimensions: the nature of the relationships among the members, the mode of regulation, the architecture, and the geographical scope.

Relationship type

Relationships among partners can take many forms (Inkpen and Tsang, 2005). In a horizontal relationship, members build relationships with competitors to share the same resources, whereas in a vertical relationship, their aim is to transfer additional resources between a client and a supplier. Finally, in an "inter-industry" relationship, the networks encompass potentially complementary organizations that are neither competitors nor connected by customer-supplier relationships. Such networks are willing to share skills or promote a single resource. These three "pure" forms of inter-organizational networks also can be combined (Gomes-Casseres, 2003); for example, travel services networks bring together airlines (horizontal relationships); tour operators, car rental agencies, and hotel chains (vertical relationships); and even banking and financial institutions (inter-industry relationships) to provide customers with a "global" offer. These different relationship types constitute a central dimension in traditional innovation management research (Gemunden et al., 1996; Nietoa and Santamariab, 2007). Gemunden et al. (1996) examine the link between the type of relationship (e.g. partners, competitors, suppliers, laboratories) and the type of innovation developed. For process innovations, they note the importance of integrating all partners, particularly those connected by customer-supplier relationships. In contrast, product innovations require the intervention of technical partners. However, the results of their research, conducted in an industrial sector, may not transfer to services, for which the technical dimension is not always central.

Regulation mode

Regulation mode refers to the coordination mechanisms implemented.
Economic regulation includes formal, explicit, and written mechanisms. These mechanisms come in several forms, such as standard operating procedures, technical reports, cost accounting systems, budgets and planning, contracts, and confidentiality agreements (Das and Teng, 1998; Gulati, 1998). Contracts can play a key role in inter-organizational relationships that share specific assets. In contrast, sociological regulation is based on adjustment mechanisms, trust, and clan logic. Regulatory mechanisms are thus rather implicit and verbal and include the establishment of joint teams, seminars, meetings, personnel transfers, and mechanisms for shared decision making (Grandori and Soda, 1995). These informal methods have several advantages over formal methods, such as lower transaction costs, increased strategic flexibility, and reduced risk of conflict (Nooteboom et al., 1997). In contrast, formal mechanisms are often problematic for the deployment of certain types of innovations, such as exploratory ones (Nooteboom, 2004). An exploratory innovation is inherently uncertain, and it is difficult to write a contract for an output that is unknown. In the context of service innovation, we question whether the regulation mode is always the same or whether it might depend on the type of innovation.

Architecture

An inter-organizational network can be characterized by its structure or architecture. Two types of networks are commonly distinguished according to the degree of power sharing: centralized and decentralized. In centralized networks, all sources of information are centralized by a single, often large company. There is a formal organization (i.e. focal firm, hub firm, strategic agency, or core) that regulates transactions within the structure (Dhanaraj and Parkhe, 2006; Jarillo, 1993; Lorenzoni and Baden-Fuller, 1995; Miles and Snow, 1986).
This hub firm performs three functions: first, designing the value chain, choosing the members of the network, and setting the strategic direction; second, coordinating the value chain, optimizing operational links among members of the network, limiting administrative costs inherent in the hierarchy, and maintaining market-based coordination modes; and third, controlling the value chain and deterring opportunistic behaviour that could disrupt the network's efficiency. In decentralized networks, the architecture is more evenly distributed, and power is more or less shared. In industrial sectors, the presence of a hub firm is essential (Dhanaraj and Parkhe, 2006) because it helps define and make strategic choices. In the absence of authority or a central player, decision making is slower, and it is more difficult to define strategic choices because of potential differences among partners. For service innovations, we investigate whether the presence of a hub firm is similarly essential to ensure the sustainability of the project, regardless of the type of innovation deployed.

Geographical scope

Finally, the fourth dimension that describes a network is its geographical scope, that is, the geographical proximity of the partners. A network can be local, national, or international. We retain this feature because much research has emphasized the importance of geographical proximity among members of a network for its proper functioning (Autant-Bernard, 2001; Fritsch and Lukas, 2001). Studies examine the impact of territory on the formation and operation of networks (Autant-Bernard, 2001; Dunning and Mucchielli, 2002; Fritsch and Lukas, 2001) and conclude that value creation increases when the network achieves territorial fit. Proximity promotes flexibility, frequent interactions among members, and trust (Bell and Zaheer, 2007).
Some innovation projects require face-to-face interactions, because knowledge is more easily transmitted in a small, restricted region (Von Hippel, 1994). In addition, considering the differences across countries in terms of culture, customs, and laws, international learning can be more difficult and delay the process of innovation. However, other research stipulates that the transfer of knowledge does not necessarily require geographical proximity (Feldman, 1994). The development of information and communication technologies in particular allows international networks to function alongside local clusters or districts. In summary, "the network form of organization has profoundly impacted how companies innovate" (Dhanaraj and Parkhe, 2006, p. 660), and innovation networks are critical, as is widely recognized by both empirical and theoretical contributions. Yet despite the increasingly prominent role of service activities in productive systems (Gallouj and Weinstein, 1997), most existing research still focuses on manufacturing activities. Because services have specific properties (Sundbo, 1997), we seek to extend this literature by highlighting the characteristics of networks built to develop service innovations. Furthermore, we analyse the link between the type of networks that emerge and the types of innovation that arise from the services industry. Specifically, we address whether the implementation of certain types of innovations (new offers, front-office innovations, back-office innovations) requires the creation of inter-organizational networks with unique characteristics. In so doing, we broach an unexplored issue for network management, with notable implications for research into network theory, strategy, and services innovation management. Table I summarizes our analysis framework, including the element affected by innovation (the "new what") and the characteristics of the networks developed to achieve it.
We begin by explaining why we examine the winter sports tourism sector in this study and the specificities of this service activity. We then present the methodology we used to collect and process the study data.

Winter sports tourism

Services are heterogeneous, so a parallel study of several sectors cannot offer meaningful comparisons (Djellal and Gallouj, 2008; Favre-Bonte et al., 2014). Instead, we focus on a single service activity, tourism, which has substantial economic impacts[1] and offers fertile ground for assessing innovation networks (Tremblay, 1998). Winter sports tourism is inherently heterogeneous, involving coordination of many people in its production-distribution process. A winter sports resort is a complex, original system that combines private (e.g. ski lift operators, accommodation providers, transport, ski rental shops) and public partners that own complementary resources and competences (Svensson et al., 2005). Promoting a destination largely depends on the partners' ability to integrate fragmented supply into a single, coherent product (Gibson et al., 2005; Pavlovich, 2003; Saxena, 2005; Scott et al., 2008). The nature of this tourism product affirms the central role of coordination activities (Lynch and Morrison, 2007). In addition, this intrinsic characteristic is reinforced by the need to innovate in response to increased competition, which in turn demands more coordination among organizations. Ski resorts must offer new sports practices[2] (e.g. snow park creation, diversification beyond ski activities, snowshoeing, ice diving), more comfort (e.g. improved quality lifts, accommodations), and animation (e.g. discovery of local heritage, cultural activities, events). These essential innovations can be enhanced by concentration and the arrival of new actors (e.g. non-family, larger firms) in the ski resort industry (Cattelin and Thevenard-Puthod, 2006).
Finally, technological improvements have contributed to the development of innovations in the tourism sector. For example, the internet has changed the nature of the relationship between organizations and the distribution of power between clients and suppliers. Although the internet has enabled many ski resorts to sell their packages directly online, it also increases transparency and rivalry across ski resorts (Favre-Bonte and Tran, 2014). Despite these challenges and the reality of innovation in winter sports resorts, service innovation researchers (Djellal and Gallouj, 2005; Gallouj and Weinstein, 1997) and innovation network researchers (Ethiraj et al., 2005; Gilsing and Nooteboom, 2006) continue to ignore innovation networks in this sector (Hjalager, 2010; Petrou and Daskalopoulou, 2013).

Data collection and analysis

Because the aim of this study is to explore the characteristics of winter tourism networks and potential links between network characteristics and the purpose of innovations, we opted for a qualitative study, based on an analysis of 12 innovation networks. A multi-case study can handle a limited number of cases and has the advantage of breaking down each network to provide a detailed analysis. This method also provides a detailed description of the events, along with a systematic analysis of the relationships between partners. Multi-case studies require establishing a theoretical sample with common characteristics, in this case, networks that comprise at least three independent organizations (public or private) and innovations that pertain to the ski area. Seven cases pertain to the Portes du Soleil ski area and five involve the Paradiski ski area. Both areas are located in the northern French Alps and have a predominantly European clientele. In this largely homogeneous sample, we also need some variety to delineate the impact of the network characteristics on the innovations implemented.
Accordingly, these ski areas have different modes of governance - one is centralized around a mid-sized company (Compagnie des Alpes), whereas the other is more collegial and associative; they are located in two separate territories (Franco-Swiss and 100 per cent French); and they differ in the number of ski resorts they run. We also took care to select networks of different sizes and ages in choosing the ski resorts (see Table II). The initial data collection aimed to identify innovation networks developed in these two ski areas. To reduce the complexity of the analysis, we focused primarily on innovation networks developed around sporting or leisure activities in connection with ski areas; for example, we excluded innovations developed by hotels or residences. Next, we identified innovations driven by an inter-organizational network and the key players who worked with the hub firms in several innovation projects. From this assessment, we identified the tourist office of Avoriaz (major international ski resort connected with the Portes du Soleil area), the Association of Portes du Soleil, and the tourist office of Les Arcs (ski resort attached to the large international ski area Paradiski) as potential hub organizations. To ensure data triangulation (Yin, 2008), we used three data sources: interviews, direct observation, and secondary data. We conducted ten semi-structured interviews, lasting an average of three hours each, during 2011 and 2012 with key network actors (hubs and innovators), including heads of the tourist office, ski areas, and ski lifts. We also interviewed actors who helped us understand the territory, while facilitating access to other key actors (e.g. Savoie Mont Blanc tourism director, member of the executive committee of Savoie Mont Blanc destination, the tourism plan coordinator of the Savoie Travel Agency). The interview guide reflected our analysis framework. 
The key topics discussed in each interview were as follows: overview of the station's current context and strategy, the role of innovation in its strategy, and the history of major innovations over the last ten years. For each successful innovation, we also asked about the elements affected by innovation, to facilitate its classification as a new offer, front-office innovation, or back-office innovation. We obtained detailed descriptions of the supportive networks, according to the four central characteristics (relationships among members, modes of regulation, architecture, and location; see Table I). For the direct observations, we pretended to be customers of the ski areas and used the innovative services studied. This passive observation enabled us to test the innovations and also capture the feelings of customers, who were full members of the service process. Finally, we used data from both internal and external sources. The internal sources included project presentations, e-mails, and internal reports, which provided a better understanding of the relationships among the actors in the innovation project. The external sources of data included websites for the ski areas and their partners, as well as digital journal articles. Thus we gained background information about the actors (e.g. history of the ski area, target clientele, network collaboration) and accurate, objective descriptions of the innovation and its promise for consumers. The data analysis was identical for each of the 12 innovation networks and each type of data. We conducted thematic coding, crossing data from prior literature and information from our research field to develop a dictionary of topics. The codification of these themes was manual; we distinguished descriptive, explanatory, and interpretive information (Miles and Huberman, 1994). 
The resulting dictionary classified the data into two broad categories: (1) type of innovation: from prior literature and the case studies, we assigned a code to each dimension (innovation about output, back-office, or front-office); and (2) type of network: from the literature and the case studies, we created four themes to which we assigned specific codes (the relationship between members, the mode of regulation, the architecture, and the geographical scope). Using these dictionary themes, we coded the interview transcripts and secondary data. In addition, we mapped each innovation network, including its main characteristics, and submitted these maps to the interviewees. The maps represent, for each innovation project, the relationships among members, and in turn, they facilitate the identification of the roles, resources, and expertise of each partner, to enhance the interpretation and restitution of the data. We therefore characterize the observed networks and classify the identified innovations. After we present the main characteristics of the 12 networks we studied, we analyse their differences and how they developed new offers, front-office innovations, or back-office innovations.

Characteristics of ski area innovation networks

The 12 innovations that we identified in the two ski areas appear in Table III. In terms of innovation types, new offers are the most prevalent (seven innovations), followed by front-office innovations (three) and back-office innovations (two). This finding is not surprising, considering the intensified competition among ski resorts, which requires each resort to produce visible innovations to retain demanding, novelty-seeking existing customers and attract new customers who are eager to enrich their experience (Clydesdale, 2007).
According to the director of the Avoriaz tourist office (Portes du Soleil resort), "Avoriaz has a reputation as an innovative ski resort since its genesis and keeps innovation as its permanent genetic code; it must continually deliver new offers to its customers. Innovations must be visible to them. In addition, new offers based on events both optimize resort usage, taking into account the specificities of the different European schedules, and create awareness. It is a double blow". Such new offers help attract new customers, in both winter and summer (e.g. beginners attracted by a "You can ski" package, wealthy customers who prefer Paradiski premium, families in summertime with the Multipass). The front-office innovations seek to improve customer satisfaction by introducing new hardware support (new slope, new wood modules with the Stash, new high-performance ski lift connecting two ski areas with Vanoise Express). They also involve training tools for frontline staff, which helps them provide customers with more relevant offer knowledge. A sustainability dimension is also a key component of some of these innovations. Back-office innovations are fewer but still useful for uniting stakeholders, because, as one interviewee noted, "in a ski resort, we need all the players to be involved and go in the same direction; it is essential to develop innovations in communication tools to create unity between all those stakeholders. Some are in a situation of asymmetric information, so it is necessary to facilitate the transfer of information". The network architecture is centralized in all our cases. It thus appears that innovation in a ski area is not possible without a central actor that coordinates the other members of the network.
These hub organizations are mostly located in the ski resort, such as the Tourist Office of Avoriaz (large international ski resort connected to Portes Du Soleil area), the Association of the Portes du Soleil, and the Tourist Office of Les Arcs (large international resort attached to Paradiski area). The innovation networks instead vary considerably in their member types. Two-thirds of them encompass organizations from industries other than winter sports that bring resources and expertise into the original network. As a corollary, we note that the vast majority of the members of the network are located outside the resort (ten of 12 cases). Thus, it is not sufficient to collaborate with local actors to innovate. Finally, sociological modes appear to have been abandoned, in favour of more economic modes. The challenge of innovation is such that it is increasingly difficult to do without contracts, explicit procedures, or clear operating rules. These initial results deserve further analyses to detect any possible links between the type of innovation developed and three network characteristics: the nature of relationships, the mode of regulation, and the geographical scope (because architecture is centralized in all cases). Table IV summarizes the data we used to draw conclusions about these relationships.

New offers

For most innovations that focus on new offers, we observe that networks gather a few competitors (to benefit from scale effects generated by alliances) but rely on more partners that can provide additional resources (customers, suppliers, or companies from other industries). Regulation is economic when it involves actors outside the resort; otherwise, it is sociological. However, this sociological mode can lead to malfunctions; as one interviewed actor stated, "it is sometimes hard to know exactly who should do what and how. It would be more effective and would be better for our brand if we wrote more elaborate procedures".
The networks have an increasingly wide (national or international) geographic scope, in that the partners from other industries are rarely located within the resort. As an illustration, Figure 1[3] maps the new offer of the Rock the Pistes festival. The objective of this innovative event was to encourage the public to discover the skiing area, through concerts scattered on the slopes. To implement this innovation, the ski area relied on the participation of various local actors (all the resorts in the ski area), as well as more geographically dispersed actors from other industries, such as the record company Warner (via its subsidiary Nous Prod); the TV channel Canal+, which created the concept but then exited the network; and the distributor in charge of ticketing, Fnac.

Front-office innovations

For innovations intended to improve the front-office, vertical relationships are often preferred. We found mainly economic regulations, such that members were supervised by strict safety standards (when transporting skiers) or specifications to preserve their brand (issued by Burton, an internationally renowned company). Because these suppliers, distributors, or providers of complementary resources are not located in the ski resort, network coverage is national or international. Figure 2 depicts the "Stash" innovation network. This innovation consisted of a kind of natural, secured snow park, located in the heart of the forest, designed to offer skiers, snowboarders, ski schools, and families more play areas while also delivering a message about environmental protection. This innovation was developed under the leadership of the Burton company (snowboard equipment) and required the cooperation of SERMA (ski lift operator) to develop specific wood modules and use special equipment to maintain the snow. The Tourist Office of Avoriaz promotes this space and organizes events with Burton.
Back-office innovations

Inter-organizational networks that provide support to back-office innovations gather inter-industry members located in the resort. The Tourist Office is often the hub, and the geographical proximity of members is important. We also note the use of an economic mode of regulation, especially when it comes to the distribution of income from lift activity. For example, Nirvanalps is a web portal (Figure 3), through which Les Arcs can see all available beds in the resort belonging to private owners, with constantly updated information about the state and uses of this accommodation reserve. It also seeks to optimize occupancy rates at the resort, such as through promotions. Formal specifications clearly indicate for owners the benefits they will receive in return for participation, such as discounts on ski pass prices.

Discussion: types of innovations introduced by certain types of networks

This study, based on 12 innovation networks, provides several lessons about ski area innovation networks. In terms of architecture, the presence of a hub firm within networks prevails, regardless of the type of innovation. We find similarities with the industrial sector here (Dhanaraj and Parkhe, 2006). In transaction cost theory (Williamson, 1985), members of a network agree to delegate some authority to a central actor when the degree of uncertainty is high. In this uncertain environment, the importance of a hierarchical network is substantial, so a hub or central actor dominates exchanges and coordinates members. In contrast, such hierarchical forms lose value when the level of uncertainty is low. Therefore, the systematic presence of a hub in this study partly reflects the notion that the ski areas already are centralized around at least one key organization (e.g. ski lift company, accommodation provider), but it also is due to the recent challenges facing mountain territories.
Traditionally, heterogeneous actors enjoyed a growing market and tended to operate in isolation. Today, competitive intensity and trends for winter sports markets make the presence of a hub necessary to drive innovation dynamics and involve multiple stakeholders in collaboration (Dhanaraj and Parkhe, 2006). In innovation networks, conflicts of interest or power games among actors are almost inevitable (Miles and Snow, 1986). A hub firm can manage disagreements and differences and facilitate the development of innovative projects. This hub organization changes, depending on the nature of the innovation project. It might be an institution (e.g. Portes du Soleil association, tourist office) or a large company that owns most of, or some key elements of, the value chain activities (Compagnie des Alpes, Pierre & Vacances). Despite the presence of some large companies, local organizations or public operators often drive tourism innovations (Hjalager, 2010), though we did not observe any small- or medium-sized enterprises driving networks; instead, they often appear in a situation of high dependence. It is difficult for such firms to create more favourable environments through political activities, such as lobbying (Pfeffer and Salancik, 1978). Regarding the regulation mode, it appears that the economic mode is more favoured than the sociological mode, which requires trust among members (Gulati, 1998). The sociological mode only helps coordinate well-known local actors. Casanueva and Galan Gonzalez (2004) show, in the shoe industry, that network firms exchange tacit information only with firms with which they maintain stronger social and business links. However, the use of economic regulations reflects a change in the mode of operation of ski resorts. Originally, ski resorts were characterized by informal networks based on geographical and cultural proximities. These networks could be described as clans (Ouchi, 1980).
However, with the retirement of the first generation of business owners, the arrival of foreign companies based more on economic and financial considerations (Cattelin and Thevenard-Puthod, 2006), increasing competition, and the imperative to innovate, the control mode changed to economic. This choice also reflects the composition of the networks, which comprise geographically distant members, selected according to their complementary resources and skills. We can also use the resource-based view to explain this evolution, in that the sustainability of a ski resort depends on its ability to acquire and maintain necessary resources. Moreover, the difficulty of protecting innovations reinforces the need for rational, economic relationships between members of an innovation network. With regard to the two other network dimensions, different types emerged according to the type of innovation developed. For the nature of the relationships, front-office innovation networks appeared more vertical, in that they aim to make service qualities more tangible (improving physical evidence) or convince customers of their quality (staff action, such as tour operator agents). These networks also involve both upstream members (supplier that provides technology) and downstream members (distributor with which it innovates jointly). However, new offer networks mostly span industries. This trait is not surprising, because by definition, a holiday stay entails different types of services (e.g. accommodation, restaurant, ski lift, equipment rental, tourist office; Gibson et al., 2005; Pavlovich, 2003; Saxena, 2005; Scott et al., 2008; Svensson et al., 2005). However, beyond these traditional providers, ski resorts seek differentiation and increasingly use actors that are not part of the winter sports tourism industry (e.g. musical production companies, waterparks). New offers also require more horizontal coordination among resorts in the same ski area.
These resorts must manage the cooperation-competition duality, or "co-opetition" (Brandenburger and Nalebuff, 1995). To create value and innovation, ski resorts can no longer act in isolation but must recognize their inter-dependence (Lado et al., 1997). Research has identified links between the nature of relationships and the type of innovation developed in an industry (Gemunden et al., 1996); we note the specificities of service innovations. Partners solicited for innovations often come from industries different from the focal one. Moreover, according to cooperation studies, the selection process takes place mainly at the beginning of the cooperation (Reuer et al., 2002), but in the networks we studied, the selection of members occurred throughout the innovation project, depending on newly arising needs. During the selection phase, the main criteria are the resources and skills of each partner. This result reflects a resource-based view, a proactive approach in which a company is aware of its lack of innovation resources and skills and therefore decides to bring in partners. In our research, this approach is often initiated by a public actor - the tourist office. In addition, the hub chooses its partners according to their reputations and the extent of their networks. Finally, to stand out from competitors, ski resorts expand their networks to members who are distant in their activities, which reflects the geographical scope of the network. Mountain resorts used to have highly localized operations and were sometimes treated as localized productive systems, fully embedded in their territory. Now, members of innovation networks are mostly located outside the resort, in France or abroad. Although proximity traditionally reduces coordination costs (Dyer and Singh, 1998) and facilitates informal exchanges and knowledge transfers (Bell and Zaheer, 2007; Von Hippel, 1994), for innovations developed around ski areas, local partners are no longer sufficient.
Rather, resorts must find creative partners that can provide resources and skills that do not reside in the resort. Alliances with foreign partners also provide a way to internationalize the resort and find growth overseas. One type of innovation represents an exception to this rule though: back-office innovations supported by local networks. These innovations, not visible to clients (and therefore not able to differentiate the ski area), are designed to integrate and facilitate coordination among stakeholders in the touristic stay. Logically, it is not surprising that these innovations mainly involve local organizations that must be particularly efficient in their information systems. Back-office innovations require IT companies that are geographically close (if not in the heart of the resort, they remain in the same geographical area). Table V summarizes these results and characterizes the networks formed by the winter sports resorts, according to the type of innovation developed. Two main contributions emerge from our research. First, we characterize service innovation networks in the winter sports industry, a little-studied service context. This study of 12 innovations implemented by two ski areas highlights the link between the type of innovation deployed and the type of network formed. Networks that seek to produce new offers, front-office innovations, and back-office innovations differ in the partners involved (competitors, suppliers, distributors, or actors outside the industry) and geographical scope (local, national, or international). However, a central player is always in charge of orchestrating the exchanges among partners, regardless of the type of innovation. This pivotal role is often provided by a public organization (tourist office or local institution), with some local legitimacy (Kumar and Das, 2007). Second, we specify the link between the type of innovation deployed and the type of network formed.
Two key dimensions (nature of the relationships and geographical scope) appear to differ, depending on the type of innovation developed. Our research thus fills a gap by complementing existing works on the characteristics of innovation networks, which hitherto have focused more on innovations in manufacturing. Our research shows for the first time that implementing certain types of service innovations requires the creation of inter-organizational networks with specific characteristics. At the managerial level, identifying the four dimensions of an innovation network is crucial, because these dimensions differ depending on the type of innovation. For example, ski resorts that want to develop new offers must be open to external partners (i.e. companies that do not belong to the tourism industry or are not geographically proximate to the resort). The openness of the network to such "unusual" partners facilitates the design and implementation of more radical innovations. Ski resorts that want to innovate also must recognize the important role of the hub that drives the innovation dynamics, selects the best members, and coordinates their actions. From a methodological perspective, we selected the tourism industry to test our framework, because changes in this sector have obliged firms to innovate. Ski resorts effectively represent tourism destinations, but we also acknowledge that the results may differ in other service settings (e.g. banking, hospitals). In addition, our relatively small sample consists of in-depth interviews with knowledgeable respondents, but it is not exhaustive. Because of the confidential and strategic nature of our interview topics, it was challenging to interview all partners in each network. Instead, we focused on the hub firm and its main partners. Additional research could use a larger sample and adopt a quantitative methodology.
Beyond these traditional limitations associated with qualitative methodologies, we also note a limitation of our questioning method. That is, this study addresses the link between innovation types and characteristics of inter-organizational networks but not the possible reciprocal link. In some situations, it may be that networks determine the innovations implemented. Further research should consider this reciprocity and delve deeper into the relationship between inter-organizational networks and innovation, as well as its direction. It also might be useful to expand the research field to other ski areas to examine, with added cases, other innovations that might be deployed by other types of networks. For example, the nature of the hub organization may differ in countries where public actors are less involved in ski resort management (e.g. North America). The network structures identified herein should be validated with a larger sample too.

Figure 1 New offer map ("Rock the Pistes")
Figure 2 Front-office innovation map (Stash)
Figure 3 Back-office innovation map (Nirvanalps)
Table I Analysis framework of the link between network characteristics and innovation types
Table II Innovation network characteristics
Table III 12 innovation networks studied
Table IV Network characteristics and innovation type
Table V Network characteristics according to nature of innovations
[SECTION: Abstract]
Unlike industrial innovations, service innovations cannot be protected by patents or designs. Thus, the implementation of innovation networks is often crucial to generate a sustainable competitive advantage. The focus in this paper is the main forms of inter-organizational networks that have led to service innovations. More precisely, the purpose of this paper is to examine the relationship between the characteristics of inter-organizational networks and the type of service innovations developed.
[SECTION: Method] Innovation is a powerful strategic tool for not just technology firms but also service providers (Mothe and Nguyen, 2012; Ordanini and Parasuraman, 2011). Through innovation, service firms seek to differentiate themselves from their competitors, conquer new markets, or retain existing customers. However, service innovations are not patentable, so such innovative firms must find other ways to protect their ideas. One method is to deploy inter-organizational networks. Through inter-firm cooperation, these service providers can benefit from complementarities with their partners, achieve economies of scale (Calia et al., 2007), share the costs and risks associated with developing an innovation, and ultimately make it easier to exploit their competitive advantage (Dyer and Singh, 1998). Moreover, cooperation can create barriers to entry and make it difficult for others to imitate the innovation, because exactly reproducing a network of inter-organizational relationships designed to innovate is nearly impossible (Borgatti and Foster, 2003). Despite the potential benefits of, and issues related to, cooperation for service innovation development, innovation management research mainly focuses on technological innovation networks (Ethiraj et al., 2005; Gilsing and Nooteboom, 2006) and ignores the constellations of actors available for services innovation. Services remain the "poor relative" in innovation management literature (Gallouj and Weinstein, 1997). Yet services demand specific insights, particularly due to their immateriality and interactivity, such that research findings pertaining to other industries do not always transfer to services (Sundbo, 1997). In response, this exploratory study seeks to highlight characteristics of service innovation networks and thereby answer two key research questions: RQ1. What are the primary characteristics of service innovation networks? RQ2.
Does the implementation of certain types of innovations require certain types of networks? We analyse innovations implemented by two French ski areas: Portes du Soleil and Paradiski. The winter sports tourism industry is especially relevant for addressing our research questions, because the many changes it has undergone in the past 15 years have required ski resorts to find new routes to innovation, including through collaborations with partners (Petrou and Daskalopoulou, 2013). In the next section, we review the literature on innovation in services and summarize the predominant characteristics of inter-organizational relationships, which underlies our analysis framework. Next, we explain the importance of investigating winter sports tourism and detail the methodology we use for this study. Finally, we discuss the notable characteristics of innovation networks before concluding with some implications, limitations, and directions for further research. In addition to acknowledging the specificities of service innovation, we characterize inter-organizational networks according to four dimensions: nature of the relationship, regulation mode, architecture, and geographical scope. Characteristics and types of service innovations Service innovations present a unique character that is less tangible than product or industrial innovations, with many incremental and architectural versions (De Vries, 2006). Social or managerial innovations (Hamel, 2006) are not always visible outside the organization. To improve the identification of innovations in services, prior literature suggests several classifications, most of which rely on a single dimension, such as: the element affected by the innovation (product, process, or organization; Belleflamme et al., 1986; Damanpour et al., 2009; Favre-Bonte et al., 2014; Hamdouch and Samuelides, 2001). Garcia and Calantone (2002) call this dimension the "new what". 
Innovativeness (Birkinshaw et al., 2008; Favre-Bonte et al., 2014; Garcia and Calantone, 2002), which represents a measure of the degree of "newness" of an innovation (e.g. highly innovative vs less innovative; new to the world vs new to the adopting unit vs new to the industry), combined with its risk level. The way the innovation is produced (e.g. with or without customer participation; Sundbo and Gallouj, 1998). For winter sports tourism services, it is often difficult to identify which resorts represent the source of innovations, because there are no formal intellectual property rights, and many firms might claim to have originated new concepts or services. Therefore, it is challenging to assess the degree of novelty of an innovation, and we focus instead on the elements affected by innovation - that is, the "new what". This dimension seems relatively more objective from an innovation perspective. To capture the elements affected by innovation, we use the service delivery system model (Langeard et al., 1981). Unlike a blueprint approach (Bitner et al., 2008), which includes time and various other operations, the service delivery system model focuses on the role of the client, in interaction with the service company. It thus enables us to go beyond a conventional product/process distinction, in that it separates process elements that are visible to clients from those that are not (Favre-Bonte et al., 2014). Specifically, it includes three main components: back-office, front-office, and output. The back-office component (i.e. internal organization) includes all traditional functions of a company that are invisible to the customer (e.g. marketing services, human resources, purchasing) and their operations (e.g. working methods, equipment, information systems). 
The front-office component instead comprises all elements that are visible to clients and that make the service more tangible, such as the staff (employees with whom clients interact), physical evidence (equipment used by the staff or customers in service delivery, such as machines, robots, furniture, signage, or more generally the premises in which the service gets delivered), and the customers themselves, who are more or less involved in service production (e.g. define the problem, engage in operational tasks) and can interact with other clients. Finally, the system delivers an output: the service offered to the customer. For this research, we focus on the main elements pertaining to an innovation and acknowledge that its deployment can affect different parts of the service, more or less simultaneously, with cascading effects (Barras, 1990; Damanpour and Evan, 1984; Fritsch and Meschede, 2001). However, our classification includes only those components that are most important to an innovation or represent the source of the innovation process (i.e. the component the firm seeks to improve through innovation). We thus consider three types of innovations: new offers, front-office innovations, and back-office innovations. Because developing innovations often requires closer cooperation among various partners to access new resources and skills (Stieglitz and Heine, 2007), we also consider the different forms that service innovation networks can take. Heterogeneity of inter-organizational network forms Inter-organizational networks provide a way for firms to achieve economies of scale (Powell, 1987) and access new resources and skills (Stieglitz and Heine, 2007). For this study, we define inter-organizational networks as sets of at least three organizations, linked by long-term exchange relationships and by a sense of belonging to a collective entity (Grandori and Soda, 1995). 
The different forms of inter-organizational networks can be characterized according to four dimensions: the nature of the relationships among the members, the mode of regulation, the architecture, and the geographical scope. Relationship type Relationships among partners can take many forms (Inkpen and Tsang, 2005). In a horizontal relationship, members build relationships with competitors to share the same resources, whereas in a vertical relationship, the aim is to transfer additional resources between a client and a supplier. Finally, in an "inter-industry" relationship, the networks encompass potentially complementary organizations that are neither competitors nor connected by customer-supplier relationships; members of such networks aim to share skills or jointly promote a single resource. These three "pure" forms of inter-organizational networks also can be combined (Gomes-Casseres, 2003); for example, travel services networks bring together airlines (horizontal relationships); tour operators, car rental agencies, and hotel chains (vertical relationships); and even banking and financial institutions (inter-industry relationships) to provide customers with a "global" offer. These different relationship types constitute a central dimension in traditional innovation management research (Gemunden et al., 1996; Nieto and Santamaria, 2007). Gemunden et al. (1996) examine the link between the type of relationship (e.g. partners, competitors, suppliers, laboratories) and the type of innovation developed. For process innovations, they note the importance of integrating all partners, particularly those connected by customer-supplier relationships. In contrast, product innovations require the intervention of technical partners. However, the results of their research, conducted in an industrial sector, may not transfer to services, for which the technical dimension is not always central. Regulation mode Regulation mode refers to the coordination mechanisms implemented. 
Economic regulation includes formal, explicit, and written mechanisms. These mechanisms come in several forms, such as standard operating procedures, technical reports, cost accounting systems, budgets and planning, contracts, and confidentiality agreements (Das and Teng, 1998; Gulati, 1998). Contracts can play a key role in inter-organizational relationships that share specific assets. In contrast, sociological regulation is based on adjustment mechanisms, trust, and clan logic. Its regulatory mechanisms are rather implicit and verbal and include the establishment of joint teams, seminars, meetings, personnel transfers, and mechanisms for shared decision making (Grandori and Soda, 1995). These informal methods have several advantages over formal methods, such as lower transaction costs, increased strategic flexibility, and reduced risk of conflict (Nooteboom et al., 1997). In contrast, formal mechanisms are often problematic for the deployment of certain types of innovations, such as exploratory ones (Nooteboom, 2004). An exploratory innovation is inherently uncertain, and it is difficult to write a contract for an output that is unknown. In the context of service innovation, we question whether the regulation mode is always the same or whether it depends on the type of innovation. Architecture An inter-organizational network can also be characterized by its structure or architecture. Two types of networks are commonly distinguished according to the degree of power sharing: centralized and decentralized. In centralized networks, all sources of information are centralized by a single, often large company. There is a formal organization (i.e. focal firm, hub firm, strategic agency, or core) that regulates transactions within the structure (Dhanaraj and Parkhe, 2006; Jarillo, 1993; Lorenzoni and Baden-Fuller, 1995; Miles and Snow, 1986). 
This hub firm performs three functions: first, designing the value chain, choosing the members of the network, and setting the strategic direction; second, coordinating the value chain, optimizing operational links among members of the network, limiting the administrative costs inherent in a hierarchy, and maintaining market-based modes of coordination; and third, controlling the value chain and deterring opportunistic behaviour that could disrupt the network's efficiency. In decentralized networks, authority is more evenly distributed, and power is more or less shared. In industrial sectors, the presence of a hub firm is essential (Dhanaraj and Parkhe, 2006) because it helps define and make strategic choices. In the absence of authority or a central player, decision making is slower, and it is more difficult to define strategic choices because of the potential differences among partners. For service innovations, we investigate whether the presence of a hub firm is similarly essential to ensure the sustainability of the project, regardless of the type of innovation deployed. Geographic scope Finally, the fourth dimension that describes a network is its geographical scope - that is, the geographical proximity of the partners. A network can be local, national, or international. We retain this feature because much research has emphasized the importance of geographical proximity among members of a network for its proper functioning (Autant-Bernard, 2001; Fritsch and Lukas, 2001). Studies examine the impact of territory on the formation and operation of networks (Autant-Bernard, 2001; Dunning and Mucchielli, 2002; Fritsch and Lukas, 2001) and conclude that value creation increases when the network achieves territorial fit. Proximity promotes flexibility, frequent interactions among members, and trust (Bell and Zaheer, 2007). 
Some innovation projects require face-to-face interactions, because knowledge is more easily transmitted in a small, restricted region (Von Hippel, 1994). In addition, considering the differences across countries in terms of culture, customs, and laws, international learning can be more difficult and delay the process of innovation. However, other research stipulates that the transfer of knowledge does not necessarily require geographical proximity (Feldman, 1994). The development of information and communication technologies in particular allows international networks to function alongside local clusters or districts. In summary, "the network form of organization has profoundly impacted how companies innovate" (Dhanaraj and Parkhe, 2006, p. 660), and innovation networks are critical, as is widely recognized by both empirical and theoretical contributions. Despite the increasingly prominent role of service activities in productive systems (Gallouj and Weinstein, 1997), however, most existing research still focuses on manufacturing activities. Because services have specific properties (Sundbo, 1997), we seek to extend this literature by highlighting the characteristics of networks built to develop service innovations. Furthermore, we analyse the link between the types of networks that emerge and the types of innovation that arise in the services industry. Specifically, we address whether the implementation of certain types of innovations (new offers, front-office innovations, back-office innovations) requires the creation of inter-organizational networks with unique characteristics. In so doing, we broach an unexplored issue for network management, with notable implications for research into network theory, strategy, and services innovation management. Table I summarizes our analysis framework, including the element affected by innovation (the "new what") and the characteristics of the networks developed to achieve it. 
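For readers who prefer a schematic view, the analysis framework can be rendered as a simple data structure. This is a hypothetical illustration only (all class and field names are ours, not the paper's): it encodes the three innovation types and the four network dimensions named above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class InnovationType(Enum):
    NEW_OFFER = "new offer"            # the service delivered (output)
    FRONT_OFFICE = "front-office"      # elements visible to clients
    BACK_OFFICE = "back-office"        # internal organization

class RelationshipType(Enum):
    HORIZONTAL = "horizontal"          # among competitors
    VERTICAL = "vertical"              # client-supplier links
    INTER_INDUSTRY = "inter-industry"  # complementary, non-competing firms

class RegulationMode(Enum):
    ECONOMIC = "economic"              # formal, explicit, written mechanisms
    SOCIOLOGICAL = "sociological"      # trust, mutual adjustment, clan logic

class Architecture(Enum):
    CENTRALIZED = "centralized"        # a hub firm coordinates the network
    DECENTRALIZED = "decentralized"    # power more or less shared

class GeographicScope(Enum):
    LOCAL = "local"
    NATIONAL = "national"
    INTERNATIONAL = "international"

@dataclass
class InnovationNetwork:
    """One observed network, characterized along the framework's dimensions."""
    name: str
    innovation_type: InnovationType
    relationships: List[RelationshipType]
    regulation: RegulationMode
    architecture: Architecture
    scope: GeographicScope

# Illustrative instance: the "Stash" snow park discussed later in the paper,
# a front-office innovation with a vertical, economically regulated,
# hub-centred, internationally scoped network.
stash = InnovationNetwork(
    name="The Stash",
    innovation_type=InnovationType.FRONT_OFFICE,
    relationships=[RelationshipType.VERTICAL],
    regulation=RegulationMode.ECONOMIC,
    architecture=Architecture.CENTRALIZED,
    scope=GeographicScope.INTERNATIONAL,
)
```

Each of the 12 cases analysed below could be recorded as one such instance, which is essentially what Tables III and IV do in tabular form.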
We begin by explaining why we examine the winter sports tourism sector in this study and the specificities of this service activity. We then present the methodology we used to collect and process the study data. Winter sports tourism Services are heterogeneous, so a parallel study of several sectors cannot offer meaningful comparisons (Djellal and Gallouj, 2008; Favre-Bonte et al., 2014). Instead, we focus on a single service activity, tourism, which has substantial economic impacts[1] and offers fertile ground for assessing innovation networks (Tremblay, 1998). Winter sports tourism is inherently heterogeneous, involving the coordination of many people in its production-distribution process. A winter sports resort is a complex, original system that combines private (e.g. ski lift operators, accommodation providers, transport, ski rental shops) and public partners that own complementary resources and competences (Svensson et al., 2005). Promoting a destination largely depends on the partners' ability to integrate fragmented supply into a single, coherent product (Gibson et al., 2005; Pavlovich, 2003; Saxena, 2005; Scott et al., 2008). The nature of this tourism product affirms the central role of coordination activities (Lynch and Morrison, 2007). In addition, this intrinsic characteristic is reinforced by the need to innovate in response to increased competition, which in turn demands more coordination among organizations. Ski resorts must offer new sports practices[2] (e.g. snow park creation, diversification beyond ski activities, snowshoeing, ice diving), more comfort (e.g. improved quality lifts, accommodations), and animation (e.g. discovery of local heritage, cultural activities, events). These essential innovations can be enhanced by concentration and the arrival of new actors (e.g. non-family, larger firms) in the ski resort industry (Cattelin and Thevenard-Puthod, 2006). 
Finally, technological improvements have contributed to the development of innovations in the tourism sector. For example, the internet has changed the nature of the relationships between organizations and the distribution of power between clients and suppliers. Although the internet has enabled many ski resorts to sell their packages directly online, it also increases transparency and rivalry across ski resorts (Favre-Bonte and Tran, 2014). Despite these challenges and the prevalence of innovation in winter sports resorts, service innovation researchers (Djellal and Gallouj, 2005; Gallouj and Weinstein, 1997) and innovation network researchers (Ethiraj et al., 2005; Gilsing and Nooteboom, 2006) continue to ignore innovation networks in this sector (Hjalager, 2010; Petrou and Daskalopoulou, 2013). Data collection and analysis Because the aim of this study is to explore the characteristics of winter tourism networks and potential links between network characteristics and the purpose of innovations, we opted for a qualitative study, based on an analysis of 12 innovation networks. A multi-case study can handle a limited number of cases and has the advantage of breaking down each network to provide a detailed analysis. This method also provides a detailed description of events, along with a systematic analysis of the relationships between partners. Multi-case studies require establishing a theoretical sample with common characteristics - in this case, networks that comprise at least three independent organizations (public or private) and innovations that pertain to the ski area. Seven cases pertain to the Portes du Soleil ski area and five involve the Paradiski ski area. Both areas are located in the northern French Alps and have a predominantly European clientele. In this largely homogeneous sample, we also need some variety to delineate the impact of network characteristics on the innovations implemented. 
Accordingly, these ski areas have different modes of governance - one is centralized around a mid-sized company (Compagnie des Alpes), whereas the other is more collegial and associative; they are located in two separate territories (Franco-Swiss and 100 per cent French); and they differ in the number of ski resorts they run. We also took care to select networks of different sizes and ages in choosing the ski resorts (see Table II). The initial data collection aimed to identify innovation networks developed in these two ski areas. To reduce the complexity of the analysis, we focused primarily on innovation networks developed around sporting or leisure activities in connection with ski areas; for example, we excluded innovations developed by hotels or residences. Next, we identified innovations driven by an inter-organizational network and the key players who worked with the hub firms in several innovation projects. From this assessment, we identified the tourist office of Avoriaz (major international ski resort connected with the Portes du Soleil area), the Association of Portes du Soleil, and the tourist office of Les Arcs (ski resort attached to the large international ski area Paradiski) as potential hub organizations. To ensure data triangulation (Yin, 2008), we used three data sources: interviews, direct observation, and secondary data. We conducted ten semi-structured interviews, lasting an average of three hours each, during 2011 and 2012 with key network actors (hubs and innovators), including heads of the tourist office, ski areas, and ski lifts. We also interviewed actors who helped us understand the territory, while facilitating access to other key actors (e.g. Savoie Mont Blanc tourism director, member of the executive committee of Savoie Mont Blanc destination, the tourism plan coordinator of the Savoie Travel Agency). The interview guide reflected our analysis framework. 
The key topics discussed in each interview were as follows: overview of the station's current context and strategy, the role of innovation in its strategy, and the history of major innovations over the last ten years. For each successful innovation, we also asked about the elements affected by innovation, to facilitate its classification as a new offer, front-office innovation, or back-office innovation. We obtained detailed descriptions of the supportive networks, according to the four central characteristics (relationships among members, modes of regulation, architecture, and location; see Table I). For the direct observations, we pretended to be customers of the ski areas and used the innovative services studied. This passive observation enabled us to test the innovations and also capture the feelings of customers, who were full members of the service process. Finally, we used data from both internal and external sources. The internal sources included project presentations, e-mails, and internal reports, which provided a better understanding of the relationships among the actors in the innovation project. The external sources of data included websites for the ski areas and their partners, as well as digital journal articles. Thus we gained background information about the actors (e.g. history of the ski area, target clientele, network collaboration) and accurate, objective descriptions of the innovation and its promise for consumers. The data analysis was identical for each of the 12 innovation networks and each type of data. We conducted thematic coding, crossing data from prior literature and information from our research field to develop a dictionary of topics. The codification of these themes was manual; we distinguished descriptive, explanatory, and interpretive information (Miles and Huberman, 1994). 
The resulting dictionary classified the data into two broad categories. For the type of innovation, drawing on prior literature and the case studies, we assigned a code to each dimension: innovation about the output, back-office, or front-office. For the type of network, again drawing on literature and the case studies, we created four themes to which we assigned specific codes: the relationship between members, the mode of regulation, the architecture, and the geographical scope. Using these dictionary themes, we coded the transcribed interviews and secondary data. In addition, we mapped each innovation network, including its main characteristics, and submitted these maps to the interviewees. The maps represent, for each innovation project, the relationships among members, and in turn, they facilitate the identification of the roles, resources, and expertise of each partner, to enhance the interpretation and restitution of the data. We therefore characterize the observed networks and classify the identified innovations. After we present the main characteristics of the 12 networks we studied, we analyse their differences and how they developed new offers, front-office innovations, or back-office innovations. Characteristics of ski area innovation networks The 12 innovations that we identified in the two ski areas appear in Table III. In terms of innovation types, new offers are the most prevalent (seven innovations), followed by front-office innovations (three) and back-office innovations (two). This finding is not surprising, considering the intensified competition among ski resorts, which requires each resort to produce visible innovations to retain demanding, novelty-seeking existing customers and attract new customers who are eager for new experiences (Clydesdale, 2007). 
According to the director of the Avoriaz tourist office (Portes du Soleil resort), "Avoriaz has a reputation as an innovative ski resort since its genesis and keeps innovation as its permanent genetic code; it must continually deliver new offers to its customers. Innovations must be visible to them. In addition, new offers based on events both optimize resort usage, taking into account the specificities of the different European schedules, and create awareness. It is a double blow". Such new offers help attract new customers, in both winter and summer (e.g. beginners attracted by a "You can ski" package, wealthy customers who prefer Paradiski premium, families in summertime with the Multipass). The front-office innovations seek to improve customer satisfaction by introducing new hardware support (a new slope, new wood modules with the Stash, a new high-performance ski lift connecting two ski areas with the Vanoise Express). They also involve training tools for frontline staff, which help them provide customers with more relevant knowledge of the offer. Sustainability is a key component of some of these innovations too. Back-office innovations are fewer but still useful for uniting stakeholders, because, as one interviewee noted, "in a ski resort, we need all the players to be involved and go in the same direction; it is essential to develop innovations in communication tools to create unity between all those stakeholders. Some are in a situation of asymmetric information, so it is necessary to facilitate the transfer of information". The network architecture is centralized in all our cases. It thus appears that innovation in a ski area is not possible without a central actor that coordinates the other members of the network. 
These hub organizations are mostly located in the ski resort, such as the Tourist Office of Avoriaz (a large international ski resort connected to the Portes du Soleil area), the Association of the Portes du Soleil, and the Tourist Office of Les Arcs (a large international resort attached to the Paradiski area). The innovation networks vary considerably in their member types. Two-thirds of them encompass organizations from industries other than winter sports that bring resources and expertise into the original network. As a corollary, we note that the vast majority of network members are located outside the resort (ten of 12 cases). Thus, it is not sufficient to collaborate with local actors to innovate. Finally, sociological modes appear to have been abandoned, in favour of economic modes. The challenge of innovation is such that it is increasingly difficult to do without contracts, explicit procedures, or clear operating rules. These initial results deserve further analyses to detect any possible links between the type of innovation developed and three network characteristics: the nature of relationships, the mode of regulation, and the geographical scope (because the architecture is centralized in all cases). Table IV summarizes the data we used to draw conclusions about these relationships. New offers For most innovations that focus on new offers, we observe that networks gather a few competitors (to benefit from scale effects generated by alliances) but rely on more partners that can provide additional resources (customers, suppliers, or companies from other industries). Regulation is economic when it involves actors outside the station; otherwise, it is sociological. However, this sociological mode can lead to malfunctions; as one interviewed actor stated, "it is sometimes hard to know exactly who should do what and how. It would be more effective and would be better for our brand if we wrote more elaborate procedures". 
The networks have a wide (national or international) geographic scope, in that the partners from other industries are rarely located within the resort. As an illustration, Figure 1 [3] maps the new offer of the Rock the Pistes festival. The objective of this innovative event was to encourage the public to discover the skiing area, through concerts scattered across the slopes. To implement this innovation, the ski area relied on the participation of various local actors (all the resorts in the ski area), as well as more geographically dispersed actors from other industries, such as the record company Warner (via its subsidiary Nous Prod); the TV channel Canal+, which created the concept but then exited the network; and the distributor in charge of ticketing, Fnac. Front-office innovations For innovations intended to improve the front-office, vertical relationships are often preferred. We found mainly economic regulation, such that members were supervised by strict safety standards (when transporting skiers) or specifications to preserve their brand (issued by Burton, an internationally renowned company). Because these suppliers, distributors, or providers of complementary resources are not located in the ski resort, network coverage is national or international. Figure 2 depicts the "Stash" innovation network. This innovation consists of a kind of natural, secured snow park, located in the heart of the forest, designed to offer skiers, snowboarders, ski schools, and families more play areas while also delivering a message about environmental protection. This innovation was developed under the leadership of the Burton company (snowboard equipment) and required the cooperation of SERMA (ski lift operator) to develop specific wood modules and use special equipment to maintain the snow. The Tourist Office of Avoriaz promotes this space and organizes events with Burton. 
Back-office innovations Inter-organizational networks that support back-office innovations gather inter-industry members located in the resort. The Tourist Office is often the hub, and the geographical proximity of members is important. We also note the use of an economic mode of regulation, especially when it comes to the distribution of income from lift activity. For example, Nirvanalps is a web portal (Figure 3) through which Les Arcs can see all available beds in the resort belonging to private owners, with constantly updated information about the state and use of this accommodation reserve. It also seeks to optimize occupancy rates at the resort, such as through promotions. Formal specifications clearly indicate to owners the benefits they will receive in return for participation, such as discounts on ski pass prices. Discussion: types of innovations introduced by certain types of networks This study, based on 12 innovation networks, provides several lessons about ski area innovation networks. In terms of architecture, the presence of a hub firm within networks prevails, regardless of the type of innovation. We find similarities with the industrial sector here (Dhanaraj and Parkhe, 2006). In transaction cost theory (Williamson, 1985), members of a network agree to delegate some authority to a central actor when the degree of uncertainty is high. In this uncertain environment, the importance of a hierarchical network is substantial, so a hub or central actor dominates exchanges and coordinates members. In contrast, such hierarchical forms lose value when the level of uncertainty is low. Therefore, the systematic presence of a hub in this study partly reflects the fact that the ski areas are already centralized around at least one key organization (e.g. ski lift company, accommodation provider), but it is also due to the recent challenges facing mountain territories. 
Traditionally, heterogeneous actors enjoyed a growing market and tended to operate in isolation. Today, competitive intensity and trends for winter sports markets make the presence of a hub necessary to drive innovation dynamics and involve multiple stakeholders in collaboration (Dhanaraj and Parkhe, 2006). In innovation networks, conflicts of interest or power games among actors are almost inevitable (Miles and Snow, 1986). A hub firm can manage disagreements and differences and facilitate the development of innovative projects. This hub organization changes, depending on the nature of the innovation project. It might be an institution (e.g. Portes du Soleil association, tourist office) or a large company that owns most or some key element of the value chain activities (Compagnie des Alpes, Pierre & Vacances). Despite the presence of some large companies, local organizations or public operators often drive tourism innovations (Hjalager, 2010), though we did not observe any small- or medium-sized enterprises driving networks; instead, they often appear in a situation of high dependence. It is difficult for such firms to create more favourable environments through political activities, such as lobbying (Pfeffer and Salancik, 1978). Regarding the regulation mode, it appears that the economic mode is more favoured than the sociological mode, which requires trust among members (Gulati, 1998). The sociological mode only helps coordinate well-known local actors. Casanueva and Galan Gonzalez (2004) show, in the shoe industry, that network firms exchange tacit information only with firms with which they maintain stronger social and business links. However, the use of economic regulations reflects a change in the mode of operation of ski resorts. Originally, ski resorts were characterized by informal networks based on geographical and cultural proximities. These networks could be described as clans (Ouchi, 1980). 
However, with the retirement of the first generation of business owners, the arrival of foreign companies driven more by economic and financial considerations (Cattelin and Thevenard-Puthod, 2006), increasing competition, and the imperative to innovate, the control mode changed to economic. This choice also reflects the composition of the networks, which comprise geographically distant members selected according to their complementary resources and skills. The resource-based view can also explain this evolution, in that the sustainability of a ski resort depends on its ability to acquire and maintain necessary resources. Moreover, the difficulty of protecting innovations reinforces the need for rational, economic relationships between members of an innovation network. With regard to the two other network dimensions, different types emerged according to the type of innovation developed. Regarding the nature of the relationships, front-office innovation networks appeared more vertical, in that they aim to make service qualities more tangible (improving physical evidence) or convince customers of their quality (staff actions, such as those of tour operator agents). These networks also involve both upstream members (a supplier that provides technology) and downstream members (a distributor with which the resort innovates jointly). In contrast, new offer networks mostly span industries. This trait is not surprising, because by definition, a holiday stay entails different types of services (e.g. accommodation, restaurant, ski lift, equipment rental, tourist office; Gibson et al., 2005; Pavlovich, 2003; Saxena, 2005; Scott et al., 2008; Svensson et al., 2005). However, beyond these traditional providers, ski resorts seek differentiation and increasingly use actors that are not part of the winter sports tourism industry (e.g. musical production companies, waterparks). New offers also require more horizontal coordination among resorts in the same ski area.
These resorts must manage the cooperation-competition duality, or "co-opetition" (Brandenburger and Nalebuff, 1995). To create value and innovation, ski resorts can no longer act in isolation but must recognize their inter-dependence (Lado et al., 1997). Research has identified links between the nature of relationships and the type of innovation developed in an industry (Gemunden et al., 1996); here we note the specificities of service innovations: partners solicited for service innovations tend to come from industries other than the focal one. Moreover, according to cooperation studies, partner selection takes place mainly at the beginning of the cooperation (Reuer et al., 2002), but in the networks we studied, the selection of members occurred throughout the innovation project, depending on newly arising needs. During the selection phase, the main criteria are the resources and skills of each partner. This result reflects a resource-based view, a proactive approach in which a company is aware of its lack of innovation resources and skills and therefore decides to bring in partners. In our research, this approach is often initiated by a public actor - the tourist office. In addition, the hub chooses its partners according to their reputations and the extent of their networks. Finally, to stand out from competitors, ski resorts expand their networks to members who are distant in their activities, which reflects the geographical scope of the network. Mountain resorts used to have highly localized operations and were sometimes treated as localized productive systems, fully embedded in their territory. Now, members of innovation networks are mostly located outside the resort, in France or abroad. Although proximity traditionally reduces coordination costs (Dyer and Singh, 1998) and facilitates informal exchanges and knowledge transfers (Bell and Zaheer, 2007; Von Hippel, 1994), for innovations developed around ski areas, local partners are no longer sufficient.
Rather, resorts must find creative partners that can provide resources and skills that do not reside in the resort. Alliances with foreign partners also provide a way to internationalize the resort and find growth overseas. One type of innovation represents an exception to this rule, though: back-office innovations supported by local networks. These innovations, not visible to clients (and therefore unable to differentiate the ski area), are designed to integrate and facilitate coordination among stakeholders in the tourist stay. It is therefore not surprising that these innovations mainly involve local organizations that must be particularly efficient in their information systems. Back-office innovations require IT companies that are geographically close (if not in the heart of the resort, they remain in the same geographical area). Table V summarizes these results and characterizes the networks formed by the winter sports resorts, according to the type of innovation developed. Two main contributions emerge from our research. First, we characterize service innovation networks in the winter sports industry, a little-studied service context. This study of 12 innovations implemented by two ski areas highlights the link between the type of innovation deployed and the type of network formed. Networks that seek to produce new offers, front-office innovations, and back-office innovations differ in the partners involved (competitors, suppliers, distributors, or actors outside the industry) and geographical scope (local, national, or international). However, a central player is always in charge of orchestrating the exchanges among partners, regardless of the type of innovation. This pivotal role is often played by a public organization (tourist or local institution) with some local legitimacy (Kumar and Das, 2007). Second, we specify the link between the type of innovation deployed and the type of network formed.
Two key dimensions (nature of the relationships and geographical scope) appear to differ, depending on the type of innovation developed. Our research thus fills a gap by complementing existing works on the characteristics of innovation networks, which hitherto have focused more on innovations in manufacturing. Our research shows for the first time that implementing certain types of service innovations requires the creation of inter-organizational networks with specific characteristics. At the managerial level, identifying the four dimensions of an innovation network is crucial, because these dimensions differ depending on the type of innovation. For example, ski resorts that want to develop new offers must be open to external partners (i.e. companies that do not belong to the tourism industry or are not geographically proximate to the resort). The openness of the network to such "unusual" partners facilitates the design and implementation of more radical innovations. Ski resorts that want to innovate also must recognize the important role of the hub, which drives the innovation dynamics, selects the best members, and coordinates their actions. From a methodological perspective, we selected the tourism industry to test our framework, because changes in this sector have obliged firms to innovate. Ski resorts effectively represent tourism destinations, but we also acknowledge that the results may differ in other service settings (e.g. banking, hospitals). In addition, our relatively small sample consists of in-depth interviews with knowledgeable respondents, but it is not exhaustive. Because of the confidential and strategic nature of our interview topics, it was challenging to interview all partners in each network. Instead, we focused on the hub firm and its main partners. Additional research could use a larger sample and adopt a quantitative methodology.
Beyond these traditional limitations associated with qualitative methodologies, we also note a limitation of our questioning method: this study addresses the link between innovation types and the characteristics of inter-organizational networks but not the possible reciprocal link. In some situations, networks may determine the innovations implemented. Further research should consider this reciprocity and delve deeper into the relationship between inter-organizational networks and innovation, as well as its direction. It also might be useful to expand the research field to other ski areas to examine, with added cases, other innovations that might be deployed by other types of networks. For example, the nature of the hub organization may differ in countries where public actors are less involved in ski resort management (e.g. North America). The network structures identified herein should also be validated with a larger sample.

Figure 1: New offer map ("Rock the Pistes"). Figure 2: Front-office innovation map (The Stash). Figure 3: Back-office innovation map (Nirvanalps). Table I: Analysis framework of the link between network characteristics and innovation types. Table II: Innovation network characteristics. Table III: The 12 innovation networks studied. Table IV: Network characteristics and innovation type. Table V: Network characteristics according to nature of innovations.
A typology of service innovations and a network analysis framework allowed us to examine the innovations implemented by two major French ski areas: Portes du Soleil and Paradiski. In total, the authors analyse the structure of 12 innovation networks.
[SECTION: Findings] Innovation is a powerful strategic tool not just for technology firms but also for service providers (Mothe and Nguyen, 2012; Ordanini and Parasuraman, 2011). Through innovation, service firms seek to differentiate themselves from their competitors, conquer new markets, or retain existing customers. However, service innovations are not patentable, so innovative firms must find other ways to protect their ideas. One method is to deploy inter-organizational networks. Through inter-firm cooperation, service providers can benefit from complementarities with their partners, achieve economies of scale (Calia et al., 2007), share the costs and risks associated with developing an innovation, and ultimately make it easier to exploit their competitive advantage (Dyer and Singh, 1998). Moreover, cooperation can create barriers to entry and make it difficult for others to imitate the innovation, because exactly reproducing a network of inter-organizational relationships designed to innovate is nearly impossible (Borgatti and Foster, 2003). Despite these potential benefits of, and issues related to, cooperation for service innovation development, innovation management research mainly focuses on technological innovation networks (Ethiraj et al., 2005; Gilsing and Nooteboom, 2006) and ignores the constellations of actors available for service innovation. Services remain the "poor relative" in the innovation management literature (Gallouj and Weinstein, 1997). Yet services demand specific insights, particularly given their immateriality and interactivity, such that research findings pertaining to other industries do not always transfer to services (Sundbo, 1997). In response, this exploratory study seeks to highlight the characteristics of service innovation networks and thereby answer two key research questions: RQ1. What are the primary characteristics of service innovation networks? RQ2.
Does the implementation of certain types of innovations require certain types of networks? We analyse innovations implemented by two French ski areas: Portes du Soleil and Paradiski. The winter sports tourism industry is especially relevant for addressing our research questions, because the many changes it has undergone in the past 15 years have required ski resorts to find new routes to innovation, including through collaborations with partners (Petrou and Daskalopoulou, 2013). In the next section, we review literature on innovation in services and provide a summary of the predominant characteristics of inter-organizational relationships, which underlies our analysis framework. Next, we explain the importance of investigating winter sports tourism and detail the methodology we use for this study. Finally, we discuss the notable characteristics of innovation networks before concluding with some implications, limitations, and directions for further research. In addition to acknowledging the specificities of service innovation, we characterize inter-organizational networks according to four dimensions: nature of the relationship, regulation mode, architecture, and geographical scope.

Characteristics and types of service innovations

Service innovations present a unique character that is less tangible than product or industrial innovations, with many incremental and architectural versions (De Vries, 2006). Social or managerial innovations (Hamel, 2006) are not always visible outside the organization. To improve the identification of innovations in services, prior literature suggests several classifications, most of which rely on a single dimension, such as:

- The element affected by the innovation (product, process, or organization; Belleflamme et al., 1986; Damanpour et al., 2009; Favre-Bonte et al., 2014; Hamdouch and Samuelides, 2001). Garcia and Calantone (2002) call this dimension the "new what".
- Innovativeness (Birkinshaw et al., 2008; Favre-Bonte et al., 2014; Garcia and Calantone, 2002), which represents a measure of the degree of "newness" of an innovation (e.g. highly innovative vs less innovative; new to the world vs new to the adopting unit vs new to the industry), combined with its risk level.
- The way the innovation is produced (e.g. with or without customer participation; Sundbo and Gallouj, 1998).

For winter sports tourism services, it is often difficult to identify which resorts represent the source of innovations, because there are no formal intellectual property rights, and many firms might claim to have originated new concepts or services. Therefore, it is challenging to assess the degree of novelty of an innovation, and we focus instead on the elements affected by innovation - that is, the "new what". This dimension seems relatively more objective, from an innovation perspective. For this focus on the elements affected by innovation, we use the service delivery system model (Langeard et al., 1981). Unlike a blueprint approach (Bitner et al., 2008), which includes time and various other operations, the service delivery system model focuses on the role of the client, in interaction with the service company. It thus enables us to go beyond a conventional product/process distinction, in that it separates process elements that are visible to clients from those that are not (Favre-Bonte et al., 2014). Specifically, it includes three main components: back-office, front-office, and output. The back-office component (i.e. internal organization) includes all traditional functions of a company that are invisible to the customer (e.g. marketing services, human resources, purchasing) and their operations (e.g. working methods, equipment, information systems).
The front-office component instead comprises all elements that are visible to clients and that make the service more tangible, such as the staff (employees with whom clients interact), physical evidence (equipment used by the staff or customers in service delivery, such as machines, robots, furniture, signage, or more generally the premises in which the service gets delivered), and the customers themselves, who are more or less involved in service production (e.g. define the problem, engage in operational tasks) and can interact with other clients. Finally, the system delivers an output: the service offered to the customer. For this research, we focus on the main elements pertaining to an innovation and acknowledge that its deployment can affect different parts of the service, more or less simultaneously, with cascading effects (Barras, 1990; Damanpour and Evan, 1984; Fritsch and Meschede, 2001). However, our classification includes only those components that are most important to an innovation or represent the source of the innovation process (i.e. the component the firm seeks to improve through innovation). We thus consider three types of innovations: new offers, front-office innovations, and back-office innovations. Because developing innovations often requires closer cooperation among various partners to access new resources and skills (Stieglitz and Heine, 2007), we also consider the different forms that service innovation networks can take.

Heterogeneity of inter-organizational network forms

Inter-organizational networks provide a way for firms to achieve economies of scale (Powell, 1987) and access new resources and skills (Stieglitz and Heine, 2007). For this study, we define inter-organizational networks as sets of at least three organizations, linked by long-term exchange relationships and by a sense of belonging to a collective entity (Grandori and Soda, 1995).
The different forms of inter-organizational networks can be characterized according to four dimensions: the nature of the relationships among the members, the mode of regulation, the architecture, and the geographical scope.

Relationship type

Relationships among partners can take many forms (Inkpen and Tsang, 2005). In a horizontal relationship, members build relationships with competitors to share the same resources, whereas in a vertical relationship, their aim is to transfer additional resources between a client and a supplier. Finally, in an "inter-industry" relationship, the networks encompass potentially complementary organizations that are neither competitors nor connected by customer-supplier relationships; members of such networks seek to share skills or jointly promote a resource. These three "pure" forms of inter-organizational networks can also be combined (Gomes-Casseres, 2003); for example, travel services networks bring together airlines (horizontal relationships); tour operators, car rental agencies, and hotel chains (vertical relationships); and even banking and financial institutions (inter-industry relationships) to provide customers with a "global" offer. These different relationship types constitute a central dimension in traditional innovation management research (Gemunden et al., 1996; Nieto and Santamaria, 2007). Gemunden et al. (1996) examine the link between the type of relationship (e.g. partners, competitors, suppliers, laboratories) and the type of innovation developed. For process innovations, they note the importance of integrating all partners, particularly those connected by customer-supplier relationships. In contrast, product innovations require the intervention of technical partners. However, the results of their research, conducted in an industrial sector, may not transfer to services, for which the technical dimension is not always central.

Regulation mode

Regulation mode refers to the coordination mechanisms implemented.
Economic regulation includes formal, explicit, and written mechanisms. These mechanisms come in several forms, such as standard operating procedures, technical reports, cost accounting systems, budgets and planning, contracts, and confidentiality agreements (Das and Teng, 1998; Gulati, 1998). Contracts can play a key role in inter-organizational relationships that share specific assets. In contrast, sociological regulation is based on adjustment mechanisms, trust, and clan logic. Regulatory mechanisms are thus rather implicit and verbal and include the establishment of joint teams, seminars, meetings, personnel transfers, and mechanisms for shared decision making (Grandori and Soda, 1995). These informal methods have several advantages over formal methods, such as lower transaction costs, increased strategic flexibility, and reduced risk of conflict (Nooteboom et al., 1997). Formal mechanisms, by contrast, are often problematic for the deployment of certain types of innovations, such as exploratory ones (Nooteboom, 2004): an exploratory innovation is inherently uncertain, and it is difficult to write a contract for an output that is unknown. In the context of service innovation, we question whether the regulation mode is always the same or whether it depends on the type of innovation.

Architecture

An inter-organizational network can also be characterized by its structure or architecture. Two types of networks are commonly distinguished, according to the degree of power sharing: centralized and decentralized. In centralized networks, all sources of information are centralized by a single, often large company. There is a formal organization (i.e. focal firm, hub firm, strategic agency, or core) that regulates transactions within the structure (Dhanaraj and Parkhe, 2006; Jarillo, 1993; Lorenzoni and Baden-Fuller, 1995; Miles and Snow, 1986).
This hub firm performs three functions: first, designing the value chain, choosing the members of the network, and setting the strategic direction; second, coordinating the value chain, optimizing operational links among members of the network, limiting the administrative costs inherent in hierarchy, and maintaining market-based modes of coordination; and third, controlling the value chain and deterring opportunistic behaviour that could disrupt the network's efficiency. In decentralized networks, by contrast, power is more evenly distributed among members. In industrial sectors, the presence of a hub firm is essential (Dhanaraj and Parkhe, 2006) because it helps define and make strategic choices. In the absence of authority or a central player, decision making is slower, and it is more difficult to define strategic choices because of the potential differences among partners. For service innovations, we investigate whether the presence of a hub firm is similarly essential to ensure the sustainability of the project, regardless of the type of innovation deployed.

Geographic scope

Finally, the fourth dimension that describes a network is its geographical scope - that is, the geographical proximity of the partners. A network can be local, national, or international. We retain this feature because much research has emphasized the importance of geographical proximity among members of a network for its proper functioning (Autant-Bernard, 2001; Fritsch and Lukas, 2001). Studies examine the impact of territory on the formation and operation of networks (Autant-Bernard, 2001; Dunning and Mucchielli, 2002; Fritsch and Lukas, 2001) and conclude that value creation increases when the network achieves territorial fit. Proximity promotes flexibility, frequent interactions among members, and trust (Bell and Zaheer, 2007).
Some innovation projects require face-to-face interactions, because knowledge is more easily transmitted in a small, restricted region (Von Hippel, 1994). In addition, considering the differences across countries in terms of culture, customs, and laws, international learning can be more difficult and delay the process of innovation. However, other research stipulates that the transfer of knowledge does not necessarily require geographical proximity (Feldman, 1994). The development of information and communication technologies in particular allows international networks to function alongside local clusters or districts. In summary, "the network form of organization has profoundly impacted how companies innovate" (Dhanaraj and Parkhe, 2006, p. 660), and innovation networks are critical, as is widely recognized by both empirical and theoretical contributions. Despite the increasingly prominent role of service activities in productive systems though (Gallouj and Weinstein, 1997), most existing research still focuses on manufacturing activities. Because services have specific properties (Sundbo, 1997), we seek to extend this literature by highlighting the characteristics of networks built to develop service innovations. Furthermore, we analyse the link between the type of networks that emerge and the types of innovation that arise from the services industry. Specifically, we address whether the implementation of certain types of innovations (new offers, front-office innovations, back-office innovations) requires the creation of inter-organizational networks with unique characteristics. In so doing, we broach an unexplored issue for network management, with notable implications for research into network theory, strategy, and services innovation management. Table I summarizes our analysis framework, including the element affected by innovation (the "new what") and the characteristics of the networks developed to achieve it. 
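To make the framework concrete, the three innovation types (the "new what") and the four network dimensions summarized in Table I can be encoded as a small data model. The sketch below is purely illustrative: the class and label names are ours, not the authors' formal coding scheme, and the example instance reflects the Nirvanalps back-office network described in this article.

```python
from dataclasses import dataclass
from enum import Enum

class InnovationType(Enum):              # the "new what" dimension
    NEW_OFFER = "new offer"
    FRONT_OFFICE = "front-office"
    BACK_OFFICE = "back-office"

class RelationshipType(Enum):
    HORIZONTAL = "horizontal"            # ties between competitors
    VERTICAL = "vertical"                # client-supplier ties
    INTER_INDUSTRY = "inter-industry"    # complementary, non-competing partners

class RegulationMode(Enum):
    ECONOMIC = "economic"                # formal, written mechanisms
    SOCIOLOGICAL = "sociological"        # trust and clan logic

class Architecture(Enum):
    CENTRALIZED = "centralized"          # a hub firm coordinates exchanges
    DECENTRALIZED = "decentralized"      # power more evenly shared

class GeographicScope(Enum):
    LOCAL = "local"
    NATIONAL = "national"
    INTERNATIONAL = "international"

@dataclass
class InnovationNetwork:
    name: str
    innovation_type: InnovationType
    relationships: list                  # a network may combine relationship types
    regulation: RegulationMode
    architecture: Architecture
    scope: GeographicScope

# Example instance: the Nirvanalps network (local, inter-industry,
# economically regulated, centralized around the Les Arcs tourist office).
nirvanalps = InnovationNetwork(
    name="Nirvanalps",
    innovation_type=InnovationType.BACK_OFFICE,
    relationships=[RelationshipType.INTER_INDUSTRY],
    regulation=RegulationMode.ECONOMIC,
    architecture=Architecture.CENTRALIZED,
    scope=GeographicScope.LOCAL,
)
```

Each of the 12 networks studied could be recorded as one such instance, making cross-case comparison of dimension values straightforward.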
We begin by explaining why we examine the winter sports tourism sector in this study and the specificities of this service activity. We then present the methodology we used to collect and process the study data.

Winter sports tourism

Services are heterogeneous, so a parallel study of several sectors cannot offer meaningful comparisons (Djellal and Gallouj, 2008; Favre-Bonte et al., 2014). Instead, we focus on a single service activity, tourism, which has substantial economic impacts[1] and offers fertile ground for assessing innovation networks (Tremblay, 1998). Winter sports tourism is inherently heterogeneous, involving the coordination of many people in its production-distribution process. A winter sports resort is a complex, original system that combines private (e.g. ski lift operators, accommodation providers, transport, ski rental shops) and public partners that own complementary resources and competences (Svensson et al., 2005). Promoting a destination largely depends on the partners' ability to integrate fragmented supply into a single, coherent product (Gibson et al., 2005; Pavlovich, 2003; Saxena, 2005; Scott et al., 2008). The nature of this tourism product affirms the central role of coordination activities (Lynch and Morrison, 2007). In addition, this intrinsic characteristic is reinforced by the need to innovate in response to increased competition, which in turn demands more coordination among organizations. Ski resorts must offer new sports practices[2] (e.g. snow park creation, diversification beyond ski activities, snowshoeing, ice diving), more comfort (e.g. improved quality of lifts, accommodations), and animation (e.g. discovery of local heritage, cultural activities, events). These essential innovations can be enhanced by concentration and the arrival of new actors (e.g. non-family, larger firms) in the ski resort industry (Cattelin and Thevenard-Puthod, 2006).
Finally, technological improvements have contributed to the development of innovations in the tourism sector. For example, the internet has changed the nature of the relationships between organizations and the distribution of power between clients and suppliers. Although the internet has enabled many ski resorts to sell their packages directly online, it also increases transparency and rivalry across ski resorts (Favre-Bonte and Tran, 2014). Despite these challenges and the reality of innovation in winter sports resorts, service innovation researchers (Djellal and Gallouj, 2005; Gallouj and Weinstein, 1997) and innovation network researchers (Ethiraj et al., 2005; Gilsing and Nooteboom, 2006) continue to ignore innovation networks in this sector (Hjalager, 2010; Petrou and Daskalopoulou, 2013).

Data collection and analysis

Because the aim of this study is to explore the characteristics of winter tourism networks and the potential links between network characteristics and the purpose of innovations, we opted for a qualitative study, based on an analysis of 12 innovation networks. A multi-case study can handle a limited number of cases and has the advantage of breaking down each network to provide a detailed analysis. This method also provides a detailed description of the events, along with a systematic analysis of the relationships between partners. Multi-case studies require establishing a theoretical sample with common characteristics, in this case, networks that comprise at least three independent organizations (public or private) and innovations that pertain to the ski area. Seven cases pertain to the Portes du Soleil ski area and five involve the Paradiski ski area. Both areas are located in the northern French Alps and have a predominantly European clientele. In this largely homogeneous sample, we also need some variety to delineate the impact of the network characteristics on the innovations implemented.
Accordingly, these ski areas have different modes of governance - one is centralized around a mid-sized company (Compagnie des Alpes), whereas the other is more collegial and associative; they are located in two separate territories (Franco-Swiss and 100 per cent French); and they differ in the number of ski resorts they run. We also took care to select networks of different sizes and ages in choosing the ski resorts (see Table II). The initial data collection aimed to identify innovation networks developed in these two ski areas. To reduce the complexity of the analysis, we focused primarily on innovation networks developed around sporting or leisure activities in connection with ski areas; for example, we excluded innovations developed by hotels or residences. Next, we identified innovations driven by an inter-organizational network and the key players who worked with the hub firms in several innovation projects. From this assessment, we identified the tourist office of Avoriaz (major international ski resort connected with the Portes du Soleil area), the Association of Portes du Soleil, and the tourist office of Les Arcs (ski resort attached to the large international ski area Paradiski) as potential hub organizations. To ensure data triangulation (Yin, 2008), we used three data sources: interviews, direct observation, and secondary data. We conducted ten semi-structured interviews, lasting an average of three hours each, during 2011 and 2012 with key network actors (hubs and innovators), including heads of the tourist office, ski areas, and ski lifts. We also interviewed actors who helped us understand the territory, while facilitating access to other key actors (e.g. Savoie Mont Blanc tourism director, member of the executive committee of Savoie Mont Blanc destination, the tourism plan coordinator of the Savoie Travel Agency). The interview guide reflected our analysis framework. 
The key topics discussed in each interview were as follows: an overview of the resort's current context and strategy, the role of innovation in its strategy, and the history of major innovations over the last ten years. For each successful innovation, we also asked about the elements affected by the innovation, to facilitate its classification as a new offer, front-office innovation, or back-office innovation. We obtained detailed descriptions of the supportive networks, according to the four central characteristics (relationships among members, modes of regulation, architecture, and location; see Table I). For the direct observations, we posed as customers of the ski areas and used the innovative services studied. This passive observation enabled us to test the innovations and also capture the feelings of customers, who were full members of the service process. Finally, we used data from both internal and external sources. The internal sources included project presentations, e-mails, and internal reports, which provided a better understanding of the relationships among the actors in the innovation project. The external sources included websites for the ski areas and their partners, as well as digital journal articles. Thus we gained background information about the actors (e.g. history of the ski area, target clientele, network collaboration) and accurate, objective descriptions of each innovation and its promise for consumers. The data analysis was identical for each of the 12 innovation networks and each type of data. We conducted thematic coding, crossing data from prior literature and information from our research field to develop a dictionary of topics. The codification of these themes was manual; we distinguished descriptive, explanatory, and interpretive information (Miles and Huberman, 1994).
The resulting dictionary classified the data into two broad categories. For the type of innovation, drawing on prior literature and the case studies, we assigned a code to each dimension: innovation in the output, the back office, or the front office. For the type of network, drawing on the literature and the case studies, we created four themes to which we assigned specific codes: the relationship between members, the mode of regulation, the architecture, and the geographical scope. Using these dictionary themes, we coded the interview transcripts and secondary data. In addition, we mapped each innovation network, including its main characteristics, and submitted these maps to the interviewees. The maps represent, for each innovation project, the relationships among members; in turn, they facilitate the identification of the roles, resources, and expertise of each partner, to enhance the interpretation and restitution of the data. We therefore characterize the observed networks and classify the identified innovations. After presenting the main characteristics of the 12 networks we studied, we analyse their differences and how they developed new offers, front-office innovations, or back-office innovations.

Characteristics of ski area innovation networks

The 12 innovations we identified in the two ski areas appear in Table III. In terms of innovation types, new offers are the most prevalent (seven innovations), followed by front-office innovations (three) and back-office innovations (two). This finding is not surprising, considering the intensified competition among ski resorts, which requires each resort to produce visible innovations to retain demanding, novelty-seeking existing customers and attract new customers who are eager to enrich their experience (Clydesdale, 2007).
According to the director of the Avoriaz tourist office (Portes du Soleil resort), "Avoriaz has a reputation as an innovative ski resort since its genesis and keeps innovation as its permanent genetic code; it must continually deliver new offers to its customers. Innovations must be visible to them. In addition, new offers based on events both optimize resort usage, taking into account the specificities of the different European schedules, and create awareness. It is a double blow". Such new offers help attract new customers, in both winter and summer (e.g. beginners attracted by a "You can ski" package, wealthy customers who prefer Paradiski premium, families in summertime with the Multipass). The front-office innovations seek to improve customer satisfaction by introducing new hardware support (a new slope, new wood modules with the Stash, a new high-performance ski lift connecting two ski areas with the Vanoise Express). They also involve training tools for frontline staff, which help them provide customers with more relevant knowledge of the offer. Sustainability is also a key component of some of these innovations. Back-office innovations are fewer but still useful for uniting stakeholders, because, as one interviewee noted, "in a ski resort, we need all the players to be involved and go in the same direction; it is essential to develop innovations in communication tools to create unity between all those stakeholders. Some are in a situation of asymmetric information, so it is necessary to facilitate the transfer of information". The network architecture is centralized in all our cases. It thus appears that innovation in a ski area is not possible without a central actor that coordinates the other members of the network.
These hub organizations are mostly located in the ski resort, such as the Tourist Office of Avoriaz (large international ski resort connected to the Portes du Soleil area), the Association of the Portes du Soleil, and the Tourist Office of Les Arcs (large international resort attached to the Paradiski area). The innovation networks vary considerably in their member types. Two-thirds of them encompass organizations from industries other than winter sports that bring resources and expertise into the original network. As a corollary, we note that the vast majority of the members of the networks are located outside the resort (ten of 12 cases). Thus, it is not sufficient to collaborate with local actors to innovate. Finally, sociological modes appear to have been abandoned in favour of more economic modes. The challenge of innovation is such that it is increasingly difficult to do without contracts, explicit procedures, or clear operating rules. These initial results deserve further analysis to detect any possible links between the type of innovation developed and three network characteristics: the nature of relationships, the mode of regulation, and the geographical scope (because the architecture is centralized in all cases). Table IV summarizes the data we used to draw conclusions about these relationships.

New offers

For most innovations that focus on new offers, we observe that networks gather a few competitors (to benefit from scale effects generated by alliances) but rely on more partners that can provide additional resources (customers, suppliers, or companies from other industries). Regulation is economic when it involves actors outside the station; otherwise, it is sociological. However, this sociological mode can lead to malfunctions; as one interviewed actor stated, "it is sometimes hard to know exactly who should do what and how. It would be more effective and would be better for our brand if we wrote more elaborate procedures".
The networks have an increasingly wide (national or international) geographic scope, in that the partners from other industries are rarely located within the resort. As an illustration, Figure 1 [3] maps the new offer of the Rock the Pistes festival. The objective of this innovative event was to encourage the public to discover the skiing area through concerts scattered on the slopes. To implement this innovation, the ski area relied on the participation of various local actors (all the resorts in the ski area), as well as more geographically dispersed actors from other industries, such as the record company Warner (via its subsidiary Nous Prod); the TV channel Canal+, which created the concept but then exited the network; and the distributor in charge of ticketing, Fnac.

Front-office innovations

For innovations intended to improve the front office, vertical relationships are often preferred. We found mainly economic regulation, such that members were supervised by strict safety standards (when transporting skiers) or specifications to preserve a brand (issued by Burton, an internationally renowned company). Because these suppliers, distributors, or providers of complementary resources are not located in the ski resort, network coverage is national or international. Figure 2 depicts the "Stash" innovation network. This innovation consists of a kind of natural, secured snow park, located in the heart of the forest, designed to offer skiers, snowboarders, ski schools, and families more play areas while also delivering a message about environmental protection. This innovation was developed under the leadership of the Burton company (snowboard equipment) and required the cooperation of SERMA (the ski lift operator) to develop specific wood modules and use special equipment to maintain the snow. The Tourist Office of Avoriaz promotes this space and organizes events with Burton.
Back-office innovations

Inter-organizational networks that support back-office innovations gather inter-industry members located in the resort. The Tourist Office is often the hub, and the geographical proximity of members is important. We also note the use of an economic mode of regulation, especially when it comes to the distribution of income from lift activity. For example, Nirvanalps is a web portal (Figure 3) through which Les Arcs can see all available beds in the resort belonging to private owners, with constantly updated information about the state and uses of this accommodation reserve. It also seeks to optimize occupancy rates at the resort, such as through promotions. Formal specifications clearly indicate to owners the benefits they will receive in return for participation, such as discounts on ski pass prices.

Discussion: types of innovations introduced by certain types of networks

This study, based on 12 innovation networks, provides several lessons about ski area innovation networks. In terms of architecture, the presence of a hub firm within networks prevails, regardless of the type of innovation. We find similarities with the industrial sector here (Dhanaraj and Parkhe, 2006). In transaction cost theory (Williamson, 1985), members of a network agree to delegate some authority to a central actor when the degree of uncertainty is high. In this uncertain environment, the importance of a hierarchical network is substantial, so a hub or central actor dominates exchanges and coordinates members. In contrast, such hierarchical forms lose value when the level of uncertainty is low. Therefore, the systematic presence of a hub in this study partly reflects the notion that the ski areas already are centralized around at least one key organization (e.g. ski lift company, accommodation provider), but it also is due to the recent challenges facing mountain territories.
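The centralized architecture observed in every case can also be quantified with a standard network measure. The sketch below is not part of the study's analysis; it simply illustrates, with Freeman's degree centralization, how a pure star network (one hub firm tied to every member) scores 1.0, whereas a ring in which power is evenly shared scores 0.0.

```python
# Illustrative sketch (not from the article): Freeman's degree centralization
# quantifies the centralized vs decentralized distinction discussed in the text.

def degree_centralization(edges, nodes):
    """Sum of (max degree - degree) over nodes, normalized by (n-1)(n-2)."""
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(nodes)
    d_max = max(degree.values())
    return sum(d_max - d for d in degree.values()) / ((n - 1) * (n - 2))

nodes = ["hub", "a", "b", "c", "d"]
star = [("hub", m) for m in nodes[1:]]                      # hub coordinates all members
ring = [(nodes[i], nodes[(i + 1) % 5]) for i in range(5)]   # power evenly shared

print(degree_centralization(star, nodes))  # 1.0
print(degree_centralization(ring, nodes))  # 0.0
```

A larger sample of networks could be scored this way to test whether back-office, front-office, and new-offer networks differ in their degree of centralization, complementing the qualitative mapping used here.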
Traditionally, heterogeneous actors enjoyed a growing market and tended to operate in isolation. Today, competitive intensity and trends for winter sports markets make the presence of a hub necessary to drive innovation dynamics and involve multiple stakeholders in collaboration (Dhanaraj and Parkhe, 2006). In innovation networks, conflicts of interest or power games among actors are almost inevitable (Miles and Snow, 1986). A hub firm can manage disagreements and differences and facilitate the development of innovative projects. This hub organization changes, depending on the nature of the innovation project. It might be an institution (e.g. Portes du Soleil association, tourist office) or a large company that owns most or some key element of the value chain activities (Compagnie des Alpes, Pierre & Vacances). Despite the presence of some large companies, local organizations or public operators often drive tourism innovations (Hjalager, 2010), though we did not observe any small- or medium-sized enterprises driving networks; instead, they often appear in a situation of high dependence. It is difficult for such firms to create more favourable environments through political activities, such as lobbying (Pfeffer and Salancik, 1978). Regarding the regulation mode, it appears that the economic mode is more favoured than the sociological mode, which requires trust among members (Gulati, 1998). The sociological mode only helps coordinate well-known local actors. Casanueva and Galan Gonzalez (2004) show, in the shoe industry, that network firms exchange tacit information only with firms with which they maintain stronger social and business links. However, the use of economic regulations reflects a change in the mode of operation of ski resorts. Originally, ski resorts were characterized by informal networks based on geographical and cultural proximities. These networks could be described as clans (Ouchi, 1980). 
However, with the retirement of the first generation of business owners, the arrival of foreign companies based more on economic and financial considerations (Cattelin and Thevenard-Puthod, 2006), increasing competition, and the imperative to innovate, the control mode changed to economic. This choice also reflects the composition of the networks, which comprise geographically distant members, selected according to their complementary resources and skills. We can also use the resource-based view to explain this evolution, in that the sustainability of a ski resort depends on its ability to acquire and maintain necessary resources. Moreover, the difficulty of protecting innovations reinforces the need for rational, economic relationships between members of an innovation network. With regard to the two other network dimensions, different types emerged according to the type of innovation developed. For the nature of the relationships, front-office innovation networks appeared more vertical, in that they aim to make service qualities more tangible (improving physical evidence) or convince customers of their quality (staff action, such as tour operator agents). These networks also involve both upstream members (a supplier that provides technology) and downstream members (a distributor with which the firm innovates jointly). However, new offer networks mostly span industries. This trait is not surprising, because by definition, a holiday stay entails different types of services (e.g. accommodation, restaurant, ski lift, equipment rental, tourist office; Gibson et al., 2005; Pavlovich, 2003; Saxena, 2005; Scott et al., 2008; Svensson et al., 2005). However, beyond these traditional providers, ski resorts seek differentiation and increasingly use actors that are not part of the winter sports tourism industry (e.g. musical production companies, waterparks). New offers also require more horizontal coordination among resorts in the same ski area.
These resorts must manage the cooperation-competition duality, or "co-opetition" (Brandenburger and Nalebuff, 1995). To create value and innovation, ski resorts can no longer act in isolation but must recognize their inter-dependence (Lado et al., 1997). Research has identified links between the nature of relationships and the type of innovation developed in an industry (Gemunden et al., 1996); we note the specificities of service innovations. Partners solicited for innovations tend to come from industries other than the focal one. Moreover, according to cooperation studies, the selection process takes place mainly at the beginning of the cooperation (Reuer et al., 2002), but in the networks we studied, the selection of members occurred throughout the innovation project, depending on newly arising needs. During the selection phase, the main criteria are the resources and skills of each partner. This result reflects a resource-based view, a proactive approach in which a company is aware of its lack of innovation resources and skills and therefore decides to bring in partners. In our research, this approach is often initiated by a public actor - the tourist office. In addition, the hub chooses its partners according to their reputations and the extent of their networks. Finally, to stand out from competitors, ski resorts expand their networks to members whose activities are distant from their own, which relates to the geographical scope of the network. Mountain resorts used to have highly localized operations and were sometimes treated as localized productive systems, fully embedded in their territory. Now, members of innovation networks are mostly located outside the resort, in France or abroad. Although proximity traditionally reduces coordination costs (Dyer and Singh, 1998) and facilitates informal exchanges and knowledge transfers (Bell and Zaheer, 2007; Von Hippel, 1994), for innovations developed around ski areas, local partners are no longer sufficient.
Rather, resorts must find creative partners that can provide resources and skills that do not reside in the resort. Alliances with foreign partners also provide a way to internationalize the resort and find growth overseas. One type of innovation represents an exception to this rule, though: back-office innovations supported by local networks. These innovations, not visible to clients (and therefore not able to differentiate the ski area), are designed to integrate and facilitate coordination among stakeholders in the tourist stay. It is therefore not surprising that these innovations mainly involve local organizations that must be particularly efficient in their information systems. Back-office innovations require IT companies that are geographically close (if not in the heart of the resort, they remain in the same geographical area). Table V summarizes these results and characterizes the networks formed by the winter sports resorts, according to the type of innovation developed. Two main contributions emerge from our research. First, we characterize service innovation networks in the winter sports industry, a little-studied service context. This study of 12 innovations implemented by two ski areas highlights the link between the type of innovation deployed and the type of network formed. Networks that seek to produce new offers, front-office innovations, and back-office innovations differ in the partners involved (competitors, suppliers, distributors, or actors outside the industry) and geographical scope (local, national, or international). However, a central player is always in charge of orchestrating the exchanges among partners, regardless of the type of innovation. This pivotal role is often played by a public organization (tourist or local institution) with some local legitimacy (Kumar and Das, 2007). Second, we specify the link between the type of innovation deployed and the type of network formed.
Two key dimensions (nature of the relationships and geographical scope) appear to differ, depending on the type of innovation developed. Our research thus fills a gap by complementing existing work on the characteristics of innovation networks, which hitherto has focused more on innovations in manufacturing. Our research shows for the first time that implementing certain types of service innovations requires the creation of inter-organizational networks with specific characteristics. At the managerial level, identifying the four dimensions of an innovation network is crucial, because these dimensions differ depending on the type of innovation. For example, ski resorts that want to develop new offers must be open to external partners (i.e. companies that do not belong to the tourism industry or are not geographically proximate to the resort). The openness of the network to such "unusual" partners facilitates the design and implementation of more radical innovations. Ski resorts that want to innovate also must recognize the important role of the hub, which drives the innovation dynamics, selects the best members, and coordinates their actions. From a methodological perspective, we selected the tourism industry to test our framework, because changes in this sector have obliged firms to innovate. Ski resorts effectively represent tourism destinations, but we also acknowledge that the results may differ in other service settings (e.g. banking, hospitals). In addition, our relatively small sample consists of in-depth interviews with knowledgeable respondents, but it is not exhaustive. Because of the confidential and strategic nature of our interview topics, it was challenging to interview all partners in each network. Instead, we focused on the hub firm and its main partners. Additional research could use a larger sample and adopt a quantitative methodology.
Beyond these traditional limitations associated with qualitative methodologies, we also note a limitation of our questioning method. That is, this study addresses the link between innovation types and characteristics of inter-organizational networks but not the possible reciprocal link. In some situations, it may be that networks determine the innovations implemented. Further research should consider this reciprocity and delve deeper into the relationship between inter-organizational networks and innovation, as well as its direction. It also might be useful to expand the research field to other ski areas to examine, with added cases, other innovations that might be deployed by other types of networks. For example, the nature of the hub organization may differ in countries where public actors are less involved in ski resort management (e.g. North America). The network structures identified herein should be validated with a larger sample too.

Figure 1 New offer map ("Rock the Pistes")
Figure 2 Front-office innovation map (Stash)
Figure 3 Back-office innovation map (Nirvanalps)
Table I Analysis framework of the link between network characteristics and innovation types
Table II Innovation network characteristics
Table III The 12 innovation networks studied
Table IV Network characteristics and innovation type
Table V Network characteristics according to nature of innovations
The results show that, depending on the type of innovation implemented, networks differ in terms of type of partners involved and geographical scope. However, regardless of the innovation developed, it seems necessary to have a central actor to orchestrate the various partners and to use an economic regulation mode.
[SECTION: Value] Innovation is a powerful strategic tool not just for technology firms but also for service providers (Mothe and Nguyen, 2012; Ordanini and Parasuraman, 2011). Through innovation, service firms seek to differentiate themselves from their competitors, conquer new markets, or retain existing customers. However, service innovations are not patentable, so innovative firms must find other ways to protect their ideas. One method is to deploy inter-organizational networks. Through inter-firm cooperation, these service providers can benefit from complementarities with their partners, achieve economies of scale (Calia et al., 2007), share the costs and risks associated with developing an innovation, and ultimately make it easier to exploit their competitive advantage (Dyer and Singh, 1998). Moreover, cooperation can create barriers to entry and make it difficult for others to imitate the innovation, because exactly reproducing a network of inter-organizational relationships designed to innovate is nearly impossible (Borgatti and Foster, 2003). Despite the potential benefits of, and issues related to, cooperation for service innovation development, innovation management research mainly focuses on technological innovation networks (Ethiraj et al., 2005; Gilsing and Nooteboom, 2006) and ignores the constellations of actors available for service innovation. Services remain the "poor relative" of the innovation management literature (Gallouj and Weinstein, 1997). Yet services demand specific insights, particularly given their immateriality and interactivity, such that research findings pertaining to other industries do not always transfer to services (Sundbo, 1997). In response, this exploratory study seeks to highlight the characteristics of service innovation networks and thereby answer two key research questions: RQ1. What are the primary characteristics of service innovation networks? RQ2.
Does the implementation of certain types of innovations require certain types of networks? We analyse innovations implemented by two French ski areas: Portes du Soleil and Paradiski. The winter sports tourism industry is especially relevant for addressing our research questions, because the many changes it has undergone in the past 15 years have required ski resorts to find new routes to innovation, including through collaborations with partners (Petrou and Daskalopoulou, 2013). In the next section, we review literature on innovation in services and provide a summary of the predominant characteristics of inter-organizational relationships, which underlies our analysis framework. Next, we explain the importance of investigating winter sports tourism and detail the methodology we use for this study. Finally, we discuss the notable characteristics of innovation networks before concluding with some implications, limitations, and directions for further research. In addition to acknowledging the specificities of service innovation, we characterize inter-organizational networks according to four dimensions: nature of the relationship, regulation mode, architecture, and geographical scope.

Characteristics and types of service innovations

Service innovations present a unique character that is less tangible than product or industrial innovations, with many incremental and architectural versions (De Vries, 2006). Social or managerial innovations (Hamel, 2006) are not always visible outside the organization. To improve the identification of innovations in services, prior literature suggests several classifications, most of which rely on a single dimension, such as the element affected by the innovation (product, process, or organization; Belleflamme et al., 1986; Damanpour et al., 2009; Favre-Bonte et al., 2014; Hamdouch and Samuelides, 2001), a dimension Garcia and Calantone (2002) call the "new what".
Another classification relies on innovativeness (Birkinshaw et al., 2008; Favre-Bonte et al., 2014; Garcia and Calantone, 2002), a measure of the degree of "newness" of an innovation (e.g. highly innovative vs less innovative; new to the world vs new to the adopting unit vs new to the industry), combined with its risk level. A third considers the way the innovation is produced (e.g. with or without customer participation; Sundbo and Gallouj, 1998). For winter sports tourism services, it is often difficult to identify which resorts represent the source of innovations, because there are no formal intellectual property rights, and many firms might claim to have originated new concepts or services. Therefore, it is challenging to assess the degree of novelty of an innovation, and we focus instead on the elements affected by innovation - that is, the "new what". This dimension seems relatively more objective from an innovation perspective. For this focus on the elements affected by innovation, we use the service delivery system model (Langeard et al., 1981). Unlike a blueprint approach (Bitner et al., 2008), which includes time and various other operations, the service delivery system model focuses on the role of the client in interaction with the service company. Thus it enables us to go beyond a conventional product/process distinction, in that it separates process elements that are visible to clients from those that are not (Favre-Bonte et al., 2014). Specifically, it includes three main components: back office, front office, and output. The back-office component (i.e. internal organization) includes all traditional functions of a company that are invisible to the customer (e.g. marketing services, human resources, purchasing) and their operations (e.g. working methods, equipment, information systems).
The front-office component instead comprises all elements that are visible to clients and that make the service more tangible, such as the staff (employees with whom clients interact), physical evidence (equipment used by the staff or customers in service delivery, such as machines, robots, furniture, signage, or more generally the premises in which the service gets delivered), and the customers themselves, who are more or less involved in service production (e.g. define the problem, engage in operational tasks) and can interact with other clients. Finally, the system delivers an output: the service offered to the customer. For this research, we focus on the main elements pertaining to an innovation and acknowledge that its deployment can affect different parts of the service, more or less simultaneously, with cascading effects (Barras, 1990; Damanpour and Evan, 1984; Fritsch and Meschede, 2001). However, our classification includes only those components that are most important to an innovation or represent the source of the innovation process (i.e. the component the firm seeks to improve through innovation). We thus consider three types of innovations: new offers, front-office innovations, and back-office innovations. Because developing innovations often requires closer cooperation among various partners to access new resources and skills (Stieglitz and Heine, 2007), we also consider the different forms that service innovation networks can take.

Heterogeneity of inter-organizational network forms

Inter-organizational networks provide a way for firms to achieve economies of scale (Powell, 1987) and access new resources and skills (Stieglitz and Heine, 2007). For this study, we define inter-organizational networks as sets of at least three organizations, linked by long-term exchange relationships and by a sense of belonging to a collective entity (Grandori and Soda, 1995).
The different forms of inter-organizational networks can be characterized according to four dimensions: the nature of the relationships among the members, the mode of regulation, the architecture, and the geographical scope.

Relationship type

Relationships among partners can take many forms (Inkpen and Tsang, 2005). In a horizontal relationship, members build relationships with competitors to share the same resources, whereas in a vertical relationship, the aim is to transfer additional resources between a client and a supplier. Finally, in an "inter-industry" relationship, the networks encompass potentially complementary organizations that are neither competitors nor connected by customer-supplier relationships. Members of such networks are willing to share skills or promote a single resource. These three "pure" forms of inter-organizational networks also can be combined (Gomes-Casseres, 2003); for example, travel services networks bring together airlines (horizontal relationships); tour operators, car rental agencies, and hotel chains (vertical relationships); and even banking and financial institutions (inter-industry relationships) to provide customers with a "global" offer. These different relationship types constitute a central dimension in traditional innovation management research (Gemunden et al., 1996; Nieto and Santamaria, 2007). Gemunden et al. (1996) examine the link between the type of relationship (e.g. partners, competitors, suppliers, laboratories) and the type of innovation developed. For process innovations, they note the importance of integrating all partners, particularly those connected by customer-supplier relationships. In contrast, product innovations require the intervention of technical partners. However, the results of their research, conducted in an industrial sector, may not transfer to services, for which the technical dimension is not always central.

Regulation mode

Regulation mode refers to the coordination mechanisms implemented.
Economic regulation includes formal, explicit, and written mechanisms. These mechanisms come in several forms, such as standard operating procedures, technical reports, cost accounting systems, budgets and planning, contracts, and confidentiality agreements (Das and Teng, 1998; Gulati, 1998). Contracts can play a key role in inter-organizational relationships that share specific assets. In contrast, sociological regulation is based on adjustment mechanisms, trust, and clan logic. Regulatory mechanisms are thus rather implicit and verbal and include the establishment of joint teams, seminars, meetings, personnel transfers, and mechanisms for shared decision making (Grandori and Soda, 1995). These informal methods have several advantages over formal methods, such as lower transaction costs, increased strategic flexibility, and reduced risk of conflict (Nooteboom et al., 1997). In contrast, formal mechanisms are often problematic for the deployment of certain types of innovations, such as exploratory ones (Nooteboom, 2004). An exploratory innovation is inherently uncertain, and it is difficult to write a contract for an output that is unknown. In the context of service innovation, we question whether the regulation mode is always the same or whether it might depend on the type of innovation.

Architecture

An inter-organizational network can be characterized by its structure or architecture. Two types of networks are commonly distinguished, according to the degree of power sharing: centralized and decentralized. In centralized networks, all sources of information are centralized by a single, often large company. There is a formal organization (i.e. focal firm, hub firm, strategic agency, or core) that regulates transactions within the structure (Dhanaraj and Parkhe, 2006; Jarillo, 1993; Lorenzoni and Baden-Fuller, 1995; Miles and Snow, 1986).
This hub firm performs three functions: first, designing the value chain, choosing the members of the network, and setting the strategic direction; second, coordinating the value chain, optimizing operational links among members of the network, limiting the administrative costs inherent in hierarchy, and maintaining market-based coordination modes; and third, controlling the value chain and deterring opportunistic behaviour that could disrupt the network's efficiency. In decentralized networks, the architecture is more distributed, and power is shared to a greater or lesser degree. In industrial sectors, the presence of a hub firm is essential (Dhanaraj and Parkhe, 2006) because it helps define and make strategic choices. In the absence of authority or a central player, decision making is slower, and it is more difficult to define strategic choices because of the potential differences among partners. For service innovations, we investigate whether the presence of a hub firm is similarly essential to ensure the sustainability of the project, regardless of the type of innovation deployed.

Geographic scope

Finally, the fourth dimension that describes a network is its geographical scope - that is, the geographical proximity of the partners. A network can be local, national, or international. We retain this dimension because much research has emphasized the importance of geographical proximity among members of a network for its proper functioning (Autant-Bernard, 2001; Fritsch and Lukas, 2001). Studies examining the impact of territory on the formation and operation of networks (Autant-Bernard, 2001; Dunning and Mucchielli, 2002; Fritsch and Lukas, 2001) conclude that value creation increases when the network achieves territorial fit. Proximity promotes flexibility, frequent interactions among members, and trust (Bell and Zaheer, 2007).
Some innovation projects require face-to-face interactions, because knowledge is more easily transmitted within a small, restricted region (Von Hippel, 1994). In addition, considering the differences across countries in terms of culture, customs, and laws, international learning can be more difficult and can delay the innovation process. However, other research argues that the transfer of knowledge does not necessarily require geographical proximity (Feldman, 1994). The development of information and communication technologies in particular allows international networks to function alongside local clusters or districts. In summary, "the network form of organization has profoundly impacted how companies innovate" (Dhanaraj and Parkhe, 2006, p. 660), and innovation networks are critical, as both empirical and theoretical contributions widely recognize. Yet despite the increasingly prominent role of service activities in productive systems (Gallouj and Weinstein, 1997), most existing research still focuses on manufacturing activities. Because services have specific properties (Sundbo, 1997), we seek to extend this literature by highlighting the characteristics of networks built to develop service innovations. Furthermore, we analyse the link between the types of networks that emerge and the types of innovation that arise in the services industry. Specifically, we address whether the implementation of certain types of innovations (new offers, front-office innovations, back-office innovations) requires the creation of inter-organizational networks with unique characteristics. In so doing, we broach an unexplored issue for network management, with notable implications for research into network theory, strategy, and services innovation management. Table I summarizes our analysis framework, including the element affected by innovation (the "new what") and the characteristics of the networks developed to achieve it.
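To make the four-dimension framework concrete, it can be pictured as a small data model in which each observed network is one point in the typology. The following Python sketch is purely illustrative scaffolding of our own devising: the class names, enumerated values, and the example instance are paraphrases of the dimensions discussed above, not an artefact of the study itself (the example simplifies the Rock the Pistes festival network, analysed later, to a single dominant relationship type).

```python
from dataclasses import dataclass
from enum import Enum

class RelationshipType(Enum):
    HORIZONTAL = "competitors sharing the same resources"
    VERTICAL = "client-supplier transfer of additional resources"
    INTER_INDUSTRY = "complementary organizations, neither competitors nor client-supplier"

class RegulationMode(Enum):
    ECONOMIC = "formal, explicit, written (contracts, procedures, budgets)"
    SOCIOLOGICAL = "informal, trust-based (joint teams, meetings, shared decisions)"

class Architecture(Enum):
    CENTRALIZED = "a hub firm designs, coordinates, and controls the value chain"
    DECENTRALIZED = "power is more or less shared among members"

class GeographicScope(Enum):
    LOCAL = "local"
    NATIONAL = "national"
    INTERNATIONAL = "international"

@dataclass
class InnovationNetwork:
    """One inter-organizational network and the innovation it supports."""
    name: str
    innovation_type: str  # "new offer" | "front-office" | "back-office"
    relationships: RelationshipType
    regulation: RegulationMode
    architecture: Architecture
    scope: GeographicScope

# Hypothetical, simplified encoding of one case from the study
rock_the_pistes = InnovationNetwork(
    name="Rock the Pistes",
    innovation_type="new offer",
    relationships=RelationshipType.INTER_INDUSTRY,
    regulation=RegulationMode.ECONOMIC,
    architecture=Architecture.CENTRALIZED,
    scope=GeographicScope.INTERNATIONAL,
)
```

Describing every case in this uniform way is what allows the cross-case comparisons reported in Tables III-V.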
We begin by explaining why we examine the winter sports tourism sector in this study and the specificities of this service activity. We then present the methodology we used to collect and process the study data.

Winter sports tourism

Services are heterogeneous, so a parallel study of several sectors cannot offer meaningful comparisons (Djellal and Gallouj, 2008; Favre-Bonte et al., 2014). Instead, we focus on a single service activity, tourism, which has substantial economic impacts[1] and offers fertile ground for assessing innovation networks (Tremblay, 1998). Winter sports tourism is inherently heterogeneous, involving the coordination of many actors in its production-distribution process. A winter sports resort is a complex, original system that combines private partners (e.g. ski lift operators, accommodation providers, transport, ski rental shops) and public partners that own complementary resources and competences (Svensson et al., 2005). Promoting a destination largely depends on the partners' ability to integrate fragmented supply into a single, coherent product (Gibson et al., 2005; Pavlovich, 2003; Saxena, 2005; Scott et al., 2008). The nature of this tourism product affirms the central role of coordination activities (Lynch and Morrison, 2007). In addition, this intrinsic characteristic is reinforced by the need to innovate in response to increased competition, which in turn demands more coordination among organizations. Ski resorts must offer new sports practices[2] (e.g. snow park creation, diversification beyond ski activities, snowshoeing, ice diving), more comfort (e.g. improved-quality lifts, accommodations), and animation (e.g. discovery of local heritage, cultural activities, events). These essential innovations can be enhanced by industry concentration and the arrival of new actors (e.g. non-family, larger firms) in the ski resort industry (Cattelin and Thevenard-Puthod, 2006).
Finally, technological improvements have contributed to the development of innovations in the tourism sector. For example, the internet has changed the nature of the relationships between organizations and the distribution of power between clients and suppliers. Although the internet has enabled many ski resorts to sell their packages directly online, it also increases transparency and rivalry across ski resorts (Favre-Bonte and Tran, 2014). Despite these challenges and the prevalence of innovation in winter sports resorts, service innovation researchers (Djellal and Gallouj, 2005; Gallouj and Weinstein, 1997) and innovation network researchers (Ethiraj et al., 2005; Gilsing and Nooteboom, 2006) continue to overlook innovation networks in this sector (Hjalager, 2010; Petrou and Daskalopoulou, 2013).

Data collection and analysis

Because the aim of this study is to explore the characteristics of winter tourism networks and potential links between network characteristics and the purpose of innovations, we opted for a qualitative study, based on an analysis of 12 innovation networks. A multi-case study can handle a limited number of cases and has the advantage of breaking down each network to provide a detailed analysis. This method also provides a detailed description of events, along with a systematic analysis of the relationships between partners. Multi-case studies require establishing a theoretical sample with common characteristics - in this case, networks that comprise at least three independent organizations (public or private) and innovations that pertain to the ski area. Seven cases pertain to the Portes du Soleil ski area and five involve the Paradiski ski area. Both areas are located in the northern French Alps and have a predominantly European clientele. Within this largely homogeneous sample, we also need some variety to delineate the impact of network characteristics on the innovations implemented.
Accordingly, these ski areas have different modes of governance - one is centralized around a mid-sized company (Compagnie des Alpes), whereas the other is more collegial and associative; they are located in two separate territories (Franco-Swiss and 100 per cent French); and they differ in the number of ski resorts they run. We also took care to select networks of different sizes and ages in choosing the ski resorts (see Table II). The initial data collection aimed to identify innovation networks developed in these two ski areas. To reduce the complexity of the analysis, we focused primarily on innovation networks developed around sporting or leisure activities in connection with ski areas; for example, we excluded innovations developed by hotels or residences. Next, we identified innovations driven by an inter-organizational network and the key players who worked with the hub firms in several innovation projects. From this assessment, we identified the tourist office of Avoriaz (major international ski resort connected with the Portes du Soleil area), the Association of Portes du Soleil, and the tourist office of Les Arcs (ski resort attached to the large international ski area Paradiski) as potential hub organizations. To ensure data triangulation (Yin, 2008), we used three data sources: interviews, direct observation, and secondary data. We conducted ten semi-structured interviews, lasting an average of three hours each, during 2011 and 2012 with key network actors (hubs and innovators), including heads of the tourist office, ski areas, and ski lifts. We also interviewed actors who helped us understand the territory, while facilitating access to other key actors (e.g. Savoie Mont Blanc tourism director, member of the executive committee of Savoie Mont Blanc destination, the tourism plan coordinator of the Savoie Travel Agency). The interview guide reflected our analysis framework. 
The key topics discussed in each interview were as follows: an overview of the resort's current context and strategy, the role of innovation in its strategy, and the history of major innovations over the last ten years. For each successful innovation, we also asked about the elements affected by the innovation, to facilitate its classification as a new offer, front-office innovation, or back-office innovation. We obtained detailed descriptions of the supporting networks, according to the four central characteristics (relationships among members, modes of regulation, architecture, and location; see Table I). For the direct observations, we posed as customers of the ski areas and used the innovative services studied. This passive observation enabled us to test the innovations and also capture the feelings of customers, who are full participants in the service process. Finally, we used data from both internal and external sources. The internal sources included project presentations, e-mails, and internal reports, which provided a better understanding of the relationships among the actors in each innovation project. The external sources included websites for the ski areas and their partners, as well as digital journal articles. We thus gained background information about the actors (e.g. history of the ski area, target clientele, network collaboration) and accurate, objective descriptions of the innovation and its promise for consumers. The data analysis was identical for each of the 12 innovation networks and each type of data. We conducted thematic coding, crossing data from prior literature and information from our research field to develop a dictionary of topics. The codification of these themes was manual; we distinguished descriptive, explanatory, and interpretive information (Miles and Huberman, 1994).
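For readers unfamiliar with thematic coding, a keyword-based toy version of this step can be sketched in a few lines of Python. The codebook entries and excerpt sentences below are invented for illustration only; the study's actual coding was manual and relied on a far richer dictionary derived from the literature and the field.

```python
from collections import Counter

# Hypothetical codebook: theme code -> indicative keywords
codebook = {
    "innovation:new_offer": ["new package", "festival", "event"],
    "innovation:front_office": ["ski lift", "snow park", "customer-facing"],
    "network:regulation_economic": ["contract", "specifications", "procedures"],
    "network:regulation_sociological": ["trust", "informal", "meetings"],
}

def code_excerpt(excerpt):
    """Return every code whose keywords appear in the excerpt (case-insensitive)."""
    text = excerpt.lower()
    return [code for code, kws in codebook.items() if any(k in text for k in kws)]

# Invented interview excerpts, standing in for transcript material
excerpts = [
    "We signed a contract with clear specifications for the snow park modules.",
    "The festival was a completely new package for summer visitors.",
]

# Tally how often each theme is evoked across the excerpts
tally = Counter(code for e in excerpts for code in code_excerpt(e))
```

A manual coder does the same thing with judgement rather than keyword matching, which is why the study distinguishes descriptive, explanatory, and interpretive information instead of relying on surface vocabulary.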
The resulting dictionary classified the data into two broad categories: type of innovation - from prior literature and the case studies, we assigned a code to each dimension (innovation in output, back office, or front office); and type of network - from the literature and the case studies, we created four themes to which we assigned specific codes (the relationships between members, the mode of regulation, the architecture, and the geographical scope). Using these dictionary themes, we coded the interview transcripts and secondary data. In addition, we mapped each innovation network, including its main characteristics, and submitted these maps to the interviewees. The maps represent, for each innovation project, the relationships among members; in turn, they facilitate the identification of the roles, resources, and expertise of each partner, to enhance the interpretation and restitution of the data. We thereby characterize the observed networks and classify the identified innovations. After we present the main characteristics of the 12 networks we studied, we analyse their differences and how they developed new offers, front-office innovations, or back-office innovations.

Characteristics of ski area innovation networks

The 12 innovations that we identified in the two ski areas appear in Table III. In terms of innovation types, new offers are the most prevalent (seven innovations), followed by front-office innovations (three) and back-office innovations (two). This finding is not surprising, considering the intensified competition among ski resorts, which requires each resort to produce visible innovations to retain demanding, novelty-seeking existing customers and attract new customers who are eager to enrich their experience (Clydesdale, 2007).
According to the director of the Avoriaz tourist office (Portes du Soleil resort), "Avoriaz has had a reputation as an innovative ski resort since its genesis and keeps innovation as its permanent genetic code; it must continually deliver new offers to its customers. Innovations must be visible to them. In addition, new offers based on events both optimize resort usage, taking into account the specificities of the different European schedules, and create awareness. It is a double blow". Such new offers help attract new customers, in both winter and summer (e.g. beginners attracted by a "You can ski" package, wealthy customers who prefer Paradiski premium, families in summertime with the Multipass). The front-office innovations seek to improve customer satisfaction by introducing new hardware support (a new slope, new wood modules with the Stash, a new high-performance ski lift connecting two ski areas with the Vanoise Express). They also involve training tools for frontline staff, which help them provide customers with more relevant knowledge of the offer. Sustainability is also a key component of some of these innovations. Back-office innovations are fewer but still useful for uniting stakeholders, because, as one interviewee noted, "in a ski resort, we need all the players to be involved and go in the same direction; it is essential to develop innovations in communication tools to create unity between all those stakeholders. Some are in a situation of asymmetric information, so it is necessary to facilitate the transfer of information". The network architecture is centralized in all our cases. It thus appears that innovation in a ski area is not possible without a central actor that coordinates the other members of the network.
These hub organizations are mostly located in the ski resort, such as the Tourist Office of Avoriaz (a large international ski resort connected to the Portes du Soleil area), the Association of the Portes du Soleil, and the Tourist Office of Les Arcs (a large international resort attached to the Paradiski area). The innovation networks vary considerably in their member types. Two-thirds of them encompass organizations from outside the winter sports industry that bring resources and expertise into the original network. As a corollary, we note that the vast majority of network members are located outside the resort (ten of 12 cases). Thus, it is not sufficient to collaborate with local actors to innovate. Finally, sociological modes appear to have been abandoned, in favour of more economic modes. The challenge of innovation is such that it is increasingly difficult to do without contracts, explicit procedures, or clear operating rules. These initial results deserve further analysis to detect any possible links between the type of innovation developed and three network characteristics: the nature of relationships, the mode of regulation, and the geographical scope (because the architecture is centralized in all cases). Table IV summarizes the data we used to draw conclusions about these relationships.

New offers

For most innovations that focus on new offers, we observe that networks gather a few competitors (to benefit from the scale effects generated by alliances) but rely on more partners that can provide additional resources (customers, suppliers, or companies from other industries). Regulation is economic when it involves actors outside the resort; otherwise, it is sociological. However, this sociological mode can lead to malfunctions; as one interviewed actor stated, "it is sometimes hard to know exactly who should do what and how. It would be more effective and would be better for our brand if we wrote more elaborate procedures".
These networks have a wide (national or international) geographic scope, in that the partners from other industries are rarely located within the resort. As an illustration, Figure 1 [3] maps the new offer of the Rock the Pistes festival. The objective of this innovative event was to encourage the public to discover the skiing area through concerts scattered across the slopes. To implement this innovation, the ski area relied on the participation of various local actors (all the resorts in the ski area), as well as more geographically dispersed actors from other industries, such as the record company Warner (via its subsidiary Nous Prod); the TV channel Canal+, which created the concept but then exited the network; and the distributor in charge of ticketing, Fnac.

Front-office innovations

For innovations intended to improve the front office, vertical relationships are often preferred. We found mainly economic regulation, such that members were governed by strict safety standards (when transporting skiers) or specifications designed to preserve a brand (issued by Burton, an internationally renowned company). Because these suppliers, distributors, or providers of complementary resources are not located in the ski resort, network coverage is national or international. Figure 2 depicts the "Stash" innovation network. This innovation consisted of a kind of natural, secured snow park, located in the heart of the forest, designed to offer skiers, snowboarders, ski schools, and families more play areas while also delivering a message about environmental protection. This innovation was developed under the leadership of the Burton company (snowboard equipment) and required the cooperation of SERMA (the ski lift operator) to develop specific wood modules and use special equipment to maintain the snow. The Tourist Office of Avoriaz promotes this space and organizes events with Burton.
Back-office innovations

Inter-organizational networks that support back-office innovations gather inter-industry members located in the resort. The Tourist Office is often the hub, and the geographical proximity of members is important. We also note the use of an economic mode of regulation, especially when it comes to the distribution of income from lift activity. For example, Nirvanalps is a web portal (Figure 3) through which Les Arcs can see all available beds in the resort belonging to private owners, with constantly updated information about the state and use of this accommodation reserve. It also seeks to optimize occupancy rates at the resort, such as through promotions. Formal specifications clearly indicate to owners the benefits they will receive in return for participation, such as discounts on ski pass prices.

Discussion: types of innovations introduced by certain types of networks

This study, based on 12 innovation networks, provides several lessons about ski area innovation networks. In terms of architecture, the presence of a hub firm within networks prevails, regardless of the type of innovation. Here we find similarities with the industrial sector (Dhanaraj and Parkhe, 2006). In transaction cost theory (Williamson, 1985), members of a network agree to delegate some authority to a central actor when the degree of uncertainty is high. In this uncertain environment, the importance of a hierarchical network is substantial, so a hub or central actor dominates exchanges and coordinates members. In contrast, such hierarchical forms lose value when the level of uncertainty is low. Therefore, the systematic presence of a hub in this study partly reflects the fact that ski areas are already centralized around at least one key organization (e.g. ski lift company, accommodation provider), but it is also due to the recent challenges facing mountain territories.
Traditionally, heterogeneous actors enjoyed a growing market and tended to operate in isolation. Today, competitive intensity and trends in winter sports markets make the presence of a hub necessary to drive innovation dynamics and involve multiple stakeholders in collaboration (Dhanaraj and Parkhe, 2006). In innovation networks, conflicts of interest and power games among actors are almost inevitable (Miles and Snow, 1986). A hub firm can manage disagreements and differences and facilitate the development of innovative projects. This hub organization changes depending on the nature of the innovation project. It might be an institution (e.g. the Portes du Soleil association, a tourist office) or a large company that owns most of, or key elements of, the value chain (Compagnie des Alpes, Pierre & Vacances). Despite the presence of some large companies, local organizations or public operators often drive tourism innovations (Hjalager, 2010), though we did not observe any small- or medium-sized enterprises driving networks; instead, they often appear in a situation of high dependence. It is difficult for such firms to create more favourable environments through political activities, such as lobbying (Pfeffer and Salancik, 1978). Regarding the regulation mode, it appears that the economic mode is favoured over the sociological mode, which requires trust among members (Gulati, 1998). The sociological mode only helps coordinate well-known local actors. Casanueva and Galan Gonzalez (2004) show, in the shoe industry, that network firms exchange tacit information only with firms with which they maintain stronger social and business links. However, the use of economic regulation reflects a change in the mode of operation of ski resorts. Originally, ski resorts were characterized by informal networks based on geographical and cultural proximity. These networks could be described as clans (Ouchi, 1980).
However, with the retirement of the first generation of business owners, the arrival of foreign companies driven more by economic and financial considerations (Cattelin and Thevenard-Puthod, 2006), increasing competition, and the imperative to innovate, the control mode has shifted to an economic one. This choice also reflects the composition of the networks, which comprise geographically distant members selected according to their complementary resources and skills. We can also use the resource-based view to explain this evolution, in that the sustainability of a ski resort depends on its ability to acquire and maintain necessary resources. Moreover, the difficulty of protecting innovations reinforces the need for rational, economic relationships between members of an innovation network. With regard to the two other network dimensions, different types emerged according to the type of innovation developed. Concerning the nature of the relationships, front-office innovation networks appeared more vertical, in that they aim to make service qualities more tangible (improving physical evidence) or convince customers of their quality (through staff action, such as tour operator agents). These networks involve both upstream members (a supplier that provides technology) and downstream members (a distributor with which the firm innovates jointly). New offer networks, however, mostly span industries. This trait is not surprising, because by definition, a holiday stay entails different types of services (e.g. accommodation, restaurant, ski lift, equipment rental, tourist office; Gibson et al., 2005; Pavlovich, 2003; Saxena, 2005; Scott et al., 2008; Svensson et al., 2005). Beyond these traditional providers, however, ski resorts seek differentiation and increasingly use actors that are not part of the winter sports tourism industry (e.g. musical production companies, waterparks). New offers also require more horizontal coordination among resorts in the same ski area.
These resorts must manage the cooperation-competition duality, or "co-opetition" (Brandenburger and Nalebuff, 1995). To create value and innovation, ski resorts can no longer act in isolation but must recognize their interdependence (Lado et al., 1997). Research has identified links between the nature of relationships and the type of innovation developed in an industry (Gemunden et al., 1996); we note the specificities of service innovations. Partners solicited for innovations tend to come from outside the focal industry. Moreover, according to cooperation studies, partner selection takes place mainly at the beginning of the cooperation (Reuer et al., 2002), but in the networks we studied, the selection of members occurred throughout the innovation project, depending on newly arising needs. During the selection phase, the main criteria are the resources and skills of each partner. This result reflects a resource-based view: a proactive approach in which a company is aware of its lack of innovation resources and skills and therefore decides to bring in partners. In our research, this approach is often initiated by a public actor - the tourist office. In addition, the hub chooses its partners according to their reputations and the extent of their networks. Finally, to stand out from competitors, ski resorts expand their networks to include members that are geographically distant, which is reflected in the geographical scope of the network. Mountain resorts used to have highly localized operations and were sometimes treated as localized productive systems, fully embedded in their territory. Now, members of innovation networks are mostly located outside the resort, in France or abroad. Although proximity traditionally reduces coordination costs (Dyer and Singh, 1998) and facilitates informal exchanges and knowledge transfers (Bell and Zaheer, 2007; Von Hippel, 1994), for innovations developed around ski areas, local partners are no longer sufficient.
Rather, resorts must find creative partners that can provide resources and skills that do not reside in the resort. Alliances with foreign partners also provide a way to internationalize the resort and find growth overseas. One type of innovation, however, represents an exception to this rule: back-office innovations supported by local networks. These innovations, not visible to clients (and therefore unable to differentiate the ski area), are designed to integrate and facilitate coordination among stakeholders in the tourist stay. Logically, it is not surprising that these innovations mainly involve local organizations, which must be particularly efficient in their information systems. Back-office innovations require IT companies that are geographically close (if not in the heart of the resort, they remain in the same geographical area). Table V summarizes these results and characterizes the networks formed by the winter sports resorts, according to the type of innovation developed. Two main contributions emerge from our research. First, we characterize service innovation networks in the winter sports industry, a little-studied service context. This study of 12 innovations implemented by two ski areas highlights the link between the type of innovation deployed and the type of network formed. Networks that seek to produce new offers, front-office innovations, and back-office innovations differ in the partners involved (competitors, suppliers, distributors, or actors outside the industry) and geographical scope (local, national, or international). However, a central player is always in charge of orchestrating the exchanges among partners, regardless of the type of innovation. This pivotal role is often played by a public organization (a tourist office or local institution) with some local legitimacy (Kumar and Das, 2007). Second, we specify the link between the type of innovation deployed and the type of network formed.
Two key dimensions (the nature of the relationships and the geographical scope) appear to differ, depending on the type of innovation developed. Our research thus fills a gap by complementing existing work on the characteristics of innovation networks, which hitherto has focused more on innovations in manufacturing. Our research shows for the first time that implementing certain types of service innovations requires the creation of inter-organizational networks with specific characteristics. At the managerial level, identifying the four dimensions of an innovation network is crucial, because these dimensions differ depending on the type of innovation. For example, ski resorts that want to develop new offers must be open to external partners (i.e. companies that do not belong to the tourism industry or are not geographically proximate to the resort). The openness of the network to such "unusual" partners facilitates the design and implementation of more radical innovations. Ski resorts that want to innovate must also recognize the important role of the hub, which drives the innovation dynamics, selects the best members, and coordinates their actions. From a methodological perspective, we selected the tourism industry to test our framework, because changes in this sector have obliged firms to innovate. Ski resorts effectively represent tourism destinations, but we acknowledge that the results may differ in other service settings (e.g. banking, hospitals). In addition, our relatively small sample consists of in-depth interviews with knowledgeable respondents, but it is not exhaustive. Because of the confidential and strategic nature of our interview topics, it was challenging to interview all partners in each network. Instead, we focused on the hub firm and its main partners. Additional research could use a larger sample and adopt a quantitative methodology.
Beyond these traditional limitations associated with qualitative methodologies, we also note a limitation of our questioning method. That is, this study addresses the link between innovation types and the characteristics of inter-organizational networks, but not the possible reciprocal link. In some situations, it may be that networks determine the innovations implemented. Further research should consider this reciprocity and delve deeper into the relationship between inter-organizational networks and innovation, as well as its direction. It might also be useful to expand the research field to other ski areas to examine, with added cases, other innovations that might be deployed by other types of networks. For example, the nature of the hub organization may differ in countries where public actors are less involved in ski resort management (e.g. North America). The network structures identified herein should also be validated with a larger sample.

Figure 1 New offer map ("Rock the Pistes")
Figure 2 Front-office innovation map (Stash)
Figure 3 Back-office innovation map (Nirvanalps)
Table I Analysis framework of the link between network characteristics and innovation types
Table II Innovation network characteristics
Table III The 12 innovation networks studied
Table IV Network characteristics and innovation type
Table V Network characteristics according to the nature of innovations
|
This paper gives advice to managers involved in the management of tourism innovations about the network they may build. For example, ski resorts that want to develop new offers must be open to external partners (companies that do not belong to the tourism industry and/or are not geographically proximate to the resort).
|
[SECTION: Purpose] Higher integration of financial markets, the surge of foreign institutional investments, and real-time information streaming have, on the one hand, facilitated the discovery of fair market prices and the enhancement of market efficiency and, on the other hand, resulted in higher market volatility. Contagion, or the spill-over effect, is more prominent in an integrated market than in a segmented one. These developments have led to increasing concern among both potential investors and financial institutions over the assessment and management of risk. While the two-fund separation theorem asserts that a rational investor always allocates his/her endowment between a risky asset and a risk free asset in accordance with his/her degree of risk averseness, that allocation changes with the market scenario. Empirically, it has been seen that investors in general tend to become more risk averse during regimes of higher volatility and more risk loving during the counter regimes. Thus, the prime objective of utility maximization can be visualized as a combination of two sub-objectives: first, the protection of the investors' wealth during bad times and, second, the maximization of returns during good times. Portfolio insurance strategies address both these needs. Traditionally, there are two categories of portfolio insurance strategies: static and dynamic. The former chooses stock index options or futures to hedge the downside risk of the portfolio, while the latter relies on continuous rebalancing of the portfolio between the risky and risk-free assets with the objective of insuring the investments against all possible erosions. 
While the option-based portfolio insurance (OBPI) strategy is the popular example of static portfolio insurance, the constant-mix strategy, constant proportion portfolio insurance (CPPI), dynamic proportion portfolio insurance, time invariant portfolio protection, etc., are the popular examples of dynamic portfolio insurance. Among them, the CPPI strategy is still the most popular and widely practiced (Pain and Rand, 2008). Investment schemes developed using these strategies are generally coined capital protection funds or capital guaranteed funds. A common assumption in the existing literature on the CPPI strategy has been that the investment in the risk free asset grows at a constant rate in spite of frequent trading. Empirical evidence supports the fact that interest rates follow a stochastic mean reverting behavior, and thus frequent reshuffling of the portfolio between the risky and risk free assets makes it impractical to assume that the investment in the money market account will grow at a constant rate along the entire investment horizon. Considering this gap in the existing literature, the paper proposes to construct a model of the JD-MR-CPPI strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature, and evaluates the effectiveness of the algorithm during extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique. The rest of the paper is organized as follows: Section 2 briefly probes the existing literature on the CPPI strategy, Section 3 presents the theoretical framework of the JD-MR CPPI strategy, Section 4 empirically validates the model using the scenario-based simulation technique and Section 5 concludes the paper. The CPPI was introduced by Perold (1986) on fixed income assets. Black and Jones (1987) extended this method to equity-based underlying assets. 
Black and Perold (1992) further developed the algorithm by probing how transaction costs and borrowing constraints impact the insurance strategy. Their research revealed that in the absence of any transaction cost the CPPI is equivalent to investing in a perpetual American call option and that the strategy is optimal for the HARA utility function under minimum consumption constraints. Their study further revealed that as the "multiple" value increases, the payoff under the CPPI approaches that of the stop-loss strategy. CPPI strategies in the presence of jumps in stock prices were first considered by Prigent and Tahar (2005) in a diffusion model with finite intensity jumps. Following their work, Cont and Tankov (2009) quantified the gap risk of classical CPPI strategies under the assumption that the risky asset follows Merton's jump diffusion (JD) model. Furthermore, their study derived an analytical expression for the expected loss and the distribution of the losses given that a gap event has occurred. Unlike the former research, they incorporated infinite activity jumps and stochastic volatility in their algorithm. Estep and Kritzman (1988) came up with the time invariant portfolio protection (TIPP) strategy, where the floor is proposed to be dynamic, proportional to the current wealth. Thereby, this strategy is tuned toward protection of the current wealth and not the floor value. Compared to the traditional CPPI, the TIPP strategy is more conservative in terms of restricted exposure during growth phases. In line with their work, Chen and Liao (2006) propose a goal-directed CPPI strategy to combine an investor's goal-directed trading behavior with the traditional CPPI strategy. The objective is to maintain conservative exposure to the risky asset when the portfolio value approaches the pre-set goal and to take aggressive exposure when the deviation from the goal is large. 
However, the approach suffers from one major drawback: it fails to utilize the upside potential to the fullest extent. Hainaut (2010) analyzes the influence of switches of asset regimes on the CPPI performance and risk exposure under the additional assumption that the dynamics of the risky asset are driven by a hidden Markov process. The paper shows how the value at risk and the tail VaR can be retrieved by inversion of the Fourier transform of the characteristic function of the return density. Another important line of research on the CPPI strategy concentrates on the determination of the "multiple" that guides the exposure to the risky asset, and hence the overall risk exposure of the portfolio. Authors like Bertrand and Prigent (2002) and Prigent and Tahar (2005) probed the development of unconditional "multiple" estimates using the extreme value approach, while Hamidi et al. (2009) concentrated on conditional "multiple" determination, where the multiplier is defined as a function of an extended expected value-at-risk with the objective of keeping a constant exposure to risk. It is widely assumed in most of the literature on CPPI that the floor grows at a constant risk free rate. An alternative to this notion was introduced by Boulier and Kanniganti (1995) and later extended by Mkaouar and Prigent (2007), who assumed that the floor value at any given time is partially dependent on the portfolio value. The partial dependence can be explained by the fact that the floor value increases when the risky asset in the portfolio performs strongly but does not decrease during poor performance. In contrast to the previous work, the current paper assumes the floor of the model to be a stochastic mean reverting process which is guided by the movement of the short-term interest rate in the economy. 
This development is more relevant for two reasons: first, the short-term interest rate changes over time, so assuming a constant yield within each rebalancing step is not practically feasible; second, the literature has shown that the short-term interest rate tends to move opposite to the equity market. Thereby, during a bear run the floor increases at a higher rate, whereas the growth of the floor stagnates during a bull phase, which helps the algorithm capitalize on the upward potential during the growth phase and cut down the exposure during the crisis phase. The JD-MR-CPPI model: the CPPI strategy dynamically reallocates the fund between a risky asset and a risk free money market account with the objective of protecting the investors' initial capital over the investment horizon. The algorithm starts by setting a floor, which is normally kept equal to the present value of the initial investment, discounted at the risk free rate over the investment horizon. Capital allocated as the floor today will grow at the risk free rate to the initial investment at maturity. The idea is that if, through dynamic rebalancing, the fund manager can ensure that the portfolio value never falls below the floor, then irrespective of the price movement of the risky asset, the portfolio value will always remain at or above the initial investment at maturity. Suppose v_0 is the initial investment, r is the risk free rate and T is the investment horizon (in years). The initial floor f_0 is set equal to the amount which, when invested at the risk free rate, grows to v_0 at time T. The difference between the invested fund and the initial floor is called the cushion c_0: (1) c_0 = v_0 − f_0. The initial exposure E_0 in the risky asset is determined as some multiple (m) of the initial cushion, under the constraint that it cannot exceed the value of the portfolio v_0. The expression for the exposure is given by the following equation. 
The multiple (m) is an important parameter in the algorithm because it controls the exposure of the fund to the risky asset. The higher the multiple, the higher the exposure and the higher the expected return of the portfolio. But a high multiple also increases the probability of gap risk: (2) E_0 = min(v_0, m × (v_0 − f_0)). Once the exposure has been determined, that amount is invested in the risky asset and the remaining fund is parked in the money market account. The amount B_0 invested in the money market account is thus given by: (3) B_0 = v_0 − min(v_0, m × (v_0 − f_0)). Once the initial allocation has been done, the next task is to decide upon the rebalancing approach. There are two commonly used rebalancing approaches: time-based rebalancing, where rebalancing is done at a fixed time interval over the investment horizon, and move-based rebalancing, where rebalancing is done once the percentage change in exposure to the risky asset crosses a predetermined threshold value. Sometimes a combination of both approaches is used. Move-based rebalancing is suitable in a world with high transaction costs, as this method prevents unnecessary rebalancing during minor fluctuations and thereby minimizes the transaction cost. However, in this approach the threshold value has to be chosen carefully. A higher threshold value reduces the number of rebalancings, and hence the total transaction cost, but at the same time increases the probability of the portfolio value crashing through the floor. The optimal threshold limit is the one that minimizes both the transaction cost and the cost of gap risk. In the case of time-based rebalancing, the decision parameter is the rebalancing interval. A longer rebalancing interval reduces the total transaction cost but increases the cost of gap risk. Hence, the optimal rebalancing interval is the one that minimizes both the total transaction cost and the cost of gap risk. 
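The initial split in Equations (1)-(3) is mechanical and can be sketched in a few lines. This is an illustrative sketch, not code from the paper: the function name is invented, and the floor is discounted at a constant risk free rate, whereas the paper's JD-MR variant prices the floor off the stochastic CIR rate introduced later.

```python
import math

def cppi_initial_allocation(v0, r, T, m):
    """Initial CPPI split under a constant risk free rate r.

    Illustrative sketch of Equations (1)-(3); the paper's JD-MR variant
    instead sets the floor from a CIR zero coupon bond price.
    """
    f0 = v0 * math.exp(-r * T)   # floor: present value of the initial investment
    c0 = v0 - f0                 # Equation (1): cushion
    E0 = min(v0, m * c0)         # Equation (2): exposure, capped at the portfolio value
    B0 = v0 - E0                 # Equation (3): balance parked in the money market
    return f0, c0, E0, B0

# Illustrative numbers (r = 7 percent is an assumption, not the paper's calibration).
f0, c0, E0, B0 = cppi_initial_allocation(1000.0, 0.07, 1.0, 3)
```

With v_0 = Rs1,000, r = 7 percent and m = 3, roughly Rs203 goes to the risky asset and the rest to the money market account.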
At each rebalancing period (t) along the investment horizon the cushion is recalculated as the difference between the portfolio value and the floor f_t: (4) c_t = max(0, v_t − f_t), where 0 ≤ t ≤ T. If the portfolio value goes below the floor, the cushion is set to 0 (Equation (4)). The exposure is then calculated as: (5a) E_t = min(v_t, m × c_t). Replacing the value of c_t from Equation (4), we get: (5b) E_t = min(v_t, m × max(0, v_t − f_t)). The exposure can never be more than the portfolio value at any point of time. If the exposure at any rebalancing period exceeds the portfolio value, then the exposure is reset to the current portfolio value and the entire fund is invested in the risky asset. This explains the "min" function in Equations (5a) and (5b). After investing the exposure amount in the risky asset, the difference (v_t − E_t) is invested in the money market account. The procedure is repeated at each rebalancing terminal until the maturity of the portfolio. Geometric Brownian motion has been widely used to depict the diffusion process of a risky asset. But the empirical evidence of leptokurtic, fat-tailed financial asset return distributions necessitated the search for alternatives. Merton (1976) first introduced the JD model, where the diffusion process is assumed to be composed of two parts: a geometric Brownian motion with constant drift and volatility, and a compound Poisson process governing the arrival of jumps. Merton further assumed that the jump size is log-normally distributed with constant mean and variance. The jumps signify the arrival of news (both good and bad) that results in sharp movements of the asset price within a short time interval. 
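The per-period rule in Equations (4)-(5b) reduces to a few lines of code; the function and variable names below are illustrative, not from the paper:

```python
def cppi_rebalance(v_t, f_t, m):
    """One rebalancing step: cushion, capped exposure, and cash (Equations (4)-(5b))."""
    c_t = max(0.0, v_t - f_t)   # Equation (4): cushion is floored at zero
    E_t = min(v_t, m * c_t)     # Equations (5a)/(5b): exposure capped at v_t
    B_t = v_t - E_t             # remainder goes to the money market account
    return E_t, B_t
```

Below the floor, everything sits in the money market account; sufficiently far above it, the cap binds and the whole portfolio is in the risky asset.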
Following the seminal work of Merton (1976), Kou (2002) introduced the double exponential jump diffusion model (DEJD), where the arrival of news is still guided by the Poisson distribution but the jump magnitude is depicted by the double exponential distribution. Ramezani and Zeng (1998) arrived at the Pareto-Beta jump diffusion (PBJD) model, where the jumps caused by good news are assumed to follow a Pareto distribution and those caused by bad news are assumed to follow a beta distribution. Though from a conceptual point of view the DEJD and PBJD models are alike, they differ structurally. While Kou (2002) suggested using two exponential distributions with dissimilar parameters to define the jumps, Ramezani and Zeng (1998) assume that good and bad news are produced by two autonomous Poisson processes with different intensities and that the corresponding jump magnitudes are drawn from the Pareto and beta distributions, respectively. However, simplistic assumptions and ease of use make Merton's JD model a popular modeling tool among practitioners in comparison to the other, more complicated models. This model is used in the current study. Under the JD model, the incremental change in the price of the risky asset is given by: (6) dS_t = (μ − λK − σ²/2) S_t dt + σ S_t dW_t + S_t (Y_t − 1) dN_t, where W_t is a standard Wiener process whose increments dW_t have zero drift and variance equal to dt and are independent of one another in the interval [0, T]. T is the maturity period of the portfolio, μ is the constant drift and σ² is the constant variance of the risky asset. The term σ²/2 is used for convexity correction. N_t is a Poisson process with intensity λ counting the number of jumps within the time interval [0, t]. Y_t is a log-normally distributed random process signifying the jump magnitude. For a small time interval dt, the asset price jumps from S_t to S_t Y_t. 
Thus, the percentage change in the asset price caused by the jump is given by: (7) dS_t/S_t = (Y_t S_t − S_t)/S_t = Y_t − 1. The incremental change dN_t gives the number of jumps occurring within the incremental time dt, such that: (8) P(dN_t = 1) = λ dt and P(dN_t = 0) = 1 − λ dt. The log of the jump magnitude is i.i.d. normal with mean μ_J and variance σ_J². Since Y_t is log-normally distributed, the relative jump magnitude (Y_t − 1) has mean and variance given by: (9) E(Y_t − 1) = e^(μ_J + σ_J²/2) − 1 = K and (10) V(Y_t − 1) = e^(2μ_J + σ_J²)(e^(σ_J²) − 1). Under the CPPI strategy, rebalancing is done frequently to avoid losses because of gap risk, and thereby the paper assumes that the amount invested in the money market account grows at the short-term interest rate. It is widely documented that short-term interest rates for any economy are neither constant nor do they follow a random walk, but display the well-known phenomenon of mean reversion. It refers to the tendency of the interest rate to drift at a certain rate toward its long-term average. Empirically, this means that the change in the interest rate should be significantly positively correlated with the gap between the long-term mean and the current rate. Several researchers like Vasicek (1977), Dothan (1978), Brennan and Schwartz (1979), Cox et al. (1985), Heath et al. (1992), and others contributed significantly toward interest rate modeling. While Vasicek (1977) presented one of the earliest stochastic mean reverting models for the interest rate, assuming in their one-factor model that the interest rate follows an Ornstein-Uhlenbeck process, Cox et al. (1985) in their general equilibrium model (CIR model) improved upon the same to ensure that the interest rate does not go below zero. Furthermore, unlike in the Vasicek model, the short-term interest rate in the CIR model does not display a normal or lognormal distribution, but instead exhibits a non-central χ² distribution. 
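A minimal single-path simulator for this jump diffusion can be written with the standard library alone. It is a sketch: the Poisson draw via Knuth's multiplication method, the discretization scheme (log-Euler with exact lognormal jumps), and all parameter values are illustrative assumptions, not the paper's calibrated setup.

```python
import math, random

def poisson_draw(rate, rng):
    """Knuth's multiplication method; adequate for the small rate*dt used here."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_merton_path(s0, mu, sigma, lam, mu_j, sigma_j, T, n_steps, rng):
    """One Merton JD path: compensated Gaussian diffusion plus lognormal jumps,
    discretizing Equation (6) with the compensator K from Equation (9)."""
    dt = T / n_steps
    K = math.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0   # Equation (9): K = E(Y_t - 1)
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        log_ret = (mu - lam * K - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        for _ in range(poisson_draw(lam * dt, rng)):
            log_ret += rng.gauss(mu_j, sigma_j)     # jump sizes: ln Y ~ N(mu_j, sigma_j^2)
        path.append(path[-1] * math.exp(log_ret))
    return path

# Illustrative parameters; 250 steps approximate daily rebalancing over one year.
path = simulate_merton_path(100.0, 0.10, 0.25, 2.0, -0.03, 0.08, 1.0, 250, random.Random(7))
```

Because the step is applied in log space, simulated prices stay strictly positive even through jumps.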
The paper adopts the CIR model to govern the interest rate process of the money market account. Under the CIR model the short-term interest rate diffusion process is given by: (11) dr_t = κ(θ − r_t) dt + σ√r_t dω_t, where κ is the speed of adjustment of the instantaneous interest rate toward the long-term target θ, σ is the volatility of the interest rate and dω_t is a standard Wiener process with zero drift and variance equal to dt. The model also imposes two sets of restrictions, namely κ, θ, σ > 0 and 2κθ > σ², where the second restriction keeps the interest rate strictly positive. Now, under the stochastic interest rate environment the initial floor of the CPPI strategy should be set equal to the value of a zero coupon bond that grows to the initial investment (I) at the stochastic interest rate (r_t) at maturity (T). According to the CIR model, the price at time t (t ∈ [0, T]) of a zero coupon bond with maturity value I and maturity period T is given by: (12) B(t, T) = I A(t, T) e^(−r_t R(t, T)), where I is the maturity value, r_t is the interest rate on the valuation date, T is the maturity period and the remaining terms are: (13) A(t, T) = [2h e^((h + κ)(T − t)/2) / (2h + (h + κ)(e^(h(T − t)) − 1))]^(2κθ/σ²), (14) R(t, T) = 2(e^(h(T − t)) − 1) / (2h + (h + κ)(e^(h(T − t)) − 1)) and (15) h = √(κ² + 2σ²). Thus, the initial floor f_0 for the capital protection fund under the stochastic interest rate is given by: (16) f_0 = I [2h e^((h + κ)T/2) / (2h + (h + κ)(e^(hT) − 1))]^(2κθ/σ²) × e^(−r_0 · 2(e^(hT) − 1)/(2h + (h + κ)(e^(hT) − 1))), i.e. B(0, T), where r_0 is the spot interest rate at time t = 0 when the floor valuation is done. 
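Equations (12)-(16) translate directly into code. The parameter values below are illustrative assumptions (chosen to satisfy the 2κθ > σ² restriction), not the paper's calibrated estimates.

```python
import math

def cir_zcb_price(I, r_t, tau, kappa, theta, sigma):
    """CIR zero coupon bond price with maturity value I and time to maturity tau
    (Equations (12)-(15)); the initial floor of Equation (16) is this price at t = 0."""
    h = math.sqrt(kappa ** 2 + 2.0 * sigma ** 2)             # Equation (15)
    g = 2.0 * h + (h + kappa) * (math.exp(h * tau) - 1.0)    # shared denominator
    A = (2.0 * h * math.exp((h + kappa) * tau / 2.0) / g) ** (2.0 * kappa * theta / sigma ** 2)
    R = 2.0 * (math.exp(h * tau) - 1.0) / g                  # Equation (14)
    return I * A * math.exp(-r_t * R)                        # Equation (12)

# Initial floor: the CIR bond that pays Rs1,000 in one year (illustrative parameters).
f0 = cir_zcb_price(1000.0, 0.07, 1.0, kappa=0.5, theta=0.07, sigma=0.05)
```

As a sanity check, the price collapses to the maturity value I when the time to maturity goes to zero.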
The diffusion process of the zero coupon bond is given by: (17) dB_t = r_t B_t dt − σ√r_t B_t R(t, T) dω_t, with R(t, T) as defined in Equation (14). Coming back to the CPPI strategy, at any rebalancing date (t) the exposure to the risky asset is computed as: (18) E_t = min(v_t, m|v_t − f_t|⁺), where |v_t − f_t|⁺ = max{0, v_t − f_t}. Thus, the amount invested in the risk free money market account is given by: (19) B_t = v_t − min(v_t, m|v_t − f_t|⁺), where B_t grows through time at the stochastic risk free rate r_t and its movement is depicted in Equation (17). A vital assumption of the CPPI portfolio is self-financing. It means that over every small time increment, the incremental change of the risky asset holding and the incremental change of the risk free asset holding alone contribute to the incremental change of the portfolio value, and no infusion of extra funds is made at any stage. The mathematical representation of the self-financing strategy is given by the following equation: (20) dv_t = min(v_t, m|v_t − f_t|⁺) dS_t/S_t + [v_t − min(v_t, m|v_t − f_t|⁺)] dB_t/B_t. The terms (dS_t/S_t) and (dB_t/B_t) in Equation (20) can be replaced by the corresponding terms from Equations (6) and (17), respectively, to obtain: (21) dv_t = min(v_t, m|v_t − f_t|⁺)[(μ − λK − σ²/2) dt + σ dW_t + (Y_t − 1) dN_t] + [v_t − min(v_t, m|v_t − f_t|⁺)][r_t dt − σ√r_t R(t, T) dω_t]. A slight rearrangement of the terms in Equation (21) results in the following equation: (22) dv_t = ((μ − λK − σ²/2 − r_t) min(v_t, m|v_t − f_t|⁺) + r_t v_t) dt + min(v_t, m|v_t − f_t|⁺) σ dW_t − [v_t − min(v_t, m|v_t − f_t|⁺)] σ√r_t R(t, T) dω_t + min(v_t, m|v_t − f_t|⁺)(Y_t − 1) dN_t. Equation (22) represents the diffusion process of the JD-MR CPPI portfolio value. 
It consists of four components: a deterministic drift, a stochastic term representing the unpredictability of the risky asset investment, a stochastic term representing the Poisson distributed jump process, and finally, a stochastic term representing the randomness of the money market investment. The objective of rebalancing is to keep the portfolio value (v_t) at or above the floor (f_t). Once v_t touches the floor (v_t = f_t), then |v_t − f_t|⁺ is set to 0 and so is the exposure min(v_t, m|v_t − f_t|⁺). As a result, the stochastic components because of the jump and the risky asset diffusion vanish. The entire fund is now allocated to the risk free asset. The differential Equation (22) then reduces to the diffusion process followed by the money market account: (23) dv_t = r_t v_t dt − v_t σ√r_t R(t, T) dω_t, with R(t, T) as defined in Equation (14). However, if the rebalancing cannot be achieved when v_t touches f_t and v_t goes below f_t, then the whole idea of protection will be compromised. Such a situation is more probable at times of a steep fall in the underlying risky asset price, and it gives rise to the so-called "gap risk." Now, during the bull phase, when the risky asset price surges and the exposure crosses the portfolio value, the entire fund is allocated to the risky asset. The boundary condition for its occurrence is v_t = m(v_t − f_t), which implies that: (24) v_t = f_t/(1 − 1/m). Under this condition, the exposure min(v_t, m|v_t − f_t|⁺) is set to v_t and the stochastic differential equation (Equation (22)) reduces to: (25) dv_t = v_t(μ − λK − σ²/2) dt + v_t σ dW_t + v_t(Y_t − 1) dN_t. Equation (25) reveals that the portfolio value follows a geometric Brownian motion with jumps having the same drift and variance as the underlying risky asset. The higher the expected return of the underlying risky asset, the higher the expected return of the portfolio during a bull run. 
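The three regimes just described (all cash at or below the floor, a mix in between, all risky at or beyond the Equation (24) boundary) can be checked numerically; the function name is illustrative.

```python
def risky_fraction(v_t, f_t, m):
    """Share of the portfolio in the risky asset implied by Equation (5b):
    0 at or below the floor (the Equation (23) regime), 1 at or beyond the
    boundary v_t = f_t / (1 - 1/m) of Equation (24) (the Equation (25) regime)."""
    if v_t <= f_t:
        return 0.0
    return min(v_t, m * (v_t - f_t)) / v_t

# Equation (24) with f_t = 100 and m = 3 puts the all-risky boundary at v_t = 150.
boundary = 100.0 / (1.0 - 1.0 / 3)
```

Between the floor and the boundary the fraction rises continuously from 0 to 1, which is what makes the strategy's exposure path-dependent.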
Thus the JD-MR-CPPI strategy addresses the upside potential effectively and at the same time takes care of the downside risk by eliminating the stochastic components once v_t = f_t (see Equation (23)). When the portfolio value lies within the interval given by the inequality in Equation (26), the allocation is made to both the risky and risk free assets. In that case, the stochastic differential equation governing the portfolio value process is shown in Equation (27): (26) f_t ≤ v_t ≤ f_t/(1 − 1/m) and (27) dv_t = ((μ − λK − σ²/2 − r_t) m(v_t − f_t) + r_t v_t) dt + m(v_t − f_t) σ dW_t − [v_t − m(v_t − f_t)] σ√r_t R(t, T) dω_t + m(v_t − f_t)(Y_t − 1) dN_t, with R(t, T) as defined in Equation (14). Finally, the observations can be summarized as displayed in Table I. 4.1 Data analysis For the empirical analysis of the developed model, suitable proxies for the risky and risk free assets are a prerequisite. The National Stock Exchange of India, though relatively new compared to the Bombay Stock Exchange, has witnessed considerable growth during the last ten years, both in terms of volume traded and the number of companies listed. This qualifies CNX-NIFTY 50 as a suitable proxy for the risky asset. The Indian financial market witnessed a considerable swing during the financial crisis of 2008. CNX-NIFTY 50 plunged from a record high of 6,288 on January 8, 2008 to 2,524 on October 27, 2008 (source: Yahoo Finance). The post-recession recovery of the Indian market was also quick compared to the developed economies. CNX-NIFTY 50 touched 6,312.45 again on November 5, 2010, which was the new high after the crisis. The paper proposes to capture these two phases of the market in order to compare the performance of the JD-MR-CPPI algorithm against the CNX-NIFTY 50 index (taken as a benchmark as well as the underlying asset) in both these phases. 
For this purpose, the period from January 8, 2008 to October 27, 2008 is termed the downswing phase and the period from October 27, 2008 to November 5, 2010 the recovery phase. We have stress tested our model on these two historical extreme periods to check the boundaries of our model. Further, using Monte Carlo simulation on these extreme regimes, we have derived a series of hypothetical stressed scenarios. We have subsequently stress tested our model on all these extreme scenarios. Stress testing a model on historical and hypothetical stressed scenarios is an efficient way of testing model performance and robustness. This methodology has gained considerable importance after the 2008 subprime crisis, and regulators are increasingly considering stress testing as a viable means of model validation and model risk management (see the Dodd-Frank Act and the Comprehensive Capital Analysis and Review guidelines of the Federal Reserve System for details, Ref.: www.federalreserve.gov/bankinforeg/ccar.htm). Daily price data of the CNX-NIFTY 50 are collected across both phases. On the other hand, the low level of development of the Indian debt market, coupled with illiquid instruments and a lack of reliable data, hinders the selection of a suitable proxy for the money market account. As a proxy for the short-term interest rate of the money market account, the call money rate is selected for two reasons: first, reliable quotations are available on a daily basis and, second, previous experience with the Indian market has revealed that interbank call rates significantly influence the interest rates of the economy. Daily call money rates across both phases are collected from the IFMR Data Centre (Source: www.ifmr.ac.in/). The summary statistics of the collected data are displayed in Tables II and III, respectively. 
4.2 JD-MR CPPI model calibration The diffusion models guiding the risky asset price and the money market account are calibrated against the CNX-NIFTY and call money rate data during both phases using the maximum likelihood estimation (MLE) technique. For a given set of data and an assumed underlying model, the MLE technique returns the optimal set of model parameters that maximizes the probability, or likelihood, that the model output and the observed data will match. In MATLAB this is achieved by minimizing the negative log likelihood function of the process with respect to the parameters using the "fminsearch" non-linear optimization routine present in the optimization toolbox. The calibrated parameters for the risky asset and the money market account are displayed in Tables IV and V, respectively. 4.3 JD-MR CPPI model simulations Once the model has been calibrated to the historical data, it is used to simulate 100,000 trajectories of the possible paths of the portfolio during each of the downswing and recovery phases. The expected returns and risks are then calculated by taking the corresponding means across all the simulated paths. The initial investment in the portfolio is taken as Rs1,000 with a maturity period of one year. The initial floor is set to the price of the zero coupon bond that provides a maturity value of Rs1,000 after one year, with the interest rate following the CIR mean reversion process (given by Equation (11)). The multiple is set to 3 for the present study. The transaction cost is taken as 0.01 percent of the total transaction volume. It is also assumed that the same transaction cost prevails for buying and selling of the risky and risk-free assets in the financial market. The rebalancing frequency is kept at 200 times a year at equal intervals. The initial NIFTY value is normalized to the initial investment for comparison purposes. The results are displayed in Table VI. 
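As a toy stand-in for the calibration step (the paper maximizes the full JD and CIR likelihoods with MATLAB's fminsearch), the pure-diffusion case has a closed-form maximum likelihood solution, which makes the idea concrete: the Gaussian log-likelihood of the log returns is maximized by their sample mean and variance. The function name is an illustrative assumption.

```python
import math

def calibrate_gbm_mle(prices, dt):
    """Closed-form MLE for a plain GBM: a simplified stand-in for the
    fminsearch-based JD/CIR calibration described in the text."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((x - mean) ** 2 for x in rets) / n   # biased (MLE) variance
    sigma_hat = math.sqrt(var / dt)
    mu_hat = mean / dt + 0.5 * sigma_hat ** 2      # undo the -sigma^2/2 convexity term
    return mu_hat, sigma_hat

# Sanity check on a noiseless series: constant log returns of 0.001 per step.
prices = [100.0 * math.exp(0.001 * i) for i in range(101)]
mu_hat, sigma_hat = calibrate_gbm_mle(prices, dt=1.0)
```

For the jump diffusion and CIR models no such closed form exists, which is why a numerical optimizer such as fminsearch is needed in practice.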
4.4 JD-MR CPPI model performance analysis Table VI indicates that during the downswing phase, when the aggregate market return was -75.11 percent, the JD-MR CPPI portfolio manages to maintain an average return of 1.2 percent. The 99 percent VaR of the portfolio is also significantly less than that of the benchmark market index. For the downswing phase it can be deduced that, for an initial investment of Rs1,000, the loss will not exceed Rs10.5251 in 99 percent of the cases if the investment is made in the JD-MR-CPPI portfolio; whereas, for an equal investment in the market during the downswing phase, the corresponding loss value increases to a whopping Rs907.6475. During the recovery phase the portfolio generates an aggregate return of 85.45 percent against a market return of 212.41 percent, but manages to contain the VaR at Rs35.0508 as opposed to that of the market (Rs90.5709). Thus, the JD-MR-CPPI portfolio performs better than the risky market during the downswing and better than the fixed income market during the growth phase, which makes it a value enhancing proposition for risk averse investors. Figure 1 provides the path followed by the portfolio, the market and the floor for a particular simulation, the corresponding allocation to risky assets and the final histogram of the terminal value of the portfolio. The histograms for both phases are right skewed, indicating the hedging effectiveness under extreme market environments. Figure 2 provides the terminal value of the portfolio (in green) and the terminal value of the floor (in red) for all 100,000 simulations. The lower cut-off of the terminal value of the portfolio indicates the capital protective feature of the algorithm, while the volatility of the terminal value of the floor is expected because of its stochastic nature as defined in the algorithm. Figure 3 displays the cumulative average transaction cost curve across the 100,000 simulations. 
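The 99 percent VaR figures in Table VI are empirical quantiles of the simulated loss distribution. A minimal sketch follows; the exact quantile convention the paper uses is not stated, so the index choice here is an assumption.

```python
def empirical_var(initial, terminal_values, alpha=0.99):
    """Loss level not exceeded in a fraction alpha of the simulated scenarios
    (an empirical-quantile VaR; the index convention is an assumption)."""
    losses = sorted(initial - v for v in terminal_values)
    idx = min(len(losses) - 1, int(alpha * len(losses)))
    return max(0.0, losses[idx])

# 100 toy scenarios: 97 small gains and three crashes from Rs1,000 to Rs800.
terminals = [1010.0] * 97 + [800.0] * 3
```

In the paper this calculation runs over 100,000 simulated terminal portfolio values rather than a toy list, but the mechanics are the same.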
The curve is concave and the value stabilizes near Rs16 during the market downswing phase. This is primarily because of the low number of transactions and the stable investment in the debt segment during the crisis period. During the recovery phase the curve displays convex characteristics, primarily because of heavy transactions, and the average cost shoots up at an increasing rate as the market rises. The paper develops a theoretical model of a JD-MR-CPPI strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature. The model is validated via back testing during the extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique. Besides providing capital protection, the strategy is found to hedge the downside risk effectively during bad times and to leverage the upside potential during good times. Coming to the Indian context, until the year 2006 the Securities and Exchange Board of India (SEBI) voted against the entry of any capital protected schemes into the Indian market, although the same were widely flourishing in foreign markets. However, in 2006, following several rounds of discussions and constant persuasion from the Association of Mutual Funds in India, SEBI allowed the entry of capital protected schemes by amendment of the SEBI (Mutual Fund) Regulations, 1996 vide a circular dated August 14, 2006. As per the regulations, capital protection schemes floated by an AMC should be close ended and should be mandatorily rated by a registered credit rating agency to ascertain the degree of certainty of achieving the objective of the fund. The regulation also clearly indicated that asset management companies can market such a scheme as a "Capital protection oriented" fund and not a "Capital Guaranteed fund" (ref: SEBI Circular No. SEBI/IMD/CIR No. 9/74364/06). 
The difference is also evident in the next line of the circular - "the orientation toward capital protection initiates from the portfolio structure and not from any bank guarantee, insurance, cover, etc." (ref: SEBI Circular No. SEBI/IMD/CIR No. 9/74364/06). Thus embedded options, which invoke a guarantee by virtue of their design, are excluded, and thereby the OBPI strategy was not encouraged in the Indian market. As per the SEBI guideline, capital protection-oriented funds are to be structured by a suitable combination of risky and risk free assets and by dynamically rebalancing the same through time with the objective of protecting the investor's initial fund. Thereby, only the CPPI strategy fits perfectly within the scope provided by SEBI. Given this backdrop, the developed JD-MR-CPPI model is well suited for the engineering of structured products in the Indian market.
|
The purpose of this paper is to develop a theoretical model of a jump diffusion-mean reversion constant proportion portfolio insurance strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature.
|
[SECTION: Method] Higher integration of financial markets, the surge of foreign institutional investments, and real-time information streaming have, on the one hand, facilitated the discovery of fair market prices and the enhancement of market efficiency; on the other hand, they have resulted in higher market volatility. Contagion, or the spill-over effect, is more prominent in an integrated market than in a segmented one. These developments have led to increasing concern among both potential investors and financial institutions over the assessment and management of risk. While the two-fund separation theorem asserts that a rational investor always allocates his/her endowment between a risky asset and a risk free asset in accordance with his/her degree of risk aversion, that allocation, however, changes with the market scenario. Empirically, it has been seen that investors in general tend to become more risk averse during regimes of higher volatility and more risk loving during the counter regimes. Thus, the prime objective of utility maximization can be visualized as a combination of two sub-objectives: first, the protection of the investor's wealth during bad times and, second, the maximization of returns during good times. Portfolio insurance strategies address both these needs. Traditionally, there are two categories of portfolio insurance strategies: static and dynamic. The former uses stock index options or futures to hedge the downside risk of the portfolio, while the latter relies on continuous rebalancing of the portfolio between the risky and risk free asset with the objective of insuring the investment against all possible erosion. 
While the option-based portfolio insurance (OBPI) strategy is the popular example of static portfolio insurance, the constant-mix strategy, constant proportion portfolio insurance (CPPI), dynamic proportion portfolio insurance, time invariant portfolio protection, etc., are the popular examples of dynamic portfolio insurance. Among them, however, the CPPI strategy is still the most popular and widely practiced (Pain and Rand, 2008). Investment schemes developed using these strategies are generally coined capital protection funds or capital guaranteed funds. Among the existing literature on the CPPI strategy a common assumption has been that the investment in the risk free asset grows at a constant rate in spite of frequent trading. Empirical evidence buttresses the fact that the interest rate follows a stochastic mean reverting behavior, and thus frequent reshuffling of the portfolio between the risky and risk free asset makes it impractical to assume that the investment in the money market account will grow at a constant rate along the entire investment horizon. Considering this gap in the existing literature, the paper proposes to construct a model of the JD-MR-CPPI strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature, and evaluates the effectiveness of the algorithm during extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique. The rest of the paper is organized as follows: Section 2 briefly probes into the existing literature on the CPPI strategy, Section 3 presents the theoretical framework of the JD-MR CPPI strategy, Section 4 empirically validates the model using the scenario-based simulation technique and Section 5 concludes the paper. The CPPI was introduced by Perold (1986) for fixed income assets. Black and Jones (1987) extended this method to equity-based underlying assets. 
Black and Perold (1992) further developed the algorithm by probing into how transaction costs and borrowing constraints impact the insurance strategy. Their research revealed that in the absence of any transaction cost the CPPI is equivalent to investing in a perpetual American call option and that the strategy is optimal for the HARA utility function under a minimum consumption constraint. Their study further revealed that as the "multiple" value increases, the payoff under the CPPI approaches that of the stop-loss strategy. CPPI strategies in the presence of jumps in stock prices were first considered by Prigent and Tahar (2005) in a diffusion model with finite intensity jumps. Following their work, Cont and Tankov (2009) quantified the gap risk of classical CPPI strategies under the assumption that the risky asset follows Merton's jump diffusion (JD) model. Furthermore, their study derived analytical expressions for the expected losses and the distribution of the losses given that the gap event has occurred. Unlike the former research, they incorporated infinite activity jumps and stochastic volatility in their algorithm. Estep and Kritzman (1988) came up with the time invariant portfolio insurance (TIPI) strategy, where the floor is proposed to be dynamic and proportional to the current wealth. Thereby, this strategy was tuned toward protection of the current wealth and not the floor value. Compared to the traditional CPPI, the TIPI strategy is more conservative in terms of restricted exposure during growth phases. In line with their work, Chen and Liao (2006) propose a goal-directed CPPI strategy to combine an investor's goal-directed trading behavior with the traditional CPPI strategy. The objective is to maintain conservative exposure to the risky asset when the portfolio value approaches the pre-set goal and to take an aggressive exposure when the deviation from the goal is large. 
However, the approach suffers from one major drawback: it fails to utilize the upside potential to the fullest extent. Hainaut (2010) analyzes the influence of switches in asset regimes on the CPPI performance and risk exposure under the additional assumption that the dynamics of the risky asset are driven by a hidden Markov process. The paper shows how the value at risk and the tail VaR can be retrieved by inversion of the Fourier transform of the characteristic function of the return density. Another important line of research on the CPPI strategy concentrated on the determination of the "multiple" that guides the exposure to the risky asset, and hence the overall risk exposure of the portfolio. Authors like Bertrand and Prigent (2002) and Prigent and Tahar (2005) probed into the development of unconditional "multiple" estimates using the extreme value approach, while Hamidi et al. (2009) concentrated on conditional "multiple" determination, where the multiplier is defined as a function of an extended expected value-at-risk with the objective of keeping a constant exposure to risk. It is widely assumed in most of the literature on CPPI that the floor grows at a constant risk free rate. An alternative to this notion was introduced by Boulier and Kanniganti (1995) and later extended by Mkaouar and Prigent (2007), where they assumed that the floor value at any given time is partially dependent on the portfolio value. The partial dependence can be explained by the fact that the floor value increases when the risky asset in the portfolio performs strongly but does not decrease during poor performance. In contrast to the previous work, the current paper assumes the floor of the model to be a stochastic mean reverting process which is guided by the movement of the short-term interest rate in the economy. 
This development is more relevant for two reasons: first, the short-term interest rate changes with time, and hence a constant yield during each rebalancing step is not practically feasible; second, the historical literature has revealed that the short-term interest rate tends to move opposite to the equity market. Thereby, during a bear run the floor will increase at a higher rate, whereas the growth of the floor will stagnate during a bull phase, which helps the algorithm to capitalize on the upward potential during the growth phase and to cut down on the exposure during the crisis phase. The JD-MR-CPPI model: the CPPI strategy dynamically reallocates funds between a risky asset and a risk free money market account with the objective of protecting the investor's initial capital along the investment horizon. The algorithm starts by setting a floor, which is normally kept equal to the present value of the initial investment, discounted at the risk free rate over the investment horizon. Capital allocated to the floor today will grow at the risk free rate to the initial investment at maturity. The idea is that if, through dynamic rebalancing, the fund manager can ensure that the portfolio value never falls below the floor, then irrespective of the price movement of the risky asset, the portfolio value will always remain above or equal to the initial investment at maturity. Suppose $v_0$ is the initial investment, $r$ is the risk free rate and $T$ is the investment horizon (in years). The initial floor $f_0$ is set equal to an amount which, when invested at the risk free rate, will grow to $v_0$ at time $T$. The difference between the invested fund and the initial floor is called the cushion $c_0$:

(1) $c_0 = v_0 - f_0$

The initial exposure $E_0$ to the risky asset is determined as some multiple ($m$) of the initial cushion, under the constraint that it cannot exceed the value of the portfolio $v_0$. The expression for the exposure is given by the following equation. 
The multiple ($m$) is an important parameter in the algorithm, because it controls the exposure of the fund to the risky asset. The higher the multiple, the higher the exposure and the higher the expected return of the portfolio. But a high multiple also increases the probability of gap risk:

(2) $E_0 = \min(v_0, m(v_0 - f_0))$

Once the exposure has been determined, that amount is invested in the risky asset and the remaining fund is parked in the money market account. The amount $B_0$ invested in the money market account is thus given by:

(3) $B_0 = v_0 - \min(v_0, m(v_0 - f_0))$

Once the initial allocation has been done, the next task is to decide upon the rebalancing approach. There are two commonly used rebalancing approaches: time-based rebalancing, where rebalancing is done at a fixed time interval over the investment horizon, and move-based rebalancing, where rebalancing is done once the percentage change in exposure to the risky asset crosses a predetermined threshold value. Sometimes, a combination of both approaches is used. Move-based rebalancing is suitable in a world with high transaction costs, as this method prevents unnecessary rebalancing during minor fluctuations and thereby minimizes the transaction cost. However, in this approach the threshold value has to be chosen carefully. A higher threshold value reduces the number of rebalancings, and hence the total transaction cost, but at the same time increases the probability of the portfolio value crashing through the floor. The optimal threshold limit is the one that minimizes both the transaction cost and the cost of gap risk. In the case of time-based rebalancing, the decision parameter is the rebalancing interval. A longer rebalancing interval reduces the total transaction cost, but increases the cost of gap risk. Hence, the optimal rebalancing interval is the one that minimizes both the total transaction cost and the cost of gap risk. 
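The initial allocation in Equations (1)-(3) can be sketched in a few lines of code; this is a minimal illustration (the function name and sample values are ours, not from the paper):

```python
def initial_allocation(v0, f0, m):
    """Initial CPPI split between the risky asset and the money market.

    v0: initial investment, f0: initial floor, m: multiple.
    Returns (cushion c0, exposure E0, money market amount B0)."""
    c0 = v0 - f0               # cushion, Equation (1)
    e0 = min(v0, m * c0)       # exposure capped at the portfolio value, Equation (2)
    b0 = v0 - e0               # remainder parked in the money market, Equation (3)
    return c0, e0, b0

# For example, with v0 = 1000, f0 = 950 and m = 3 the cushion is 50,
# the risky exposure is 150 and 850 goes into the money market account.
print(initial_allocation(1000.0, 950.0, 3))
```

Note how the `min` cap in Equation (2) binds only when the cushion is large relative to the portfolio, i.e. for high multiples or a low floor.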
During each rebalancing period ($t$) along the investment horizon the cushion is recalculated as the difference between the portfolio value and the floor $f_t$:

(4) $c_t = \max(0, v_t - f_t)$, where $0 \leq t \leq T$

If the portfolio value goes below the floor, the cushion is set to 0 (Equation (4)). The exposure is then calculated as:

(5a) $E_t = \min(v_t, m c_t)$

Replacing the value of $c_t$ from Equation (4), we get:

(5b) $E_t = \min(v_t, m \max(0, v_t - f_t))$

The exposure can never be more than the portfolio value at any point of time. If the exposure at any rebalancing period exceeds the portfolio value, then the exposure is reset to the current portfolio value and the entire fund is invested in the risky asset. This explains the "min" function in Equations (5a) and (5b). After investing the exposure amount in the risky asset the difference ($v_t - E_t$) is invested in the money market account. The procedure is repeated at each rebalancing terminal until the maturity of the portfolio. Geometric Brownian motion has been widely used for depicting the diffusion process of risky assets. But the empirical evidence of leptokurtic distributions and the presence of fat tails in financial asset return distributions necessitated the search for alternatives. Merton (1976), for the first time, introduced the JD model, where the diffusion process is assumed to be composed of two parts: a geometric Brownian motion with constant drift and volatility and a compound Poisson process guiding the arrival of jumps. Merton further assumed that the jump size is log-normally distributed with constant mean and variance. The jumps signify the arrival of news (both good and bad) that results in sharp movements of the asset price within a short time interval. 
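A single rebalancing step, Equations (4)-(5b), amounts to recomputing the cushion and exposure; a sketch (function name ours):

```python
def rebalance(v_t, f_t, m):
    """One CPPI rebalancing step at time t.

    The cushion is floored at zero (Equation (4)) and the exposure is
    capped at the portfolio value (Equation (5b))."""
    c_t = max(0.0, v_t - f_t)          # Equation (4)
    e_t = min(v_t, m * c_t)            # Equation (5b)
    return e_t, v_t - e_t              # (risky exposure, money market amount)

# When the portfolio touches the floor, the exposure collapses to zero
# and everything moves to the money market account.
print(rebalance(1000.0, 1000.0, 3))    # (0.0, 1000.0)
```

The two boundary cases discussed later in the text fall out directly: `rebalance(1000, 1000, 3)` parks the whole fund in the money market, while a large cushion (e.g. `rebalance(1000, 700, 5)`) puts the entire fund into the risky asset.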
Following the seminal work of Merton (1976), Kou (2002) delivered the double exponential jump diffusion model (DEJD), where the arrival of news is still guided by the Poisson distribution but the jump magnitude is depicted by the double exponential distribution. Ramezani and Zeng (1998) arrived at the Pareto-Beta jump diffusion (PBJD) model, where the jumps caused by good news are assumed to follow a Pareto distribution and those caused by bad news are assumed to follow a beta distribution. Though from a conceptual point of view the DEJD and PBJD models are alike, they differ structurally. While Kou (2002) suggested using two exponential distributions with dissimilar parameters to define the jumps, Ramezani and Zeng (1998) assume that good and bad news are produced by two autonomous Poisson processes with different intensities and that the corresponding jump magnitudes are drawn from Pareto and beta distributions, respectively. However, simplistic assumptions and ease of use make Merton's JD model a popular modeling tool among practitioners in comparison to the other, more complicated models. This model is used in the current study. Under the JD model, the incremental change in the price of the risky asset is given by:

(6) $dS_t = \left(\mu - \lambda K - \frac{\sigma^2}{2}\right) S_t\,dt + \sigma S_t\,dW_t + S_t (Y_t - 1)\,dN_t$

where $W_t$ is a standard Wiener process with zero drift and variance equal to $dt$. The increments $dW_t$ are independent of one another in the interval $[0, T]$. $T$ is the maturity period of the portfolio. $\mu$ is the constant drift and $\sigma^2$ is the constant variance of the risky asset. The term $\sigma^2/2$ is used for convexity correction. $N_t$ is the compound Poisson process with intensity $\lambda$ signifying the number of jumps within the time interval $[0, t]$. $Y_t$ is a log-normally distributed random process signifying the jump magnitude. For a small time interval $dt$, the asset price jumps from $S_t$ to $S_t Y_t$. 
Thus, the percentage change in the asset price caused by the jump is given by:

(7) $\frac{dS_t}{S_t} = \frac{Y_t S_t - S_t}{S_t} = Y_t - 1$

The increment $dN_t$ gives the number of jumps occurring within the incremental time $dt$ such that:

(8) $P(dN_t = 1) = \lambda\,dt$ and $P(dN_t = 0) = 1 - \lambda\,dt$

The log of the jump magnitude is i.i.d. normal $(\mu_J, \sigma_J)$. The relative jump magnitude $(Y_t - 1)$ is also log-normally distributed, with mean and variance given by:

(9) $E(Y_t - 1) = e^{\mu_J + \frac{1}{2}\sigma_J^2} - 1 = K$

(10) $V(Y_t - 1) = e^{2\mu_J + \sigma_J^2}\left(e^{\sigma_J^2} - 1\right)$

Under the CPPI strategy, rebalancing is done frequently to avoid losses from gap risk, and thereby the paper assumes that the amount invested in the money market account grows at the short-term interest rate. It is widely documented that short-term interest rates for any economy are neither constant nor do they follow a random walk, but display the well-known phenomenon of mean reversion. This refers to the tendency of the interest rate to drift at a certain rate toward its long-term average. Empirically, this means that the change in the interest rate should be significantly positively correlated with the deviation from the long-term mean. Several researchers like Vasicek (1977), Dothan (1978), Brennan and Schwartz (1979), Cox et al. (1985), Heath et al. (1992), and others contributed significantly toward interest rate modeling. While Vasicek (1977) presented one of the earliest stochastic mean reverting models for the interest rate, assuming in his one-factor model that the interest rate follows an Ornstein-Uhlenbeck process, Cox et al. (1985) in their general equilibrium model (CIR model) improved upon the same to ensure that the interest rate does not go below 0. Furthermore, unlike in the Vasicek model, in the CIR model the short-term interest rate does not display a normal or lognormal distribution, but instead exhibits a non-central chi-squared distribution. 
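A discretized simulation of the Merton JD process in Equation (6) can be sketched as follows. This is an Euler-type scheme with at most one jump per step, consistent with the small-$dt$ probabilities of Equation (8); the parameter values in the deterministic check are illustrative, not calibrated:

```python
import math
import random

def simulate_jd_path(s0, mu, sigma, lam, mu_j, sigma_j, T=1.0, n=250, seed=0):
    """Simulate one risky asset path under Merton's JD model (Equation (6)).

    Per step of length dt the log return is
    (mu - lam*K - sigma^2/2)*dt + sigma*sqrt(dt)*Z, plus a lognormal jump
    whose log size is N(mu_j, sigma_j), arriving with probability lam*dt."""
    rng = random.Random(seed)
    dt = T / n
    K = math.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0    # E(Y_t - 1), Equation (9)
    path = [s0]
    for _ in range(n):
        step = (mu - lam * K - 0.5 * sigma ** 2) * dt
        step += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if rng.random() < lam * dt:                  # P(dN_t = 1) = lam*dt, Equation (8)
            step += rng.gauss(mu_j, sigma_j)         # log jump magnitude
        path.append(path[-1] * math.exp(step))
    return path
```

With volatility and jump intensity switched off (`sigma = lam = 0`) the path collapses to deterministic growth `s0 * exp(mu * T)`, a useful sanity check on the discretization.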
The paper adopts the CIR model to govern the interest rate process of the money market account. Under the CIR model the short-term interest rate diffusion process is given by:

(11) $dr_t = \kappa(\theta - r_t)\,dt + \sigma\sqrt{r_t}\,d\omega_t$

where $\kappa$ is the speed of adjustment of the instantaneous interest rate toward the target $\theta$, $\sigma$ is the standard deviation of the interest rate and $d\omega_t$ is a standard Wiener process with zero drift and variance equal to $dt$. The model also imposes two sets of restrictions, namely, $\kappa, \theta, \sigma > 0$ and $2\kappa\theta > \sigma^2$, where the second restriction prevents the interest rate from going negative. Now, under the stochastic interest rate environment the initial floor of the CPPI strategy should be set equal to the value of a zero coupon bond that grows to the initial investment ($I$) at the stochastic interest rate ($r_t$) at maturity ($T$). According to the CIR model the price at time $t$ ($t \in [0, T]$) of a zero coupon bond with a maturity value of $I$ and maturity period $T$ is given by:

(12) $B(t, T) = I\,A(t, T)\,e^{-r_t R(t, T)}$

where $I$ is the maturity value, $r_t$ is the interest rate on the valuation date, $T$ is the maturity period and the remaining parameters are given below:

(13) $A(t, T) = \left[\dfrac{2h\,e^{(h+\kappa)(T-t)/2}}{2h + (h+\kappa)\left(e^{h(T-t)} - 1\right)}\right]^{2\kappa\theta/\sigma^2}$

(14) $R(t, T) = \dfrac{2\left(e^{h(T-t)} - 1\right)}{2h + (h+\kappa)\left(e^{h(T-t)} - 1\right)}$

(15) $h = \sqrt{\kappa^2 + 2\sigma^2}$

Thus, the initial floor $f_0$ for the capital protection fund under the stochastic interest rate is given by:

(16) $f_0 = I\left[\dfrac{2h\,e^{(h+\kappa)T/2}}{2h + (h+\kappa)\left(e^{hT} - 1\right)}\right]^{2\kappa\theta/\sigma^2} \exp\!\left(-r_0\,\dfrac{2\left(e^{hT} - 1\right)}{2h + (h+\kappa)\left(e^{hT} - 1\right)}\right)$

where $r_0$ is the spot interest rate at time $t = 0$, when the floor valuation is done. 
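Equations (12)-(16) translate directly into a bond pricer for the stochastic floor; a sketch under assumed parameter values (function name and the 7 percent spot rate are ours, not the calibrated values):

```python
import math

def cir_zcb_price(I, r_t, tau, kappa, theta, sigma):
    """CIR zero coupon bond price (Equations (12)-(15)).

    I: maturity value, r_t: short rate on the valuation date,
    tau: time to maturity T - t, (kappa, theta, sigma): CIR parameters."""
    if tau <= 0.0:
        return float(I)                               # bond pays I at maturity
    h = math.sqrt(kappa ** 2 + 2.0 * sigma ** 2)      # Equation (15)
    g = 2.0 * h + (h + kappa) * (math.exp(h * tau) - 1.0)
    A = (2.0 * h * math.exp((h + kappa) * tau / 2.0) / g) ** (2.0 * kappa * theta / sigma ** 2)
    R = 2.0 * (math.exp(h * tau) - 1.0) / g           # Equation (14)
    return I * A * math.exp(-r_t * R)                 # Equation (12)

# Initial floor per Equation (16): a bond maturing to the Rs1,000
# investment in one year, priced at an assumed 7 percent spot rate.
f0 = cir_zcb_price(1000.0, 0.07, 1.0, kappa=0.5, theta=0.07, sigma=0.05)
```

For positive rates the price sits strictly below the maturity value, and at `tau = 0` it equals the maturity value, which gives a quick check that the $A$ and $R$ terms are wired correctly.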
The diffusion process of the zero coupon bond is given by:

(17) $dB_t = r_t B_t\,dt - \sigma\sqrt{r_t}\,B_t\,\dfrac{2\left(e^{h(T-t)} - 1\right)}{2h + (h+\kappa)\left(e^{h(T-t)} - 1\right)}\,d\omega_t$

Coming back to the CPPI strategy, at any rebalancing date ($t$) the exposure to the risky asset is computed as:

(18) $E_t = \min(v_t, m|v_t - f_t|^+)$, where $|v_t - f_t|^+ = \max\{0, v_t - f_t\}$

Thus, the amount invested in the risk free money market account is given by:

(19) $B_t = v_t - \min(v_t, m|v_t - f_t|^+)$

where $B_t$ grows at the stochastic risk free rate $r_t$ through time and its movement is depicted in Equation (17). A vital assumption of the CPPI portfolio is self-financing. It means that for every small time increment, the incremental change of the risky asset holding and the incremental change of the risk free asset holding alone contribute to the incremental change of the portfolio value, and no infusion of extra funds is made at any stage. The mathematical representation of the self-financing strategy is given by the following equation:

(20) $dv_t = \min(v_t, m|v_t - f_t|^+)\,\dfrac{dS_t}{S_t} + \left[v_t - \min(v_t, m|v_t - f_t|^+)\right]\dfrac{dB_t}{B_t}$

The terms $dS_t/S_t$ and $dB_t/B_t$ in Equation (20) can be replaced by the corresponding terms from Equations (6) and (17), respectively, to obtain:

(21) $dv_t = \min(v_t, m|v_t - f_t|^+)\left[\left(\mu - \lambda K - \frac{\sigma^2}{2}\right)dt + \sigma\,dW_t + (Y_t - 1)\,dN_t\right] + \left[v_t - \min(v_t, m|v_t - f_t|^+)\right]\left[r_t\,dt - \sigma\sqrt{r_t}\,\dfrac{2\left(e^{h(T-t)} - 1\right)}{2h + (h+\kappa)\left(e^{h(T-t)} - 1\right)}\,d\omega_t\right]$

A slight rearrangement of the terms in Equation (21) results in the following equation:

(22) $dv_t = \left(\left(\mu - \lambda K - \frac{\sigma^2}{2} - r_t\right)\min(v_t, m|v_t - f_t|^+) + r_t v_t\right)dt + \min(v_t, m|v_t - f_t|^+)\,\sigma\,dW_t - \left[v_t - \min(v_t, m|v_t - f_t|^+)\right]\sigma\sqrt{r_t}\,\dfrac{2\left(e^{h(T-t)} - 1\right)}{2h + (h+\kappa)\left(e^{h(T-t)} - 1\right)}\,d\omega_t + \min(v_t, m|v_t - f_t|^+)(Y_t - 1)\,dN_t$

Equation (22) represents the diffusion process of the JD-MR CPPI portfolio value. 
It consists of four components: a deterministic drift, a stochastic term representing the unpredictability of the risky asset investment, a stochastic term representing the Poisson distributed jump process and, finally, a stochastic term representing the randomness of the money market investment. The objective of rebalancing is to keep the portfolio value ($v_t$) above or equal to the floor ($f_t$). Once $v_t$ touches the floor ($v_t = f_t$), $|v_t - f_t|^+$ is set to 0 and so is the exposure $\min(v_t, m|v_t - f_t|^+)$. As a result the stochastic components due to the jump and the risky asset diffusion vanish. The entire fund is now allocated to the risk free asset. The differential Equation (22) then reduces to the diffusion process followed by the money market account:

(23) $dv_t = r_t v_t\,dt - v_t\,\sigma\sqrt{r_t}\,\dfrac{2\left(e^{h(T-t)} - 1\right)}{2h + (h+\kappa)\left(e^{h(T-t)} - 1\right)}\,d\omega_t$

However, if the rebalancing cannot be achieved when $v_t$ touches $f_t$ and $v_t$ goes below $f_t$, then the whole idea of protection will be compromised. Such a situation is more probable at times of a steep fall in the underlying risky asset price, and it gives rise to the so-called "gap risk." Now, during a bull phase, when the risky asset price surges and the exposure crosses the portfolio value, the entire fund is allocated to the risky asset. The boundary condition for its occurrence is $v_t = m(v_t - f_t)$, which implies that:

(24) $v_t = \dfrac{f_t}{1 - 1/m}$

Under this condition, the exposure $\min(v_t, m|v_t - f_t|^+)$ is set to $v_t$ and the stochastic differential equation (Equation (22)) reduces to:

(25) $dv_t = v_t\left(\mu - \lambda K - \frac{\sigma^2}{2}\right)dt + v_t\,\sigma\,dW_t + v_t(Y_t - 1)\,dN_t$

Equation (25) reveals that the portfolio value follows a geometric Brownian motion with jumps having the same drift and variance as the underlying risky asset. The higher the expected return of the underlying risky asset, the higher the expected return of the portfolio during a bull run. 
Thus the JD-MR-CPPI strategy addresses the upside potential effectively and at the same time takes care of the downside risk by eliminating the stochastic components once $v_t = f_t$ (see Equation (23)). When the portfolio value lies within the interval given by the inequality in Equation (26), the allocation is made to both the risky and the risk free asset. In that case the stochastic differential equation governing the portfolio value process is shown in Equation (27):

(26) $f_t \leq v_t \leq \dfrac{f_t}{1 - 1/m}$

(27) $dv_t = \left(\left(\mu - \lambda K - \frac{\sigma^2}{2} - r_t\right)m(v_t - f_t) + r_t v_t\right)dt + m(v_t - f_t)\,\sigma\,dW_t - \left[v_t - m(v_t - f_t)\right]\sigma\sqrt{r_t}\,\dfrac{2\left(e^{h(T-t)} - 1\right)}{2h + (h+\kappa)\left(e^{h(T-t)} - 1\right)}\,d\omega_t + m(v_t - f_t)(Y_t - 1)\,dN_t$

Finally, the observations can be summarized as displayed in Table I. 4.1 Data analysis For empirical analysis of the developed model, suitable proxies for the risky asset and the risk free asset are prerequisites. The National Stock Exchange of India, though relatively new compared to the Bombay Stock Exchange, has witnessed considerable growth during the last ten years both in terms of volume traded and number of companies listed. This qualifies CNX-NIFTY 50 as a suitable proxy for the risky asset. The Indian financial market witnessed a considerable swing during the financial crisis of 2008. CNX-NIFTY 50 plunged from a record high of 6,288 on January 8, 2008 to 2,524 on October 27, 2008 (source: Yahoo Finance). The post-recession recovery of the Indian market was also quick compared to the developed economies. CNX-NIFTY 50 touched 6,312.45 again on November 5, 2010, a new high after the crisis. The paper proposes to capture these two phases of the market in order to compare the performance of the JD-MR-CPPI algorithm against the CNX-NIFTY 50 index (taken as a benchmark as well as the underlying asset) in both phases. 
For this purpose the period from January 8, 2008 to October 27, 2008 is termed the downswing phase and the period from October 27, 2008 to November 5, 2010 the recovery phase. We have stress tested our model over these two historical extreme periods to check the boundaries of our model. Further, using Monte Carlo simulation on these extreme regimes we have derived a series of hypothetical stressed scenarios and subsequently stress tested our model on all of them. Stress testing a model on historical and hypothetical stressed scenarios is an efficient way of testing model performance and robustness. This methodology has gained considerable importance after the 2008 subprime crisis, and regulators are increasingly considering stress testing a viable means of model validation and model risk management (see the Dodd-Frank Act and the Comprehensive Capital Analysis and Review guidelines of the Federal Reserve System for details, ref.: www.federalreserve.gov/bankinforeg/ccar.htm). Daily price data of the CNX-NIFTY 50 are collected across both phases. On the other hand, the low level of development of the Indian debt market, coupled with illiquid instruments and a lack of reliable data, hinders the selection of a suitable proxy for the money market account. As a proxy for the short-term interest rate of the money market account, the call money rate is selected for two reasons: first, reliable quotations are available on a daily basis and, second, previous experience with the Indian market has revealed that interbank call rates significantly influence the interest rate of the economy. Daily call money rates across both phases are collected from the IFMR Data Centre (source: www.ifmr.ac.in/). The summary statistics of the collected data are displayed in Tables II and III, respectively. 
4.2 JD-MR CPPI model calibration The diffusion models guiding the risky asset price and the money market account are calibrated against the CNX-NIFTY and call money rate data during both phases using the maximum likelihood estimation (MLE) technique. For a given data set and an assumed underlying model, the MLE technique returns the optimal set of model parameters that maximizes the probability, or likelihood, that the model output and the observed data will match. In MATLAB this is achieved by maximizing the log likelihood function of the process over the set of parameters using the "fminsearch" non-linear optimization routine present in the optimization toolbox. The calibrated parameters for the risky asset and the money market account are displayed in Tables IV and V, respectively. 4.3 JD-MR CPPI model simulations Once the model has been calibrated to the historical data, it is used to simulate 100,000 trajectories of the possible paths of the portfolio during each of the downswing and recovery phases. The expected return and risk are then calculated by taking the corresponding means across all the simulated paths. The initial investment in the portfolio is taken as Rs1,000 with a maturity period of one year. The initial floor is set to the price of a zero coupon bond that provides a maturity value of Rs1,000 after one year, with the interest rate following the CIR mean reversion process (given by Equation (11)). The multiple is set to 3 for the present study. The transaction cost is taken as 0.01 percent of the total transaction volume. It is also assumed that the same transaction cost prevails for buying and selling of the risky and risk free asset in the financial market. The rebalancing frequency is kept at 200 times a year at equal intervals. The initial NIFTY value is normalized to the initial investment for comparison purposes. The results are displayed in Table VI. 
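The simulation setup described above can be sketched as a single-path routine. The parameter defaults below are placeholders, not the calibrated values of Tables IV and V, and the discretization choices (at most one Bernoulli jump per step, a full-truncation Euler step for the CIR rate) are ours:

```python
import math
import random

def simulate_jdmr_cppi_path(v0=1000.0, m=3, T=1.0, n_rebal=200, tc=0.0001,
                            mu=0.10, sigma=0.25, lam=1.0, mu_j=-0.05, sigma_j=0.10,
                            kappa=0.5, theta=0.07, sigma_r=0.05, r0=0.07, seed=0):
    """One simulated path of the JD-MR-CPPI portfolio value.

    The floor is the CIR zero coupon bond price (Equation (16)) revalued at
    every rebalancing date; tc is the proportional transaction cost.
    Returns (terminal portfolio value, total transaction cost paid)."""
    rng = random.Random(seed)
    dt = T / n_rebal
    K = math.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0    # E(Y_t - 1), Equation (9)

    def zcb(I, r, tau):                              # CIR bond price, Equations (12)-(15)
        if tau <= 0.0:
            return float(I)
        h = math.sqrt(kappa ** 2 + 2.0 * sigma_r ** 2)
        g = 2.0 * h + (h + kappa) * (math.exp(h * tau) - 1.0)
        A = (2.0 * h * math.exp((h + kappa) * tau / 2.0) / g) ** (2.0 * kappa * theta / sigma_r ** 2)
        R = 2.0 * (math.exp(h * tau) - 1.0) / g
        return I * A * math.exp(-r * R)

    r, v, cost, e_prev = r0, v0, 0.0, 0.0
    for i in range(n_rebal):
        f = zcb(v0, r, T - i * dt)                   # stochastic floor
        e_target = min(v, m * max(0.0, v - f))       # exposure, Equation (5b)
        fee = tc * abs(e_target - e_prev)            # cost of shifting the exposure
        v, cost = v - fee, cost + fee
        e = min(v, e_target)                         # re-cap exposure after the fee
        b = v - e                                    # money market leg
        # Merton JD step for the risky leg (Equation (6))
        step = (mu - lam * K - 0.5 * sigma ** 2) * dt
        step += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if rng.random() < lam * dt:
            step += rng.gauss(mu_j, sigma_j)
        e_grown = e * math.exp(step)
        v = e_grown + b * math.exp(r * dt)           # risk free leg grows at the short rate
        e_prev = e_grown
        # full-truncation Euler step for the CIR rate (Equation (11))
        r = max(0.0, r + kappa * (theta - r) * dt
                + sigma_r * math.sqrt(max(r, 0.0) * dt) * rng.gauss(0.0, 1.0))
    return v, cost
```

Running this routine with 100,000 different seeds and averaging the terminal values reproduces the structure of the Monte Carlo exercise in the paper, with the transaction cost accumulated separately as in Figure 3.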
4.4 JD-MR CPPI model performance analysis Table VI indicates that during the downswing phase, when the aggregate market return was -75.11 percent, the JD-MR CPPI portfolio manages to maintain an average return of 1.2 percent. The 99 percent VaR of the portfolio is also significantly less than that of the benchmark market index. During the downswing phase it can be deduced that for an initial investment of Rs1,000 the loss will not exceed Rs10.5251 in 99 percent of the cases if the investment is made in the JD-MR-CPPI portfolio; whereas, for an equal investment in the market during the same phase, the corresponding loss value increases to a whopping Rs907.6475. During the recovery phase the portfolio generates an aggregate return of 85.45 percent against a market return of 212.41 percent, but manages to control the VaR at Rs35.0508 as opposed to that of the market (Rs90.5709). Thus, the JD-MR-CPPI portfolio performs better than the risky market during the downswing and better than the fixed income market during the growth phase, which makes it a value-enhancing proposition for risk-averse investors. Figure 1 provides the path followed by the portfolio, the market and the floor for a particular simulation, the corresponding allocation to risky assets and the histogram of the terminal value of the portfolio. The histograms for both phases are right skewed, indicating hedging effectiveness under extreme market environments. Figure 2 provides the terminal value of the portfolio (in green) and the terminal value of the floor (in red) for all 100,000 simulations. The lower cut-off of the terminal value of the portfolio indicates the capital protective feature of the algorithm, while the volatility of the terminal value of the floor is expected because of its stochastic nature as defined in the algorithm. Figure 3 displays the cumulative average transaction cost curve across the 100,000 simulations. 
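A 99 percent VaR of the kind quoted above can be recovered from the simulated terminal values as the initial investment minus the empirical 1st percentile; this is one common percentile convention, and the index choice is ours:

```python
import math

def var_99(initial, terminal_values):
    """99 percent VaR: the loss not exceeded in 99 percent of scenarios,
    computed as the initial investment minus the empirical 1st percentile
    of the simulated terminal portfolio values (floored at zero)."""
    xs = sorted(terminal_values)
    idx = max(0, math.ceil(0.01 * len(xs)) - 1)   # empirical 1st percentile
    return max(0.0, initial - xs[idx])

# With 100 scenarios the 1st percentile is the worst outcome: terminal
# values of 900..999 give a VaR of Rs100 on a Rs1,000 investment.
print(var_99(1000.0, list(range(900, 1000))))     # 100.0
```

Applied to the 100,000 simulated terminal values from the downswing phase, this is the computation behind statements such as "the loss will not exceed Rs10.5251 in 99 percent of the cases."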
The curve is concave and the value stabilizes near Rs16 during the market downswing phase, primarily because of low transaction activity and stable investment in the debt segment during the crisis period. During the recovery phase the curve displays convex characteristics, primarily because of heavy transaction activity, and the average cost shoots up at an increasing rate as the market rises. The paper develops a theoretical model of a JD-MR-CPPI strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature. The model is validated via back testing during extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique. Besides providing capital protection, the strategy is found to hedge the downside risk effectively during bad times and to leverage the upside potential during good times. Coming to the Indian context, until 2006 the Securities and Exchange Board of India (SEBI) voted against the entry of any capital protected schemes in the Indian market, although such schemes were flourishing in foreign markets. However, in 2006, following several rounds of discussions and constant persuasion from the Association of Mutual Funds in India, SEBI allowed the entry of capital protected schemes by amending the SEBI (Mutual Fund) Regulations, 1996 vide a circular dated August 14, 2006. As per the regulations, capital protection schemes floated by an AMC should be close ended and should be mandatorily rated by a registered credit rating agency to ascertain the degree of certainty of achieving the objective of the fund. The regulations also clearly indicated that asset management companies can market such a scheme as a "capital protection oriented" fund and not a "capital guaranteed" fund (ref: SEBI Circular No. SEBI/IMD/CIR No. 9/74364/06). 
The difference is also evidently identified in the next line in the circular - "the orientation toward capital protection initiates from the portfolio structure and not from any bank guarantee, insurance, cover, etc." (ref: SEBI Circular No. SEBI/IMD/CIR No. 9/74364/06). Thus embedded options, which invokes guarantee by virtue of its design, are excluded and thereby the OBPI strategy was not encouraged in the Indian market. As per the SEBI guideline, capital protection-oriented funds are sought to be structured by suitable combination of risky and risk free assets and by dynamic rebalancing the same through time with an objective of protecting the investor's initial fund. Thereby, only the CPPI strategy fits perfectly within the scope provided by SEBI. Given this backdrop, the developed JD-MR-CPPI model will be best suited for engineering of the structured products in the Indian market.
|
The paper adopts Merton's jump diffusion (JD) model to simulate the price path of the risky asset and the Cox-Ingersoll-Ross (CIR) mean reversion model to simulate the path of the short-term interest rate. The floor of the CPPI strategy is linked to the stochastic process driving the value of a fixed income instrument whose yield follows the CIR mean reversion model. The developed model is benchmarked against the CNX-NIFTY 50 and back-tested over extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique.
|
[SECTION: Findings] Higher integration of financial markets, the surge of foreign institutional investment, and real-time information streaming have, on the one hand, facilitated the discovery of fair market prices and enhanced market efficiency; on the other hand, they have resulted in higher market volatility. Contagion, or the spill-over effect, is more prominent in an integrated market than in a segmented one. These developments have raised concern among both potential investors and financial institutions over the assessment and management of risk. While the two-fund separation theorem asserts that a rational investor allocates his or her endowment between a risky asset and a risk free asset in accordance with his or her degree of risk aversion, that degree changes with the market scenario. Empirically, investors in general tend to become more risk averse during regimes of high volatility and more risk loving during the counter regimes. Thus, the prime objective of utility maximization can be viewed as a combination of two sub-objectives: first, the protection of the investor's wealth during bad times and, second, the maximization of returns during good times. Portfolio insurance strategies address both needs. Traditionally, there are two categories of portfolio insurance strategies: static and dynamic. The former uses stock index options or futures to hedge the downside risk of the portfolio, while the latter relies on continuous rebalancing of the portfolio between risky and risk-free assets with the objective of insuring the investment against erosion.
While the option-based portfolio insurance (OBPI) strategy is the popular example of static portfolio insurance, the constant-mix strategy, constant proportion portfolio insurance (CPPI), dynamic proportion portfolio insurance and time invariant portfolio protection are popular examples of dynamic portfolio insurance. Among them, the CPPI strategy remains the most popular and widely practiced (Pain and Rand, 2008). Investment schemes built on these strategies are generally marketed as capital protection funds or capital guaranteed funds. A common assumption in the existing literature on the CPPI strategy is that the investment in the risk free asset grows at a constant rate in spite of frequent trading. Empirical evidence shows that interest rates follow a stochastic mean reverting process, and frequent reshuffling of the portfolio between risky and risk free assets therefore makes it impractical to assume that the money market account grows at a constant rate over the entire investment horizon. Addressing this gap, the paper constructs a model of the JD-MR-CPPI strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature, and evaluates the effectiveness of the algorithm during extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique. The rest of the paper is organized as follows: Section 2 briefly reviews the existing literature on the CPPI strategy, Section 3 presents the theoretical framework of the JD-MR CPPI strategy, Section 4 empirically validates the model using the scenario-based simulation technique and Section 5 concludes the paper. The CPPI strategy was introduced by Perold (1986) for fixed income assets; Black and Jones (1987) extended the method to equity-based underlying assets.
Black and Perold (1992) further developed the algorithm by probing into how transaction costs and borrowing constraints impact the insurance strategy. Their research revealed that in the absence of transaction costs the CPPI is equivalent to investing in a perpetual American call option, and that the strategy is optimal for the HARA utility function under minimum consumption constraints. Their study further revealed that as the "multiple" value increases, the payoff under the CPPI approaches that of the stop-loss strategy. CPPI strategies in the presence of jumps in stock prices were first considered by Prigent and Tahar (2005) in a diffusion model with finite intensity jumps. Following their work, Cont and Tankov (2009) quantified the gap risk of classical CPPI strategies under the assumption that the risky asset follows Merton's jump diffusion (JD) model. Furthermore, their study derived analytical expressions for the expected loss and the distribution of losses given that the gap event has occurred. Unlike the earlier research, they incorporated infinite activity jumps and stochastic volatility in their algorithm. Estep and Kritzman (1988) came up with the time invariant portfolio insurance (TIPI) strategy, where the floor is dynamic and proportional to current wealth. This strategy is thereby tuned toward protection of the current wealth rather than a fixed floor value. Compared to the traditional CPPI, the TIPI strategy is more conservative in terms of restricted exposure during growth phases. In line with their work, Chen and Liao (2006) propose a goal-directed CPPI strategy that combines an investor's goal-directed trading behavior with the traditional CPPI strategy. The objective is to maintain a conservative exposure in the risky asset when the portfolio value approaches the pre-set goal and to take an aggressive exposure when the deviation from the goal is large.
However, the approach suffers from one major drawback: it fails to utilize the upside potential to the fullest extent. Hainaut (2010) analyzes the influence of asset-regime switches on the CPPI performance and risk exposure under the additional assumption that the dynamics of the risky asset are driven by a hidden Markov process. The paper shows how the value at risk and the tail VaR can be retrieved by inversion of the Fourier transform of the characteristic function of the return density. Another important line of research on the CPPI strategy concentrated on the determination of the "multiple" that guides the exposure to the risky asset, and hence the overall risk exposure of the portfolio. Bertrand and Prigent (2002) and Prigent and Tahar (2005) developed unconditional "multiple" estimates using the extreme value approach, while Hamidi et al. (2009) concentrated on conditional "multiple" determination, where the multiplier is defined as a function of an extended expected value-at-risk with the objective of keeping a constant exposure to risk. Most of the literature on CPPI assumes that the floor grows at a constant risk free rate. An alternative to this notion was introduced by Boulier and Kanniganti (1995) and later extended by Mkaouar and Prigent (2007), who assumed that the floor value at any given time is partially dependent on the portfolio value. The partial dependence means that the floor value increases when the risky asset in the portfolio performs strongly but does not decrease during poor performance. In contrast to the previous work, the current paper assumes the floor of the model to be a stochastic mean reverting process guided by the movement of the short-term interest rate in the economy.
This development is more relevant for two reasons: first, the short-term interest rate changes with time, and hence a constant yield over each rebalancing step is not practically feasible; second, the literature has shown that the short-term interest rate tends to move opposite to the equity market. Thereby, during a bear run the floor will increase at a higher rate, whereas the growth of the floor will stagnate during a bull phase, which helps the algorithm to capitalize on the upward potential during the growth phase and to cut down on the exposure during the crisis phase. The JD-MR-CPPI model: the CPPI strategy dynamically reallocates funds between a risky asset and a risk free money market account with the objective of protecting the investor's initial capital along the investment horizon. The algorithm starts by setting a floor, which is normally kept equal to the present value of the initial investment, discounted at the risk free rate over the investment horizon. Capital allocated as floor today will grow at the risk free rate to the initial investment at maturity. The idea is that if, through dynamic rebalancing, the fund manager can ensure that the portfolio value never falls below the floor, then irrespective of the price movement of the risky asset the portfolio value will always be at least the initial investment at maturity. Suppose v_0 is the initial investment, r is the risk free rate and T is the investment horizon (in years). The initial floor f_0 is set equal to an amount which, when invested at the risk free rate, will grow to v_0 at time T. The difference between the invested fund and the initial floor is called the cushion c_0:

(1) c_0 = v_0 − f_0

The initial exposure E_0 in the risky asset is determined as some multiple (m) of the initial cushion, under the constraint that it cannot exceed the value of the portfolio v_0. The expression for the exposure is given by the following equation.
The multiple (m) is an important parameter in the algorithm, because it controls the exposure of the fund to the risky asset. The higher the multiple, the higher the exposure and the higher the expected return of the portfolio. But a high multiple also increases the probability of gap risk:

(2) E_0 = min(v_0, m × (v_0 − f_0))

Once the exposure has been determined, that amount is invested in the risky asset and the remaining fund is parked in the money market account. The amount B_0 invested in the money market account is thus given by:

(3) B_0 = v_0 − min(v_0, m × (v_0 − f_0))

Once the initial allocation has been done, the next task is to decide upon the rebalancing approach. There are two commonly used rebalancing approaches: time-based rebalancing, where rebalancing is done at a fixed time interval over the investment horizon, and move-based rebalancing, where rebalancing is done once the percentage change in exposure to the risky asset crosses a predetermined threshold value. Sometimes a combination of both approaches is used. Move-based rebalancing is suitable in a world with high transaction costs, as it prevents unnecessary rebalancing during minor fluctuations and thereby minimizes the transaction cost. However, in this approach the threshold value has to be chosen carefully. A higher threshold value reduces the number of rebalancings, and hence the total transaction cost, but at the same time increases the probability of the portfolio value crashing through the floor. The optimal threshold limit is the one that minimizes both the transaction cost and the cost of gap risk. In the case of time-based rebalancing, the decision parameter is the rebalancing interval. A longer rebalancing interval reduces the total transaction cost, but increases the cost of gap risk. Hence, the optimal rebalancing interval is the one that minimizes both the total transaction cost and the cost of gap risk.
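The initial allocation in Equations (1)-(3) can be sketched in a few lines of Python. This is an illustration only: the flat discount rate r stands in for the stochastic CIR floor developed later in the paper, the function name is hypothetical, and the parameter values are illustrative.

```python
import math

def initial_allocation(v0, r, T, m):
    """Initial CPPI allocation: floor, cushion, exposure, money-market amount."""
    f0 = v0 * math.exp(-r * T)   # floor: PV of the initial investment at rate r
    c0 = v0 - f0                 # Equation (1): cushion
    e0 = min(v0, m * c0)         # Equation (2): exposure, capped at the portfolio value
    b0 = v0 - e0                 # Equation (3): balance parked in the money market
    return f0, c0, e0, b0

f0, c0, e0, b0 = initial_allocation(v0=1000.0, r=0.07, T=1.0, m=3)
```

With m = 3 and a 7 percent rate, roughly a fifth of the fund goes into the risky asset at inception; raising m raises the initial exposure and, with it, the gap risk.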
At each rebalancing date t along the investment horizon the cushion is recalculated as the difference between the portfolio value and the floor f_t:

(4) c_t = max(0, v_t − f_t), where 0 ≤ t ≤ T

If the portfolio value falls below the floor, the cushion is set to 0 (Equation (4)). The exposure is then calculated as:

(5a) E_t = min(v_t, m × c_t)

Replacing the value of c_t from Equation (4), we get:

(5b) E_t = min(v_t, m × max(0, v_t − f_t))

The exposure can never exceed the portfolio value at any point of time. If the exposure at any rebalancing date exceeds the portfolio value, the exposure is reset to the current portfolio value and the entire fund is invested in the risky asset. This explains the "min" function in Equations (5a) and (5b). After investing the exposure amount in the risky asset, the difference (v_t − E_t) is invested in the money market account. The procedure is repeated at each rebalancing date until the maturity of the portfolio. Geometric Brownian motion has been widely used to depict the diffusion process of a risky asset, but the empirical evidence of leptokurtic, fat-tailed financial asset return distributions necessitated the search for alternatives. Merton (1976) first introduced the JD model, where the diffusion process is assumed to be composed of two parts: a geometric Brownian motion with constant drift and volatility, and a compound Poisson process guiding the arrival of jumps. Merton further assumed that the jump size is log-normally distributed with constant mean and variance. The jumps signify the arrival of news (both good and bad) that results in a sharp movement of the asset price within a short time interval.
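Returning to the rebalancing rule, Equations (4)-(5b) translate directly into code. The sketch below (function name illustrative) computes the exposure and the money-market allocation at a single rebalancing date:

```python
def rebalance(vt, ft, m):
    """Exposure and money-market allocation at a rebalancing date t."""
    ct = max(0.0, vt - ft)   # Equation (4): cushion, floored at zero
    et = min(vt, m * ct)     # Equations (5a)-(5b): exposure capped at vt
    bt = vt - et             # residual invested in the money market account
    return et, bt

# Below the floor the fund is fully de-risked; far above it, fully invested.
assert rebalance(940.0, 950.0, 3) == (0.0, 940.0)
assert rebalance(1500.0, 1100.0, 5) == (1500.0, 0.0)
```

The two assertions exercise the boundary cases discussed in the text: a breached floor forces the entire fund into the money market, while a large cushion times the multiple caps out at full investment in the risky asset.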
Following the seminal work of Merton (1976), Kou (2002) delivered the double exponential jump diffusion (DEJD) model, where the arrival of news is still guided by the Poisson distribution but the jump magnitude is depicted by the double exponential distribution. Ramezani and Zeng (1998) arrived at the Pareto-Beta jump diffusion (PBJD) model, where jumps caused by good news are assumed to follow a Pareto distribution and those caused by bad news a beta distribution. Though conceptually alike, the DEJD and PBJD models differ structurally: while Kou (2002) suggested using two exponential distributions with dissimilar parameters to define the jumps, Ramezani and Zeng (1998) assume that good and bad news are produced by two autonomous Poisson processes with different intensities and that the corresponding jump magnitudes are drawn from the Pareto and beta distributions, respectively. However, simple assumptions and ease of use make Merton's JD model a popular modeling tool among practitioners in comparison to the more complicated models, and it is the model used in the current study. Under the JD model, the incremental change in the price of the risky asset is given by:

(6) dS_t = (μ − λK − σ²/2) S_t dt + σ S_t dW_t + S_t (Y_t − 1) dN_t

where W_t is a standard Wiener process with zero drift and variance equal to dt. The increments dW_t are independent of one another in the interval [0, T], where T is the maturity period of the portfolio. μ is the constant drift and σ² is the constant variance of the risky asset; the term σ²/2 is the convexity correction. N_t is a Poisson counting process with intensity λ giving the number of jumps within the time interval [0, t]. Y_t is a log-normally distributed random process signifying the jump magnitude. Over a small time interval dt, a jump moves the asset price from S_t to S_t Y_t.
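A discretized simulation of Equation (6) is straightforward: over each step, the log-price picks up the compensated drift, a Gaussian diffusion increment, and the sum of any log-normal jumps that arrive. The sketch below is a minimal path simulator with illustrative parameters, not the calibrated values of Table IV.

```python
import numpy as np

def simulate_merton_jd(s0, mu, sigma, lam, mu_j, sigma_j, T, n_steps, rng):
    """Simulate one path of Merton's jump diffusion (Equation (6))."""
    dt = T / n_steps
    k = np.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0   # K = E[Y - 1], Equation (9)
    log_s = np.empty(n_steps + 1)
    log_s[0] = np.log(s0)
    for i in range(n_steps):
        n_jumps = rng.poisson(lam * dt)                       # jumps arriving in dt
        jump = rng.normal(mu_j, sigma_j, size=n_jumps).sum()  # sum of ln(Y) terms
        drift = (mu - lam * k - 0.5 * sigma ** 2) * dt        # compensated drift
        log_s[i + 1] = log_s[i] + drift + sigma * np.sqrt(dt) * rng.normal() + jump
    return np.exp(log_s)

rng = np.random.default_rng(42)
path = simulate_merton_jd(100.0, 0.08, 0.25, 1.5, -0.05, 0.10, 1.0, 250, rng)
```

Because the jump component is compensated by λK in the drift, the simulated paths have the intended expected growth rate μ despite the asymmetric jumps.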
Thus, the percentage change in the asset price caused by the jump is:

(7) dS_t/S_t = (Y_t S_t − S_t)/S_t = Y_t − 1

The increment dN_t gives the number of jumps occurring within the incremental time dt, such that:

(8) P(dN_t = 1) = λ dt and P(dN_t = 0) = 1 − λ dt

The log of the jump magnitude is i.i.d. normal with mean μ_J and standard deviation σ_J. The relative jump magnitude (Y_t − 1) is also log-normally distributed, with mean and variance given by:

(9) E(Y_t − 1) = e^(μ_J + σ_J²/2) − 1 = K

(10) V(Y_t − 1) = e^(2μ_J + σ_J²) (e^(σ_J²) − 1)

Under the CPPI strategy, rebalancing is done frequently to avoid losses from gap risk, and the paper therefore assumes that the amount invested in the money market account grows at the short-term interest rate. It is widely documented that short-term interest rates in any economy are neither constant nor follow a random walk, but display the well-known phenomenon of mean reversion: the interest rate drifts at a certain rate toward its long-term average. Empirically, this means that the change in the interest rate should be significantly positively correlated with the deviation from the long-term mean. Several researchers, including Vasicek (1977), Dothan (1978), Brennan and Schwartz (1979), Cox et al. (1985) and Heath et al. (1992), contributed significantly to interest rate modeling. Vasicek (1977) presented one of the earliest stochastic mean reverting models for the interest rate, assuming in a one-factor model that the interest rate follows an Ornstein-Uhlenbeck process; Cox et al. (1985), in their general equilibrium (CIR) model, improved upon this to ensure that the interest rate does not go below 0. Furthermore, unlike in the Vasicek model, the short-term interest rate in the CIR model does not display a normal or lognormal distribution, but instead a non-central χ² distribution.
The paper adopts the CIR model to govern the interest rate process of the money market account. Under the CIR model the short-term interest rate diffusion process is:

(11) dr_t = κ(θ − r_t) dt + σ √r_t dω_t

where κ is the speed of adjustment of the instantaneous interest rate toward the long-term target θ, σ is the standard deviation of the interest rate and ω_t is a standard Wiener process with zero drift and variance equal to dt. The model also imposes two restrictions, namely κ, θ, σ > 0 and 2κθ > σ², where the second restriction prevents the interest rate from going negative. Now, under the stochastic interest rate environment the initial floor of the CPPI strategy should be set equal to the value of a zero coupon bond that grows to the initial investment I at the stochastic interest rate r_t by maturity T. According to the CIR model, the price at time t (t ∈ [0, T]) of a zero coupon bond with maturity value I and maturity period T is:

(12) B(t, T) = I A(t, T) e^(−r_t R(t, T))

where I is the maturity value, r_t is the interest rate on the valuation date, T is the maturity period and the remaining parameters are:

(13) A(t, T) = [2h e^((h + κ)(T − t)/2) / (2h + (h + κ)(e^(h(T − t)) − 1))]^(2κθ/σ²)

(14) R(t, T) = 2(e^(h(T − t)) − 1) / (2h + (h + κ)(e^(h(T − t)) − 1))

(15) h = √(κ² + 2σ²)

Thus, the initial floor f_0 for the capital protection fund under the stochastic interest rate is:

(16) f_0 = I [2h e^((h + κ)T/2) / (2h + (h + κ)(e^(hT) − 1))]^(2κθ/σ²) × e^(−r_0 · 2(e^(hT) − 1)/(2h + (h + κ)(e^(hT) − 1)))

where r_0 is the spot interest rate at time t = 0, when the floor valuation is done.
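Equations (12)-(16) give the CIR zero-coupon bond price, and hence the stochastic floor, in closed form. A minimal sketch follows; the parameter values are hypothetical, chosen only to respect the restriction 2κθ > σ².

```python
import math

def cir_zcb_price(I, r0, kappa, theta, sigma, tau):
    """CIR zero-coupon bond price B(t, T) of Equation (12); tau = T - t."""
    h = math.sqrt(kappa ** 2 + 2.0 * sigma ** 2)              # Equation (15)
    denom = 2.0 * h + (h + kappa) * (math.exp(h * tau) - 1.0)
    A = (2.0 * h * math.exp((h + kappa) * tau / 2.0) / denom) ** (
        2.0 * kappa * theta / sigma ** 2)                     # Equation (13)
    R = 2.0 * (math.exp(h * tau) - 1.0) / denom               # Equation (14)
    return I * A * math.exp(-r0 * R)                          # Equation (12)

# Initial floor (Equation (16)): PV of Rs1,000 due in one year at a 7 percent spot rate.
f0 = cir_zcb_price(I=1000.0, r0=0.07, kappa=0.5, theta=0.06, sigma=0.05, tau=1.0)
```

Since R(t, T) > 0, the bond price, and hence the floor, falls as the spot rate rises, which is exactly the counter-cyclical floor behavior the paper exploits.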
The diffusion process of the zero coupon bond is:

(17) dB_t = r_t B_t dt − σ √r_t B_t · [2(e^(h(T − t)) − 1) / (2h + (h + κ)(e^(h(T − t)) − 1))] dω_t

Coming back to the CPPI strategy, at any rebalancing date t the exposure to the risky asset is computed as:

(18) E_t = min(v_t, m|v_t − f_t|⁺), where |v_t − f_t|⁺ = max{0, v_t − f_t}

Thus, the amount invested in the risk free money market account is:

(19) B_t = v_t − min(v_t, m|v_t − f_t|⁺)

where B_t grows at the stochastic risk free rate r_t through time and its movement is depicted in Equation (17). A vital assumption of the CPPI portfolio is self-financing: over every small time increment, only the incremental changes of the risky asset holding and of the risk free asset holding contribute to the incremental change of the portfolio value, and no extra fund is infused at any stage. The mathematical representation of the self-financing strategy is:

(20) dv_t = min(v_t, m|v_t − f_t|⁺) (dS_t/S_t) + [v_t − min(v_t, m|v_t − f_t|⁺)] (dB_t/B_t)

The terms dS_t/S_t and dB_t/B_t in Equation (20) can be replaced by the corresponding terms from Equations (6) and (17), respectively, to obtain:

(21) dv_t = min(v_t, m|v_t − f_t|⁺) [(μ − λK − σ²/2) dt + σ dW_t + (Y_t − 1) dN_t] + [v_t − min(v_t, m|v_t − f_t|⁺)] [r_t dt − σ √r_t · (2(e^(h(T − t)) − 1) / (2h + (h + κ)(e^(h(T − t)) − 1))) dω_t]

A slight rearrangement of the terms in Equation (21) gives:

(22) dv_t = [(μ − λK − σ²/2 − r_t) min(v_t, m|v_t − f_t|⁺) + r_t v_t] dt + min(v_t, m|v_t − f_t|⁺) σ dW_t − [v_t − min(v_t, m|v_t − f_t|⁺)] σ √r_t · (2(e^(h(T − t)) − 1) / (2h + (h + κ)(e^(h(T − t)) − 1))) dω_t + min(v_t, m|v_t − f_t|⁺) (Y_t − 1) dN_t

Equation (22) represents the diffusion process of the JD-MR CPPI portfolio value.
It consists of four components: a deterministic drift, a stochastic term representing the unpredictability of the risky asset investment, a stochastic term representing the Poisson distributed jump process and, finally, a stochastic term representing the randomness of the money market investment. The objective of rebalancing is to keep the portfolio value v_t at or above the floor f_t. Once v_t touches the floor (v_t = f_t), |v_t − f_t|⁺ is set to 0 and so is the exposure min(v_t, m|v_t − f_t|⁺). As a result, the stochastic components from the jump and the risky asset diffusion vanish and the entire fund is allocated to the risk free asset. The differential Equation (22) then reduces to the diffusion process of the money market account:

(23) dv_t = r_t v_t dt − v_t σ √r_t · (2(e^(h(T − t)) − 1) / (2h + (h + κ)(e^(h(T − t)) − 1))) dω_t

However, if rebalancing cannot be achieved when v_t touches f_t and v_t falls below f_t, the whole idea of protection is compromised. Such a situation is more probable at times of a steep fall in the underlying risky asset price, and it gives rise to the so-called "gap risk." Conversely, during a bull phase, when the risky asset price surges and the exposure crosses the portfolio value, the entire fund is allocated to the risky asset. The boundary condition for this occurrence is v_t = m(v_t − f_t), which implies:

(24) v_t = f_t / (1 − 1/m)

Under this condition, the exposure min(v_t, m|v_t − f_t|⁺) is set to v_t and the stochastic differential Equation (22) reduces to:

(25) dv_t = v_t (μ − λK − σ²/2) dt + v_t σ dW_t + v_t (Y_t − 1) dN_t

Equation (25) reveals that the portfolio value follows a geometric Brownian motion with jumps having the same drift and variance as the underlying risky asset. The higher the expected return of the underlying risky asset, the higher the expected return of the portfolio during a bull run.
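In a discrete implementation, the self-financing condition of Equation (20) becomes a one-line portfolio update per rebalancing interval. A minimal sketch (names illustrative), where risky_ret and bond_ret stand for the realized returns dS/S and dB/B over the interval:

```python
def portfolio_step(vt, ft, m, risky_ret, bond_ret):
    """One self-financing update of the portfolio value (Equation (20))."""
    et = min(vt, m * max(0.0, vt - ft))   # exposure, Equation (18)
    bt = vt - et                          # money-market holding, Equation (19)
    return vt + et * risky_ret + bt * bond_ret

# When vt hits the floor the exposure is zero, so only the money-market
# leg moves the portfolio (the reduced dynamics of Equation (23)).
v_next = portfolio_step(vt=950.0, ft=950.0, m=3, risky_ret=-0.20, bond_ret=0.0003)
```

In the example, a 20 percent crash in the risky asset leaves the portfolio untouched because the fund was already fully de-risked at the floor; gap risk arises only when such a crash lands between two rebalancing dates while the exposure is still positive.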
Thus the JD-MR-CPPI strategy addresses the upside potential effectively and at the same time takes care of the downside risk by eliminating the risky stochastic components once v_t = f_t (see Equation (23)). When the portfolio value lies within the interval given by the inequality (26), the allocation is split between the risky and risk free assets, and the stochastic differential equation governing the portfolio value process is Equation (27):

(26) f_t ≤ v_t ≤ f_t / (1 − 1/m)

(27) dv_t = [(μ − λK − σ²/2 − r_t) m(v_t − f_t) + r_t v_t] dt + m(v_t − f_t) σ dW_t − [v_t − m(v_t − f_t)] σ √r_t · (2(e^(h(T − t)) − 1) / (2h + (h + κ)(e^(h(T − t)) − 1))) dω_t + m(v_t − f_t) (Y_t − 1) dN_t

Finally, the observations can be summarized as displayed in Table I.

4.1 Data analysis For empirical analysis of the developed model, suitable proxies for the risky and risk free assets are a prerequisite. The National Stock Exchange of India (NSE), though relatively new compared to the Bombay Stock Exchange, has witnessed considerable growth during the last ten years, both in terms of volume traded and number of companies listed. This qualifies the NSE's CNX-NIFTY 50 index as a suitable proxy for the risky asset. The Indian financial market witnessed a considerable swing during the financial crisis of 2008: the CNX-NIFTY 50 plunged from a record high of 6,288 on January 8, 2008 to 2,524 on October 27, 2008 (source: Yahoo Finance). Post-recession recovery of the Indian market was also quick compared to the developed economies, with the CNX-NIFTY 50 touching 6,312.45 again on November 5, 2010, a new high after the crisis. The paper captures these two phases of the market in order to compare the performance of the JD-MR-CPPI algorithm against the CNX-NIFTY 50 index (taken as a benchmark as well as the underlying asset) in both phases.
For this purpose, the period from January 8, 2008 to October 27, 2008 is termed the downswing phase and the period from October 27, 2008 to November 5, 2010 the recovery phase. We have stress-tested our model over these two historical extreme periods to check its boundaries. Further, using Monte Carlo simulation on these extreme regimes, we have derived a series of hypothetical stressed scenarios and subsequently stress-tested our model on all of them. Stress testing a model on historical and hypothetical stressed scenarios is an efficient way of testing model performance and robustness. The methodology gained considerable importance after the 2008 subprime crisis, and regulators increasingly consider stress testing a viable means of model validation and model risk management (see the Dodd-Frank Act and the Comprehensive Capital Analysis and Review guidelines of the Federal Reserve System for details, Ref.: www.federalreserve.gov/bankinforeg/ccar.htm). Daily price data of the CNX-NIFTY 50 are collected across both phases. On the other hand, the low level of development of the Indian debt market, coupled with illiquid instruments and a lack of reliable data, hinders the selection of a suitable proxy for the money market account. The call money rate is selected as a proxy for the short-term interest rate of the money market account for two reasons: first, reliable quotations are available on a daily basis and, second, previous experience with the Indian market has revealed that interbank call rates significantly influence the interest rates of the economy. Daily call money rates across both phases are collected from the IFMR Data Centre (source: www.ifmr.ac.in/). The summary statistics of the collected data are displayed in Tables II and III, respectively.
4.2 JD-MR CPPI model calibration The diffusion models guiding the risky asset price and the money market account are calibrated against the CNX-NIFTY and call money rate data for both phases using the maximum likelihood estimation (MLE) technique. For a given set of data and an assumed underlying model, the MLE technique returns the set of model parameters that maximizes the probability, or likelihood, that the model output matches the observed data. In MATLAB this is achieved by maximizing the log likelihood function of the process over the parameter set using the "fminsearch" non-linear optimization routine. The calibrated parameters for the risky asset and the money market account are displayed in Tables IV and V, respectively.

4.3 JD-MR CPPI model simulations Once the model has been calibrated to the historical data, it is used to simulate 100,000 trajectories of the possible paths of the portfolio during each of the downswing and recovery phases. The expected return and risk are then calculated by taking the corresponding means across all simulated paths. The initial investment in the portfolio is taken as Rs1,000 with a maturity period of one year. The initial floor is set to the price of the zero coupon bond that provides a maturity value of Rs1,000 after one year, with the interest rate following the CIR mean reversion process of Equation (11). The multiple is set to 3 for the present study. The transaction cost is taken as 0.01 percent of the transaction volume, and the same cost is assumed for buying and selling both the risky and risk-free assets. Rebalancing frequency is kept at 200 times a year at equal intervals. The initial NIFTY value is normalized to the initial investment for comparison purposes. The results are displayed in Table VI.
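The VaR figures reported in Table VI follow directly from the simulated terminal-value distribution: the 99 percent VaR is the loss, relative to the Rs1,000 initial investment, at the first percentile of terminal values. A sketch using synthetic placeholder data (the paper's figures come from the calibrated JD-MR-CPPI paths, not from this toy distribution):

```python
import numpy as np

def var_99(terminal_values, initial_investment=1000.0):
    """99 percent VaR: the loss not exceeded in 99 percent of the simulations."""
    worst_1pct = np.percentile(terminal_values, 1.0)  # 1st percentile of outcomes
    return max(0.0, initial_investment - worst_1pct)

rng = np.random.default_rng(7)
# Placeholder terminal values standing in for the 100,000 simulated paths.
terminal = 1000.0 * np.exp(rng.normal(0.01, 0.02, size=100_000))
loss_99 = var_99(terminal)
```

Applied to the CPPI terminal values, this statistic is bounded by the gap between the initial investment and the floor, which is why the portfolio VaR in Table VI stays an order of magnitude below the market's.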
4.4 JD-MR CPPI model performance analysis Table VI indicates that during the downswing phase, when the aggregate market return was −75.11 percent, the JD-MR CPPI portfolio manages to maintain an average return of 1.2 percent. The 99 percent VaR of the portfolio is also significantly less than that of the benchmark market index. It can be deduced that, for an initial investment of Rs1,000 during the downswing phase, the loss will not exceed Rs10.5251 in 99 percent of the cases if the investment is made in the JD-MR-CPPI portfolio; for an equal investment in the market over the same phase, the corresponding loss value rises to a whopping Rs907.6475. During the recovery phase the portfolio generates an aggregate return of 85.45 percent against a market return of 212.41 percent, but manages to hold the VaR at Rs35.0508 as opposed to the market's Rs90.5709. Thus, the JD-MR-CPPI portfolio performs better than the risky market during the downswing and better than the fixed income market during the growth phase, which makes it a value enhancing proposition for risk-averse investors. Figure 1 provides the path followed by the portfolio, the market and the floor for a particular simulation, the corresponding allocation to risky assets and the histogram of the terminal value of the portfolio. The histograms for both phases are right skewed, indicating hedging effectiveness under extreme market environments. Figure 2 provides the terminal value of the portfolio (in green) and the terminal value of the floor (in red) for all 100,000 simulations. The lower cut-off of the terminal value of the portfolio indicates the capital protective feature of the algorithm, while the volatility of the terminal value of the floor is expected because of its stochastic nature as defined in the algorithm. Figure 3 displays the cumulative average transaction cost curve across the 100,000 simulations.
The curve is concave and stabilizes near Rs16 during the market downswing phase, primarily because of low transaction activity and stable investment in the debt segment during the crisis period. During the recovery phase the curve displays convex characteristics, primarily because of heavy transaction activity, and the average cost shoots up at an increasing rate as the market rises. The paper develops a theoretical model of a JD-MR-CPPI strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature. The model is validated via back testing during the extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique. Besides providing capital protection, the strategy is found to hedge the downside risk effectively during bad times and to leverage the upside potential during good times. Coming to the Indian context, until 2006 the Securities and Exchange Board of India (SEBI) voted against the entry of any capital protected schemes into the Indian market, although such schemes were flourishing widely in foreign markets. However, in 2006, following several rounds of discussions and constant persuasion from the Association of Mutual Funds in India, SEBI allowed the entry of capital protected schemes by amending the SEBI (Mutual Fund) Regulations, 1996 vide a circular dated August 14, 2006. As per the regulations, capital protection schemes floated by an asset management company (AMC) should be close-ended and must be rated by a registered credit rating agency to ascertain the degree of certainty of achieving the objective of the fund. The regulations also clearly indicated that asset management companies can market such a scheme as a "capital protection oriented" fund but not as a "capital guaranteed" fund (ref: SEBI Circular No. SEBI/IMD/CIR No. 9/74364/06).
The difference is also evident in the next line of the circular: "the orientation toward capital protection initiates from the portfolio structure and not from any bank guarantee, insurance, cover, etc." (ref: SEBI Circular No. SEBI/IMD/CIR No. 9/74364/06). Thus embedded options, which invoke a guarantee by virtue of their design, are excluded, and thereby the OBPI strategy was not encouraged in the Indian market. As per the SEBI guidelines, capital protection-oriented funds are to be structured through a suitable combination of risky and risk free assets and by dynamically rebalancing the same through time with the objective of protecting the investor's initial fund. Thereby, only the CPPI strategy fits perfectly within the scope provided by SEBI. Given this backdrop, the developed JD-MR-CPPI model is well suited for the engineering of structured products in the Indian market.
|
Back testing the algorithm using Monte Carlo simulation across the crisis and recovery phases of the 2008 recession revealed that the portfolio performs better than the risky market during the crisis by hedging the downside risk effectively, and performs better than fixed income instruments during the growth phase by leveraging the upside potential. This makes it a value-enhancing proposition for risk-averse investors.
|
[SECTION: Value] Higher integration of financial markets, the surge of foreign institutional investment, and real-time information streaming have, on the one hand, facilitated the discovery of fair market prices and enhanced market efficiency; on the other hand, they have resulted in higher market volatility. Contagion, or the spill-over effect, is more prominent in an integrated market than in a segmented one. These developments have led to increasing concern among potential investors as well as financial institutions over the assessment and management of risk. While the two-fund separation theorem asserts that a rational investor always allocates his/her endowment between a risky asset and a risk-free asset in accordance with his/her degree of risk aversion, this allocation, however, changes with the market scenario. Empirically, investors in general tend to become more risk averse during regimes of higher volatility and more risk loving during the opposite regimes. Thus, the prime objective of utility maximization can be visualized as a combination of two sub-objectives: first, the protection of the investors' wealth during bad times and, second, the maximization of returns during good times. Portfolio insurance strategies address both these needs. Traditionally, there are two categories of portfolio insurance strategies: static and dynamic. The former chooses stock index options or futures to hedge the downside risk of the portfolio, while the latter relies on continuous rebalancing of the portfolio between the risky and risk-free asset with the objective of insuring the investment against all possible erosion.
While the option-based portfolio insurance (OBPI) strategy is the popular example of static portfolio insurance, the constant-mix strategy, constant proportion portfolio insurance (CPPI), dynamic proportion portfolio insurance, time invariant portfolio protection (TIPP), etc., are the popular examples of dynamic portfolio insurance. Among them, the CPPI strategy is still the most popular and widely practiced (Pain and Rand, 2008). Investment schemes developed using these strategies are generally marketed as capital protection funds or capital guaranteed funds. A common assumption in the existing literature on the CPPI strategy has been that the investment in the risk-free asset grows at a constant rate in spite of frequent trading. Empirical evidence, however, supports the view that interest rates follow a stochastic mean-reverting behavior, and frequent reshuffling of the portfolio between the risky and risk-free asset makes it impractical to assume that the investment in the money market account grows at a constant rate along the entire investment horizon. Considering this gap in the existing literature, the paper constructs a model of the JD-MR-CPPI strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature, and evaluates the effectiveness of the algorithm during extreme regimes in the Indian market using a scenario-based Monte Carlo simulation technique. The rest of the paper is organized as follows: Section 2 briefly probes into the existing literature on the CPPI strategy, Section 3 presents the theoretical framework of the JD-MR CPPI strategy, Section 4 empirically validates the model using the scenario-based simulation technique and Section 5 concludes the paper. The CPPI was introduced by Perold (1986) on fixed income assets. Black and Jones (1987) extended the method to equity-based underlying assets.
Black and Perold (1992) further developed the algorithm by probing into how transaction costs and borrowing constraints impact the insurance strategy. Their research revealed that, in the absence of any transaction cost, the CPPI is equivalent to investing in a perpetual American call option and that the strategy is optimal for the HARA utility function under minimum consumption constraints. Their study further revealed that as the "multiple" value increases, the payoff under the CPPI approaches that of the stop-loss strategy. CPPI strategies in the presence of jumps in stock prices were first considered by Prigent and Tahar (2005) in a diffusion model with finite intensity jumps. Following their work, Cont and Tankov (2009) quantified the gap risk of classical CPPI strategies under the assumption that the risky asset follows Merton's jump diffusion (JD) model. Furthermore, their study derived analytical expressions for the expected loss and the distribution of losses given that the gap event has occurred. Unlike the earlier research, they incorporated infinite activity jumps and stochastic volatility in their algorithm. Estep and Kritzman (1988) came up with the time invariant portfolio protection (TIPP) strategy, where the floor is dynamic and proportional to the current wealth. Thereby, this strategy is tuned toward protection of the current wealth rather than a fixed floor value. Compared to the traditional CPPI, the TIPP strategy is more conservative in terms of restricted exposure during growth phases. In line with their work, Chen and Liao (2006) propose a goal-directed CPPI strategy to combine an investor's goal-directed trading behavior with the traditional CPPI strategy. The objective is to maintain conservative exposure to the risky asset when the portfolio value approaches the pre-set goal and to take an aggressive exposure when the deviation from the goal is large.
However, the approach suffers from one major drawback: it fails to utilize the upside potential to the fullest extent. Hainaut (2010) analyzes the influence of switches of asset regimes on the CPPI performance and risk exposure under the additional assumption that the dynamics of the risky asset is driven by a hidden Markov process. The paper shows how the value at risk (VaR) and the tail VaR can be retrieved by inversion of the Fourier transform of the characteristic function of the return density. Another important line of research on the CPPI strategy concentrates on the determination of the "multiple" that guides the exposure to the risky asset, and hence the overall risk exposure of the portfolio. Authors like Bertrand and Prigent (2002) and Prigent and Tahar (2005) probed into the development of unconditional "multiple" estimates using the extreme value approach, while Hamidi et al. (2009) concentrated on conditional "multiple" determination, where the multiplier is defined as a function of an extended expected value-at-risk with the objective of keeping a constant exposure to risk. It is widely assumed in most of the literature on CPPI that the floor grows at a constant risk-free rate. An alternative to this notion was introduced by Boulier and Kanniganti (1995) and later extended by Mkaouar and Prigent (2007), who assumed that the floor value at any given time is partially dependent on the portfolio value. The partial dependence can be explained by the fact that the floor value increases when the risky asset in the portfolio performs strongly, but does not decrease during poor performance. In contrast to the previous work, the current paper assumes the floor of the model to be a stochastic mean-reverting process guided by the movement of the short-term interest rate in the economy.
This development is more relevant for two reasons: first, the short-term interest rate changes with time, and hence a constant yield during each rebalancing step is not practically feasible; second, the historical literature has revealed that the short-term interest rate tends to move opposite to the equity market. Thereby, during a bear run the floor will increase at a higher rate, whereas the growth of the floor will stagnate during a bull phase, which helps the algorithm to capitalize on the upward potential during the growth phase and to cut down on the exposure during the crisis phase. The JD-MR-CPPI model: the CPPI strategy dynamically reallocates funds between a risky asset and a risk-free money market account with the objective of protecting the investor's initial capital along the investment horizon. The algorithm starts by setting a floor, which is normally kept equal to the present value of the initial investment, discounted at the risk-free rate over the investment horizon. Capital allocated as the floor today will grow at the risk-free rate to the initial investment at maturity. The idea is that if, through dynamic rebalancing, the fund manager can ensure that the portfolio value never falls below the floor, then irrespective of the price movement of the risky asset, the portfolio value will always remain above or equal to the initial investment at maturity. Suppose v0 is the initial investment, r is the risk-free rate and T is the investment horizon (in years). The initial floor f0 is set equal to the amount which, when invested at the risk-free rate, will grow to v0 at time T. The difference between the invested fund and the initial floor is called the cushion c0:(1) c0 = v0 - f0. The initial exposure E0 to the risky asset is determined as some multiple (m) of the initial cushion, under the constraint that it cannot be more than the value of the portfolio v0. The expression for the exposure is given by the following equation.
The multiple (m) is an important parameter in the algorithm because it controls the exposure of the fund to the risky asset: the higher the multiple, the higher the exposure and the higher the expected return of the portfolio. But a high multiple also increases the probability of gap risk:(2) E0 = min(v0, m(v0 - f0)). Once the exposure has been determined, that amount is invested in the risky asset and the remaining fund is parked in the money market account. The amount B0 invested in the money market account is thus given by:(3) B0 = v0 - min(v0, m(v0 - f0)). Once the initial allocation has been made, the next task is to decide upon the rebalancing approach. There are two commonly used rebalancing approaches: time-based rebalancing, where rebalancing is done at a fixed time interval over the investment horizon, and move-based rebalancing, where rebalancing is done once the percentage change in exposure to the risky asset crosses a predetermined threshold value. Sometimes a combination of both approaches is used. Move-based rebalancing is suitable in a world with high transaction costs, as it prevents unnecessary rebalancing during minor fluctuations and thereby minimizes the transaction cost. However, in this approach the threshold value has to be chosen carefully. A higher threshold value reduces the number of rebalancings, and hence the total transaction cost, but at the same time increases the probability of the portfolio value crashing through the floor. The optimal threshold limit is the one that minimizes the sum of the transaction cost and the cost of gap risk. In the case of time-based rebalancing, the decision parameter is the rebalancing interval. A longer rebalancing interval reduces the total transaction cost but increases the cost of gap risk. Hence, the optimal rebalancing interval is the one that minimizes the sum of the total transaction cost and the cost of gap risk.
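The initial set-up in Equations (1)-(3) can be sketched in a few lines of code. The rate, horizon, and multiple below are illustrative values only; note that this sketch uses a constant discount rate for the floor, whereas the paper's own floor is priced with the CIR bond formula introduced later.

```python
import math

def cppi_initial_allocation(v0, r, T, m):
    """Classical CPPI set-up with a deterministic floor (Equations (1)-(3))."""
    f0 = v0 * math.exp(-r * T)   # floor: present value of v0 at the risk-free rate
    c0 = v0 - f0                 # Equation (1): cushion
    e0 = min(v0, m * c0)         # Equation (2): exposure, capped at the portfolio value
    b0 = v0 - e0                 # Equation (3): residual parked in the money market
    return f0, c0, e0, b0

# Illustrative values: Rs1,000 over one year at an assumed 7 percent rate, multiple 3
f0, c0, e0, b0 = cppi_initial_allocation(1000.0, 0.07, 1.0, 3)
```

Because the exposure is a multiple of the cushion, a small cushion automatically implies a small risky allocation, which is what makes the strategy self-protecting.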
During each rebalancing period (t) along the investment horizon, the cushion is recalculated as the difference between the portfolio value and the floor ft:(4) ct = max(0, vt - ft), where 0 ≤ t ≤ T. In the event the portfolio value goes below the floor, the cushion is set to 0 (Equation (4)). The exposure is then calculated as:(5a) Et = min(vt, m·ct). Replacing the value of ct from Equation (4), we get:(5b) Et = min(vt, m·max(0, vt - ft)). The exposure can never be more than the portfolio value at any point in time; if the exposure at any rebalancing period exceeds the portfolio value, the exposure is reset to the current portfolio value and the entire fund is invested in the risky asset. This explains the "min" function in Equations (5a) and (5b). After investing the exposure amount in the risky asset, the difference (vt - Et) is invested in the money market account. The procedure is repeated at each rebalancing point until the maturity of the portfolio. Geometric Brownian motion has been widely used for depicting the diffusion process of a risky asset, but the empirical evidence of leptokurtic, fat-tailed financial asset return distributions necessitated the search for alternatives. Merton (1976), for the first time, introduced the JD model, where the diffusion process is assumed to be composed of two parts: a geometric Brownian motion with constant drift and volatility, and a compound Poisson process governing the arrival of jumps. Merton further assumed that the jump size is log-normally distributed with constant mean and variance. The jumps signify the arrival of news (both good and bad) that results in sharp movements of the asset price within a short time interval.
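The per-period rebalancing rule of Equations (4)-(5b) can be sketched as a single function; the numerical values in the checks below are illustrative.

```python
def cppi_rebalance(v_t, f_t, m):
    """One CPPI rebalancing step (Equations (4), (5a)/(5b)).
    Returns the risky exposure and the money market balance."""
    c_t = max(0.0, v_t - f_t)   # Equation (4): cushion floored at zero
    e_t = min(v_t, m * c_t)     # Equation (5a): exposure capped at the portfolio value
    b_t = v_t - e_t             # residual parked in the money market account
    return e_t, b_t

# If the portfolio touches the floor, the risky exposure drops to zero:
assert cppi_rebalance(950.0, 950.0, 3) == (0.0, 950.0)
# If the cushion is large, the exposure is capped at the portfolio value:
assert cppi_rebalance(1000.0, 600.0, 3) == (1000.0, 0.0)
```

The two boundary cases asserted here correspond exactly to the "full de-risking" and "full risky allocation" regimes discussed later in the paper.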
Following the seminal work of Merton (1976), Kou (2002) delivered the double exponential jump diffusion (DEJD) model, where the arrival of news is still guided by a Poisson process but the jump magnitude is depicted by the double exponential distribution. Ramezani and Zeng (1998) arrived at the Pareto-Beta jump diffusion (PBJD) model, where the jumps caused by good news are assumed to follow a Pareto distribution and those caused by bad news a beta distribution. Though from a conceptual point of view the DEJD and PBJD models are alike, they differ structurally. While Kou (2002) suggested using two exponential distributions with dissimilar parameters to define the jumps, Ramezani and Zeng (1998) assume that good and bad news are produced by two autonomous Poisson processes with different intensities and that the corresponding jump magnitudes are drawn from Pareto and beta distributions, respectively. However, simplistic assumptions and ease of use make Merton's JD model a popular modeling tool among practitioners in comparison to the more complicated models, and it is the model used in the current study. Under the JD model, the incremental change in the price of the risky asset is given by:(6) dSt = (μ - λK - σ²/2)St dt + σSt dWt + St(Yt - 1)dNt, where Wt is a standard Wiener process with zero drift and variance equal to dt. The increments dWt are independent of one another in the interval [0, T], where T is the maturity period of the portfolio. μ is the constant drift and σ² is the constant variance of the risky asset; the term σ²/2 is used for convexity correction. Nt is the compound Poisson process with intensity λ signifying the number of jumps within the time interval [0, t]. Yt is a log-normally distributed random process signifying the jump magnitude. For a small time interval dt, the asset price jumps from St to StYt.
Thus, the percentage change in the asset price caused by the jump is given by:(7) dSt/St = (YtSt - St)/St = Yt - 1. The incremental change dNt gives the number of jumps occurring within the incremental time dt, such that:(8) P(dNt = 1) = λdt and P(dNt = 0) = 1 - λdt. The log of the jump magnitude is i.i.d. normal (μJ, σJ). The relative jump magnitude (Yt - 1) is also log-normally distributed, with mean and variance given by:(9) E(Yt - 1) = e^(μJ + σJ²/2) - 1 = K and (10) V(Yt - 1) = e^(2μJ + σJ²)(e^(σJ²) - 1). Under the CPPI strategy, rebalancing is done frequently to avoid the loss because of gap risk, and thereby the paper assumes that the amount invested in the money market account grows at the short-term interest rate. It is widely documented that short-term interest rates for any economy are neither constant nor do they follow a random walk, but display the well-known phenomenon of mean reversion. This refers to the tendency of the interest rate to drift at a certain rate toward its long-term average. Empirically, this means that the change in the interest rate should be significantly positively correlated with the deviation from the long-term mean. Several researchers, like Vasicek (1977), Dothan (1978), Brennan and Schwartz (1979), Cox et al. (1985), and Heath et al. (1992), contributed significantly toward interest rate modeling. While Vasicek (1977) presented one of the earliest stochastic mean-reverting models for the interest rate, assuming in his one-factor model that the interest rate follows an Ornstein-Uhlenbeck process, Cox et al. (1985) in their general equilibrium model (CIR model) improved upon the same to ensure the interest rate does not go below 0. Furthermore, unlike in the Vasicek model, in the CIR model the short-term interest rate does not display a normal or lognormal distribution, but instead exhibits a non-central chi-square (χ²) distribution.
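A minimal discretized simulation of the jump diffusion in Equations (6)-(10) might look like the following; all parameter values are illustrative assumptions, not the calibrated values of the paper.

```python
import numpy as np

def simulate_jd_path(s0, mu, sigma, lam, mu_j, sigma_j, T, n_steps, rng):
    """Log-Euler scheme for Merton's jump diffusion (Equation (6)).
    Jump sizes Y_t are lognormal; K = E(Y_t - 1) as in Equation (9)."""
    dt = T / n_steps
    k = np.exp(mu_j + 0.5 * sigma_j**2) - 1.0            # Equation (9): jump compensator
    s = np.empty(n_steps + 1)
    s[0] = s0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))                # Brownian increment
        n_jumps = rng.poisson(lam * dt)                  # Equation (8): jump arrivals
        jump = np.sum(rng.normal(mu_j, sigma_j, n_jumps))  # sum of log jump sizes
        # exponential step: drift with convexity and jump compensation, plus shocks
        s[i + 1] = s[i] * np.exp((mu - lam * k - 0.5 * sigma**2) * dt
                                 + sigma * dw + jump)
    return s

rng = np.random.default_rng(7)
# Illustrative parameters: 10% drift, 25% volatility, 5 jumps/year on average
path = simulate_jd_path(1000.0, 0.10, 0.25, 5.0, -0.02, 0.05, 1.0, 200, rng)
```

Working in log space keeps the simulated price strictly positive, matching the lognormal structure of the model.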
The paper adopts the CIR model to govern the interest rate process of the money market account. Under the CIR model the short-term interest rate diffusion process is given by:(11) drt = κ(θ - rt)dt + σ√rt dωt, where κ is the speed of adjustment of the instantaneous interest rate toward the target θ, σ is the standard deviation of the interest rate and dωt is a standard Wiener process with zero drift and variance equal to dt. The model also imposes two sets of restrictions, namely, κ, θ, σ > 0 and 2κθ > σ², where the second restriction prevents the interest rate from going negative. Now, under the stochastic interest rate environment, the initial floor of the CPPI strategy should be set equal to the value of a zero coupon bond that grows to the initial investment (I) at the stochastic interest rate (rt) at maturity (T). According to the CIR model, the price at time t (t ∈ [0, T]) of a zero coupon bond with maturity value I and maturity T is given by:(12) B(t, T) = I·A(t, T)·e^(-rt·R(t, T)), where I is the maturity value, rt is the interest rate on the valuation date, T is the maturity period and the remaining parameters are:(13) A(t, T) = [2h·e^((h+κ)(T-t)/2) / (2h + (h+κ)(e^(h(T-t)) - 1))]^(2κθ/σ²), (14) R(t, T) = 2(e^(h(T-t)) - 1) / (2h + (h+κ)(e^(h(T-t)) - 1)) and (15) h = √(κ² + 2σ²). Thus, the initial floor f0 for the capital protection fund under the stochastic interest rate is given by:(16) f0 = I·[2h·e^((h+κ)T/2) / (2h + (h+κ)(e^(hT) - 1))]^(2κθ/σ²)·e^(-r0·[2(e^(hT) - 1) / (2h + (h+κ)(e^(hT) - 1))]), where r0 is the spot interest rate at time t = 0, when the floor valuation is done.
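The CIR bond-pricing formulas in Equations (12)-(16) translate directly into code. The κ, θ, σ values below are illustrative (chosen to satisfy the Feller condition 2κθ > σ²), not the calibrated parameters of the paper.

```python
import math

def cir_bond_price(face, r_t, tau, kappa, theta, sigma):
    """CIR zero coupon bond price B(t, T) per Equations (12)-(15);
    tau = T - t is the time to maturity."""
    h = math.sqrt(kappa**2 + 2.0 * sigma**2)                    # Equation (15)
    denom = 2.0 * h + (h + kappa) * (math.exp(h * tau) - 1.0)
    a = (2.0 * h * math.exp((h + kappa) * tau / 2.0) / denom) ** (
        2.0 * kappa * theta / sigma**2)                         # Equation (13)
    r_factor = 2.0 * (math.exp(h * tau) - 1.0) / denom          # Equation (14)
    return face * a * math.exp(-r_t * r_factor)                 # Equation (12)

# Initial floor (Equation (16)): PV of Rs1,000 one year out at an assumed 7% spot rate
f0 = cir_bond_price(1000.0, 0.07, 1.0, kappa=0.5, theta=0.06, sigma=0.1)
```

As the time to maturity shrinks, the bond price converges to its face value, which is the behavior the floor needs in order to equal the initial investment at maturity.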
The diffusion process of the zero coupon bond is given by:(17) dBt = rtBt dt - σ√rt·Bt·[2(e^(h(T-t)) - 1) / (2h + (h+κ)(e^(h(T-t)) - 1))]dωt. Coming back to the CPPI strategy, at any rebalancing date t the exposure to the risky asset is computed as:(18) Et = min(vt, m|vt - ft|+), where |vt - ft|+ = max{0, (vt - ft)}. Thus, the amount invested in the risk-free money market account is given by:(19) Bt = vt - min(vt, m|vt - ft|+), where Bt grows at the stochastic risk-free rate rt through time and its movement is depicted in Equation (17). A vital assumption of the CPPI portfolio is self-financing: for every small time increment, the incremental change of the risky asset holding and the incremental change of the risk-free asset holding alone contribute to the incremental change of the portfolio value, and no infusion of extra funds is made at any stage. The mathematical representation of the self-financing strategy is given by:(20) dvt = min(vt, m|vt - ft|+)·(dSt/St) + [vt - min(vt, m|vt - ft|+)]·(dBt/Bt). The terms dSt/St and dBt/Bt in Equation (20) can be replaced by the corresponding terms from Equations (6) and (17), respectively, to obtain:(21) dvt = min(vt, m|vt - ft|+)·[(μ - λK - σ²/2)dt + σdWt + (Yt - 1)dNt] + [vt - min(vt, m|vt - ft|+)]·[rt dt - σ√rt·(2(e^(h(T-t)) - 1) / (2h + (h+κ)(e^(h(T-t)) - 1)))dωt]. A slight rearrangement of the terms in Equation (21) results in:(22) dvt = [(μ - λK - σ²/2 - rt)·min(vt, m|vt - ft|+) + rt·vt]dt + min(vt, m|vt - ft|+)·σdWt - [vt - min(vt, m|vt - ft|+)]·σ√rt·(2(e^(h(T-t)) - 1) / (2h + (h+κ)(e^(h(T-t)) - 1)))dωt + min(vt, m|vt - ft|+)·(Yt - 1)dNt. Equation (22) represents the diffusion process of the JD-MR CPPI portfolio value.
It consists of four components: a deterministic drift, a stochastic term representing the unpredictability of the risky asset investment, a stochastic term representing the Poisson distributed jump process and, finally, a stochastic term representing the randomness of the money market investment. The objective of rebalancing is to keep the portfolio value vt above or equal to the floor ft. Once vt touches the floor (vt = ft), |vt - ft|+ is set to 0 and so is the exposure min(vt, m|vt - ft|+). As a result, the stochastic components because of the jump and the risky asset diffusion vanish, and the entire fund is allocated to the risk-free asset. The differential Equation (22) is then reduced to the diffusion process followed by the money market account:(23) dvt = rtvt dt - vt·σ√rt·(2(e^(h(T-t)) - 1) / (2h + (h+κ)(e^(h(T-t)) - 1)))dωt. However, if the rebalancing cannot be achieved when vt touches ft and vt falls below ft, the whole idea of protection is compromised. Such a situation is more probable at times of a steep fall in the underlying risky asset price, and it gives rise to the so-called "gap risk." Now, during a bull phase, when the risky asset price surges and the exposure crosses the portfolio value, the entire fund is allocated to the risky asset. The boundary condition for its occurrence is vt = m(vt - ft), which implies that:(24) vt = ft/(1 - 1/m). Under this condition, the exposure min(vt, m|vt - ft|+) is set to vt and the stochastic differential equation (Equation (22)) reduces to:(25) dvt = vt(μ - λK - σ²/2)dt + vtσdWt + vt(Yt - 1)dNt. Equation (25) reveals that the portfolio value follows a geometric Brownian motion with jumps having the same drift and variance as the underlying risky asset. The higher the expected return of the underlying risky asset, the higher the expected return of the portfolio during a bull run.
Thus the JD-MR-CPPI strategy addresses the upside potential effectively and at the same time takes care of the downside risk by eliminating the stochastic components once vt = ft (see Equation (23)). When the portfolio value lies within the interval given by the inequality in Equation (26), the allocation is made in both the risky and risk-free asset. In that case the stochastic differential equation governing the portfolio value process is given by Equation (27):(26) ft ≤ vt ≤ ft/(1 - 1/m); (27) dvt = [(μ - λK - σ²/2 - rt)·m(vt - ft) + rt·vt]dt + m(vt - ft)·σdWt - [vt - m(vt - ft)]·σ√rt·(2(e^(h(T-t)) - 1) / (2h + (h+κ)(e^(h(T-t)) - 1)))dωt + m(vt - ft)·(Yt - 1)dNt. Finally, the observations can be summarized as displayed in Table I. 4.1 Data analysis For empirical analysis of the developed model, suitable proxies for the risky and risk-free assets are a prerequisite. The National Stock Exchange of India, though relatively new compared to the Bombay Stock Exchange, has witnessed considerable growth during the last ten years, both in terms of volume traded and number of companies listed. This qualifies CNX-NIFTY 50 as a suitable proxy for the risky asset. The Indian financial market witnessed a considerable swing during the financial crisis of 2008: CNX-NIFTY 50 plunged from a record high of 6,288 on January 8, 2008 to 2,524 on October 27, 2008 (source: Yahoo Finance). The post-recession recovery of the Indian market was also quick compared to the developed economies; CNX-NIFTY 50 touched 6,312.45 again on November 5, 2010, a new high after the crisis. The paper proposes to capture these two phases of the market in order to compare the performance of the JD-MR-CPPI algorithm against the CNX-NIFTY 50 index (taken as a benchmark as well as the underlying asset) in both phases.
For this purpose, the period from January 8, 2008 to October 27, 2008 is termed the downswing phase and the period from October 27, 2008 to November 5, 2010 the recovery phase. We have stress tested our model on these two historical extreme periods to check its boundaries. Further, using Monte Carlo simulation on these extreme regimes, we have derived a series of hypothetical stressed scenarios and subsequently stress tested our model on all of them. Stress testing a model on historical and hypothetical stressed scenarios is an efficient way of testing model performance and robustness. This methodology has gained considerable importance after the 2008 subprime crisis, and regulators are increasingly considering stress testing a viable means of model validation and model risk management (see the Dodd-Frank Act and the Comprehensive Capital Analysis and Review guidelines of the Federal Reserve System for details, ref.: www.federalreserve.gov/bankinforeg/ccar.htm). Daily price data for the CNX-NIFTY 50 are collected across both phases. On the other hand, the low level of development of the Indian debt market, coupled with illiquid instruments and a lack of reliable data, hinders the selection of a suitable proxy for the money market account. The call money rate is selected as a proxy for the short-term interest rate of the money market account for two reasons: first, reliable quotations are available on a daily basis and, second, previous experience with the Indian market has revealed that interbank call rates significantly influence the interest rates of the economy. Daily call money rates across both phases are collected from the IFMR Data Centre (source: www.ifmr.ac.in/). The summary statistics of the collected data are displayed in Tables II and III, respectively.
4.2 JD-MR CPPI model calibration The diffusion models guiding the risky asset price and the money market account are calibrated against the CNX-NIFTY and call money rate data for both phases using the maximum likelihood estimation (MLE) technique. For a given data set and an assumed underlying model, the MLE technique returns the optimal set of model parameters that maximizes the probability, or likelihood, that the model output matches the observed data. In MATLAB this is achieved by maximizing the log likelihood function of the process over the set of parameters using the "fminsearch" non-linear optimization routine. The calibrated parameters for the risky asset and the money market account are displayed in Tables IV and V, respectively. 4.3 JD-MR CPPI model simulations Once the model has been calibrated to the historical data, it is used to simulate 100,000 trajectories of the possible paths of the portfolio during each of the downswing and recovery phases. The expected return and risk are then calculated by taking the corresponding means across all the simulated paths. The initial investment in the portfolio is taken as Rs1,000 with a maturity period of one year. The initial floor is set to the price of the zero coupon bond that provides a maturity value of Rs1,000 after one year, with the interest rate following the CIR mean reversion process (given by Equation (11)). The multiple is set to 3 for the present study. The transaction cost is taken as 0.01 percent of the total transaction volume, and it is assumed that the same transaction cost prevails for buying and selling of both the risky and risk-free asset. Rebalancing frequency is kept at 200 times a year at equal intervals. The initial NIFTY value is normalized to the initial investment for comparison purposes. The results are displayed in Table VI.
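As an illustration of the MLE step, the sketch below fits the drift and volatility of a plain diffusion to synthetic log-returns by minimizing the negative log-likelihood with scipy's Nelder-Mead routine (the analogue of MATLAB's fminsearch). The Gaussian likelihood here is a deliberate simplification; the paper's calibration uses the full JD and CIR likelihoods, and the data below are simulated, not the NIFTY or call-rate series.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, log_returns, dt):
    """Negative Gaussian log-likelihood of log-returns under a plain diffusion;
    a simplified stand-in for the full jump diffusion / CIR likelihoods."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf                       # keep the search in the valid region
    mean = (mu - 0.5 * sigma**2) * dt       # per-step mean of log-returns
    var = sigma**2 * dt                     # per-step variance of log-returns
    return 0.5 * np.sum(np.log(2 * np.pi * var)
                        + (log_returns - mean) ** 2 / var)

# Synthetic daily log-returns standing in for the observed market data
rng = np.random.default_rng(0)
dt = 1.0 / 250
true_mu, true_sigma = 0.08, 0.30
x = rng.normal((true_mu - 0.5 * true_sigma**2) * dt,
               true_sigma * np.sqrt(dt), 2000)

res = minimize(neg_log_likelihood, x0=[0.0, 0.2], args=(x, dt),
               method="Nelder-Mead")        # fminsearch analogue
mu_hat, sigma_hat = res.x
```

With 2,000 observations the volatility estimate recovers the true value closely, while the drift remains noisy, which is the usual pattern when calibrating diffusions to short samples.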
4.4 JD-MR CPPI model performance analysis Table VI indicates that during the downswing phase, when the aggregate market return was -75.11 percent, the JD-MR CPPI portfolio manages to maintain an average return of 1.2 percent. The 99 percent VaR of the portfolio is also significantly less than that of the benchmark market index. It can be deduced that, for an initial investment of Rs1,000 during the downswing phase, the loss will not exceed Rs10.5251 in 99 percent of the cases if the investment is made in the JD-MR-CPPI portfolio, whereas for an equal investment in the market the corresponding loss value increases to a whopping Rs907.6475. During the recovery phase the portfolio generates an aggregate return of 85.45 percent against a market return of 212.41 percent, but manages to hold the VaR at Rs35.0508 as opposed to the market's Rs90.5709. Thus, the JD-MR-CPPI portfolio performs better than the risky market during the downswing and better than the fixed income market during the growth phase, which makes it a value-enhancing proposition for risk-averse investors. Figure 1 provides the path followed by the portfolio, the market and the floor for a particular simulation, the corresponding allocation to risky assets and the final histogram of the terminal value of the portfolio. The histograms for both phases are right skewed, indicating hedging effectiveness under extreme market environments. Figure 2 provides the terminal value of the portfolio (in green) and the terminal value of the floor (in red) for all 100,000 simulations. The lower cut-off of the terminal value of the portfolio indicates the capital-protective feature of the algorithm, while the volatility of the terminal value of the floor is expected because of its stochastic nature as defined in the algorithm. Figure 3 displays the cumulative average transaction cost curve across the 100,000 simulations.
The curve is concave and the value stabilizes near Rs16 during the market downswing phase, primarily because of low transaction activity and stable investment in the debt segment during the crisis period. During the recovery phase the curve displays convex characteristics, primarily because of heavy transaction activity, and the average cost shoots up at an increasing rate as the market rises. The paper develops a theoretical model of the JD-MR-CPPI strategy in the presence of transaction costs and a stochastic floor, as opposed to the deterministic floor used in the previous literature. The model is validated via back testing during the extreme regimes in the Indian market using the scenario-based Monte Carlo simulation technique. Besides providing capital protection, the strategy is found to hedge the downside risk effectively during bad times and to leverage the upside potential during good times. Coming to the Indian context, until the year 2006 the Securities and Exchange Board of India (SEBI) voted against the entry of any capital protected schemes into the Indian market, although such schemes were widely flourishing in foreign markets. However, in 2006, following several rounds of discussions and constant persuasion from the Association of Mutual Funds in India, SEBI allowed the entry of capital protected schemes by amending the SEBI (Mutual Fund) Regulations, 1996 vide a circular dated August 14, 2006. As per the regulations, capital protection schemes floated by an AMC should be close-ended and should mandatorily be rated by a registered credit rating agency to ascertain the degree of certainty of achieving the objective of the fund. The regulations also clearly indicated that asset management companies can market such a scheme as a "capital protection oriented" fund and not a "capital guaranteed" fund (ref: SEBI Circular No. SEBI/IMD/CIR No. 9/74364/06).
The difference is also evident in the next line of the circular: "the orientation toward capital protection initiates from the portfolio structure and not from any bank guarantee, insurance, cover, etc." (ref: SEBI Circular No. SEBI/IMD/CIR No. 9/74364/06). Thus embedded options, which invoke a guarantee by virtue of their design, are excluded, and thereby the OBPI strategy was not encouraged in the Indian market. As per the SEBI guideline, capital protection-oriented funds are to be structured by a suitable combination of risky and risk-free assets and by dynamically rebalancing the same through time with the objective of protecting the investor's initial fund. Thereby, only the CPPI strategy fits perfectly within the scope provided by SEBI. Given this backdrop, the developed JD-MR-CPPI model is well suited for the engineering of structured products in the Indian market.
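The mechanics validated above - holding a risky exposure equal to a multiple of the cushion over a floor, rebalancing at a proportional transaction cost, and reading the 99 percent VaR off the simulated terminal values - can be sketched as below. This is a minimal illustration, not the paper's model: it uses a deterministic floor, a plain Merton-style jump-diffusion without mean reversion, and illustrative parameter values throughout.

```python
import numpy as np

rng = np.random.default_rng(0)

def cppi_terminal_values(n_paths=10_000, n_steps=252, dt=1 / 252,
                         v0=1000.0, floor0=900.0, multiplier=4.0,
                         mu=0.08, sigma=0.25,                         # diffusion part
                         jump_lam=0.5, jump_mu=-0.05, jump_sig=0.10,  # jump part
                         r=0.06, tc=0.001):
    """CPPI with a Merton-style jump-diffusion risky asset, a floor
    accruing at rate r, and a proportional transaction cost tc."""
    v = np.full(n_paths, v0)
    floor = np.full(n_paths, floor0)
    risky = np.zeros(n_paths)
    for _ in range(n_steps):
        cushion = np.maximum(v - floor, 0.0)
        target = np.minimum(multiplier * cushion, v)   # no leverage, no shorting
        v -= tc * np.abs(target - risky)               # pay cost on the trade
        risky = target
        # log-return of the risky asset: diffusion plus compound Poisson jumps
        n_jumps = rng.poisson(jump_lam * dt, n_paths)
        jumps = n_jumps * jump_mu + np.sqrt(n_jumps) * jump_sig * rng.standard_normal(n_paths)
        ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths) + jumps
        v = risky * np.exp(ret) + (v - risky) * np.exp(r * dt)
        floor *= np.exp(r * dt)
    return v

terminal = cppi_terminal_values()
# 99 percent VaR on the initial Rs1,000, read off the simulated distribution
var_99 = max(0.0, 1000.0 - float(np.percentile(terminal, 1)))
```

Reading the VaR off the first percentile of the terminal distribution mirrors how the Rs10.5251 and Rs35.0508 figures above would be obtained, though the numbers produced here depend entirely on the illustrative parameters.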
|
The study modifies the CPPI algorithm by re-defining the floor of the algorithm as a stochastic mean-reverting process guided by the movement of the short-term interest rate in the economy. This development is more relevant for two reasons: first, the short-term interest rate changes with time, and hence a constant yield during each rebalancing step is not practically feasible; second, the literature has revealed that the short-term interest rate tends to move opposite to the equity market. Thereby, during a bear run the floor will increase at a higher rate, whereas the growth of the floor will stagnate during a bull phase, which helps the model capitalize on the upward potential during the growth phase and cut down on exposure during the crisis phase.
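A minimal sketch of such a stochastic floor, assuming a Vasicek mean-reverting short rate (a common choice; the paper's exact floor specification may differ, and all parameter values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_floor_path(n_steps=252, dt=1 / 252, floor0=900.0,
                          r0=0.06, kappa=2.0, theta=0.06, xi=0.02):
    """Floor accruing at a Vasicek mean-reverting short rate:
    dr = kappa * (theta - r) dt + xi dW,  floor_{t+dt} = floor_t * exp(r_t dt)."""
    r, floor = r0, floor0
    path = [floor]
    for _ in range(n_steps):
        floor *= np.exp(r * dt)
        r += kappa * (theta - r) * dt + xi * np.sqrt(dt) * rng.standard_normal()
        path.append(floor)
    return np.array(path)

path = stochastic_floor_path()
```

Simulating the rate shocks dW with a negative correlation to the equity shocks would reproduce the behavior described above: the floor accrues faster in bear markets and stagnates in bull markets.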
|
[SECTION: Purpose] Klein et al. (1998) pioneered the study of the consumer animosity stream of research with the introduction of the animosity model of foreign product purchase. They defined animosity as "anger related to previous or ongoing political, military, economic, or diplomatic events" (p. 90). Animosity is a hostile attitude aimed at national out-groups, while hostility comprises both cognitive and attitudinal components. The cognitive component entails cynical beliefs and mistrust of others. The attitudinal component includes the negative emotions of anger, contempt, and disgust (Jung et al., 2002). According to Averill (1982), animosity is a strong emotion of dislike and hatred based on beliefs resulting from past or present military, political, or economic conflict and on actions between nations or people perceived to be unjustifiable or contradictory to socially acceptable norms. Since Klein et al.'s (1998) seminal study, dozens of papers have been published on the subject of consumer animosity. While some studies are replications, others delve more deeply into the consumer animosity phenomenon by examining its potential antecedents, mediators, moderators (Klein and Ettenson, 1999; Maher and Mady, 2010; Riefler and Diamantopoulos, 2007; Shoham et al., 2006; Wang et al., 2013), and consequences (Huang et al., 2010). Previous research demonstrates that consumer animosity may stem from an array of factors including past and ongoing political tensions between countries, past wars, and trade discords (Ettenson and Klein, 2005; Klein et al., 1998). Consumer animosity, in turn, is likely to result in anger which negatively affects consumers' judgments of product quality (Rose et al., 2009) as well as their willingness to buy (WTB) products made in the offending country (Fernandez-Ferrin et al., 2015; Wang et al., 2013). 
Although previous research has focused on the relationship between consumer animosity and a myriad of constructs, the possible relationship between consumer animosity and the consumption context (i.e. conspicuous vs inconspicuous consumption) has been largely overlooked. Patsiaouras and Fitchett (2012) defined conspicuous consumption as "the competitive and extravagant consumption practices and leisure activities that aim to indicate membership to a superior social class" (p. 154). A previous pilot study suggests that consumer animosity is associated with a lowered proclivity to engage in conspicuous consumption (Al-Hyari et al., 2012). This finding is in line with Klein's (2002) contention that the effects of consumer animosity may be more pivotal in the context of conspicuous consumption compared to other consumption contexts. Examining the relationship between consumer animosity and conspicuous consumption provides a broader understanding as to the particular contexts in which animosity is more likely to influence the consumption of products made in the offending country. A more profound understanding of this relationship may also serve to aid marketing managers in devising more focused marketing strategies and thus allocating marketing resources more efficiently. Hence, the main objective of this research was to examine whether consumer animosity acts as an antecedent to conspicuous consumption. This paper is comprised of two studies aimed at testing the generalizability of the proposed model. This paper is organized as follows: a review of the related literature is followed by an elaborate description of the two major studies conducted in the framework of the present research effort. This is followed by a discussion of the implications emanating from both studies. Finally, the authors point out the research limitations and suggest directions for future research. 
Consumer animosity and conspicuous consumption Veblen (1918) pioneered the research into conspicuous consumption. This type of consumption focuses on how consumers shop for and use brands, and how social status is shown off via brand image (Griskevicius et al., 2010). Hence, conspicuous consumption is a form of symbolic consumption. A large body of literature exists on the role of symbolic consumption in relation to consumer behavior (Holt, 2002; Hoyer and MacInnis, 1997). According to Grubb and Grathwohl (1967), products are social tools "serving as a means of communication between the individual and his or her significant references" (p. 24). One of the most common means of social communication is conspicuous consumption (Bushman, 1993). Klein et al.'s (1998) and Klein's (2002) studies suggest a negative association between consumer animosity and the ownership of conspicuous products such as cameras and automobiles. These findings are corroborated by more recent research. Al-Hyari et al. (2012) conducted a pilot study in Saudi Arabia in the context of the 2005 Muhammad controversy. The controversy emanated from a call for cartoonists to contribute cartoons depicting the prophet Muhammad by Jyllands-Posten (a Danish newspaper). The cartoon, deemed responsible for stirring unrest among Muslims across the globe, was one that depicted the prophet Muhammad with a bomb in his turban (Knight et al., 2009). Consistent with the two earlier studies, the study by Al-Hyari et al. (2012) points to a negative relationship between consumer animosity and conspicuous consumption. In particular, it suggests that consumers may avoid conspicuously consuming the products made in another country to signal that they are angered by its actions. This behavioral outcome is consistent with signaling theory. 
According to the theory, the party sending information must choose if and how to communicate (or signal) particular information, while the receiver must choose how to interpret the signal (Connelly et al., 2011). Thus, it stands to reason that consumers are unlikely to conspicuously consume products, as a means of communication, associated with countries toward which they feel resentful. Similar consumer attitudes appear in other contexts as well. Consider the Armenian Genocide perpetrated by the Ottoman Empire in 1915, in which approximately one million Armenians were slaughtered[1]. Nowadays, raw materials used in luxury brands such as Prada and Versace are imported from Turkey. Would Armenians living in Armenia, as opposed to Turkish Armenians (the majority of whom live in Istanbul), want to express membership in a superior social class by indulging in the conspicuous consumption of luxury brands made from Turkish raw materials? Would they be willing to buy a Prada bag or a Versace suit made from these materials? Likewise, how would an Israeli-Jewish consumer feel about conspicuously consuming a brand manufactured in Germany? Would he or she want to associate him or herself with a superior social class by conspicuously consuming brands made in a country held responsible for the extermination of six million Jews? Alternatively, how would a Russian consumer feel about conspicuously consuming brands manufactured in the USA? Would he or she want to associate him or herself with a superior social class by conspicuously consuming brands made in a country held responsible for his or her country's economic crisis? Perhaps, but the anecdotal evidence presented above suggests that the conspicuous consumption of brands made in the country that is the target of animosity is unlikely in all three cases. Thus, the following is hypothesized: H1a. 
Animosity will negatively affect Jewish-Israeli consumers' conspicuous consumption of German products. The relationship between susceptibility to normative influence (SNI), consumer animosity, and WTB SNI is a well-researched concept in the consumer behavior literature (Aaker, 1999; Josiassen and Assaf, 2013; Lee and Green, 1991). SNI, which refers to the utilitarian and value-expressive influence dimensions of the interpersonal influence concept, is defined as the propensity to conform to norms set by others (Batra et al., 2001). Some studies have focused on individuals' susceptibility to these various forms of influence (Aaker, 1999). Others, however, have delved into the effects of SNI on consumer behavior (Wang et al., 2013). Despite the large body of consumer behavior literature focusing on the effects of SNI, little is known about the relationship between SNI and consumer animosity and how this may vary based on the type of society, e.g., individualistic vs collectivistic (Maher and Mady, 2010; Wang et al., 2013). According to the theory of planned behavior (Ajzen, 1991), there are three determinants that predict behavior: attitude toward the behavior, subjective norm, and perceived behavioral control. Overall, the more positive the attitude toward the behavior, the more favorable the subjective norm toward the behavior; and, the greater the perceived control over the behavior, the stronger the intention to perform the intended action will be. While in certain instances merely one of these determinants would suffice to predict intention (e.g. attitudes), in other cases, several determinants will interact in the prediction of the intention (e.g. attitudes and subjective norms). The theory of planned behavior also postulates that subjective norms predict attitudes toward behavior (Ajzen, 1991). 
Hence, since animosity is an attitudinal component (Jung et al., 2002), it stands to reason that it will be predicted by subjective norms - reasoning that is supported by previous research. Huang et al. (2010), for example, conducted a study in the context of the tense political relationships between Taiwan-Japan and China-Japan. Their study findings imply a positive and significant relationship between SNI and consumer animosity. These findings are in line with previous research, which suggests that consumers are more likely to be susceptible to norm influence in collectivistic rather than individualistic societies (Lee and Green, 1991). Huang et al.'s study and those of other researchers (Maher and Mady, 2010; Wang et al., 2013) have made significant contributions to furthering consumer behavior researchers' understanding of the consumer animosity phenomenon. However, the generalizability of these findings is limited as these studies were conducted among consumers more susceptible to norm influence. Previous research suggests that SNI varies with the type of society (i.e. individualistic vs collectivistic society). Hence, it would be of great value to consumer behavior scholars and practitioners alike to examine whether consumer animosity is influenced by SNI in societies which are moderately collectivistic such as Russia and Israel (Hofstede, 2001; Oyserman et al., 2002; Zeigler-Hill and Besser, 2011). Thus, the following is hypothesized: H2a. Jewish-Israeli consumers' SNI will be positively associated with the level of animosity harbored toward Germany. SNI is a critical determinant of WTB (Hoyer et al., 2008). Grinblatt et al. 
(2008), for instance, analyzed the automobile purchasing behavior of Finnish consumers. The authors found a positive association between SNI and the purchase of automobiles. In particular, a consumer's car choice is likely to be influenced by the car make and model of his or her nearest neighbors. In the context of consumer animosity research, however, the relationship between SNI and WTB is quite different. Previous research suggests that SNI is negatively associated with WTB (Maher and Mady, 2010). Although the influence of SNI on WTB differs in each one of the abovementioned contexts, the motive is identical (i.e. the desire to conform to group norms). In Grinblatt et al.'s (2008) study, consumers purchased the makes and models owned by their neighbors to show that they too could afford such cars. In Maher and Mady's (2010) study, however, consumers were reluctant to purchase the products made in the offending country because they wanted to demonstrate to their reference groups that they were not acting in defiance of their groups' norms. Thus, the following is hypothesized: H3a. SNI will be negatively associated with Jewish consumers' WTB German-made products. Consumer animosity and WTB A large body of research points to a negative relationship between consumer animosity and WTB (Cui et al., 2012; Rose et al., 2009; Wang et al., 2013). Previous research suggests that animosity affects consumers' WTB products made in the offending country not only in the short term (Ettenson and Klein, 2005; Shoham et al., 2006) but also in the long term (Shimp et al., 2004). Even though several decades have elapsed since atrocities like the Nanjing Massacre, which occurred during the period of Japanese occupation of parts of China, Chinese consumers are still reluctant to buy Japanese products (Klein et al., 1998). Likewise, Shimp et al. (2004) demonstrated that Southerners in the USA still maintain enmity toward Northerners over the Civil War and its aftermath. 
This animosity by Southerners toward Northerners is pronounced in the reluctance of the former to purchase the products of the latter. A similar long-term effect of animosity has also been observed among American Jewish consumers, who are still unwilling to purchase German-made products despite the fact that over seven decades have elapsed since the Holocaust (Podoshen, 2009). Hence, the following is hypothesized: H4a. Consumer animosity will be negatively associated with WTB German-made products. The main objective of this study was to examine the effects of consumer animosity on conspicuous consumption in two research settings: Israel and Russia. More specifically, the study aimed to: examine the relationship between SNI and consumer animosity, examine whether SNI impacts consumers' WTB products made in the offending country, and study whether consumer animosity is associated with consumers' WTB products made in the offending country. Previous studies have emphasized the importance of examining the effects of animosity in contexts other than ones in which its trigger was an extreme historical event (Klein, 2002). To assess the stability of the proposed model (see Figure 1) and its applicability in various contexts, two contexts were tested: the Holocaust and the recent political discord between the USA and Russia over the Obama administration's imposition of economic sanctions on the latter. The two contexts were chosen for a number of reasons. First, they represent potentially differing levels of animosity toward an offending country (Germany vs the USA). Second, according to Hofstede (2001), Israel and Russia represent two different cultures; the former is relatively more individualistic (54) than the latter (39). Previous research suggests that consumers in collectivistic societies are more likely to harbor animosity toward a target country due to greater SNI (Huang et al., 2010). 
According to the realistic group conflict theory, a perceived threat from an out-group reinforces peoples' sense of belonging to their in-group (Levine and Campbell, 1972). Huang et al.'s research finding along with the realistic group conflict theory would suggest that the level of animosity is more likely to be closely linked to SNI levels among Russian consumers than Israeli consumers. Furthermore, examining the attitudes of Russian consumers toward American products is of practical importance to the many American firms marketing their goods to Russian consumers. American companies view Russia as one of their most important markets (Liuhto et al., 2016). Finally, previous research suggests that Jewish consumers still harbor animosity toward Germany, thereby making it a suitable context to study the proposed research model (Podoshen, 2009). Method This study employed the mall-intercept method to collect data from a sample of adult consumers in Tel-Aviv, Israel (Rose et al., 2009). The questionnaire was translated and back-translated with a technique suggested by Douglas and Craig (1983). Consumers' participation was solicited at the entrance to major malls, where roughly every tenth individual was asked to complete a questionnaire (Josiassen and Assaf, 2013). In cases where the tenth individual was in a group, only one of the group members was invited to take part in the study. Respondents were not informed about the focus of the study. The questionnaires were collected upon completion. The large number of passersby and the mix of individuals from all walks of life influenced the choice of locations. A total of 264 respondents were recruited. Of the questionnaires collected, 14 were eliminated due to incompleteness. Consequently, 250 were valid for analysis. Females composed 54 percent of the sample. 
Most respondents were single (46 percent) and their monthly incomes were above the national average (57 percent), which, based on purchase power parity (PPP) conversion rates, was equivalent to USD2,374[2] at the time of data collection (see Table AI). Measures Seven-point Likert scales (1 = strongly disagree; 7 = strongly agree) adapted from previous research were employed to test the relationships proposed in the hypothesized model. The hypothesized model comprised three dependent variables - animosity, WTB, and conspicuous consumption - and a single independent variable, SNI. General animosity was measured with three items adapted from Klein (2002). Similarly, three items from this source were also adapted to measure war animosity. Conspicuous consumption was measured using five items adapted from Marcoux et al. (1997). The original conspicuous consumption scale comprises a relatively large number of items. Due to concerns over the length of the questionnaire and the time necessary for its completion, a pilot study examined whether the original scale (14 items) could be reduced without undermining its validity. Thus, the two studies conducted following the pilot study used a shortened scale consisting of only five items (see Table AI). SNI was measured using three items adapted from Lee and Green (1991), and six items were adapted from Klein et al. (1998) to measure WTB. The questionnaire was pre-tested using a small sample of Israeli-Jewish consumers (n=20). The items employed in the research, their sources, Cronbach's α values, and average variance extracted (AVE) are illustrated in Table AII. Israeli-Jews comprise two major cultural subgroups (Shavit, 1990): Mizrahim (i.e. Jews of Asian or African descent) and Ashkenazim (Jews of European descent). Unlike Ashkenazi Jews, Mizrahi Jews were not victims of the atrocities committed by the Nazi regime. 
Consequently, it may be assumed that Ashkenazim harbor a greater level of animosity toward Germany and would be less willing to buy products made there than Mizrahim. Hence, subcultural affiliation was a control variable in the relationship between animosity and WTB. In all, 38 percent of the sample were Ashkenazi, 43 percent were Mizrachi, and 19 percent categorized themselves as others. Analysis Prior to analysis, all relevant items were reverse-scored. Using AMOS 22, we employed structural equation modeling (SEM) to test the hypothesized paths. As reported in Table AI, Cronbach's α (cutoff = 0.7) and AVE (cutoff = 0.5) were measured using SPSS 21 to examine the convergent validity of the constructs (Fornell and Larcker, 1981). Omitted from further analysis were items with loadings below the recommended 0.4 cutoff in the structural model (Hair et al., 1999). Two items were omitted from the WTB scale and two additional items from the war animosity and general animosity scales (one item from each scale). Following the deletion of an item from the susceptibility-to-norm-influence construct, the scale's α = 0.87. Noteworthy is the fact that one item from the general animosity scale ("I feel angry toward Germany") and two items from the war animosity scale ("I still feel angry toward Germany because of the Second World War" and "I cannot forgive Germany for what it did to the Jews in the Second World War") loaded on the same factor. Consequently, these were merged into a single scale, general animosity. Convergent validity was assessed by estimating composite reliability employing AMOS 22 and AVE using SPSS 21. In line with Fornell (1992), the construct reliability values of all latent variables were at or above the recommended threshold of 0.6 (see Table I). Discriminant validity was estimated along the lines recommended by Fornell and Larcker (1981). SEM in AMOS 22 was employed to test the hypothesized paths in the research model. 
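The two convergent-validity statistics used above can be computed directly from the raw item scores and the standardized loadings. The helper functions below are an illustrative sketch, not the AMOS/SPSS implementation used in the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents, k_items) score matrix:
    (k / (k-1)) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def average_variance_extracted(loadings):
    """AVE as the mean squared standardized loading (Fornell and Larcker, 1981)."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())
```

Both cutoffs applied in the study (α ≥ 0.7, AVE ≥ 0.5) can then be checked directly against these values.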
Cronbach's α and AVE scores exceeded the recommended 0.7 cutoff suggested by Hair et al. (1999). In all, 14 observed items are retained in the hypothesized model. The cutoff sets recommended for latent factor models having between 12 and 30 observed items are at least 0.92 for the comparative fit index (CFI) and no more than 0.07 for the root mean-squared error of approximation (RMSEA) (Hair et al., 2006). Hence, the results of the hypothesized model point to an adequate fit (χ²=224.48, df=142, p=0.00, CFI=0.95, RMSEA=0.04). Two rival models were tested. Previous research suggests that age may be an antecedent of consumer animosity (Hinck, 2004; Klein, 2002). Hence, in rival Model 1, a path from age to animosity was drawn. The results of rival Model 1 point to a poorer model fit (χ²=265.00, df=168, p=0.00, CFI=0.94, RMSEA=0.04) than the hypothesized model. This is corroborated with a χ² difference test which points to a significant difference between the models (sequential χ² difference test (SCDT)=40.51, df=26, p=0.03). Our main research objective was to explore the relationship between consumer animosity and conspicuous consumption. However, since according to previous research SNI is a predictor of conspicuous consumption (Tsai et al., 2015), we drew a path between the two constructs in rival Model 2 aiming to test whether the new path enhances the model fit. The results of rival Model 2 point to a poorer model fit (χ²=258.74, df=161, p=0.00, CFI=0.95, RMSEA=0.07) than the hypothesized model. This is corroborated with a χ² difference test which points to a significant difference between the models (SCDT=34.26, df=19, p=0.01). Results Given the superior fit statistics for the hypothesized model vs both rival models, the reported results pertain to the findings of the hypothesized model (Table II). According to H1a, consumer animosity will be negatively associated with Jewish-Israeli consumers' conspicuous consumption tendencies. 
This was confirmed (β=-0.40, t=-2.85, p<0.05). H2a posits that Jewish-Israeli consumers' SNI will be positively associated with the level of animosity toward Germany, and this was also supported (β=0.32, t=2.13, p<0.05). A negative association between SNI and WTB German-made products was posited. The path was not significant, thereby not supporting H3a (β=-0.22, t=-1.92, p>0.05). H4a posits that consumer animosity will be negatively associated with WTB German-made products, and this too was corroborated by the data (β=-0.31, t=-2.19, p<0.05). Multi-group analysis Since cultural subgroup affiliation was identified as a control variable in the study, a multi-group SEM analysis was performed to examine whether the predictive power of the independent variables varied with respondents' cultural group affiliation. Mizrachi Jews did not directly experience the atrocities of the Holocaust. Consequently, they were expected to harbor more positive attitudes toward the purchase of products made in Germany vis-a-vis Ashkenazim. A fully constrained model was created and compared to the original unconstrained model across groups. Cultural affiliation (Ashkenazi or other) formed the unit of analysis. A χ² difference test showed that the two models were variant across subgroup affiliation (SCDT=81.61, df=24, p=0.00). The path from SNI to WTB was insignificant and was therefore removed from the model. The results of the multi-group analysis point to a significant relationship between SNI and consumer animosity. The study findings suggest that SNI is a stronger predictor of consumer animosity among the Mizrachim (β=0.41, t=2.85, p<0.05) vs Ashkenazim (β=0.32, t=2.13, p<0.05). However, a χ² difference test did not uphold the significance of this difference (SCDT=0.2, df=1, p=0.65). Furthermore, a significant association was found between consumer animosity and WTB products made in Germany. 
Consumer animosity is a stronger predictor of WTB products made in Germany among Ashkenazim (β=-0.31, t=-2.19, p<0.05) compared to the Mizrachim (β=-0.24, t=-5.38, p<0.001). Here again, however, the χ² difference test points to no significant group difference concerning this relationship (SCDT=0.16, df=1, p=0.68). There was no significant relationship between SNI and WTB among either the Ashkenazim (β=-0.23, t=-1.92, p>0.05) or Mizrachim (β=-0.09, t=-1.24, p>0.05). The study results also point to a significant association between consumer animosity and conspicuous consumption among Ashkenazi (β=-0.40, t=-2.85, p<0.05), but not Mizrachi Jews (β=-0.03, t=-0.75, p>0.05). Here the χ² difference test upheld the significance of this finding (SCDT=7.86, df=1, p<0.05). Discussion of Study 1 findings Previous research demonstrates that conspicuous consumption is motivated by consumers' desire to enhance their self-concept and to impress others (O'Shaughnessy and O'Shaughnessy, 2002). The results illustrate that those who harbor animosity toward an offending country lack the desire to consume products originating from it as a means either to improve their self-concept or to impress others. Perhaps certain Israeli-Jewish consumers avoid conspicuous German products merely because they cannot afford to purchase luxury items. However, when income was included as a control variable in the relationship between animosity and conspicuous consumption, no significant difference was found between income level and WTB conspicuous German products (SCDT=30.69, df=26, p=0.16). In line with previous research (Huang et al., 2010), a positive and significant relationship between SNI and consumer animosity was found. This relationship may be accounted for by the social identity and realistic group conflict theories. 
The observed relationship between SNI and consumer animosity suggests that Israeli-Jews susceptible to normative influence believe their referents have negative opinions of Germany. Hence, they feel the need to comply with them and maintain this animosity toward Germany. In contrast to previous research (Maher and Mady, 2010), an insignificant association was observed between SNI and WTB. However, a multi-group analysis showed that the relationship between SNI and WTB varies depending on subgroup affiliation. There was a significant relationship between SNI and WTB among Ashkenazi Jews, but none among Mizrachi Jews. Although, at first glance, these findings may seem surprising, the literature does provide at least one plausible explanation. A study conducted by Thomas et al. (2015) suggests that consumers may engage in what they term "hidden consumption behavior." This behavior is more likely to occur when the chances of being caught are minimal and when the sanctions (in case of being caught) are negligible. It would, therefore, seem that Mizrachi Jews are willing to consume German products in the privacy of their homes, both because the chances of being caught by fellow Mizrachim are low, and because the potential disapproval if caught is not likely to be severe. However, an Ashkenazi caught consuming a German product may endure more severe social disapproval from fellow Ashkenazim. This proposition should be regarded with care as hidden consumption behavior was not studied in either one of the present research settings. Hence, further research would be necessary to verify the explanation advanced here. Moreover, in line with past research, a negative relationship was found between consumer animosity and WTB (Klein et al., 1998; Shimp et al., 2004). A stronger relationship was observed between consumer animosity and WTB among Ashkenazi vs Mizrachi Jews. 
The observed differences in the predictive power of consumer animosity in each of the subsamples may be accounted for by a significant difference between Ashkenazi (M=5.49, SD=1.3) and Mizrachi (M=3.87, SD=1.6) Jews in the level of animosity harbored toward Germany, as corroborated by an independent samples t-test (ΔM=1.62, t=6.34, p<0.01). Finally, there was a significant relationship between consumer animosity and conspicuous consumption. However, a multi-group analysis revealed that this relationship is only significant among Ashkenazi Jews. This finding may also be accounted for by the fact that Mizrachi Jews harbor lower levels of animosity toward Germany than Ashkenazi Jews. In sum, differences in the effect of animosity on WTB may be partially attributable to consumers' cultural subgroup affiliation. Background Russians and Americans have a long history of political conflict. The Cold War, which erupted after the Second World War, involved a four-decade power struggle between the two countries. The lingering historical tensions between the two nations have been exacerbated by recent events. In 2014, the Obama administration imposed sanctions on certain Russian individuals and businesses (BBC News, 2014) in response to Russia's annexation of Crimea and the crisis in Eastern Ukraine. These sanctions seem to have had economic ramifications for Russian consumers, as prices of imported products have increased substantially (Boghani, 2015). Previous research suggests that political discord between countries is likely to lead to consumer animosity (Huang et al., 2010; Maher and Mady, 2010). However, as opposed to the genocidal character of the Holocaust, the recent political discord between the USA and Russia is presumed to have a less profound, long-term psychological effect on Russian consumers regarding their attitudes toward consumption of American products. 
Hypotheses
The hypotheses for Study 1 were modified to fit the particular context of the latest political discord between Russia and the USA. Thus, the following are hypothesized: H1b. Animosity will negatively affect Russian consumers' conspicuous consumption of American products. H2b. Russian consumers' SNI will be positively associated with the level of animosity harbored toward the USA. H3b. SNI will be negatively associated with Russian consumers' WTB US-made products. H4b. Consumer animosity will be negatively associated with Russian consumers' WTB US-made products.
Method
Measures
The measures employed in Study 1 were modified to fit the particular context of Study 2 (Table AII). Respondents were not informed about the focus of the study. As in Study 1, the hypothesized model included three dependent variables (i.e. animosity, WTB, and conspicuous consumption) and a single independent variable (i.e. SNI). Prior to administering the questionnaire, the survey was pre-tested on a small sample of Russian consumers (n=20). Several items were rephrased following participant feedback.
Procedure
Procedures followed the mall-intercept method employed in Study 1. The questionnaire was translated and back-translated using the method suggested by Douglas and Craig (1983). Data were collected from adult consumers living in St Petersburg, Russia. Over a period of ten days, approximately every tenth individual (or a single individual if part of a group) was approached at the entrance to one of several major malls in the city. A total of 259 Russian respondents were recruited. Of the 259 questionnaires collected, 12 were eliminated due to incompleteness. As a result, 247 questionnaires were valid for analysis. Females made up 52 percent of the sample.
Most respondents were single (44 percent), and their monthly incomes were above the national average (52 percent) which, based on PPP conversion rates, was equivalent to USD2,460 (see footnote 2) at the time of data collection (see Table AI).
Analysis
Analyses followed those employed for Study 1 (see Table III). Two items were deleted from the WTB scale, and a total of three items were deleted from the SNI, war animosity, and general animosity scales. Similar to the Israeli study, one item from the general animosity scale ("I feel angry toward the USA") and two items from the war animosity scale ("I am angry with the USA's interference in Russia's affairs" and "I cannot forgive the USA for its policy of sanctions toward my country") loaded on the same factor. Consequently, these three items were merged into a single scale, general animosity. The items included in the study are shown in Table AII. Similar to Study 1, the number of observed items included in the research model was 14. The results of the research model pointed to an adequate fit (χ²=288.81, df=148, p=0.00, CFI=0.93, RMSEA=0.06). As in Study 1, two rival models were tested. In rival Model 1, a path was drawn from age to animosity. The results of rival Model 1 (χ²=245.18, df=126, p=0.00, CFI=0.92, RMSEA=0.07) were inferior to those of the hypothesized model. This was confirmed by a χ² difference test (SCDT=43.62, df=22, p=0.00). In another rival model (rival Model 2) and in line with Study 1, a path was drawn from SNI to conspicuous consumption. The results of rival Model 2 also point to a worse model fit (χ²=254.68, df=130, p=0.00, CFI=0.94, RMSEA=0.06) in comparison to the hypothesized model. This was confirmed by a χ² difference test (SCDT=34.13, df=18, p=0.01).
Results
H1b posits a negative association between consumer animosity and the tendency to conspicuously consume American products, and this was confirmed by the data (b=-0.27, t=-3.08, p<0.05).
H2b posits a positive association between SNI and consumer animosity, and again this was corroborated by the study findings (b=0.47, t=5.01, p<0.001). A negative association was posited between SNI and WTB products made in the USA. The observed relationship was negative but insignificant (b=-0.11, t=-1.02, p>0.05), thereby refuting H3b. A negative relationship was also posited between consumer animosity and WTB products made in the USA. This was corroborated by the data, hence confirming H4b (b=-0.40, t=-2.93, p<0.05) (Table IV).
Discussion of Study 2 findings
Study 2 supports the stability of the hypothesized model and the generalizability of the findings to various contexts. Previous research has demonstrated that conspicuous consumption is motivated by such factors as consumers' desire to enhance their self-concept and to impress others (O'Shaughnessy and O'Shaughnessy, 2002). The economic sanctions imposed by the USA on Russia have led to price increases of goods imported from the USA (Boghani, 2015), which are felt in the pockets of Russian consumers. Hence, it is understandable why Russian consumers may lack the desire to consume American-made products as a means to improve their self-concept or impress others. One may well argue that some Russian consumers avoid conspicuous products made in the USA merely because they cannot afford to purchase luxury American items. However, when income was included as a control variable in the relationship between animosity and conspicuous consumption, similar to the findings observed in Study 1, no significant difference was found between the various salary levels (i.e. below-average income vs above-average income) and conspicuous consumption (SCDT=43.62; p=0.00). Similar to Study 1, there was a negative and significant relationship between consumer animosity and WTB. This finding is in line with a large body of research which demonstrates that animosity affects consumer attitudes (Klein et al., 1998; Shimp et al., 2004).
In line with the findings of Study 1, a positive and statistically significant relationship was observed between SNI and consumer animosity. Previous research demonstrates that consumers in collectivistic societies are more susceptible to norm influence than those living in individualistic societies (Mourali et al., 2005). Consistent with the findings observed in Study 1, the results point to an insignificant relationship between SNI and WTB. The findings of both studies describe a negative association between consumer animosity and conspicuous consumption. Likewise, both studies point to a negative relationship between consumer animosity and WTB. Furthermore, data from both studies suggest that SNI is positively associated with consumer animosity. However, both studies conducted in the framework of the present research failed to confirm a relationship between SNI and WTB. The findings of Study 1 suggest that the apparent lack of an observed relationship may be accounted for by the moderating role of cultural group differences. Other differences observed in the two studies pertain to mean scores on the general animosity scale. In the Russian sample, the mean score was higher (M=4.84) than in the Israeli sample (M=4.17). This finding is in line with previous research suggesting that during a conflict animosity levels are likely to be higher than when tensions subside (Heslop et al., 2009). The results of the present research point to the importance of taking into account not only the level of consumer animosity but also the nature of the consumption context (conspicuous vs inconspicuous consumption).
Managerial implications
First, the findings of the present research suggest that marketers need to consider the potential effects of consumer animosity on WTB not only in strictly collectivistic societies, but also in societies that are relatively more individualistic.
Firms targeting consumers harboring animosity toward the former's country of origin should focus their advertising and promotion campaigns on products used for inconspicuous consumption rather than on conspicuous consumption. This recommendation is especially pertinent in collectivistic societies and when the target market is susceptible to norm influence. Second, firms based in a country targeted by consumer animosity may consider relocating their manufacturing operations to a third country in order to avoid the marketing repercussions of the negative political associations. Alternatively, they could also move manufacturing and/or other operations to the country where the target market lives. Finally, firms can use imagery to mask undesirable COO stemming from animosity (D'Antone and Merunka, 2015).
Implications for theory
The study makes several theoretical contributions. First, by empirically demonstrating an association between consumer animosity and conspicuous consumption, it contributes to the consumer animosity literature by suggesting that not only does consumer animosity affect consumers' WTB products from the target country, but that this effect is partially context specific (conspicuous consumption vs inconspicuous consumption). Past research associated SNI with collectivistic societies, but not with individualistic ones (Lee and Green, 1991; Mourali et al., 2005). The current research suggests that consumers in moderately collectivistic societies like Israel are also susceptible to norm influence. Hence, the present study also contributes to the body of research focusing on the effects of social influence on consumer behavior. In particular, the study implies that societies which may have been overlooked in previous SNI research due to their relatively higher score on the individualism dimension (Hofstede, 2001) should be considered in future consumer animosity research.
Consistent with previous research on country-of-origin effects and subcultural affiliation (Laroche et al., 2003), the present investigation points to the importance of taking into account the potential effects of cultural subgroup differences on consumer behavior. However, the theoretical contribution of the present study lies in the observed moderating role of subgroup affiliation in the relationship between SNI and WTB, in the context of consumer animosity. In other words, cultural subgroup belonging may strengthen the effect of SNI on WTB when the sample is comprised of consumers who have been either directly or indirectly victimized by the country which is the target of animosity and are moderately collectivistic. The present research has two main limitations. First, although the present study was conducted in two very different research settings, extrapolation to other contexts must be treated with care. Second, data were collected from a convenience sample of consumers and from a single major city in each country. Hence, the findings do not necessarily reflect the attitudes and behavioral patterns of the general consumer population in either one of the countries. Future research would benefit from testing the hypothesized model with a sample drawn from several major cities in each one of the countries. The findings of the present study suggest that consumers may be willing to consume products associated with the offending country in the privacy of their homes because of the belief that the chances of being observed doing so are low. However, this proposition should be regarded with care as hidden consumption behavior was not studied in the present research. Hence, further research would be needed to verify the explanation advanced here. Similarly, the results of the present study suggest that consumers with strong negative feelings toward a country may be reluctant to consume its products conspicuously.
Certain consumers may not consume the products of a country in public, not because of personal feelings of animosity toward the country itself, but rather due to normative influence and the desire to conform to the norms dictated by one's in-group. However, these very consumers may not feel guilty consuming the products made in the target country in the privacy of their homes. The underpinnings of private vs public consumption in the context of consumer animosity would be a valuable research avenue to undertake. The findings of the present research shed light on the importance of the consumption context to the study of consumer animosity. In particular, they point to the complexity of the consumer animosity construct, emanating primarily from its broad social underpinnings.
[SECTION: Abstract]
The purpose of this paper is to explore the effects of consumer animosity on conspicuous consumption in two research settings: Israel and Russia. The study also examines: the relationship between susceptibility to norm influence (SNI) and consumer animosity, whether SNI affects consumers' willingness to buy (WTB) products from a country toward which they harbor animosity, and the relationship between consumer animosity and WTB in contexts differing in the level of animosity harbored toward a target country.
[SECTION: Method] Klein et al. (1998) pioneered the study of the consumer animosity stream of research with the introduction of the animosity model of foreign product purchase. They defined animosity as "anger related to previous or ongoing political, military, economic, or diplomatic events" (p. 90). Animosity is a hostile attitude aimed at national out-groups, while hostility comprises both cognitive and attitudinal components. The cognitive component entails cynical beliefs and mistrust of others. The attitudinal component includes the negative emotions of anger, contempt, and disgust (Jung et al., 2002). According to Averill (1982), animosity is a strong emotion of dislike and hatred based on beliefs resulting from past or present military, political, or economic conflict and on actions between nations or people perceived to be unjustifiable or contradictory to socially acceptable norms. Since Klein et al.'s (1998) seminal study, dozens of papers have been published on the subject of consumer animosity. While some studies are replications, others delve more deeply into the consumer animosity phenomenon by examining its potential antecedents, mediators, moderators (Klein and Ettenson, 1999; Maher and Mady, 2010; Riefler and Diamantopoulos, 2007; Shoham et al., 2006; Wang et al., 2013), and consequences (Huang et al., 2010). Previous research demonstrates that consumer animosity may stem from an array of factors including past and ongoing political tensions between countries, past wars, and trade discords (Ettenson and Klein, 2005; Klein et al., 1998). Consumer animosity, in turn, is likely to result in anger which negatively affects consumers' judgments of product quality (Rose et al., 2009) as well as their willingness to buy (WTB) products made in the offending country (Fernandez-Ferrin et al., 2015; Wang et al., 2013). 
Although previous research has focused on the relationship between consumer animosity and a myriad of constructs, the possible relationship between consumer animosity and the consumption context (i.e. conspicuous vs inconspicuous consumption) has been largely overlooked. Patsiaouras and Fitchett (2012) defined conspicuous consumption as "the competitive and extravagant consumption practices and leisure activities that aim to indicate membership to a superior social class" (p. 154). A previous pilot study suggests that consumer animosity is associated with a lowered proclivity to engage in conspicuous consumption (Al-Hyari et al., 2012). This finding is in line with Klein's (2002) contention that the effects of consumer animosity may be more pivotal in the context of conspicuous consumption compared to other consumption contexts. Examining the relationship between consumer animosity and conspicuous consumption provides a broader understanding as to the particular contexts in which animosity is more likely to influence the consumption of products made in the offending country. A more profound understanding of this relationship may also serve to aid marketing managers in devising more focused marketing strategies and thus allocating marketing resources more efficiently. Hence, the main objective of this research was to examine whether consumer animosity acts as an antecedent to conspicuous consumption. This paper is comprised of two studies aimed at testing the generalizability of the proposed model. This paper is organized as follows: a review of the related literature is followed by an elaborate description of the two major studies conducted in the framework of the present research effort. This is followed by a discussion of the implications emanating from both studies. Finally, the authors point out the research limitations and suggest directions for future research. 
Consumer animosity and conspicuous consumption Veblen (1918) pioneered the research into conspicuous consumption. This type of consumption focuses on how consumers shop for and use brands, and how social status is shown off via brand image (Griskevicius et al., 2010). Hence, conspicuous consumption is a form of symbolic consumption. A large body of literature exists on the role of symbolic consumption in relation to consumer behavior (Holt, 2002; Hoyer and MacInnis, 1997). According to Grubb and Grathwohl (1967), products are social tools "serving as a means of communication between the individual and his or her significant references" (p. 24). One of the most common means of social communication is conspicuous consumption (Bushman, 1993). Klein et al.'s (1998) and Klein's (2002) studies suggest a negative association between consumer animosity and the ownership of conspicuous products such as cameras and automobiles. These findings are corroborated by more recent research. Al-Hyari et al. (2012) conducted a pilot study in Saudi Arabia in the context of the 2005 Muhammad controversy. The controversy emanated from a call for cartoonists to contribute cartoons depicting the prophet Muhammad by Jyllands-Posten (a Danish newspaper). The cartoon, deemed responsible for stirring unrest among Muslims across the globe, was one that depicted the prophet Muhammad with a bomb in his turban (Knight et al., 2009). Consistent with the two earlier studies, the study by Al-Hyari et al. (2012) points to a negative relationship between consumer animosity and conspicuous consumption. In particular, it suggests that consumers may avoid conspicuously consuming the products made in another country to signal that they are angered by its actions. This behavioral outcome is consistent with signaling theory. 
According to the theory, the party sending information must choose if and how to communicate (or signal) particular information while the receiver must choose how to interpret the signal (Connelly et al., 2011). Thus, it stands to reason that consumers are unlikely to conspicuously consume products as a means of communication associated with countries toward which they feel resentful. Similar consumer attitudes appear in other contexts as well. Consider the Armenian Genocide perpetrated by the Ottoman Empire in 1915, in which approximately one million Armenians were slaughtered[1]. Nowadays, the raw materials with which luxury brands such as Prada and Versace are made are imported from Turkey. Would Armenians living in Armenia, as opposed to Turkish Armenians (the majority of whom live in Istanbul), want to be associated with a superior social class that expresses its sense of membership in a superior social class by indulging in the conspicuous consumption of luxury brands made from Turkish raw materials? Would they be willing to buy a Prada bag or a Versace suit made from these materials? Likewise, how would an Israeli-Jewish consumer feel about conspicuously consuming a brand manufactured in Germany? Would he or she want to associate him or herself with a superior social class by conspicuously consuming brands made in a country held responsible for the extermination of six million Jews? Alternatively, how would a Russian consumer feel about conspicuously consuming brands manufactured in the USA? Would he or she want to associate him or herself with a superior social class by conspicuously consuming brands made in a country held responsible for his or her country's economic crisis? Perhaps, but the anecdotal evidence presented above suggests that the conspicuous consumption of brands made in the country that is the target of animosity is unlikely in all three cases. Thus, the following is hypothesized: H1a. 
Animosity will negatively affect Jewish-Israeli consumers' conspicuous consumption of German products.
The relationship between susceptibility to normative influence (SNI), consumer animosity, and WTB
SNI is a well-researched concept in the consumer behavior literature (Aaker, 1999; Josiassen and Assaf, 2013; Lee and Green, 1991). SNI, which refers to the utilitarian and value-expressive influence dimensions of the interpersonal influence concept, is defined as the propensity to conform to norms set by others (Batra et al., 2001). Some studies have focused on individuals' susceptibility to these various forms of influence (Aaker, 1999). Others, however, have delved into the effects of SNI on consumer behavior (Wang et al., 2013). Despite the large body of consumer behavior literature focusing on the effects of SNI, little is known about the relationship between SNI and consumer animosity and how this may vary based on the type of society, e.g., individualistic vs collectivistic (Maher and Mady, 2010; Wang et al., 2013). According to the theory of planned behavior (Ajzen, 1991), there are three determinants that predict behavior: attitude toward the behavior, subjective norm, and perceived behavioral control. Overall, the more positive the attitude toward the behavior, the more favorable the subjective norm, and the greater the perceived control over the behavior, the stronger the intention to perform the intended action will be. While in certain instances merely one of these determinants would suffice to predict intention (e.g. attitudes), in other cases, several determinants will interact in the prediction of the intention (e.g. attitudes and subjective norms). The theory of planned behavior also postulates that subjective norms predict attitudes toward behavior (Ajzen, 1991).
Hence, since animosity is an attitudinal component (Jung et al., 2002), it stands to reason that it will be predicted by subjective norms - reasoning that is supported by previous research. Huang et al. (2010), for example, conducted a study in the context of the tense political relationships between Taiwan-Japan and China-Japan. Their study findings imply a positive and significant relationship between SNI and consumer animosity. These findings are in line with previous research, which suggests that consumers are more likely to be susceptible to norm influence in collectivistic rather than individualistic societies (Lee and Green, 1991). Huang et al.'s study and those of other researchers (Maher and Mady, 2010; Wang et al., 2013) have made significant contributions to furthering consumer behavior researchers' understanding of the consumer animosity phenomenon. However, the generalizability of these findings is limited as these studies were conducted among consumers more susceptible to norm influence. Previous research suggests that SNI varies with the type of society (i.e. individualistic vs collectivistic society). Hence, it would be of great value to consumer behavior scholars and practitioners alike to examine whether consumer animosity is influenced by SNI in societies which are moderately collectivistic such as Russia and Israel (Hofstede, 2001; Oyserman et al., 2002; Zeigler-Hill and Besser, 2011). Hence, the following is hypothesized: H2a. Jewish-Israeli consumers' SNI will be positively associated with the level of animosity harbored toward Germany. SNI is a critical determinant of WTB (Hoyer et al., 2008). Grinblatt et al.
(2008), for instance, analyzed the automobile purchasing behavior of Finnish consumers. The authors found a positive association between SNI and the purchase of automobiles. In particular, a consumer's car choice is likely to be influenced by the car make and model of his or her nearest neighbors. In the context of consumer animosity research, however, the relationship between SNI and WTB is quite different. Previous research suggests that SNI is negatively associated with WTB (Maher and Mady, 2010). Although the influence of SNI on WTB differs in each one of the abovementioned contexts, the motive is identical (i.e. the desire to conform to group norms). In Grinblatt et al.'s (2008) study, consumers purchased the makes and models owned by their neighbors to show that they too could afford such cars. In Maher and Mady's (2010) study, however, consumers were reluctant to purchase the products made in the offending country because they wanted to demonstrate to their reference groups that they were not acting in defiance of their groups' norms. Thus, the following is hypothesized: H3a. SNI will be negatively associated with Jewish consumers' WTB German-made products.
Consumer animosity and WTB
A large body of research points to a negative relationship between consumer animosity and WTB (Cui et al., 2012; Rose et al., 2009; Wang et al., 2013). Previous research suggests that animosity affects consumers' WTB products made in the offending country not only in the short term (Ettenson and Klein, 2005; Shoham et al., 2006) but also in the long term (Shimp et al., 2004). Even though several decades have elapsed since atrocities like the Nanjing Massacre, which occurred during the period of Japanese occupation of parts of China, Chinese consumers are still reluctant to buy Japanese products (Klein et al., 1998). Likewise, Shimp et al. (2004) demonstrated that Southerners in the USA still maintain enmity toward Northerners over the Civil War and its aftermath.
This animosity by Southerners toward Northerners is pronounced in the reluctance of the former to purchase the products of the latter. A similar long-term effect of animosity has also been observed among American Jewish consumers, who are still unwilling to purchase German-made products despite the fact that over seven decades have elapsed since the Holocaust (Podoshen, 2009). Hence, the following is hypothesized: H4a. Consumer animosity will be negatively associated with WTB German-made products. The main objective of this study was to examine the effects of consumer animosity on conspicuous consumption in two research settings: Israel and Russia. More specifically, the study aimed to: examine the relationship between SNI and consumer animosity, examine whether SNI impacts consumers' WTB products made in the offending country, and study whether consumer animosity is associated with consumers' WTB products made in the offending country. Previous studies have emphasized the importance of examining the effects of animosity in contexts other than ones in which its trigger was an extreme historical event (Klein, 2002). To assess the stability of the proposed model (see Figure 1) and its applicability in various contexts, two contexts were tested: the Holocaust and the recent political discord between the USA and Russia over the Obama administration's imposition of economic sanctions on the latter. The two contexts were chosen for a number of reasons. First, they represent potentially differing levels of animosity toward an offending country (Germany vs the USA). Second, according to Hofstede (2001), Israel and Russia represent two different cultures; the former is relatively more individualistic (54) than the latter (39). Previous research suggests that consumers in collectivistic societies are more likely to harbor animosity toward a target country due to greater SNI (Huang et al., 2010). 
According to the realistic group conflict theory, a perceived threat from an out-group reinforces peoples' sense of belonging to their in-group (Levine and Campbell, 1972). Huang et al.'s research finding along with the realistic group conflict theory would suggest that the level of animosity is more likely to be closely linked to SNI levels among Russian consumers than Israeli consumers. Furthermore, examining the attitudes of Russian consumers toward American products is of practical importance to the many American firms marketing their goods to Russian consumers. American companies view Russia as one of their most important markets (Liuhto et al., 2016). Finally, previous research suggests that Jewish consumers still harbor animosity toward Germany, thereby making it a suitable context to study the proposed research model (Podoshen, 2009).
Method
This study employed the mall-intercept method to collect data from a sample of adult consumers in Tel-Aviv, Israel (Rose et al., 2009). The questionnaire was translated and back-translated with a technique suggested by Douglas and Craig (1983). Consumers' participation was solicited at the entrance to major malls, where roughly every tenth individual was asked to complete a questionnaire (Josiassen and Assaf, 2013). In cases where the tenth individual was in a group, only one of the group members was invited to take part in the study. Respondents were not informed about the focus of the study. The questionnaires were collected upon completion. The large number of passersby and the mix of individuals from all walks of life influenced the choice of locations. A total of 264 respondents were recruited. Of the questionnaires collected, 14 were eliminated due to incompleteness. Consequently, 250 were valid for analysis. Females composed 54 percent of the sample.
Most respondents were single (46 percent) and their monthly incomes were above the national average (57 percent), which, based on purchase power parity (PPP) conversion rates, was equivalent to USD2,374[2] at the time of data collection (see Table AI).
Measures
Seven-point Likert scales (1=strongly disagree; 7=strongly agree) adapted from previous research were employed to test the relationships proposed in the hypothesized model. The hypothesized model comprised three dependent variables - animosity, WTB, and conspicuous consumption - and a single independent variable, SNI. General animosity was measured with three items adapted from Klein (2002). Similarly, three items from this source were also adapted to measure war animosity. Conspicuous consumption was measured using five items adapted from Marcoux et al. (1997). The original conspicuous consumption scale comprises a relatively large number of items. Due to concerns over the length of the questionnaire and the time necessary for its completion, a pilot study examined whether the original scale (14 items) could be reduced without undermining its validity. Thus, the two studies conducted following the pilot study used a shortened scale consisting of only five items (see Table AI). SNI was measured using three items adapted from Lee and Green (1991), and six items were adapted from Klein et al. (1998) to measure WTB. The questionnaire was pre-tested using a small sample of Israeli-Jewish consumers (n=20). The items employed in the research, their sources, Cronbach's α values, and average variance extracted (AVE) are presented in Table AII. Israeli-Jews comprise two major cultural subgroups (Shavit, 1990): Mizrahim (i.e. Jews of Asian or African descent) and Ashkenazim (Jews of European descent). Unlike Ashkenazi Jews, Mizrahi Jews were not victims of the atrocities committed by the Nazi regime.
Consequently, it may be assumed that Ashkenazim harbor a greater level of animosity toward Germany and would be less willing to buy products made there than Mizrahim. Hence, subcultural affiliation was a control variable in the relationship between animosity and WTB. In all, 38 percent of the sample were Ashkenazi, 43 percent were Mizrachi, and 19 percent categorized themselves as others.
Analysis
Prior to analysis, all relevant items were reverse-scored. Using AMOS 22, we employed structural equation modeling (SEM) to test the hypothesized paths. As reported in Table AI, Cronbach's α (cutoff=0.7) and AVE (cutoff=0.5) were measured using SPSS 21 to examine the convergent validity of the constructs (Fornell and Larcker, 1981). Omitted from further analysis were items with loadings below the recommended 0.4 cutoff in the structural model (Hair et al., 1999). Two items were omitted from the WTB scale and two additional items from the war animosity and general animosity scales (one item from each scale). Following the deletion of an item from the susceptibility-to-norm-influence construct, the scale's α was 0.87. Noteworthy is the fact that one item from the general animosity scale ("I feel angry toward Germany") and two items from the war animosity scale ("I still feel angry toward Germany because of the Second World War" and "I cannot forgive Germany for what it did to the Jews in the Second World War") loaded on the same factor. Consequently, these were merged into a single scale, general animosity. Convergent validity was assessed by estimating composite reliability employing AMOS 22 and AVE using SPSS 21. In line with Fornell (1992), the construct reliability values of all latent variables were at or above the recommended threshold of 0.6 (see Table I). Discriminant validity was estimated along the lines recommended by Fornell and Larcker (1981). SEM in AMOS 22 was employed to test the hypothesized paths in the research model.
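The two convergent-validity statistics used in the analysis above have standard closed forms: Cronbach's α = k/(k−1) · (1 − Σ item variances / variance of the summed scale), and AVE is the mean of the squared standardized loadings. A minimal pure-Python sketch; the score matrix and loadings below are toy values for illustration, not the study's data.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var([row[j] for row in scores]) for j in range(k))
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

def ave(loadings):
    """Average variance extracted from standardized factor loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Toy data: five respondents answering a three-item scale.
scores = [
    [5, 6, 5],
    [7, 7, 6],
    [3, 4, 3],
    [6, 6, 7],
    [2, 3, 2],
]
print(round(cronbach_alpha(scores), 2))   # well above the 0.7 cutoff
print(round(ave([0.8, 0.7, 0.75]), 2))    # above the 0.5 cutoff
```

For the highly correlated toy items above, α comes out near 0.98 and the AVE near 0.56, both clearing the cutoffs cited in the text.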
Cronbach's α and AVE scores exceeded the recommended cutoffs (Hair et al., 1999). In all, 14 observed items are retained in the hypothesized model. The cutoffs recommended for latent factor models having between 12 and 30 observed items are at least 0.92 for the comparative fit index (CFI) and no more than 0.07 for the root mean-squared error of approximation (RMSEA) (Hair et al., 2006). Hence, the results of the hypothesized model point to an adequate fit (χ2=224.48, df=142, p=0.00, CFI=0.95, RMSEA=0.04). Two rival models were tested. Previous research suggests that age may be an antecedent of consumer animosity (Hinck, 2004; Klein, 2002). Hence, in rival Model 1, a path from age to animosity was drawn. The results of rival Model 1 point to a poorer model fit (χ2=265.00, df=168, p=0.00, CFI=0.94, RMSEA=0.04) than the hypothesized model. This is corroborated by a χ2 difference test, which points to a significant difference between the models (sequential χ2 difference test (SCDT)=40.51, df=26, p=0.03). Our main research objective was to explore the relationship between consumer animosity and conspicuous consumption. However, since according to previous research SNI is a predictor of conspicuous consumption (Tsai et al., 2015), we drew a path between the two constructs in rival Model 2, aiming to test whether the new path enhances the model fit. The results of rival Model 2 point to a poorer model fit (χ2=258.74, df=161, p=0.00, CFI=0.95, RMSEA=0.07) than the hypothesized model. This is corroborated by a χ2 difference test, which points to a significant difference between the models (SCDT=34.26, df=19, p=0.01). Results Given the superior fit statistics for the hypothesized model vs both rival models, the reported results pertain to the findings of the hypothesized model (Table II). According to H1a, consumer animosity will be negatively associated with Jewish-Israeli consumers' conspicuous consumption tendencies.
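The rival-model comparisons above rest on a sequential χ2 difference test: for two nested models, the difference in their χ2 statistics is itself χ2-distributed with df equal to the difference in their degrees of freedom. A minimal sketch, assuming scipy is available and using the fit statistics quoted in the text:

```python
from scipy.stats import chi2

def chi2_difference_test(chi2_a, df_a, chi2_b, df_b):
    """Sequential chi-square difference test (SCDT) for two nested models.
    Returns the chi-square difference, its df, and the p-value."""
    scdt = abs(chi2_a - chi2_b)
    ddf = abs(df_a - df_b)
    return scdt, ddf, chi2.sf(scdt, ddf)  # sf = upper-tail probability

# Hypothesized model vs rival Model 1, using the statistics reported above:
scdt, ddf, p = chi2_difference_test(265.00, 168, 224.48, 142)
# scdt ≈ 40.52, ddf = 26, p ≈ 0.03: the two models differ significantly,
# so the rival model's poorer fit indices are taken at face value.
```

A p-value below 0.05 rejects the hypothesis that the two nested models fit equally well, which is how the text justifies retaining the better-fitting hypothesized model.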
This was confirmed (b=-0.40, t=-2.85, p<0.05). H2a posits that Jewish-Israeli consumers' SNI will be positively associated with the level of animosity toward Germany, and this was also supported (b=0.32, t=2.13, p<0.05). A negative association between SNI and WTB German-made products was posited. The path was not significant, thereby not supporting H3a (b=-0.22, t=-1.92, p>0.05). H4a posits that consumer animosity will be negatively associated with WTB German-made products, and this too was corroborated by the data (b=-0.31, t=-2.19, p<0.05). Multi-group analysis Since cultural subgroup affiliation was identified as a control variable in the study, a multi-group SEM analysis was performed to examine whether the predictive power of the independent variables varied with respondents' cultural group affiliation. Mizrachi Jews did not directly experience the atrocities of the Holocaust. Consequently, they were expected to harbor more positive attitudes toward the purchase of products made in Germany vis-a-vis Ashkenazim. A fully constrained model was created and compared to the original unconstrained model across groups. Cultural affiliation (Ashkenazi or other) formed the unit of analysis. A χ2 difference test showed that the two models differed across subgroup affiliation (SCDT=81.61, df=24, p<0.001). The path from SNI to WTB was insignificant and was therefore removed from the model. The results of the multi-group analysis point to a significant relationship between SNI and consumer animosity. The study findings suggest that SNI is a stronger predictor of consumer animosity among the Mizrachim (b=0.41, t=2.85, p<0.05) vs Ashkenazim (b=0.32, t=2.13, p<0.05). However, a χ2 difference test did not uphold the significance of this difference (SCDT=0.2, df=1, p=0.65). Furthermore, a significant association was found between consumer animosity and WTB products made in Germany.
Consumer animosity is a stronger predictor of WTB products made in Germany among Ashkenazim (b=-0.31, t=-2.19, p<0.05) compared to the Mizrachim (b=-0.24, t=-5.38, p<0.001). Here again, however, the χ2 difference test points to no significant group difference concerning this relationship (SCDT=0.16, df=1, p=0.68). There was no significant relationship between SNI and WTB among either the Ashkenazim (b=-0.23, t=-1.92, p>0.05) or Mizrachim (b=-0.09, t=-1.24, p>0.05). The study results also point to a significant association between consumer animosity and conspicuous consumption among Ashkenazi (b=-0.40, t=-2.85, p<0.05), but not Mizrachi Jews (b=-0.03, t=-0.75, p>0.05). Here the χ2 difference test upheld the significance of this finding (SCDT=7.86, df=1, p<0.05). Discussion of Study 1 findings Previous research demonstrates that conspicuous consumption is motivated by consumers' desire to enhance their self-concept and to impress others (O'Shaughnessy and O'Shaughnessy, 2002). The results illustrate that those who harbor animosity toward an offending country lack the desire to consume products originating from it as a means either to improve their self-concept or to impress others. Perhaps certain Israeli-Jewish consumers avoid conspicuous German products merely because they cannot afford to purchase luxury items. However, when income was included as a control variable in the relationship between animosity and conspicuous consumption, no significant difference was found between income level and WTB conspicuous German products (SCDT=30.69, df=26, p=0.16). In line with previous research (Huang et al., 2010), a positive and significant relationship between SNI and consumer animosity was found. This relationship may be accounted for by the social identity and realistic group conflict theories.
The observed relationship between SNI and consumer animosity suggests that Israeli-Jews susceptible to normative influence believe their referents have negative opinions of Germany. Hence, they feel the need to comply with them and maintain this animosity toward Germany. In contrast to previous research (Maher and Mady, 2010), an insignificant association was observed between SNI and WTB. However, a multi-group analysis showed that the relationship between SNI and WTB varies depending on subgroup affiliation. There was a significant relationship between SNI and WTB among Ashkenazi Jews, but none among Mizrachi Jews. Although, at first glance, these findings may seem surprising, the literature does provide at least one plausible explanation. A study conducted by Thomas et al. (2015) suggests that consumers may engage in what they term "hidden consumption behavior." This behavior is more likely to occur when the chances of being caught are minimal and when the sanctions (in case of being caught) are negligible. It would, therefore, seem that Mizrachi Jews are willing to consume German products in the privacy of their homes, both because the chances of being caught by fellow Mizrachim are low and because the potential disapproval if caught is not likely to be severe. However, an Ashkenazi caught consuming a German product may endure more severe social disapproval from fellow Ashkenazim. This proposition should be regarded with care, as hidden consumption behavior was not studied in either one of the present research settings. Hence, further research would be necessary to verify the explanation advanced here. Moreover, in line with past research, a negative relationship was found between consumer animosity and WTB (Klein et al., 1998; Shimp et al., 2004). A stronger relationship was observed between consumer animosity and WTB among Ashkenazi vs Mizrachi Jews.
The observed differences in the predictive power of consumer animosity in each of the subsamples may be accounted for by a significant difference between Ashkenazi (M=5.49, SD=1.3) and Mizrachi (M=3.87, SD=1.6) Jews in the level of animosity harbored toward Germany, as corroborated by an independent samples t-test (ΔM=1.62, t=6.34, p<0.01). Finally, there was a significant relationship between consumer animosity and conspicuous consumption. However, a multi-group analysis revealed that this relationship is only significant among Ashkenazi Jews. This finding may also be accounted for by the fact that Mizrachi Jews harbor lower levels of animosity toward Germany than Ashkenazi Jews. In sum, differences in the effect of animosity on WTB may be partially attributable to consumers' cultural subgroup affiliation. Background Russians and Americans have a long history of political conflict. The Cold War, which erupted after the Second World War, involved a four-decade power struggle between the two countries. The lingering historical tensions between the two nations have been exacerbated by recent events. In 2014, the Obama administration imposed sanctions on certain Russian individuals and businesses (BBC News, 2014) in response to Russia's annexation of Crimea and the crisis in Eastern Ukraine. These sanctions seem to have had economic ramifications for Russian consumers, as prices of imported products have increased substantially (Boghani, 2015). Previous research suggests that political discord between countries is likely to lead to consumer animosity (Huang et al., 2010; Maher and Mady, 2010). However, as opposed to the genocidal character of the Holocaust, the recent political discord between the USA and Russia is presumed to have a less profound, long-term psychological effect on Russian consumers regarding their attitudes toward the consumption of American products.
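The subgroup comparison above relies on an independent-samples t-test, whose statistic follows directly from the reported means and standard deviations. The sketch below is illustrative only: the subgroup sizes are hypothetical assumptions (chosen to roughly match the reported 38/43 percent subgroup shares), so it does not reproduce the reported t=6.34 exactly.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent samples, from summary statistics."""
    standard_error = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (m1 - m2) / standard_error

# Means and SDs from the text; n1, n2 are hypothetical subgroup sizes.
t = welch_t(5.49, 1.3, 100, 3.87, 1.6, 114)
# A mean difference of 1.62 scale points against SDs of 1.3-1.6 yields a
# large, clearly significant t with subgroups of this size.
```

The point of the computation is that the 1.62-point gap in mean animosity is many standard errors wide, which is why the subgroup difference is significant at p<0.01.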
Hypotheses The hypotheses for Study 1 were modified to fit the particular context of the latest political discord between Russia and the USA. Thus, the following are hypothesized: H1b. Animosity will negatively affect Russian consumers' conspicuous consumption of American products. H2b. Russian consumers' SNI will be positively associated with the level of animosity harbored toward the USA. H3b. SNI will be negatively associated with Russian consumers' WTB US-made products. H4b. Consumer animosity will be negatively associated with Russian consumers' WTB US-made products. Method Measures The measures employed in Study 1 were modified to fit the particular context of Study 2 (Table AII). Respondents were not informed about the focus of the study. As in Study 1, the hypothesized model tested included three dependent variables (i.e. animosity, WTB, and conspicuous consumption) and a single independent variable (i.e. SNI). Prior to administering the questionnaire, the survey was pre-tested on a small sample of Russian consumers (n=20). Several items were rephrased following participant feedback. Procedure Procedures followed the mall-intercept method employed in Study 1. The questionnaire was translated and back-translated using the method suggested by Douglas and Craig (1983). Data were collected from adult consumers living in St Petersburg, Russia. Over a period of ten days, approximately every tenth individual (or a single individual if part of a group) was approached at the entrance to one of several major malls in the city. A total of 259 Russian respondents were recruited. Of the 259 questionnaires collected, 12 were eliminated due to incompleteness. As a result, 247 questionnaires were valid for analysis. Females made up 52 percent of the sample.
Most respondents were single (44 percent), and their monthly incomes were above the national average (52 percent), which, based on PPP conversion rates, was equivalent to USD2,460 (see footnote 2) at the time of data collection (see Table AI). Analysis Analyses followed those employed for Study 1 (see Table III). Two items were deleted from the WTB scale, and a total of three items were deleted from the SNI, war animosity, and general animosity scales. Similar to the Israeli study, one item from the general animosity scale ("I feel angry toward the USA") and two items from the war animosity scale ("I am angry with the USA's interference in Russia's affairs" and "I cannot forgive the USA for its policy of sanctions toward my country") loaded on the same factor. Consequently, these three items were merged into a single scale, general animosity. The items included in the study are shown in Table AII. Similar to Study 1, the number of observed items included in the research model was 14. The results of the research model pointed to an adequate fit (χ2=288.81, df=148, p=0.00, CFI=0.93, RMSEA=0.06). As in Study 1, two rival models were tested. In rival Model 1, a path was drawn from age to animosity. The results of rival Model 1 (χ2=245.18, df=126, p=0.00, CFI=0.92, RMSEA=0.07) were inferior to those of the hypothesized model. This was confirmed by a χ2 difference test (SCDT=43.62, df=22, p=0.00). In another rival model (rival Model 2), and in line with Study 1, a path was drawn from SNI to conspicuous consumption. The results of rival Model 2 also point to a worse model fit (χ2=254.68, df=130, p=0.00, CFI=0.94, RMSEA=0.06) in comparison to the hypothesized model. This was confirmed by a χ2 difference test (SCDT=34.13, df=18, p=0.01). Results H1b posits a negative association between consumer animosity and the tendency to conspicuously consume American products, and this was confirmed by the data (b=-0.27, t=-3.08, p<0.05).
H2b posits a positive association between SNI and consumer animosity, and again this was corroborated by the study findings (b=0.47, t=5.01, p<0.001). A negative association was posited between SNI and WTB products made in the USA. The observed relationship was negative but insignificant (b=-0.11, t=-1.02, p>0.05), thereby refuting H3b. A negative relationship was also posited between consumer animosity and WTB products made in the USA. This was corroborated by the data, hence confirming H4b (b=-0.40, t=-2.93, p<0.05) (Table IV). Discussion of Study 2 findings Study 2 supports the stability of the hypothesized model and the generalizability of the findings to various contexts. Previous research has demonstrated that conspicuous consumption is motivated by such factors as consumers' desire to enhance their self-concept and to impress others (O'Shaughnessy and O'Shaughnessy, 2002). The economic sanctions imposed by the USA on Russia have led to price increases of goods imported from the USA (Boghani, 2015), which are felt in the pockets of Russian consumers. Hence, it is understandable why Russian consumers may lack the desire to consume American-made products as a means to improve their self-concept or impress others. One may well argue that some Russian consumers avoid conspicuous products made in the USA merely because they cannot afford to purchase luxury American items. However, when income was included as a control variable in the relationship between animosity and conspicuous consumption, similar to the findings observed in Study 1, no significant difference was found between the various salary levels (i.e. below-average income vs above-average income) and conspicuous consumption (SCDT=43.62; p=0.00). Similar to Study 1, there was a negative and significant relationship between consumer animosity and WTB. This finding is in line with a large body of research which demonstrates that animosity affects consumer attitudes (Klein et al., 1998; Shimp et al., 2004).
In line with the findings of Study 1, a positive and statistically significant relationship was observed between SNI and consumer animosity. Previous research demonstrates that consumers in collectivistic societies are more susceptible to norm influence than those living in individualistic societies (Mourali et al., 2005). Consistent with the findings observed in Study 1, the results point to an insignificant relationship between SNI and WTB. The findings of both studies describe a negative association between consumer animosity and conspicuous consumption. Likewise, both studies point to a negative relationship between consumer animosity and WTB. Furthermore, data from both studies suggest that SNI is positively associated with consumer animosity. However, both studies conducted in the framework of the present research failed to confirm a relationship between SNI and WTB. The findings of Study 1 suggest that this apparent lack of an observed relationship may be accounted for by the moderating role of cultural group differences. Other differences observed in the two studies pertain to mean scores on the general animosity scale. In the Russian sample, the mean score was higher (M=4.84) than in the Israeli sample (M=4.17). This finding is in line with previous research suggesting that during a conflict animosity levels are likely to be higher than when tensions subside (Heslop et al., 2009). The results of the present research point to the importance of taking into account not only the level of consumer animosity but also the nature of the consumption context (conspicuous vs inconspicuous consumption). Managerial implications First, the findings of the present research suggest that marketers need to consider the potential effects of consumer animosity on WTB not only in strictly collectivistic societies, but also in societies that are relatively more individualistic.
Firms targeting consumers harboring animosity toward the former's country of origin should focus their advertising and promotion campaigns on products used for inconspicuous consumption rather than on conspicuous consumption. This advice is especially pertinent in collectivistic societies and when the target market is susceptible to norm influence. Second, firms based in a country targeted by consumer animosity may consider relocating their manufacturing operations to a third country in order to avoid the marketing repercussions of the negative political associations. Alternatively, they could also move manufacturing and/or other operations to the country where the target market lives. Finally, firms can use imagery to mask an undesirable COO stemming from animosity (D'Antone and Merunka, 2015). Implications for theory The study makes several theoretical contributions. First, by empirically demonstrating an association between consumer animosity and conspicuous consumption, it contributes to the consumer animosity literature by suggesting that not only does consumer animosity affect consumers' WTB products from the target country, but that this effect is partially context specific (conspicuous vs inconspicuous consumption). Past research associated SNI with collectivistic societies, but not with individualistic ones (Lee and Green, 1991; Mourali et al., 2005). The current research suggests that consumers in moderately collectivistic societies like Israel are also susceptible to norm influence. Hence, the present study also contributes to the body of research focusing on the effects of social influence on consumer behavior. In particular, the study implies that societies which may have been overlooked in previous SNI research due to their relatively higher score on the individualism dimension (Hofstede, 2001) should be considered in future consumer animosity research.
Consistent with previous research on country-of-origin effects and subcultural affiliation (Laroche et al., 2003), the present investigation points to the importance of taking into account the potential effects of cultural subgroup differences on consumer behavior. However, the theoretical contribution of the present study lies in the observed moderating role of subgroup affiliation in the relationship between SNI and WTB, in the context of consumer animosity. In other words, cultural subgroup belonging may strengthen the effect of SNI on WTB when the sample comprises consumers who have been either directly or indirectly victimized by the country which is the target of animosity and who are moderately collectivistic. The present research has two main limitations. First, although the present study was conducted in two very different research settings, extrapolation to other contexts must be treated with care. Second, data were collected from a convenience sample of consumers and from a single major city in each country. Hence, the findings do not necessarily reflect the attitudes and behavioral patterns of the general consumer population in either one of the countries. Future research would benefit from testing the hypothesized model with a sample drawn from several major cities in each one of the countries. The findings of the present study suggest that consumers may be willing to consume products associated with the offending country in the privacy of their homes because of the belief that the chances of being observed doing so are low. However, this proposition should be regarded with care, as hidden consumption behavior was not studied in the present research. Hence, further research would be needed to verify the explanation advanced here. Similarly, the results of the present study suggest that consumers with strong negative feelings toward a country may be reluctant to consume its products conspicuously.
Certain consumers may not consume the products of a country in public, not because of personal feelings of animosity toward the country itself, but rather due to normative influence and the desire to conform to the norms dictated by one's in-group. However, these very consumers may not feel guilty consuming the products made in the target country in the privacy of their homes. The underpinnings of private vs public consumption in the context of consumer animosity would be a valuable research avenue to undertake. The findings of the present research shed light on the importance of the consumption context to the study of consumer animosity. In particular, they point to the complexity of the consumer animosity construct, which emanates primarily from its broad social underpinnings.
To probe generalizability, the hypothesized model was tested in two different contexts: Study 1 was conducted in Israel using the context of the Holocaust and Study 2 was conducted in Russia using the context of the recent political discord with the USA. A convenience sample of Israeli-Jewish (n=264) and Russian (n=259) consumers yielded a total of 523 questionnaires.
[SECTION: Findings] Klein et al. (1998) pioneered the consumer animosity stream of research with the introduction of the animosity model of foreign product purchase. They defined animosity as "anger related to previous or ongoing political, military, economic, or diplomatic events" (p. 90). Animosity is a form of hostility aimed at national out-groups; hostility comprises both cognitive and attitudinal components. The cognitive component entails cynical beliefs and mistrust of others. The attitudinal component includes the negative emotions of anger, contempt, and disgust (Jung et al., 2002). According to Averill (1982), animosity is a strong emotion of dislike and hatred based on beliefs resulting from past or present military, political, or economic conflict and on actions between nations or people perceived to be unjustifiable or contradictory to socially acceptable norms. Since Klein et al.'s (1998) seminal study, dozens of papers have been published on the subject of consumer animosity. While some studies are replications, others delve more deeply into the consumer animosity phenomenon by examining its potential antecedents, mediators, moderators (Klein and Ettenson, 1999; Maher and Mady, 2010; Riefler and Diamantopoulos, 2007; Shoham et al., 2006; Wang et al., 2013), and consequences (Huang et al., 2010). Previous research demonstrates that consumer animosity may stem from an array of factors, including past and ongoing political tensions between countries, past wars, and trade discords (Ettenson and Klein, 2005; Klein et al., 1998). Consumer animosity, in turn, is likely to result in anger which negatively affects consumers' judgments of product quality (Rose et al., 2009) as well as their willingness to buy (WTB) products made in the offending country (Fernandez-Ferrin et al., 2015; Wang et al., 2013).
Although previous research has focused on the relationship between consumer animosity and a myriad of constructs, the possible relationship between consumer animosity and the consumption context (i.e. conspicuous vs inconspicuous consumption) has been largely overlooked. Patsiaouras and Fitchett (2012) defined conspicuous consumption as "the competitive and extravagant consumption practices and leisure activities that aim to indicate membership to a superior social class" (p. 154). A previous pilot study suggests that consumer animosity is associated with a lowered proclivity to engage in conspicuous consumption (Al-Hyari et al., 2012). This finding is in line with Klein's (2002) contention that the effects of consumer animosity may be more pivotal in the context of conspicuous consumption compared to other consumption contexts. Examining the relationship between consumer animosity and conspicuous consumption provides a broader understanding as to the particular contexts in which animosity is more likely to influence the consumption of products made in the offending country. A more profound understanding of this relationship may also serve to aid marketing managers in devising more focused marketing strategies and thus allocating marketing resources more efficiently. Hence, the main objective of this research was to examine whether consumer animosity acts as an antecedent to conspicuous consumption. This paper comprises two studies aimed at testing the generalizability of the proposed model. This paper is organized as follows: a review of the related literature is followed by an elaborate description of the two major studies conducted in the framework of the present research effort. This is followed by a discussion of the implications emanating from both studies. Finally, the authors point out the research limitations and suggest directions for future research.
Consumer animosity and conspicuous consumption Veblen (1918) pioneered the research into conspicuous consumption. This type of consumption focuses on how consumers shop for and use brands, and how social status is shown off via brand image (Griskevicius et al., 2010). Hence, conspicuous consumption is a form of symbolic consumption. A large body of literature exists on the role of symbolic consumption in relation to consumer behavior (Holt, 2002; Hoyer and MacInnis, 1997). According to Grubb and Grathwohl (1967), products are social tools "serving as a means of communication between the individual and his or her significant references" (p. 24). One of the most common means of social communication is conspicuous consumption (Bushman, 1993). Klein et al.'s (1998) and Klein's (2002) studies suggest a negative association between consumer animosity and the ownership of conspicuous products such as cameras and automobiles. These findings are corroborated by more recent research. Al-Hyari et al. (2012) conducted a pilot study in Saudi Arabia in the context of the 2005 Muhammad controversy. The controversy emanated from a call for cartoonists to contribute cartoons depicting the prophet Muhammad by Jyllands-Posten (a Danish newspaper). The cartoon, deemed responsible for stirring unrest among Muslims across the globe, was one that depicted the prophet Muhammad with a bomb in his turban (Knight et al., 2009). Consistent with the two earlier studies, the study by Al-Hyari et al. (2012) points to a negative relationship between consumer animosity and conspicuous consumption. In particular, it suggests that consumers may avoid conspicuously consuming the products made in another country to signal that they are angered by its actions. This behavioral outcome is consistent with signaling theory. 
According to the theory, the party sending information must choose if and how to communicate (or signal) particular information while the receiver must choose how to interpret the signal (Connelly et al., 2011). Thus, it stands to reason that consumers are unlikely to conspicuously consume products as a means of communication associated with countries toward which they feel resentful. Similar consumer attitudes appear in other contexts as well. Consider the Armenian Genocide perpetrated by the Ottoman Empire in 1915, in which approximately one million Armenians were slaughtered[1]. Nowadays, the raw materials with which luxury brands such as Prada and Versace are made are imported from Turkey. Would Armenians living in Armenia, as opposed to Turkish Armenians (the majority of whom live in Istanbul), want to be associated with a superior social class that expresses its sense of membership in a superior social class by indulging in the conspicuous consumption of luxury brands made from Turkish raw materials? Would they be willing to buy a Prada bag or a Versace suit made from these materials? Likewise, how would an Israeli-Jewish consumer feel about conspicuously consuming a brand manufactured in Germany? Would he or she want to associate him or herself with a superior social class by conspicuously consuming brands made in a country held responsible for the extermination of six million Jews? Alternatively, how would a Russian consumer feel about conspicuously consuming brands manufactured in the USA? Would he or she want to associate him or herself with a superior social class by conspicuously consuming brands made in a country held responsible for his or her country's economic crisis? Perhaps, but the anecdotal evidence presented above suggests that the conspicuous consumption of brands made in the country that is the target of animosity is unlikely in all three cases. Thus, the following is hypothesized: H1a. 
Animosity will negatively affect Jewish-Israeli consumers' conspicuous consumption of German products. The relationship between susceptibility to normative influence (SNI), consumer animosity, and WTB SNI is a well-researched concept in the consumer behavior literature (Aaker, 1999; Josiassen and Assaf, 2013; Lee and Green, 1991). SNI, which refers to the utilitarian and value-expressive influence dimensions of the interpersonal influence concept, is defined as the propensity to conform to norms set by others (Batra et al., 2001). Some studies have focused on individuals' susceptibility to these various forms of influence (Aaker, 1999). Others, however, have delved into the effects of SNI on consumer behavior (Wang et al., 2013). Despite the large body of consumer behavior literature focusing on the effects of SNI, little is known about the relationship between SNI and consumer animosity and how this may vary based on the type of society, e.g., individualistic vs collectivistic (Maher and Mady, 2010; Wang et al., 2013). According to the theory of planned behavior (Ajzen, 1991), there are three determinants that predict behavior: attitude toward the behavior, subjective norm, and perceived behavioral control. Overall, the more positive the attitude toward the behavior, the more favorable the subjective norm toward the behavior, and the greater the perceived control over the behavior, the stronger the intention to perform the intended action will be. While in certain instances merely one of these determinants would suffice to predict intention (e.g. attitudes), in other cases, several determinants will interact in the prediction of the intention (e.g. attitudes and subjective norms). The theory of planned behavior also postulates that subjective norms predict attitudes toward behavior (Ajzen, 1991).
Hence, since animosity is an attitudinal component (Jung et al., 2002), it stands to reason that it will be predicted by subjective norms - reasoning that is supported by previous research. Huang et al. (2010), for example, conducted a study in the context of the tense political relationships between Taiwan-Japan and China-Japan. Their study findings imply a positive and significant relationship between SNI and consumer animosity. These findings are in line with previous research, which suggests that consumers are more likely to be susceptible to norm influence in collectivistic rather than individualistic societies (Lee and Green, 1991). Huang et al.'s study and those of other researchers (Maher and Mady, 2010; Wang et al., 2013) have made significant contributions to furthering consumer behavior researchers' understanding of the consumer animosity phenomenon. However, the generalizability of these findings is limited, as these studies were conducted among consumers more susceptible to norm influence. Previous research suggests that SNI varies with the type of society (i.e. individualistic vs collectivistic society). Hence, it would be of great value to consumer behavior scholars and practitioners alike to examine whether consumer animosity is influenced by SNI in societies which are moderately collectivistic, such as Russia and Israel (Hofstede, 2001; Oyserman et al., 2002; Zeigler-Hill and Besser, 2011). Hence, the following is hypothesized: H2a. Jewish-Israeli consumers' SNI will be positively associated with the level of animosity harbored toward Germany. SNI is a critical determinant of WTB (Hoyer et al., 2008). Grinblatt et al.
(2008), for instance, analyzed the automobile purchasing behavior of Finnish consumers. The authors found a positive association between SNI and the purchase of automobiles. In particular, a consumer's car choice is likely to be influenced by the makes and models owned by his or her nearest neighbors. In the context of consumer animosity research, however, the relationship between SNI and WTB is quite different. Previous research suggests that SNI is negatively associated with WTB (Maher and Mady, 2010). Although the influence of SNI on WTB differs in each of the abovementioned contexts, the motive is identical (i.e. the desire to conform to group norms). In Grinblatt et al.'s (2008) study, consumers purchased the makes and models owned by their neighbors to show that they too could afford such cars. In Maher and Mady's (2010) study, however, consumers were reluctant to purchase products made in the offending country because they wanted to demonstrate to their reference groups that they were not acting in defiance of their groups' norms. Thus, the following is hypothesized:

H3a. SNI will be negatively associated with Jewish consumers' WTB German-made products.

Consumer animosity and WTB
A large body of research points to a negative relationship between consumer animosity and WTB (Cui et al., 2012; Rose et al., 2009; Wang et al., 2013). Previous research suggests that animosity affects consumers' WTB products made in the offending country not only in the short term (Ettenson and Klein, 2005; Shoham et al., 2006) but also in the long term (Shimp et al., 2004). Even though several decades have elapsed since atrocities like the Nanjing Massacre, which occurred during the Japanese occupation of parts of China, Chinese consumers are still reluctant to buy Japanese products (Klein et al., 1998). Likewise, Shimp et al. (2004) demonstrated that Southerners in the USA still maintain enmity toward Northerners over the Civil War and its aftermath.
This animosity is pronounced in the reluctance of Southerners to purchase the products of Northerners. A similar long-term effect of animosity has also been observed among American Jewish consumers, who are still unwilling to purchase German-made products despite the fact that over seven decades have elapsed since the Holocaust (Podoshen, 2009). Hence, the following is hypothesized:

H4a. Consumer animosity will be negatively associated with WTB German-made products.

The main objective of this study was to examine the effects of consumer animosity on conspicuous consumption in two research settings: Israel and Russia. More specifically, the study aimed to: examine the relationship between SNI and consumer animosity, examine whether SNI impacts consumers' WTB products made in the offending country, and study whether consumer animosity is associated with consumers' WTB products made in the offending country. Previous studies have emphasized the importance of examining the effects of animosity in contexts other than ones in which its trigger was an extreme historical event (Klein, 2002). To assess the stability of the proposed model (see Figure 1) and its applicability in various contexts, two contexts were tested: the Holocaust and the recent political discord between the USA and Russia over the Obama administration's imposition of economic sanctions on the latter. The two contexts were chosen for a number of reasons. First, they represent potentially differing levels of animosity toward an offending country (Germany vs the USA). Second, according to Hofstede (2001), Israel and Russia represent two different cultures; the former is relatively more individualistic (54) than the latter (39). Previous research suggests that consumers in collectivistic societies are more likely to harbor animosity toward a target country due to greater SNI (Huang et al., 2010).
According to realistic group conflict theory, a perceived threat from an out-group reinforces people's sense of belonging to their in-group (Levine and Campbell, 1972). Huang et al.'s findings, together with realistic group conflict theory, suggest that the level of animosity is more likely to be closely linked to SNI among Russian consumers than among Israeli consumers. Furthermore, examining the attitudes of Russian consumers toward American products is of practical importance to the many American firms marketing their goods to Russian consumers; American companies view Russia as one of their most important markets (Liuhto et al., 2016). Finally, previous research suggests that Jewish consumers still harbor animosity toward Germany, thereby making it a suitable context in which to study the proposed research model (Podoshen, 2009).

Method
This study employed the mall-intercept method to collect data from a sample of adult consumers in Tel-Aviv, Israel (Rose et al., 2009). The questionnaire was translated and back-translated using the technique suggested by Douglas and Craig (1983). Consumers' participation was solicited at the entrance to major malls, where roughly every tenth individual was asked to complete a questionnaire (Josiassen and Assaf, 2013). In cases where the tenth individual was in a group, only one of the group members was invited to take part in the study. Respondents were not informed about the focus of the study. The questionnaires were collected upon completion. The large number of passersby and the mix of individuals from all walks of life influenced the choice of locations. A total of 264 respondents were recruited. Of the questionnaires collected, 14 were eliminated due to incompleteness. Consequently, 250 were valid for analysis. Females composed 54 percent of the sample.
Most respondents were single (46 percent) and their monthly incomes were above the national average (57 percent), which, based on purchase power parity (PPP) conversion rates, was equivalent to USD2,374[2] at the time of data collection (see Table AI).

Measures
Seven-point Likert scales (1 = strongly disagree; 7 = strongly agree) adapted from previous research were employed to test the relationships proposed in the hypothesized model. The hypothesized model comprised three dependent variables - animosity, WTB, and conspicuous consumption - and a single independent variable, SNI. General animosity was measured with three items adapted from Klein (2002). Similarly, three items from this source were also adapted to measure war animosity. Conspicuous consumption was measured using five items adapted from Marcoux et al. (1997). The original conspicuous consumption scale comprises a relatively large number of items. Due to concerns over the length of the questionnaire and the time necessary for its completion, a pilot study examined whether the original scale (14 items) could be reduced without undermining its validity. Thus, the two studies conducted following the pilot study used a shortened scale consisting of only five items (see Table AI). SNI was measured using three items adapted from Lee and Green (1991), and six items were adapted from Klein et al. (1998) to measure WTB. The questionnaire was pre-tested using a small sample of Israeli-Jewish consumers (n = 20). The items employed in the research, their sources, Cronbach's α values, and average variance extracted (AVE) are presented in Table AII. Israeli-Jews comprise two major cultural subgroups (Shavit, 1990): Mizrahim (i.e. Jews of Asian or African descent) and Ashkenazim (Jews of European descent). Unlike Ashkenazi Jews, Mizrahi Jews were not victims of the atrocities committed by the Nazi regime.
Consequently, it may be assumed that Ashkenazim harbor a greater level of animosity toward Germany and would be less willing to buy products made there than Mizrahim. Hence, subcultural affiliation was a control variable in the relationship between animosity and WTB. In all, 38 percent of the sample were Ashkenazi, 43 percent were Mizrachi, and 19 percent categorized themselves as others.

Analysis
Prior to analysis, all relevant items were reverse-scored. Using AMOS 22, we employed structural equation modeling (SEM) to test the hypothesized paths. As reported in Table AI, Cronbach's α (cutoff = 0.7) and AVE (cutoff = 0.5) were computed using SPSS 21 to examine the convergent validity of the constructs (Fornell and Larcker, 1981). Items with loadings below the recommended 0.4 cutoff in the structural model were omitted from further analysis (Hair et al., 1999). Two items were omitted from the WTB scale and two additional items from the war animosity and general animosity scales (one item from each scale). Following the deletion of an item from the SNI construct, the scale α = 0.87. Noteworthy is the fact that one item from the general animosity scale ("I feel angry toward Germany") and two items from the war animosity scale ("I still feel angry toward Germany because of the Second World War" and "I cannot forgive Germany for what it did to the Jews in the Second World War") loaded on the same factor. Consequently, these were merged into a single scale, general animosity. Convergent validity was assessed by estimating composite reliability with AMOS 22 and AVE with SPSS 21. In line with Fornell (1992), the construct reliability values of all latent variables were at or above the recommended threshold of 0.6 (see Table I). Discriminant validity was estimated along the lines recommended by Fornell and Larcker (1981). SEM in AMOS 22 was employed to test the hypothesized paths in the research model.
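The convergent validity arithmetic described above (AVE per Fornell and Larcker, 1981; composite reliability per Fornell, 1992) is simple enough to sketch. The snippet below is illustrative only: the paper does not report item-level standardized loadings, so the loading values here are assumptions.

```python
# ASSUMPTION: illustrative standardized loadings for one latent construct;
# the paper does not publish item-level loadings, so these four values
# are made up for demonstration.
loadings = [0.78, 0.81, 0.69, 0.74]

def ave(lams):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in lams) / len(lams)

def composite_reliability(lams):
    """Composite reliability: (sum of loadings)^2 over itself plus the error variances."""
    s = sum(lams)
    error = sum(1 - l ** 2 for l in lams)
    return s ** 2 / (s ** 2 + error)

print(f"AVE = {ave(loadings):.3f} (cutoff 0.5)")                    # 0.572
print(f"CR  = {composite_reliability(loadings):.3f} (cutoff 0.6)")  # 0.842
```

With these illustrative loadings the construct would clear both thresholds; an item loading below the 0.4 cutoff would simply be dropped from `loadings` before recomputing.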
Cronbach's α and AVE scores exceeded the recommended cutoffs (Hair et al., 1999). In all, 14 observed items were retained in the hypothesized model. The cutoffs recommended for latent factor models having between 12 and 30 observed items are at least 0.92 for the comparative fit index (CFI) and no more than 0.07 for the root mean-squared error of approximation (RMSEA) (Hair et al., 2006). Hence, the results of the hypothesized model point to an adequate fit (χ² = 224.48, df = 142, p = 0.00, CFI = 0.95, RMSEA = 0.04). Two rival models were tested. Previous research suggests that age may be an antecedent of consumer animosity (Hinck, 2004; Klein, 2002). Hence, in rival Model 1, a path from age to animosity was drawn. The results of rival Model 1 point to a poorer model fit (χ² = 265.00, df = 168, p = 0.00, CFI = 0.94, RMSEA = 0.04) than the hypothesized model. This is corroborated by a χ² difference test, which points to a significant difference between the models (sequential χ² difference test (SCDT) = 40.51, df = 26, p = 0.03). Our main research objective was to explore the relationship between consumer animosity and conspicuous consumption. However, since previous research identifies SNI as a predictor of conspicuous consumption (Tsai et al., 2015), we drew a path between the two constructs in rival Model 2, aiming to test whether the new path enhances the model fit. The results of rival Model 2 point to a poorer model fit (χ² = 258.74, df = 161, p = 0.00, CFI = 0.95, RMSEA = 0.07) than the hypothesized model. This is corroborated by a χ² difference test, which points to a significant difference between the models (SCDT = 34.26, df = 19, p = 0.01).

Results
Given the superior fit statistics for the hypothesized model vs both rival models, the reported results pertain to the findings of the hypothesized model (Table II). According to H1a, consumer animosity will be negatively associated with Jewish-Israeli consumers' conspicuous consumption tendencies.
This was confirmed (β = -0.40, t = -2.85, p < 0.05). H2a posits that Jewish-Israeli consumers' SNI will be positively associated with the level of animosity toward Germany, and this was also supported (β = 0.32, t = 2.13, p < 0.05). A negative association between SNI and WTB German-made products was posited. The path was not significant, thereby not supporting H3a (β = -0.22, t = -1.92, p > 0.05). H4a posits that consumer animosity will be negatively associated with WTB German-made products, and this too was corroborated by the data (β = -0.31, t = -2.19, p < 0.05).

Multi-group analysis
Since cultural subgroup affiliation was identified as a control variable in the study, a multi-group SEM analysis was performed to examine whether the predictive power of the independent variables varied with respondents' cultural group affiliation. Mizrachi Jews did not directly experience the atrocities of the Holocaust. Consequently, they were expected to harbor more positive attitudes toward the purchase of products made in Germany vis-a-vis Ashkenazim. A fully constrained model was created and compared to the original unconstrained model across groups. Cultural affiliation (Ashkenazi or other) formed the unit of analysis. A χ² difference test showed that the two models were variant across subgroup affiliation (SCDT = 81.61, df = 24, p < 0.001). The path from SNI to WTB was insignificant and was therefore removed from the model. The results of the multi-group analysis point to a significant relationship between SNI and consumer animosity. The study findings suggest that SNI is a stronger predictor of consumer animosity among the Mizrachim (β = 0.41, t = 2.85, p < 0.05) than among the Ashkenazim (β = 0.32, t = 2.13, p < 0.05). However, a χ² difference test did not uphold the significance of this difference (SCDT = 0.2, df = 1, p = 0.65). Furthermore, a significant association was found between consumer animosity and WTB products made in Germany.
Consumer animosity is a stronger predictor of WTB products made in Germany among Ashkenazim (β = -0.31, t = -2.19, p < 0.05) compared to the Mizrachim (β = -0.24, t = -5.38, p < 0.001). Here again, however, the χ² difference test points to no significant group difference concerning this relationship (SCDT = 0.16, df = 1, p = 0.68). There was no significant relationship between SNI and WTB among either the Ashkenazim (β = -0.23, t = -1.92, p > 0.05) or the Mizrachim (β = -0.09, t = -1.24, p > 0.05). The study results also point to a significant association between consumer animosity and conspicuous consumption among Ashkenazi (β = -0.40, t = -2.85, p < 0.05), but not Mizrachi Jews (β = -0.03, t = -0.75, p > 0.05). Here the χ² difference test upheld the significance of this finding (SCDT = 7.86, df = 1, p < 0.05).

Discussion of Study 1 findings
Previous research demonstrates that conspicuous consumption is motivated by consumers' desire to enhance their self-concept and to impress others (O'Shaughnessy and O'Shaughnessy, 2002). The results illustrate that those who harbor animosity toward an offending country lack the desire to consume products originating from it as a means either to improve their self-concept or to impress others. Perhaps certain Israeli-Jewish consumers avoid conspicuous German products merely because they cannot afford to purchase luxury items. However, when income was included as a control variable in the relationship between animosity and conspicuous consumption, no significant difference was found between income level and WTB conspicuous German products (SCDT = 30.69, df = 26, p = 0.16). In line with previous research (Huang et al., 2010), a positive and significant relationship between SNI and consumer animosity was found. This relationship may be accounted for by the social identity and realistic group conflict theories.
The observed relationship between SNI and consumer animosity suggests that Israeli-Jews susceptible to normative influence believe their referents have negative opinions of Germany. Hence, they feel the need to comply with them and maintain this animosity toward Germany. In contrast to previous research (Maher and Mady, 2010), an insignificant association was observed between SNI and WTB. However, a multi-group analysis showed that the relationship between SNI and WTB varies with subgroup affiliation: the association approached significance among Ashkenazi Jews but was clearly absent among Mizrachi Jews. Although, at first glance, these findings may seem surprising, the literature does provide at least one plausible explanation. A study conducted by Thomas et al. (2015) suggests that consumers may engage in what they term "hidden consumption behavior." This behavior is more likely to occur when the chances of being caught are minimal and when the sanctions (in case of being caught) are negligible. It would, therefore, seem that Mizrachi Jews are willing to consume German products in the privacy of their homes, both because the chances of being caught by fellow Mizrachim are low, and because the potential disapproval if caught is not likely to be severe. However, an Ashkenazi caught consuming a German product may endure more severe social disapproval from fellow Ashkenazim. This proposition should be regarded with care, as hidden consumption behavior was not studied in either of the present research settings. Hence, further research would be necessary to verify the explanation advanced here. Moreover, in line with past research, a negative relationship was found between consumer animosity and WTB (Klein et al., 1998; Shimp et al., 2004). A stronger relationship was observed between consumer animosity and WTB among Ashkenazi vs Mizrachi Jews.
The observed differences in the predictive power of consumer animosity in each of the subsamples may be accounted for by a significant difference between Ashkenazi (M = 5.49, SD = 1.3) and Mizrachi (M = 3.87, SD = 1.6) Jews in the level of animosity harbored toward Germany, as corroborated by an independent samples t-test (ΔM = 1.62, t = 6.34, p < 0.01). Finally, there was a significant relationship between consumer animosity and conspicuous consumption. However, a multi-group analysis revealed that this relationship is only significant among Ashkenazi Jews. This finding may also be accounted for by the fact that Mizrachi Jews harbor lower levels of animosity toward Germany than Ashkenazi Jews. In sum, differences in the effect of animosity on WTB may be partially attributable to consumers' cultural subgroup affiliation.

Background
Russians and Americans have a long history of political conflict. The Cold War, which erupted after the Second World War, involved a four-decade power struggle between the two countries. The lingering historical tensions between the two nations have been exacerbated by recent events. In 2014, the Obama administration imposed sanctions on certain Russian individuals and businesses (BBC News, 2014) in response to Russia's annexation of Crimea and the crisis in Eastern Ukraine. These sanctions seem to have had economic ramifications for Russian consumers, as prices of imported products have increased substantially (Boghani, 2015). Previous research suggests that political discord between countries is likely to lead to consumer animosity (Huang et al., 2010; Maher and Mady, 2010). However, as opposed to the genocidal character of the Holocaust, the recent political discord between the USA and Russia is presumed to have a less profound, long-term psychological effect on Russian consumers regarding their attitudes toward consumption of American products.
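The sequential χ² difference tests and the subgroup t-test reported for Study 1 can be re-derived from the published summary statistics. The SciPy sketch below is not the authors' code: AMOS and SPSS performed the original computations, and the subgroup sizes used in the t-test are assumptions inferred from the reported sample shares (38 and 43 percent of n = 250).

```python
from scipy.stats import chi2, ttest_ind_from_stats

# --- Sequential chi-square difference test (SCDT), Study 1 ---
# Fit statistics as reported: hypothesized model vs rival Model 1.
chi2_hyp, df_hyp = 224.48, 142
chi2_rival, df_rival = 265.00, 168

scdt = chi2_rival - chi2_hyp      # 40.52 (paper reports 40.51 from unrounded values)
delta_df = df_rival - df_hyp      # 26
p_scdt = chi2.sf(scdt, delta_df)  # survival function of the chi-square distribution

# --- Independent samples t-test on subgroup animosity means ---
# Means and SDs as reported; group sizes are ASSUMED from the sample shares,
# so the statistic will not exactly reproduce the paper's t = 6.34.
t_stat, p_t = ttest_ind_from_stats(
    mean1=5.49, std1=1.3, nobs1=95,   # Ashkenazi subgroup (assumed n)
    mean2=3.87, std2=1.6, nobs2=108,  # Mizrachi subgroup (assumed n)
)

print(f"SCDT = {scdt:.2f}, df = {delta_df}, p = {p_scdt:.3f}")
print(f"t = {t_stat:.2f}, p = {p_t:.2e}")
```

The χ² difference p-value lands near 0.035, consistent with the paper's reported p = 0.03; the t statistic differs from the reported 6.34 because the true subgroup sizes are not published.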
Hypotheses
The hypotheses for Study 1 were modified to fit the particular context of the latest political discord between Russia and the USA. Thus, the following are hypothesized:

H1b. Animosity will negatively affect Russian consumers' conspicuous consumption of American products.

H2b. Russian consumers' SNI will be positively associated with the level of animosity harbored toward the USA.

H3b. SNI will be negatively associated with Russian consumers' WTB US-made products.

H4b. Consumer animosity will be negatively associated with Russian consumers' WTB US-made products.

Method
Measures
The measures employed in Study 1 were modified to fit the particular context of Study 2 (Table AII). Respondents were not informed about the focus of the study. As in Study 1, the hypothesized model included three dependent variables (i.e. animosity, WTB, and conspicuous consumption) and a single independent variable (i.e. SNI). Prior to administering the questionnaire, the survey was pre-tested on a small sample of Russian consumers (n = 20). Several items were rephrased following participant feedback.

Procedure
Procedures followed the mall-intercept method employed in Study 1. The questionnaire was translated and back-translated using the method suggested by Douglas and Craig (1983). Data were collected from adult consumers living in St Petersburg, Russia. Over a period of ten days, approximately every tenth individual (or a single individual if part of a group) was approached at the entrance to one of several major malls in the city. A total of 259 Russian respondents were recruited. Of the 259 questionnaires collected, 12 were eliminated due to incompleteness. As a result, 247 questionnaires were valid for analysis. Females made up 52 percent of the sample.
Most respondents were single (44 percent), and their monthly incomes were above the national average (52 percent), which, based on PPP conversion rates, was equivalent to USD2,460 (see footnote 2) at the time of data collection (see Table AI).

Analysis
Analyses followed those employed for Study 1 (see Table III). Two items were deleted from the WTB scale, and a total of three items were deleted from the SNI, war animosity, and general animosity scales. Similar to the Israeli study, one item from the general animosity scale ("I feel angry toward the USA") and two items from the war animosity scale ("I am angry with the USA's interference in Russia's affairs" and "I cannot forgive the USA for its policy of sanctions toward my country") loaded on the same factor. Consequently, these three items were merged into a single scale, general animosity. The items included in the study are shown in Table AII. Similar to Study 1, the number of observed items included in the research model was 14. The results of the research model pointed to an adequate fit (χ² = 288.81, df = 148, p = 0.00, CFI = 0.93, RMSEA = 0.06). As in Study 1, two rival models were tested. In rival Model 1, a path was drawn from age to animosity. The results of rival Model 1 (χ² = 245.18, df = 126, p = 0.00, CFI = 0.92, RMSEA = 0.07) were inferior to those of the hypothesized model. This was confirmed by a χ² difference test (SCDT = 43.62, df = 22, p = 0.00). In the second rival model (rival Model 2), and in line with Study 1, a path was drawn from SNI to conspicuous consumption. The results of rival Model 2 also point to a worse model fit (χ² = 254.68, df = 130, p = 0.00, CFI = 0.94, RMSEA = 0.06) in comparison to the hypothesized model. This was confirmed by a χ² difference test (SCDT = 34.13, df = 18, p = 0.01).

Results
H1b posits a negative association between consumer animosity and the tendency to conspicuously consume American products, and this was confirmed by the data (β = -0.27, t = -3.08, p < 0.05).
H2b posits a positive association between SNI and consumer animosity, and again this was corroborated by the study findings (β = 0.47, t = 5.01, p < 0.001). A negative association was posited between SNI and WTB products made in the USA. The observed relationship was negative but insignificant (β = -0.11, t = -1.02, p > 0.05), thereby refuting H3b. A negative relationship was also posited between consumer animosity and WTB products made in the USA. This was corroborated by the data, hence confirming H4b (β = -0.40, t = -2.93, p < 0.05) (Table IV).

Discussion of Study 2 findings
Study 2 supports the stability of the hypothesized model and the generalizability of the findings to various contexts. Previous research has demonstrated that conspicuous consumption is motivated by such factors as consumers' desire to enhance their self-concept and to impress others (O'Shaughnessy and O'Shaughnessy, 2002). The economic sanctions imposed by the USA on Russia have led to price increases of goods imported from the USA (Boghani, 2015), which are felt in the pockets of Russian consumers. Hence, it is understandable why Russian consumers may lack the desire to consume American-made products as a means to improve their self-concept or impress others. One may well argue that some Russian consumers avoid conspicuous products made in the USA merely because they cannot afford to purchase luxury American items. However, when income was included as a control variable in the relationship between animosity and conspicuous consumption, similar to the findings observed in Study 1, no significant difference was found between the salary levels (i.e. below-average vs above-average income) and conspicuous consumption (SCDT = 43.62, p = 0.00). Similar to Study 1, there was a negative and significant relationship between consumer animosity and WTB. This finding is in line with a large body of research which demonstrates that animosity affects consumer attitudes (Klein et al., 1998; Shimp et al., 2004).
In line with the findings of Study 1, a positive and statistically significant relationship was observed between SNI and consumer animosity. Previous research demonstrates that consumers in collectivistic societies are more susceptible to norm influence than those living in individualistic societies (Mourali et al., 2005). Consistent with the findings observed in Study 1, the results point to an insignificant relationship between SNI and WTB. The findings of both studies describe a negative association between consumer animosity and conspicuous consumption. Likewise, both studies point to a negative relationship between consumer animosity and WTB. Furthermore, data from both studies suggest that SNI is positively associated with consumer animosity. However, both studies failed to confirm a relationship between SNI and WTB. The findings of Study 1 suggest that this apparent lack of relationship may be accounted for by the moderating role of cultural group differences. Another difference between the two studies pertains to mean scores on the general animosity scale. In the Russian sample, the mean score was higher (M = 4.84) than in the Israeli sample (M = 4.17). This finding is in line with previous research suggesting that during a conflict animosity levels are likely to be higher than when tensions subside (Heslop et al., 2009). The results of the present research point to the importance of taking into account not only the level of consumer animosity but also the nature of the consumption context (conspicuous vs inconspicuous consumption).

Managerial implications
First, the findings of the present research suggest that marketers need to consider the potential effects of consumer animosity on WTB not only in strictly collectivistic societies, but also in societies that are relatively more individualistic.
Firms targeting consumers harboring animosity toward the firms' country of origin should focus their advertising and promotion campaigns on products used for inconspicuous rather than conspicuous consumption. Such a focus is especially pertinent in collectivistic societies and when the target market is susceptible to norm influence. Second, firms based in a country targeted by consumer animosity may consider relocating their manufacturing operations to a third country in order to avoid the marketing repercussions of the negative political associations. Alternatively, they could move manufacturing and/or other operations to the country where the target market lives. Finally, firms can use imagery to mask an undesirable COO stemming from animosity (D'Antone and Merunka, 2015).

Implications for theory
The study makes several theoretical contributions. First, by empirically demonstrating an association between consumer animosity and conspicuous consumption, it contributes to the consumer animosity literature by suggesting not only that consumer animosity affects consumers' WTB products from the target country, but also that this effect is partially context specific (conspicuous vs inconspicuous consumption). Past research associated SNI with collectivistic societies, but not with individualistic ones (Lee and Green, 1991; Mourali et al., 2005). The current research suggests that consumers in moderately collectivistic societies like Israel are also susceptible to norm influence. Hence, the present study also contributes to the body of research focusing on the effects of social influence on consumer behavior. In particular, the study implies that societies which may have been overlooked in previous SNI research due to their relatively higher score on the individualism dimension (Hofstede, 2001) should be considered in future consumer animosity research.
Consistent with previous research on country-of-origin effects and subcultural affiliation (Laroche et al., 2003), the present investigation points to the importance of taking into account the potential effects of cultural subgroup differences on consumer behavior. However, the theoretical contribution of the present study lies in the observed moderating role of subgroup affiliation in the relationship between SNI and WTB in the context of consumer animosity. In other words, cultural subgroup belonging may strengthen the effect of SNI on WTB when the sample comprises consumers who have been either directly or indirectly victimized by the country that is the target of animosity and who are moderately collectivistic. The present research has two main limitations. First, although the present study was conducted in two very different research settings, extrapolation to other contexts must be treated with care. Second, data were collected from a convenience sample of consumers in a single major city in each country. Hence, the findings do not necessarily reflect the attitudes and behavioral patterns of the general consumer population in either country. Future research would benefit from testing the hypothesized model with samples drawn from several major cities in each country. The findings of the present study suggest that consumers may be willing to consume products associated with the offending country in the privacy of their homes because of the belief that the chances of being observed doing so are low. However, this proposition should be regarded with care, as hidden consumption behavior was not studied in the present research. Hence, further research would be needed to verify the explanation advanced here. Similarly, the results of the present study suggest that consumers with strong negative feelings toward a country may be reluctant to consume its products conspicuously.
Certain consumers may not consume the products of a country in public, not because of personal feelings of animosity toward the country itself, but rather due to normative influence and the desire to conform to the norms dictated by one's in-group. However, these very consumers may not feel guilty consuming the products made in the target country in the privacy of their homes. The underpinnings of private vs public consumption in the context of consumer animosity would be a valuable research avenue to undertake. The findings of the present research shed light on the importance of the consumption context to the study of consumer animosity. In particular, they point to the complexity of the consumer animosity construct, emanating primarily from its broad social underpinnings.
[SECTION: Findings] In both contexts, the results from the SPSS and AMOS analyses indicated a negative and significant relationship between consumer animosity and conspicuous consumption. Moreover, SNI was positively associated with consumer animosity. Finally, the study findings point to a negative association between consumer animosity and WTB, regardless of the level of animosity.
[SECTION: Value] Klein et al. (1998) pioneered the consumer animosity stream of research with the introduction of the animosity model of foreign product purchase. They defined animosity as "anger related to previous or ongoing political, military, economic, or diplomatic events" (p. 90). Animosity is a hostile attitude aimed at national out-groups, whereas hostility comprises both cognitive and attitudinal components. The cognitive component entails cynical beliefs and mistrust of others; the attitudinal component includes the negative emotions of anger, contempt, and disgust (Jung et al., 2002). According to Averill (1982), animosity is a strong emotion of dislike and hatred based on beliefs resulting from past or present military, political, or economic conflict, and on actions between nations or peoples perceived to be unjustifiable or contradictory to socially acceptable norms. Since Klein et al.'s (1998) seminal study, dozens of papers have been published on the subject of consumer animosity. While some studies are replications, others delve more deeply into the consumer animosity phenomenon by examining its potential antecedents, mediators, and moderators (Klein and Ettenson, 1999; Maher and Mady, 2010; Riefler and Diamantopoulos, 2007; Shoham et al., 2006; Wang et al., 2013), as well as its consequences (Huang et al., 2010). Previous research demonstrates that consumer animosity may stem from an array of factors, including past and ongoing political tensions between countries, past wars, and trade discords (Ettenson and Klein, 2005; Klein et al., 1998). Consumer animosity, in turn, is likely to result in anger which negatively affects consumers' judgments of product quality (Rose et al., 2009) as well as their willingness to buy (WTB) products made in the offending country (Fernandez-Ferrin et al., 2015; Wang et al., 2013).
Although previous research has focused on the relationship between consumer animosity and a myriad of constructs, the possible relationship between consumer animosity and the consumption context (i.e. conspicuous vs inconspicuous consumption) has been largely overlooked. Patsiaouras and Fitchett (2012) defined conspicuous consumption as "the competitive and extravagant consumption practices and leisure activities that aim to indicate membership to a superior social class" (p. 154). A previous pilot study suggests that consumer animosity is associated with a lowered proclivity to engage in conspicuous consumption (Al-Hyari et al., 2012). This finding is in line with Klein's (2002) contention that the effects of consumer animosity may be more pivotal in the context of conspicuous consumption compared to other consumption contexts. Examining the relationship between consumer animosity and conspicuous consumption provides a broader understanding as to the particular contexts in which animosity is more likely to influence the consumption of products made in the offending country. A more profound understanding of this relationship may also serve to aid marketing managers in devising more focused marketing strategies and thus allocating marketing resources more efficiently. Hence, the main objective of this research was to examine whether consumer animosity acts as an antecedent to conspicuous consumption. This paper is comprised of two studies aimed at testing the generalizability of the proposed model. This paper is organized as follows: a review of the related literature is followed by an elaborate description of the two major studies conducted in the framework of the present research effort. This is followed by a discussion of the implications emanating from both studies. Finally, the authors point out the research limitations and suggest directions for future research. 
Consumer animosity and conspicuous consumption
Veblen (1918) pioneered the research into conspicuous consumption. This type of consumption focuses on how consumers shop for and use brands, and how social status is shown off via brand image (Griskevicius et al., 2010). Hence, conspicuous consumption is a form of symbolic consumption. A large body of literature exists on the role of symbolic consumption in relation to consumer behavior (Holt, 2002; Hoyer and MacInnis, 1997). According to Grubb and Grathwohl (1967), products are social tools "serving as a means of communication between the individual and his or her significant references" (p. 24). One of the most common means of social communication is conspicuous consumption (Bushman, 1993). Klein et al.'s (1998) and Klein's (2002) studies suggest a negative association between consumer animosity and the ownership of conspicuous products such as cameras and automobiles. These findings are corroborated by more recent research. Al-Hyari et al. (2012) conducted a pilot study in Saudi Arabia in the context of the 2005 Muhammad controversy. The controversy emanated from a call by Jyllands-Posten (a Danish newspaper) for cartoonists to contribute cartoons depicting the prophet Muhammad. The cartoon deemed responsible for stirring unrest among Muslims across the globe was one that depicted the prophet Muhammad with a bomb in his turban (Knight et al., 2009). Consistent with the two earlier studies, the study by Al-Hyari et al. (2012) points to a negative relationship between consumer animosity and conspicuous consumption. In particular, it suggests that consumers may avoid conspicuously consuming the products made in another country to signal that they are angered by its actions. This behavioral outcome is consistent with signaling theory.
According to the theory, the party sending information must choose if and how to communicate (or signal) particular information, while the receiver must choose how to interpret the signal (Connelly et al., 2011). Thus, it stands to reason that consumers are unlikely to conspicuously consume products associated with countries toward which they feel resentful as a means of communication. Similar consumer attitudes appear in other contexts as well. Consider the Armenian Genocide perpetrated by the Ottoman Empire in 1915, in which approximately one million Armenians were slaughtered[1]. Nowadays, the raw materials with which luxury brands such as Prada and Versace are made are imported from Turkey. Would Armenians living in Armenia, as opposed to Turkish Armenians (the majority of whom live in Istanbul), want to express membership in a superior social class by indulging in the conspicuous consumption of luxury brands made from Turkish raw materials? Would they be willing to buy a Prada bag or a Versace suit made from these materials? Likewise, how would an Israeli-Jewish consumer feel about conspicuously consuming a brand manufactured in Germany? Would he or she want to associate him or herself with a superior social class by conspicuously consuming brands made in a country held responsible for the extermination of six million Jews? Alternatively, how would a Russian consumer feel about conspicuously consuming brands manufactured in the USA? Would he or she want to associate him or herself with a superior social class by conspicuously consuming brands made in a country held responsible for his or her country's economic crisis? Perhaps; however, the anecdotal evidence presented above suggests that the conspicuous consumption of brands made in the country that is the target of animosity is unlikely in all three cases. Thus, the following is hypothesized: H1a.
Animosity will negatively affect Jewish-Israeli consumers' conspicuous consumption of German products.
The relationship between susceptibility to normative influence (SNI), consumer animosity, and WTB
SNI is a well-researched concept in the consumer behavior literature (Aaker, 1999; Josiassen and Assaf, 2013; Lee and Green, 1991). SNI, which refers to the utilitarian and value-expressive influence dimensions of the interpersonal influence concept, is defined as the propensity to conform to norms set by others (Batra et al., 2001). Some studies have focused on individuals' susceptibility to these various forms of influence (Aaker, 1999). Others, however, have delved into the effects of SNI on consumer behavior (Wang et al., 2013). Despite the large body of consumer behavior literature focusing on the effects of SNI, little is known about the relationship between SNI and consumer animosity and how this may vary based on the type of society, e.g., individualistic vs collectivistic (Maher and Mady, 2010; Wang et al., 2013). According to the theory of planned behavior (Ajzen, 1991), there are three determinants that predict behavior: attitude toward the behavior, subjective norm, and perceived behavioral control. Overall, the more positive the attitude toward the behavior, the more favorable the subjective norm toward the behavior, and the greater the perceived control over the behavior, the stronger the intention to perform the intended action will be. While in certain instances merely one of these determinants would suffice to predict intention (e.g. attitudes), in other cases, several determinants will interact in the prediction of the intention (e.g. attitudes and subjective norms). The theory of planned behavior also postulates that subjective norms predict attitudes toward behavior (Ajzen, 1991).
Hence, since animosity is an attitudinal component (Jung et al., 2002), it stands to reason that it will be predicted by subjective norms - reasoning that is supported by previous research. Huang et al. (2010), for example, conducted a study in the context of the tense political relationships between Taiwan-Japan and China-Japan. Their study findings imply a positive and significant relationship between SNI and consumer animosity. These findings are in line with previous research, which suggests that consumers are more likely to be susceptible to norm influence in collectivistic rather than individualistic societies (Lee and Green, 1991). Huang et al.'s study and those of other researchers (Maher and Mady, 2010; Wang et al., 2013) have made significant contributions to furthering consumer behavior researchers' understanding of the consumer animosity phenomenon. However, the generalizability of these findings is limited as these studies were conducted among consumers more susceptible to norm influence. Previous research suggests that SNI varies with the type of society (i.e. individualistic vs collectivistic society). Hence, it would be of great value to consumer behavior scholars and practitioners alike to examine whether consumer animosity is influenced by SNI in societies which are moderately collectivistic, such as Russia and Israel (Hofstede, 2001; Oyserman et al., 2002; Zeigler-Hill and Besser, 2011). Thus, the following is hypothesized: H2a. Jewish-Israeli consumers' SNI will be positively associated with the level of animosity harbored toward Germany. SNI is a critical determinant of WTB (Hoyer et al., 2008). Grinblatt et al.
(2008), for instance, analyzed the automobile purchasing behavior of Finnish consumers. The authors found a positive association between SNI and the purchase of automobiles. In particular, a consumer's car choice is likely to be influenced by the car make and model of his or her nearest neighbors. In the context of consumer animosity research, however, the relationship between SNI and WTB is quite different. Previous research suggests that SNI is negatively associated with WTB (Maher and Mady, 2010). Although the influence of SNI on WTB differs in each one of the abovementioned contexts, the motive is identical (i.e. the desire to conform to group norms). In Grinblatt et al.'s (2008) study, consumers purchased the makes and models owned by their neighbors to show that they too could afford such cars. In Maher and Mady's (2010) study, however, consumers were reluctant to purchase the products made in the offending country because they wanted to demonstrate to their reference groups that they were not acting in defiance of their groups' norms. Thus, the following is hypothesized: H3a. SNI will be negatively associated with Jewish consumers' WTB German-made products.
Consumer animosity and WTB
A large body of research points to a negative relationship between consumer animosity and WTB (Cui et al., 2012; Rose et al., 2009; Wang et al., 2013). Previous research suggests that animosity affects consumers' WTB products made in the offending country not only in the short term (Ettenson and Klein, 2005; Shoham et al., 2006) but also in the long term (Shimp et al., 2004). Even though several decades have elapsed since atrocities like the Nanjing Massacre, which occurred during the period of Japanese occupation of parts of China, Chinese consumers are still reluctant to buy Japanese products (Klein et al., 1998). Likewise, Shimp et al. (2004) demonstrated that Southerners in the USA still maintain enmity toward Northerners over the Civil War and its aftermath.
This animosity by Southerners toward Northerners is pronounced in the reluctance of the former to purchase the products of the latter. A similar long-term effect of animosity has also been observed among American Jewish consumers, who are still unwilling to purchase German-made products despite the fact that over seven decades have elapsed since the Holocaust (Podoshen, 2009). Hence, the following is hypothesized: H4a. Consumer animosity will be negatively associated with WTB German-made products. The main objective of this study was to examine the effects of consumer animosity on conspicuous consumption in two research settings: Israel and Russia. More specifically, the study aimed to: examine the relationship between SNI and consumer animosity, examine whether SNI impacts consumers' WTB products made in the offending country, and study whether consumer animosity is associated with consumers' WTB products made in the offending country. Previous studies have emphasized the importance of examining the effects of animosity in contexts other than ones in which its trigger was an extreme historical event (Klein, 2002). To assess the stability of the proposed model (see Figure 1) and its applicability in various contexts, two contexts were tested: the Holocaust and the recent political discord between the USA and Russia over the Obama administration's imposition of economic sanctions on the latter. The two contexts were chosen for a number of reasons. First, they represent potentially differing levels of animosity toward an offending country (Germany vs the USA). Second, according to Hofstede (2001), Israel and Russia represent two different cultures; the former is relatively more individualistic (54) than the latter (39). Previous research suggests that consumers in collectivistic societies are more likely to harbor animosity toward a target country due to greater SNI (Huang et al., 2010). 
According to the realistic group conflict theory, a perceived threat from an out-group reinforces people's sense of belonging to their in-group (Levine and Campbell, 1972). Huang et al.'s research finding, along with the realistic group conflict theory, would suggest that the level of animosity is more likely to be closely linked to SNI levels among Russian consumers than Israeli consumers. Furthermore, examining the attitudes of Russian consumers toward American products is of practical importance to the many American firms marketing their goods to Russian consumers. American companies view Russia as one of their most important markets (Liuhto et al., 2016). Finally, previous research suggests that Jewish consumers still harbor animosity toward Germany, thereby making it a suitable context to study the proposed research model (Podoshen, 2009).
Method
This study employed the mall-intercept method to collect data from a sample of adult consumers in Tel-Aviv, Israel (Rose et al., 2009). The questionnaire was translated and back-translated with a technique suggested by Douglas and Craig (1983). Consumers' participation was solicited at the entrance to major malls, where roughly every tenth individual was asked to complete a questionnaire (Josiassen and Assaf, 2013). In cases where the tenth individual was in a group, only one of the group members was invited to take part in the study. Respondents were not informed about the focus of the study. The questionnaires were collected upon completion. The large number of passersby and the mix of individuals from all walks of life influenced the choice of locations. A total of 264 respondents were recruited. Of the questionnaires collected, 14 were eliminated due to incompleteness. Consequently, 250 were valid for analysis. Females composed 54 percent of the sample.
Most respondents were single (46 percent) and their monthly incomes were above the national average (57 percent), which, based on purchasing power parity (PPP) conversion rates, was equivalent to USD2,374[2] at the time of data collection (see Table AI).
Measures
Seven-point Likert scales (1=strongly disagree; 7=strongly agree) adapted from previous research were employed to test the relationships proposed in the hypothesized model. The hypothesized model comprised three dependent variables - animosity, WTB, and conspicuous consumption - and a single independent variable, SNI. General animosity was measured with three items adapted from Klein (2002). Similarly, three items from this source were also adapted to measure war animosity. Conspicuous consumption was measured using five items adapted from Marcoux et al. (1997). The original conspicuous consumption scale comprises a relatively large number of items. Due to concerns over the length of the questionnaire and the time necessary for its completion, a pilot study examined whether the original scale (14 items) could be reduced without undermining its validity. Thus, the two studies conducted following the pilot study used a shortened scale consisting of only five items (see Table AI). SNI was measured using three items adapted from Lee and Green (1991), and six items were adapted from Klein et al. (1998) to measure WTB. The questionnaire was pre-tested using a small sample of Israeli-Jewish consumers (n=20). The items employed in the research, their sources, Cronbach's α values, and average variance extracted (AVE) are presented in Table AII. Israeli-Jews comprise two major cultural subgroups (Shavit, 1990): Mizrahim (i.e. Jews of Asian or African descent) and Ashkenazim (Jews of European descent). Unlike Ashkenazi Jews, Mizrahi Jews were not victims of the atrocities committed by the Nazi regime.
Consequently, it may be assumed that Ashkenazim harbor a greater level of animosity toward Germany and would be less willing to buy products made there than Mizrahim. Hence, subcultural affiliation was a control variable in the relationship between animosity and WTB. In all, 38 percent of the sample were Ashkenazi, 43 percent were Mizrachi, and 19 percent categorized themselves as others.
Analysis
Prior to analysis, all relevant items were reverse-scored. Using AMOS 22, we employed structural equation modeling (SEM) to test the hypothesized paths. As reported in Table AI, Cronbach's α (cutoff=0.7) and AVE (cutoff=0.5) were measured using SPSS 21 to examine the convergent validity of the constructs (Fornell and Larcker, 1981). Items with loadings below the recommended 0.4 cutoff in the structural model were omitted from further analysis (Hair et al., 1999). Two items were omitted from the WTB scale and two additional items from the war animosity and general animosity scales (one item from each scale). Following the deletion of an item from the susceptibility-to-norm-influence construct, the scale's α=0.87. Noteworthy is the fact that one item from the general animosity scale ("I feel angry toward Germany") and two items from the war animosity scale ("I still feel angry toward Germany because of the Second World War" and "I cannot forgive Germany for what it did to the Jews in the Second World War") loaded on the same factor. Consequently, these were merged into a single scale, general animosity. Convergent validity was assessed by estimating composite reliability employing AMOS 22 and AVE using SPSS 21. In line with Fornell (1992), the construct reliability values of all latent variables were at or above the recommended threshold of 0.6 (see Table I). Discriminant validity was estimated along the lines recommended by Fornell and Larcker (1981).
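The reliability and model-comparison statistics used in the analyses (Cronbach's α against a 0.7 cutoff, AVE against a 0.5 cutoff, and the sequential χ² difference test used to compare nested models) can be reproduced outside SPSS/AMOS. The sketch below is illustrative only: the respondent data are simulated, the factor loadings are hypothetical, and only the χ² difference test uses figures actually reported in this paper (the Study 1 hypothesized model vs rival Model 1).

```python
import numpy as np
from scipy.stats import chi2


def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale totals
    return (k / (k - 1)) * (1 - item_var / total_var)


def average_variance_extracted(loadings):
    """AVE = mean squared standardized loading (Fornell and Larcker, 1981)."""
    loadings = np.asarray(loadings, dtype=float)
    return float((loadings ** 2).mean())


def chi_square_difference(chisq_a, df_a, chisq_b, df_b):
    """Chi-square difference (SCDT) between two nested models.

    The difference in chi-square is itself chi-square distributed with
    df equal to the difference in degrees of freedom.
    """
    delta = abs(chisq_b - chisq_a)
    ddf = abs(df_b - df_a)
    return delta, ddf, chi2.sf(delta, ddf)         # upper-tail p-value


# Illustrative reliability check: 250 simulated respondents, 3 Likert items
rng = np.random.default_rng(7)
construct = rng.normal(4.0, 1.2, size=(250, 1))    # shared latent score
items = np.clip(construct + rng.normal(0, 0.8, (250, 3)), 1, 7)
alpha = cronbach_alpha(items)

# Hypothetical standardized loadings, chosen only to demonstrate the AVE check
ave = average_variance_extracted([0.81, 0.76, 0.72])

# SCDT with the fit figures reported for Study 1:
# hypothesized model chi2=224.48, df=142; rival Model 1 chi2=265.00, df=168
delta, ddf, p = chi_square_difference(224.48, 142, 265.00, 168)

print(f"alpha={alpha:.2f}, AVE={ave:.2f}, SCDT={delta:.2f}, df={ddf}, p={p:.3f}")
```

With the reported figures, the difference test yields Δχ²≈40.5 with df=26 and a p-value between 0.025 and 0.05, consistent with the SCDT=40.51, p=0.03 reported for rival Model 1; the α and AVE values are computed on simulated data and hypothetical loadings, so they demonstrate the cutoff checks rather than the study's actual scores.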
Cronbach's α and AVE scores exceeded the recommended cutoffs (Hair et al., 1999). In all, 14 observed items were retained in the hypothesized model. The recommended cutoffs for latent factor models with between 12 and 30 observed items are at least 0.92 for the comparative fit index (CFI) and no more than 0.07 for the root mean-squared error of approximation (RMSEA) (Hair et al., 2006). Hence, the results of the hypothesized model point to an adequate fit (χ²=224.48, df=142, p=0.00, CFI=0.95, RMSEA=0.04). Two rival models were tested. Previous research suggests that age may be an antecedent of consumer animosity (Hinck, 2004; Klein, 2002). Hence, in rival Model 1, a path from age to animosity was drawn. The results of rival Model 1 point to a poorer model fit (χ²=265.00, df=168, p=0.00, CFI=0.94, RMSEA=0.04) than the hypothesized model. This is corroborated by a χ² difference test, which points to a significant difference between the models (sequential χ² difference test (SCDT)=40.51, df=26, p=0.03). Our main research objective was to explore the relationship between consumer animosity and conspicuous consumption. However, since according to previous research SNI is a predictor of conspicuous consumption (Tsai et al., 2015), we drew a path between the two constructs in rival Model 2, aiming to test whether the new path enhances the model fit. The results of rival Model 2 point to a poorer model fit (χ²=258.74, df=161, p=0.00, CFI=0.95, RMSEA=0.07) than the hypothesized model. This is corroborated by a χ² difference test, which points to a significant difference between the models (SCDT=34.26, df=19, p=0.01).
Results
Given the superior fit statistics for the hypothesized model vs both rival models, the reported results pertain to the findings of the hypothesized model (Table II). According to H1a, consumer animosity will be negatively associated with Jewish-Israeli consumers' conspicuous consumption tendencies.
This was confirmed (b=-0.40, t=-2.85, p<0.05). H2a posits that Jewish-Israeli consumers' SNI will be positively associated with the level of animosity toward Germany, and this was also supported (b=0.32, t=2.13, p<0.05). A negative association between SNI and WTB German-made products was posited. The path was not significant, thereby not supporting H3a (b=-0.22, t=-1.92, p>0.05). H4a posits that consumer animosity will be negatively associated with WTB German-made products, and this too was corroborated by the data (b=-0.31, t=-2.19, p<0.05).
Multi-group analysis
Since cultural subgroup affiliation was identified as a control variable in the study, a multi-group SEM analysis was performed to examine whether the predictive power of the independent variables varied with respondents' cultural group affiliation. Mizrachi Jews did not directly experience the atrocities of the Holocaust. Consequently, they were expected to harbor more positive attitudes toward the purchase of products made in Germany vis-a-vis Ashkenazim. A fully constrained model was created and compared to the original unconstrained model across groups. Cultural affiliation (Ashkenazi or other) formed the unit of analysis. A χ² difference test showed that the two models differed significantly across subgroup affiliations (SCDT=81.61, df=24, p<0.001). The path from SNI to WTB was insignificant and was therefore removed from the model. The results of the multi-group analysis point to a significant relationship between SNI and consumer animosity. The study findings suggest that SNI is a stronger predictor of consumer animosity among the Mizrachim (b=0.41, t=2.85, p<0.05) than the Ashkenazim (b=0.32, t=2.13, p<0.05). However, a χ² difference test did not uphold the significance of this difference (SCDT=0.2, df=1, p=0.65). Furthermore, a significant association was found between consumer animosity and WTB products made in Germany.
Consumer animosity is a stronger predictor of WTB products made in Germany among Ashkenazim (b=-0.31, t=-2.19, p<0.05) compared to the Mizrachim (b=-0.24, t=-5.38, p<0.001). Here again, however, the χ² difference test points to no significant group difference concerning this relationship (SCDT=0.16, df=1, p=0.68). There was no significant relationship between SNI and WTB among either the Ashkenazim (b=-0.23, t=-1.92, p>0.05) or Mizrachim (b=-0.09, t=-1.24, p>0.05). The study results also point to a significant association between consumer animosity and conspicuous consumption among Ashkenazi (b=-0.40, t=-2.85, p<0.05), but not Mizrachi Jews (b=-0.03, t=-0.75, p>0.05). Here the χ² difference test upheld the significance of this finding (SCDT=7.86, df=1, p<0.05).
Discussion of Study 1 findings
Previous research demonstrates that conspicuous consumption is motivated by consumers' desire to enhance their self-concept and to impress others (O'Shaughnessy and O'Shaughnessy, 2002). The results illustrate that those who harbor animosity toward an offending country lack the desire to consume products originating from it as a means either to improve their self-concept or to impress others. Perhaps certain Israeli-Jewish consumers avoid conspicuous German products merely because they cannot afford to purchase luxury items. However, when income was included as a control variable in the relationship between animosity and conspicuous consumption, no significant difference was found between income level and WTB conspicuous German products (SCDT=30.69, df=26, p=0.16). In line with previous research (Huang et al., 2010), a positive and significant relationship between SNI and consumer animosity was found. This relationship may be accounted for by the social identity and realistic group conflict theories.
The observed relationship between SNI and consumer animosity suggests that Israeli-Jews susceptible to normative influence believe their referents have negative opinions of Germany. Hence, they feel the need to comply with them and maintain this animosity toward Germany. In contrast to previous research (Maher and Mady, 2010), an insignificant association was observed between SNI and WTB. However, a multi-group analysis showed that the strength of this relationship varies with subgroup affiliation: the negative association between SNI and WTB approached significance among Ashkenazi Jews but was clearly absent among Mizrachi Jews. Although, at first glance, these findings may seem surprising, the literature does provide at least one plausible explanation. A study conducted by Thomas et al. (2015) suggests that consumers may engage in what they term "hidden consumption behavior." This behavior is more likely to occur when the chances of being caught are minimal and when the sanctions (in case of being caught) are negligible. It would, therefore, seem that Mizrachi Jews are willing to consume German products in the privacy of their homes, both because the chances of being caught by fellow Mizrachim are low and because the potential disapproval if caught is not likely to be severe. However, an Ashkenazi caught consuming a German product may endure more severe social disapproval from fellow Ashkenazim. This proposition should be regarded with care, as hidden consumption behavior was not studied in either one of the present research settings. Hence, further research would be necessary to verify the explanation advanced here. Moreover, in line with past research, a negative relationship was found between consumer animosity and WTB (Klein et al., 1998; Shimp et al., 2004). A stronger relationship was observed between consumer animosity and WTB among Ashkenazi vs Mizrachi Jews.
The observed differences in the predictive power of consumer animosity in each one of the subsamples may be accounted for by a significant difference between Ashkenazi (M=5.49, SD=1.3) and Mizrachi (M=3.87, SD=1.6) Jews in the level of animosity harbored toward Germany, as corroborated by an independent samples t-test (ΔM=1.62, t=6.34, p<0.01). Finally, there was a significant relationship between consumer animosity and conspicuous consumption. However, a multi-group analysis revealed that this relationship is only significant among Ashkenazi Jews. This finding may also be accounted for by the fact that Mizrachi Jews harbor lower levels of animosity toward Germany than Ashkenazi Jews. In sum, differences in the effect of animosity on WTB may be partially attributable to consumers' cultural subgroup affiliation.
Background
Russians and Americans have a long history of political conflict. The Cold War, which erupted after the Second World War, involved a four-decade power struggle between the two countries. The lingering historical tensions between the two nations have been exacerbated by recent events. In 2014, the Obama administration imposed sanctions on certain Russian individuals and businesses (BBC News, 2014) in response to Russia's annexation of Crimea and the crisis in Eastern Ukraine. These sanctions seem to have had economic ramifications for Russian consumers, as prices of imported products have increased substantially (Boghani, 2015). Previous research suggests that political discord between countries is likely to lead to consumer animosity (Huang et al., 2010; Maher and Mady, 2010). However, as opposed to the genocidal character of the Holocaust, the recent political discord between the USA and Russia is presumed to have a less profound, long-term psychological effect on Russian consumers regarding their attitudes toward the consumption of American products.
Hypotheses
The hypotheses for Study 1 were modified to fit the particular context of the latest political discord between Russia and the USA. Thus, the following are hypothesized:
H1b. Animosity will negatively affect Russian consumers' conspicuous consumption of American products.
H2b. Russian consumers' SNI will be positively associated with the level of animosity harbored toward the USA.
H3b. SNI will be negatively associated with Russian consumers' WTB US-made products.
H4b. Consumer animosity will be negatively associated with Russian consumers' WTB US-made products.
Method
Measures
The measures employed in Study 1 were modified to fit the particular context of Study 2 (Table AII). Respondents were not informed about the focus of the study. As in Study 1, the hypothesized model tested included three dependent variables (i.e. animosity, WTB, and conspicuous consumption) and a single independent variable (i.e. SNI). Prior to administering the questionnaire, the survey was pre-tested on a small sample of Russian consumers (n=20). Several items were rephrased following participant feedback.
Procedure
Procedures followed the mall-intercept method employed in Study 1. The questionnaire was translated and back-translated with a method suggested by Douglas and Craig (1983). Data were collected from adult consumers living in St Petersburg, Russia. Over a period of ten days, approximately every tenth individual (or a single individual if part of a group) was approached at the entrance to one of several major malls in the city. A total of 259 Russian respondents were recruited. Of the 259 questionnaires collected, 12 were eliminated due to incompleteness. As a result, 247 questionnaires were valid for analysis. Females made up 52 percent of the sample.
Most respondents were single (44 percent), and their monthly incomes were above the national average (52 percent), which, based on PPP conversion rates, was equivalent to USD2,460 (see footnote 2) at the time of data collection (see Table AI).
Analysis
Analyses followed those employed for Study 1 (see Table III). Two items were deleted from the WTB scale, and a total of three items were deleted from the SNI, war animosity, and general animosity scales. Similar to the Israeli study, one item from the general animosity scale ("I feel angry toward the USA") and two items from the war animosity scale ("I am angry with the USA's interference in Russia's affairs" and "I cannot forgive the USA for its policy of sanctions toward my country") loaded on the same factor. Consequently, these three items were merged into a single scale, general animosity. The items included in the study are shown in Table AII. Similar to Study 1, the number of observed items included in the research model was 14. The results of the research model pointed to an adequate fit (χ²=288.81, df=148, p=0.00, CFI=0.93, RMSEA=0.06). As in Study 1, two rival models were tested. In rival Model 1, a path was drawn from age to animosity. The results of rival Model 1 (χ²=245.18, df=126, p=0.00, CFI=0.92, RMSEA=0.07) were inferior to those of the hypothesized model. This was confirmed by a χ² difference test (SCDT=43.62, df=22, p=0.00). In another rival model (rival Model 2), and in line with Study 1, a path was drawn from SNI to conspicuous consumption. The results of rival Model 2 also point to a worse model fit (χ²=254.68, df=130, p=0.00, CFI=0.94, RMSEA=0.06) in comparison to the hypothesized model. This was confirmed by a χ² difference test (SCDT=34.13, df=18, p=0.01).
Results
H1b posits a negative association between consumer animosity and the tendency to conspicuously consume American products, and this was confirmed by the data (b=-0.27, t=-3.08, p<0.05).
H2b posits a positive association between SNI and consumer animosity, and again this was corroborated by the study findings (b=0.47, t=5.01, p<0.001). A negative association was posited between SNI and WTB products made in the USA. The observed relationship was negative but insignificant (b=-0.11, t=-1.02, p>0.05), thereby refuting H3b. A negative relationship was also posited between consumer animosity and WTB products made in the USA. This was corroborated by the data, hence confirming H4b (b=-0.40, t=-2.93, p<0.05) (Table IV).
Discussion of Study 2 findings
Study 2 supports the stability of the hypothesized model and the generalizability of the findings to various contexts. Previous research has demonstrated that conspicuous consumption is motivated by such factors as consumers' desire to enhance their self-concept and to impress others (O'Shaughnessy and O'Shaughnessy, 2002). The economic sanctions imposed by the USA on Russia have led to price increases for goods imported from the USA (Boghani, 2015), which are felt in the pockets of Russian consumers. Hence, it is understandable why Russian consumers may lack the desire to consume American-made products as a means to improve their self-concept or impress others. One may well argue that some Russian consumers avoid conspicuous products made in the USA merely because they cannot afford to purchase luxury American items. However, when income was included as a control variable in the relationship between animosity and conspicuous consumption, similar to the findings observed in Study 1, no significant difference was found between the various salary levels (i.e. below-average income vs above-average income) in conspicuous consumption (SCDT=43.62; p=0.00). Similar to Study 1, there was a negative and significant relationship between consumer animosity and WTB. This finding is in line with a large body of research which demonstrates that animosity affects consumer attitudes (Klein et al., 1998; Shimp et al., 2004).
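The support/refute verdicts above follow the usual rule for path estimates: a path is significant at the 5 percent level (two-tailed) when |t| exceeds roughly 1.96. A minimal sketch over the four reported Study 2 paths (the labels are ours, added for readability):

```python
# Significance check for the reported path estimates:
# |t| > 1.96 corresponds to p < 0.05, two-tailed.
paths = {
    "H1b: animosity -> conspicuous consumption": (-0.27, -3.08),
    "H2b: SNI -> animosity":                     ( 0.47,  5.01),
    "H3b: SNI -> WTB":                           (-0.11, -1.02),
    "H4b: animosity -> WTB":                     (-0.40, -2.93),
}
for label, (b, t) in paths.items():
    verdict = "supported" if abs(t) > 1.96 else "not supported"
    print(f"{label}: b={b}, t={t} -> {verdict}")
# Only H3b fails the threshold, matching the reported results.
```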
In line with the findings of Study 1, a positive and statistically significant relationship was observed between SNI and consumer animosity. Previous research demonstrates that consumers in collectivistic societies are more susceptible to norm influence than those living in individualistic societies (Mourali et al., 2005). Consistent with the findings observed in Study 1, the results point to an insignificant relationship between SNI and WTB. The findings of both studies describe a negative association between consumer animosity and conspicuous consumption. Likewise, both studies point to a negative relationship between consumer animosity and WTB, and both confirm the hypothesized positive association between SNI and consumer animosity. However, both studies conducted in the framework of the present research failed to confirm a relationship between SNI and WTB. The findings of Study 1 suggest that this apparent lack of an observed relationship may be accounted for by the moderating role of cultural group differences. A further difference between the two studies pertains to mean scores on the general animosity scale: in the Russian sample, the mean score was higher (M=4.84) than in the Israeli sample (M=4.17). This finding is in line with previous research suggesting that during a conflict animosity levels are likely to be higher than when tensions subside (Heslop et al., 2009). The results of the present research point to the importance of taking into account not only the level of consumer animosity but also the nature of the consumption context (conspicuous vs inconspicuous consumption).
Managerial implications
First, the findings of the present research suggest that marketers need to consider the potential effects of consumer animosity on WTB not only in strictly collectivistic societies, but also in societies that are relatively more individualistic.
Firms targeting consumers harboring animosity toward the former's country of origin should focus their advertising and promotion campaigns on products used for inconspicuous consumption rather than on conspicuous consumption. Such a focus is especially pertinent in collectivistic societies and when the target market is susceptible to norm influence. Second, firms based in a country targeted by consumer animosity may consider relocating their manufacturing operations to a third country in order to avoid the marketing repercussions of the negative political associations. Alternatively, they could move manufacturing and/or other operations to the country where the target market lives. Finally, firms can use imagery to mask an undesirable COO stemming from animosity (D'Antone and Merunka, 2015).
Implications for theory
The study makes several theoretical contributions. First, by empirically demonstrating an association between consumer animosity and conspicuous consumption, it contributes to the consumer animosity literature by suggesting not only that consumer animosity affects consumers' WTB products from the target country, but also that this effect is partially context specific (conspicuous vs inconspicuous consumption). Past research associated SNI with collectivistic societies, but not with individualistic ones (Lee and Green, 1991; Mourali et al., 2005). The current research suggests that consumers in moderately collectivistic societies like Israel are also susceptible to norm influence. Hence, the present study also contributes to the body of research focusing on the effects of social influence on consumer behavior. In particular, the study implies that societies which may have been overlooked in previous SNI research due to their relatively higher score on the individualism dimension (Hofstede, 2001) should be considered in future consumer animosity research.
Consistent with previous research on country-of-origin effects and subcultural affiliation (Laroche et al., 2003), the present investigation points to the importance of taking into account the potential effects of cultural subgroup differences on consumer behavior. However, the theoretical contribution of the present study lies in the observed moderating role of subgroup affiliation in the relationship between SNI and WTB in the context of consumer animosity. In other words, cultural subgroup belonging may strengthen the effect of SNI on WTB when the sample comprises consumers who have been either directly or indirectly victimized by the country which is the target of animosity and who are moderately collectivistic. The present research has two main limitations. First, although the present study was conducted in two very different research settings, extrapolation to other contexts must be treated with care. Second, data were collected from a convenience sample of consumers and from a single major city in each country. Hence, the findings do not necessarily reflect the attitudes and behavioral patterns of the general consumer population in either one of the countries. Future research would benefit from testing the hypothesized model with samples drawn from several major cities in each of the countries. The findings of the present study suggest that consumers may be willing to consume products associated with the offending country in the privacy of their homes because of the belief that the chances of being observed doing so are low. However, this proposition should be regarded with care, as hidden consumption behavior was not studied in the present research. Hence, further research would be needed to verify the explanation advanced here. Similarly, the results of the present study suggest that consumers with strong negative feelings toward a country may be reluctant to consume its products conspicuously.
Certain consumers may not consume the products of a country in public, not because of personal feelings of animosity toward the country itself, but rather due to normative influence and the desire to conform to the norms dictated by one's in-group. However, these very consumers may not feel guilty consuming the products made in the target country in the privacy of their homes. The underpinnings of private vs public consumption in the context of consumer animosity would be a valuable research avenue to undertake. The findings of the present research shed light on the importance of the consumption context to the study of consumer animosity. In particular, they point to the complexity of the consumer animosity construct, emanating primarily from its broad social underpinnings.
The research findings suggest that consumer animosity may be a stronger predictor for the consumption of conspicuous products than for the consumption of necessity goods.
[SECTION: Purpose] The current differentiation strategies in the fresh produce (fruit, vegetable, and salad) industry are analysed in the light of the new procurement policies carried out by retailers at the global level. These retailers pay growing attention to product differentiation and innovation as a means of putting new value (rather than simply ripping out costs) into the supply chain. Differentiation strategies are analysed on two levels. On a theoretical level, the main findings of the literature on product differentiation and market structure are reviewed in order to assess the opportunities and the possible welfare effects of differentiation strategies in the food market. On an empirical level, the current structure and organisation of the fresh produce market are analysed, using both data at the aggregate level and the findings of a case study. The case study takes a single dyadic case approach, in which a primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers. The findings of the study indicate that in the fresh produce industry there are good opportunities for successful differentiation strategies. Nevertheless, actors at the different vertical stages of the marketing channel derive very different advantages from them, depending on their "power" to lead the channel. Moreover, the fact that product differentiation tends to foster an oligopolistic market structure might have generally negative welfare effects. Differentiation strategies are pervasive in market economies and are a powerful means of obtaining competitive advantages. This is because the "master" of competitive advantage offers superior value which "stems from offering lower prices than competitors for equivalent benefits; or providing unique benefits that more than offset a higher price" (Porter, 1985, p. 3). Firms create value by cost leadership or differentiation (Porter, 1985).
Using the latter strategy, firms differentiate their product to avoid ruinous price competition and seek some form of monopoly rent. Differentiation offers firms market power, naturally resolving the Bertrand paradox (whereby two undifferentiated players reduce price in order to capture the market but find themselves in a Nash equilibrium without profit). The industrial economics literature focuses on the effects of differentiation strategies on market structure, firms' performances, and welfare (Beath and Katsoulacos, 1991). A basic tenet is the distinction between horizontal and vertical differentiation. Products are said to be horizontally differentiated when, if offered at the same price, consumers asked to rank them would do so differently, showing different preferences for different varieties. They are said to be vertically differentiated if, when offered at the same price, all consumers choose to purchase the same one, that of highest quality. Horizontal and vertical differentiation lead to quite different general results in terms of market structure. Horizontal differentiation is the implicit assumption at the core of models of monopolistic competition and has basically given rise to two classes of models, based on the assumptions of symmetric consumer preferences (or a representative consumer) and asymmetric preferences. In the case of symmetric preferences, one brand is an equally good substitute for any other and the consumer's actual choice will depend on income and relative prices. When preferences are asymmetric, brands are not all equally good substitutes: if a consumer's ideal brand is i, then the consumer prefers brands that are "near" to i in terms of its specification (i.e. in the space of product characteristics, in the Lancaster lexicon) more than those that are "far" from it.
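The horizontal/vertical distinction can be made concrete with a toy choice model (all variant names, positions, and numbers below are illustrative, not drawn from the literature reviewed): under horizontal differentiation, consumers with different ideal points rank the same equally priced variants differently; under vertical differentiation, everyone agrees on the quality ranking and, at equal prices, picks the highest quality.

```python
# Toy illustration of horizontal vs vertical differentiation
# (hypothetical variants and numbers; equal prices assumed throughout).

# Horizontal: variants differ in a taste characteristic; each consumer
# prefers the variant closest to their ideal point on a 0-1 line.
variants = {"A": 0.2, "B": 0.8}            # positions in characteristic space
consumers = [0.1, 0.4, 0.6, 0.9]           # consumers' ideal points
horizontal_choices = [
    min(variants, key=lambda v: abs(variants[v] - ideal)) for ideal in consumers
]
print(horizontal_choices)                  # ['A', 'A', 'B', 'B'] -> rankings differ

# Vertical: variants differ only in quality; at equal prices every
# consumer picks the highest-quality variant, whatever their taste.
qualities = {"A": 3.0, "B": 5.0}
vertical_choices = [max(qualities, key=qualities.get) for _ in consumers]
print(vertical_choices)                    # ['B', 'B', 'B', 'B'] -> unanimous choice
```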
Asymmetric preferences are assumed in location models, whereas symmetric preferences are assumed in models grounded in the Chamberlin paradigm. The simplest and seminal location model is the Hotelling model of a spatial duopoly, sanctioning the famous principle of minimum differentiation. Successive studies have shown that the Nash equilibrium in the Hotelling model relies on its restrictive assumptions, such as the zero conjectural variation assumption and the exogenously fixed prices and number of firms. When these assumptions are relaxed, a unique Nash equilibrium does not necessarily occur. D'Aspremont et al. (1979), for example, starting from a different assumption on the initial location of the firms, show that the Hotelling model allows for a solution where the sellers seek to move as far away from each other as possible. In the free-entry circular model of Salop (1979), equilibrium is found where each firm earns zero profits and firms are symmetrically located around the circumference of the circle. The Chamberlin (1933) large group model leads to the classical long-run monopolistic equilibrium, in which profits are zero and the "dd" curve is tangential to the average cost curve. As long as the "dd" curve that each firm faces still has some negative slope, each firm will produce at a point above the level of minimum average cost. Models postulating horizontal differentiation generally back equilibria characterised by many firms earning zero profits and prices above marginal costs. These models raise the question of whether the market will produce too many or too few brands compared with the social optimum, an issue previously addressed by Spence (1976).
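Salop's circular model admits a simple closed form that can be checked numerically. In the textbook symmetric version, with unit consumer density around a circle of unit circumference, linear transport cost t, marginal cost c, and fixed entry cost F, each of n firms charges p = c + t/n and earns t/n² − F, so free entry drives n to √(t/F); the welfare-optimal number of firms is half that, which is the excess-variety result. A sketch under these textbook assumptions (the parameter values are illustrative):

```python
import math

# Salop (1979) circular-city model, textbook symmetric equilibrium.
# Unit-circumference circle, unit consumer density, linear transport cost t,
# marginal cost c, fixed entry cost F (parameter values illustrative).
t, c, F = 1.0, 0.5, 0.01

n_free_entry = math.sqrt(t / F)        # zero-profit number of firms
p_free_entry = c + t / n_free_entry    # equilibrium price: c + sqrt(t * F)

print(round(n_free_entry, 2))          # 10.0 firms
print(round(p_free_entry, 2))          # 0.6

# Per-firm profit at the free-entry n is zero (up to rounding):
profit = (p_free_entry - c) * (1 / n_free_entry) - F
print(abs(profit) < 1e-9)              # True -> firms earn zero profits

# The welfare-optimal number of firms is half the free-entry number,
# i.e. free entry yields too much variety:
n_optimal = 0.5 * math.sqrt(t / F)
print(n_free_entry > n_optimal)        # True
```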
The result is generally a suboptimal number of firms/products, with too many or too few firms in the Chamberlin representative consumer model (depending on the parameters of the model) and unambiguously too much variety in the localised competition circle model. While a perfect equilibrium is often problematic in the horizontal case, a perfect equilibrium exists in the vertical case consistent with the finiteness property, which states that at equilibrium there is a limit to the number of products for which price can exceed unit variable cost and which have a positive share of the market. The finiteness property was introduced by Shaked and Sutton (1983). The markets in which this is a feature of equilibrium are referred to as natural oligopolies. In the model of Shaked and Sutton, the two conditions for finiteness are that the unit variable costs associated with increased quality rise more slowly than consumers' willingness to pay for it, and that the main burden of quality improvement falls on fixed rather than variable costs. An important development of this model is Shaked and Sutton's (1987) demonstration that a weak version of the finiteness property still holds when a mix of horizontal and vertical differentiation is accounted for (the pervasive real-world situation, where product differentiation never falls neatly under the ideal type of purely vertical or purely horizontal). Summarising, differentiation is always a source of market imperfection and welfare loss. In the case of pure horizontal differentiation these effects are mainly linked to inefficient scales of production or to suboptimal product variety, whereas the market structure approaches the competitive one. In the vertical (or mixed) case the negative welfare effects are linked to the oligopolistic structure emerging as the market equilibrium.
The limit theorems describing horizontal differentiation state that, as the market gets large enough, an arbitrarily large number of firms, each with a very small market share, can co-exist in equilibrium. When carrying out differentiation policies, firms will earn supernormal profits even though the competitive game is based on the assumptions of non-cooperative Bertrand behaviour and free entry. This result is in contrast to both the structure-conduct-performance paradigm and entry-deterrence theory, and is an example of a case where structure (the number of firms) and performance are endogenously determined. The economic literature just mentioned refers to the analysis of one industry at a time, that is, to the analysis of competition and market structure at the horizontal level (inter-brand competition). In the food sector the set of prices, qualities, and varieties that actually faces final consumers depends on the strategies carried out by different actors at different stages of the distribution channel. These strategies are the results of horizontal as well as vertical competition. Vertical competition has traditionally been addressed by the channel literature modelling different channel structures in a manufacturer-retailer relationship.
Traditionally, four ideal types of structure have been considered (Choi, 1996): the exclusive dealer channel (one manufacturer supplying one retailer); the monopoly common retailer channel (two manufacturers supplying the same unique retailer); the monopoly manufacturer channel (a unique manufacturer supplying two retailers); and the duopoly common retailer channel (two manufacturers both supplying two retailers). The topic of the channel literature has been the analysis of channel coordination/control problems between manufacturers and their retailers and the analysis of vertical strategic interaction, the latter defined in terms of "the direction of channel member's reaction to the action of its channel partners within a given demand structure" (Lee and Staelin, 1997, p. 185). Previous literature, taking for granted the bargaining power of manufacturers, has focused on the incentive schemes used by manufacturers to induce retailers to choose the strategies that maximize the total channel profit, while appropriating the largest share of it. Choi (1991), for example, discusses the different forms of governance for the achievement of the maximum channel profit. Because such studies have generally been applied to non-grocery sectors with few national brands and frequent exclusive selling agreements, the problems of channel coordination with regard to differentiation as well as pricing behaviour in a multi-manufacturer, multi-retailer setting (the typical channel setting for the food industry) have received little attention.
Starting from the insights of Choi (1991), successive works have explicitly addressed the problem of channel coordination and differentiation in grocery sectors (Avenel and Caprice, 2006; Choi, 1996; Choi and Coughlan, 2006; Ellickson, 2004; Lee and Staelin, 1997). Choi (1991) first analyses a channel structure with multiple-brand dealers, called common retailers, that fits well the typical structure of food retailing (department stores, supermarkets, and convenience stores). He studies a duopoly model of manufacturers who sell their products through a common independent retailer. He considers three different rules of the duopoly game, which account for different power balance scenarios within the channel:
1. A manufacturer Stackelberg game (in which markets are characterised by leaders and followers), where the manufacturers play the role of Stackelberg leaders with respect to the retailer by taking the retailer's reaction function into consideration for their respective wholesale price decisions.
2. A vertical Nash game, where neither the manufacturer nor the retailer can influence the counterpart's price decision (i.e. the manufacturer conditions its wholesale price on the retail price and vice versa).
3. A retailer Stackelberg game, where retailers play the role of Stackelberg leaders.
While the first and the third games apply to situations in which a few powerful manufacturers (retailers) supply (buy from) many retailers (manufacturers), the second game fits a situation where power is quite balanced in the relationship. Choi solves these models under both the assumption of linearity and of nonlinearity of the demand function, finding contradictory results. Moreover, he solves the models under different assumptions on the degree of product substitutability between the manufacturers' brands, in such a way as to introduce the analysis of the effect of product differentiation on channel competition.
Also in this case the results are affected by the form of the demand function, with contradictory results (for instance, he finds that less differentiation leads to increased prices and profits for all the members of the channel). Choi (1996) extends the previous model by introducing a differentiated duopoly common retailer channel. He analyses the pricing strategies of duopoly manufacturers who produce differentiated products and duopoly retailers who sell both products and carry out store differentiation strategies. Both product and store differentiation are assumed to be horizontal and, as in the previous work, three games are considered (vertical Nash, manufacturer Stackelberg, and retailer Stackelberg). The assumed demand function is adjusted so as to explicitly take into account the two differentiation levels (introducing two parameters, for product and store differentiation) and to overcome the contradictory results of the previous model as regards channel profit and differentiation. The Stackelberg games are quite different from the previous article because, besides the vertical competition, two horizontal levels of competition must be modelled: the manufacturer level and the retailer level. Accordingly, the equilibrium concept employed is the subgame-perfect Stackelberg equilibrium. The results attained by the model are summarised in the following seven propositions (Choi, 1996, pp. 125-129):
1. A Stackelberg channel leadership by either manufacturer or retailer results in higher retail prices than those of the Nash game.
2. Given a set of differentiation parameters, a channel member benefits by playing the Stackelberg leader at the expense of the other channel member, who becomes the follower.
3. Total channel profit is larger when there is no channel leadership. However, vertical Nash is not a stable structure, because each channel member has an incentive to become a leader.
4.
Wholesale prices (retail margins) increase as products (stores) are more differentiated. On the other hand, wholesale prices (retail margins) decrease as stores (products) are more differentiated. Overall, retail prices increase as products and stores are more differentiated.
5. Product (store) differentiation benefits manufacturers (retailers), but at the same time hurts retailers (manufacturers). Therefore, manufacturers want more product differentiation and less store differentiation, while retailers want the reverse.
6. Product (store) differentiation and the manufacturer (retailer) Stackelberg leadership have a positive synergy effect on the manufacturer (retailer) profits.
7. The total profit-maximizing combinations of product and store differentiation are not stable because each channel member has an incentive to differentiate unilaterally.
These results are consistent with the general wisdom that differentiation is used to mitigate price competition and that it tends to produce negative welfare effects. In the analysed case the combined vertical-horizontal competition produces non-stable equilibria that fail to maximise the total channel profit, as a consequence of the conflicting interests of retailers and manufacturers, opening the question of whether a cooperative solution could lead to welfare improvements. Moreover, the sketched channel structure fits the current situation of food marketing channels, both in the double level of differentiation and in the vertical power asymmetry that pushes towards non-cooperative vertical forms of coordination, where both parties seek to take the leadership (and retailers actually seem to accomplish it). Avenel and Caprice (2006) model a vertical structure with a vertically differentiated duopoly at the manufacturer level and two retailers who differentiate through the chosen product line (i.e. each of them sells one or both of the high and low qualities offered by the manufacturers).
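The leadership result in Proposition 1 can be illustrated with a much simpler bilateral channel than Choi's duopoly setting: one manufacturer, one retailer, linear demand q = a − p, and marginal cost c. Closed-form textbook results give a retail price of (3a + c)/4 under manufacturer Stackelberg leadership versus (2a + c)/3 under a vertical Nash game, so leadership yields the higher retail price. A sketch of the arithmetic (a deliberate simplification of Choi's model, with illustrative parameter values):

```python
# Bilateral channel with linear demand q = a - p and marginal cost c
# (a stripped-down one-manufacturer/one-retailer illustration of
# Choi's Proposition 1; parameter values are illustrative).
a, c = 10.0, 2.0

# Manufacturer Stackelberg: the retailer's best response is p = (a + w) / 2;
# the manufacturer anticipates it and sets w = (a + c) / 2.
w_stack = (a + c) / 2
p_stack = (a + w_stack) / 2              # = (3a + c) / 4

# Vertical Nash: each side takes the other's margin as given,
# giving w = (a + 2c) / 3 and retail margin m = (a - c) / 3.
w_nash = (a + 2 * c) / 3
m_nash = (a - c) / 3
p_nash = w_nash + m_nash                 # = (2a + c) / 3

# Vertically integrated benchmark: p = (a + c) / 2.
p_integrated = (a + c) / 2

print(p_stack)           # 8.0
print(round(p_nash, 3))  # 7.333
print(p_integrated)      # 6.0
# Leadership raises the retail price above the vertical Nash price,
# and both exceed the integrated price (double marginalization).
```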
The focus is on the analysis of the effects of different vertical contractual arrangements on product line differentiation, given different settings of vertical strategic interaction and different levels of costs for quality. Even if this model seems to apply better to the non-grocery sector (given its assumption of the manufacturer as channel leader and the kinds of contractual arrangements examined, that is, exclusive dealing, vertical integration, and franchise fees), it can be of some interest for those segments of the food market, such as new functional and nutraceutical products, that imply a vertical differentiation strategy fed by heavy sunk investments in research and development by powerful food companies. Choi and Coughlan (2006) investigate the positioning problem of private labels considering the differentiation strategies carried out by national brands, and the consequent product-line pricing strategies carried out by the retailer. They model a manufacturer Stackelberg game where the manufacturer determines the wholesale price and the quality level of his national brand and the retailer chooses:
* the optimal level of vertical differentiation of her store brand from the national brand;
* the degree of substitutability between the national and the store brand;
* the retail margin for the national brands; and
* the price of the store brand.
The equilibrium concept is a subgame-perfect equilibrium in which the second-stage price equilibrium is reached immediately after the differentiation decisions. In order to simultaneously take into account the effects of horizontal and vertical differentiation, the demand function used in the model is derived from a consumer utility function that contains a preference parameter for each product (vertical differentiation) and a parameter measuring the degree of substitutability with respect to other products.
The results of the model for the case of two national brands and one store brand suggest that if the quality levels of the two national brands are equal and they are substantially horizontally differentiated, imitating either brand is optimal for the private label. However, when the national brands are allowed to be vertically differentiated, the private label is better off imitating the higher quality brand. Positioning in between is never an optimal solution. In contrast, when the two national brands are horizontally undifferentiated, the private label's better response is to horizontally differentiate from both national brands. A consequence of these results is that the more the national brands differentiate, the more store brands carry out imitative strategies, leading to head-to-head competition that pushes national brands towards further differentiation and/or advertising investments. Because high differentiation and advertising investment are sources of market power, these findings are consistent with the store brand literature suggesting that the anticompetitive effects of store brands can be greater than the competitive ones (Cotterill and Putsis, 2000; Kim and Parker, 1997, 1999). To complete this short review of the main findings attained so far by the literature on differentiation and marketing channels, it is worth quoting a recent study by Ellickson (2004), who empirically applies Sutton's theory of endogenous sunk costs and vertical differentiation to the supermarket industry in the USA. During the 1980s and 1990s, the consolidation process in this industry was driven by the introduction of innovative automated distribution and procurement systems. If one assumes that the level of concentration is determined by the economies of scale and scope associated with these innovations, as markets grow (and these economies are exploited) the level of concentration should decrease.
In contrast, in about 50 spatially defined markets in the USA, the evidence is of a stable small number of firms (three to six) capturing the majority of the market, independent of the population, with a competitive fringe of smaller retailers capturing a minor share of the market (Ellickson, 2004). Ellickson (2004) builds and tests a model demonstrating that such a structure is a real "natural oligopoly" stemming from a competitive game among the leader firms based on growing vertical differentiation associated with increasing sunk costs. In his model, supermarkets compete by offering a greater variety of products (where variety is considered a purely vertical form of product differentiation). This implies larger stores and therefore larger sunk costs that discourage entry by other firms. As a consequence, the quality provided by the oligopolists (proxied by store size) should increase with the size of the market. In other terms, high concentration and escalation in quality seem to be both characteristic features of the supermarket industry. The previous section has shown how, in order to maintain their competitive advantage, firms continuously increase their quality effort, whether in the horizontal competitive game (manufacturers-to-manufacturers and retailers-to-retailers) or in the vertical competitive game (manufacturers-to-retailers). Once a differentiation strategy has been initiated, it continues through time, especially when quality (vertical) rather than feature (horizontal) differentiation is involved. Consistent with the general findings of economic theory, the channel literature suggests that vertical differentiation, more than horizontal differentiation, tends to be associated with a high degree of industry concentration and market power (Hingley and Lindgreen, 2002; White, 2000).
In any case, the equilibria (prices and market structures) at any level of the channel depend on a complex interplay between strategies carried out at the horizontal and at the vertical level; power asymmetries between upstream and downstream firms; and the kinds of governance structures along the channel (Hingley, 2005a). With regard to the fresh produce sector, at least three hints can be drawn from the general findings described:
1. The sector of fresh produce offers retailers a wide range of possibilities to increase product variety, and therefore it can be a core category in the differentiating efforts carried out by supermarkets in the horizontal competitive arena. Examples of fresh produce variety improvement are: new formats and packaging; standards, such as organic, fair trade, non-GMO, and so on; longer shelf life through bio and nano technologies or enhanced storage and handling systems; improved technological foods, such as functional and nutraceutical products; IV Gamma products; de-seasonality (i.e. making seasonal products available throughout the year); typical products with an origin denomination; and ethnic products.
2. Because the main fresh produce suppliers do not generally have their own supplier brand, in their differentiating strategies retailers do not have to take into account strategic reactions by the upstream counterparties, and hence are more able to entirely appropriate the competitive advantage stemming from the differentiation.
3. Because of the general weakness of the fresh produce sector structure, retailers can easily assume the leadership of the channel and therefore impose transaction governance forms that accomplish the following goals: maximizing the channel profit; giving themselves the power to appropriate the larger share of the profit; and leading suppliers to comply with retailers' differentiating strategies without a real vertical contractual integration.
The multiple chain retailers dominate the market for fresh produce in the UK; they have the biggest market share in fresh fruit and vegetables, providing 84 per cent of all UK retail sales (Mintel, 2005). There is steady growth in value sales of fresh produce in the UK, which marks it out against a general decline in most food commodities. This trend partly reflects the changing shopping habits of UK consumers, but is also driven by the proactive role the supermarkets have taken. The multiples are keen to develop their profile as suppliers of healthy eating products, but are also using various strategies to drive interest in the fresh sector, such as introducing exclusive new varieties or new packaging. Mintel (2005) identify "interest in fresh produce source and origin" among consumers, but note that price most often determines purchase decisions, with supermarket competitive pressure forcing prices and margins down. The "everyday low pricing" strategies used by retailers have kept prices down across many basic categories. Such strategies enable the supermarkets to be seen to be offering value for money when compared to competitors. Building value in the fresh produce sector is difficult, and price therefore remains the main differentiator for the consumer. Also, the essential nature of some products means that some fruit and vegetables have been vulnerable to retail pricing strategies. However, branding and product differentiation will be of key importance to growth and to adding value to the market. On this evidence, differentiating foods as being local and/or regional could be beneficial to producers when marketing their produce and should enable them to obtain premium prices. There is relatively little supplier proprietary branding in the UK fresh produce market. The availability and seasonality of fresh produce make it difficult for supplier-branded produce to retain an on-shelf presence.
Retailer own-branding has been of key importance to the development strategies of the multiples, who have segmented the fruit and vegetable market with, for example, good/better/best/organic own-label ranges. In the UK, supermarkets (both directly and through their intermediaries) set both the agenda and the price for the rest of the supply chain (Hingley, 2005a). UK growers feel that the price control exerted by dominant multiple retailers is having a profound effect on their industry, and are again looking to both new markets and external agencies for support on this matter. In the UK, differentiation takes place in the vertical competitive context. The UK fresh produce supply chain has undergone numerous changes in the last decade, with large supermarket retailers becoming increasingly powerful. The implementation of modern business practices has helped improve efficiency in the UK fresh produce supply chain. This has allowed the chain to break out of the commodity trap and take the fresh produce category out of the commodity trading environment (Fearne and Hughes, 2000, p. 120) by means of innovation and value creation (White, 2000). The overall trend is towards the UK fresh produce industry being dominated by a few large corporations operating at a national level, with some even operating on a European or global scale. Most recently, the takeover of one of the largest UK food retailers, Safeway, by Wm. Morrison has resulted in four major supermarket chains (Tesco, Sainsbury, Wal-Mart-Asda, and Morrisons) accounting for three-quarters of retail grocery sales (IGD, 2005). Tesco alone takes a third of the value of UK grocery sales. A further development has been a change from market transactions to market relationships, networks, and interactions (Bourlakis, 2001; Hingley and Lindgreen, 2002; Lindgreen and Hingley, 2003; Lindgreen, 2003; Kotzab, 2001).
From the retailer perspective (and largely initiated by them) has been the development of category management as a key managerial tool (Lindgreen et al., 2000). O'Keefe and Fearne (2002), for example, contend that their analysis of the application of category leadership in the fresh produce industry by UK retailer Waitrose shows that it is possible to successfully apply an integrated network-based relationship approach to what was considered to be a commodity sector. Category management (where a preferred supplier takes greater responsibility for the entire supply chain of a given product category) has become universally applied by retailers. The premise is that category management facilitates greater levels of collaboration in vertical supply channels and underpins relationship development (Barnes et al., 1995). This occurs where a single (lead) supplier organises the supply (from all the suppliers) of a given product category to the retailer. However, such initiatives are seen by some as simply moving risk and cost onto the supplier and away from the retailer (Allen, 2001). This is an argument put forward by Dapiran and Hogarth-Scott (2003), who contend that the development of category management has not necessarily increased cooperation in supply chains and can be used by retailers to reinforce power and control. Retailers are looking for fewer and larger suppliers who can work with them in vertical "partnership" (Hingley, 2001; White, 2000). This approach delivers considerable advantages for retailers, in that they can influence entire food channels for given products through singular dyadic interfaces with nominated channel-leading intermediaries or "super-middlemen" (Hingley, 2005a). Reducing the number of points of contact for supply yields benefits not only in terms of transaction cost savings, but also relational benefits in dealing with fewer, but closer, "partner" suppliers.
This has resulted in an overriding trend towards supply chain concentration in a market determined by the standards of large-scale retailers. In Italy, fresh produce accounts for more than 24 per cent of the total value of agricultural production (valued at prices received by farmers), and contributes to the positive part of the food trade balance sheet, with a self-sufficiency rate of 114 per cent. Notwithstanding these positive data, the Italian fresh produce industry is in the middle of a deep crisis. In its latest report on the industry, the CIA (2006), the main farmers' union, reported the loss of Italian leadership in the European market. During the last ten years, the Italian share of the total fresh produce markets of European partners (EU-15) has continuously decreased; meanwhile, imports into Italy registered a sharp increase of 56 per cent from the EU-25 and of 112 per cent from outside the EU. The loss of competitiveness has been due to the enduring weakness of the production structure (small firms) and to poor logistic structures compared with the recent consolidation and innovation processes within Italy's traditional competitor (Spain) and in the newly specialised fresh produce countries (Egypt, Morocco, Tunisia, and Turkey).
Also, new entrants to the European fresh produce market like China, Chile, Argentina, and Uruguay seem to be stronger at the level of both structure and organisation. When asked how to overcome this crisis and recover a leading position in the domestic as well as the export market, farmers' associations, experts, and public officers of the Ministry for Agriculture all give three simple answers: horizontal integration at the agricultural level, for achieving network externalities in production and selling activities; quality improvement and better exploitation of the comparative advantages Italian producers have with respect to weather, natural conditions, and product variety; and better relationships with big retailers, which sell more than 60 per cent of production and are the only actors in the distribution channel that can actually "persuade" consumers to reward Italian product. Differentiation strategies by leading supermarkets, along with a preference for Italian suppliers, could help Italian farmers to exit the crisis. Evidence from both consumers' attitudes and retailers' marketing strategies seems to indicate that this is a practicable way. It is interesting to note also that collaborating growers in southern Italy are taking the branding initiative in fresh produce, whereby most recently in Sicily a consortium of Sicilian fruit growers from the Calatino south Simeto District have unveiled a new brand - Puraterra. The name is a reference to the pure soil and the high quality of the organic produce, cultivated on a total area of 100,000 hectares. Blood oranges, grapes, cactus figs, peaches, and artichokes will be supplied under the new brand (Fresh Info, 2007). A recent survey by INDICOD (www.indicod-ecr.it/) on consumer preferences for fresh produce shows at least five notable attitudes:
1. As regards product attributes, consumers rank these as follows:
* sensory attributes (taste, appearance, and smell);
* price;
* convenience (time and energy saving in food shopping, storage, preparation, and disposal); and
* origin and traceability.
2. As regards organic products, almost half of the sample bought these at least once in the last month.
3. When explicitly asked, 65 per cent of consumers disclose their preference for Italian products.
4. Overall, 60 per cent of consumers in the sample are happy with the non-packaged, unbranded display of produce with free service, but would like to receive more information on origin and product characteristics.
5. Young women in the sample are strongly interested in the convenience attributes of produce, with a high willingness to pay for them.
Currently in Italy, the market for produce is led by supermarkets, though traditional trade still holds a large share (about 38 per cent). Over the past 15 years, supermarkets carried out price-based competition, enhancing procurement efficiency (mainly by operating their own distribution centres) and shrinking suppliers' margins. This led to the substitution of Italian suppliers (with poor production structures and management capability) with foreign suppliers (mainly Spanish) that better fit buyers' organisational and cost needs. Nevertheless, some changes have recently occurred, with growing attention to differentiation and local procurement policies. Currently, about 55 per cent of the Italian grocery market is covered by five groups with the following shares: Coop Italia, 17.1 per cent; Carrefour Italia (with four different flags/formats: Carrefour, GS, Diperdi, Docks Market), 10.4 per cent; Auchan, 9.6 per cent; Conad, 6 per cent; and Esselunga, 8.3 per cent (Dati IRI, 2006). During the last ten years, all these leading groups, except Auchan, launched an own-branded line of high-quality fresh produce and an own-branded line of organic fresh produce.
Moreover, both Carrefour and Conad started a line of Italian traditional products ("Terre d'Italia" for Carrefour, and "Percorso Italia" for Conad), and all increased the offer of IV gamma products (fresh-cut, prepared, dressed, and ready-to-eat), with a growing range and larger display. Summarising the Italian market for fresh produce, there seems to be a split between an unbranded/undifferentiated segment, where sensory attributes and price are the key levers of competition, and a highly differentiated/semi-branded segment, where quality, variety, origin, convenience, and every sort of added value are the key elements for obtaining premium prices and competitive advantages. The second segment, of course, is the one of interest to Italian growers struggling to maintain their market shares. It was decided to approach the question of product differentiation in vertical channel structures using a single dyadic case approach, in which a primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers. It is believed that this constitutes the most appropriate method to emphasise detail, depth, and insight, as well as understanding and explanation (Patton, 1987; Sayre, 2001). In this research, semi-structured personal interviews were used in order to elicit respondents' thoughts, opinions, attitudes, and motivational ideas. The two organisations, which form the key vertical channel interaction, were selected for their ability to contribute new insights, as well as in the expectation that these insights would be replicated (Perry, 1998). The cases were selected for being typical examples (Miles and Huberman, 1994; Patton, 1987) of fresh produce supply (the grower) and fresh produce category management intermediary (the buying and value-adding organisation in "partnership" with multiple retailer customers).
Interview questions were standardised around a number of topics (Dibb et al., 1997). Questions were kept deliberately broad to allow interviewees as much freedom in their answers as possible (Glaser and Strauss, 1967). The findings are taken from the words of the respondents themselves, thereby aiding the aim of the research, whilst gaining much more information than would have been available from alternative research methods (Corbin and Strauss, 1998). Within-case analysis involved writing up a summary of each individual case in order to identify important case-level phenomena. The principal areas for exploration identified in the preceding literature are twofold: the impact of vertical competition on channel coordination; and competitive advantage through value-adding in vertical chains (cost leadership and differentiation strategies through branding, production and technological systems, and seasonal variation opportunities). The two halves of the vertical-channel dyad are as follows. FP Marketing (name changed for reasons of anonymity) is the central marketing organisation for its own and associated growers' produce against customer programmes and is based in the UK, with an annual sales turnover of over 100 million euros. It coordinates crop production and volumes both in the UK and overseas, and supplies consolidated and value-added (packaged) fresh produce to large multiple retailers in the UK, under retailers' own-label. Overall, 90 per cent of their business is in supply to UK multiple retailers; the remainder constitutes product that does not meet retailer specifications and is marketed to UK wholesalers or processors. The group also has its own transport company. The product range is protected (e.g. glasshouse) fresh produce crops (tomatoes, cucumbers, peppers, and so forth) from the UK, northern Europe, and eastern Europe, and the same range from protected/unprotected sources in southern Europe.
The emphasis for this study is on tomato production and marketing, and the vertical relationship of a tomato producer and value-adding intermediary to multiple retailer customers. FP Grower (name changed for reasons of anonymity) is a southern Italy-based family grower business producing some 20 types of fresh produce, most notably tomatoes, with an annual sales turnover of 10 million euros. They have 180 ha (80 ha glasshouse and 100 ha open field), enabling them to manage demand throughout the year. They grow and undertake primary value-adding functions (washing and basic packing in preparation for delivery). Their principal dedicated and "partner" customer is FP Marketing. A total of 60 per cent of their product goes to intermediaries like FP Marketing and 35 per cent direct to retailers, with the remaining 5 per cent to wholesale markets; 80 per cent of FP Grower's customers are overseas (UK, Austria, Switzerland, and Germany). FP Marketing does invest some funds in varietal and agronomic development in southern Italy, but owns no means of production in the region. The two organisations concerned in this case analysis are, therefore, separately owned and managed. Interviews with FP Marketing were with the commercial director and development director, and the interview with FP Grower was with the commercial director. FP Marketing are a category management supplier to UK multiple retailer chains. As the category management process has evolved in the UK, their principal retail customers have pushed FP Marketing to focus on and category manage the supply of fresh produce protected crops; hence they have foregone their interests in other crops (for example, in leafy salads), but have gained business in (notably) tomatoes. This meant that FP Marketing was able to expand their remit, responsibilities, and sourcing of tomatoes on behalf of their predominant retail customers: We have got northern European growers, right the way from Belgium to the UK.
We have now expanded into Poland for new sources, and that covers the UK seasonal supply/demand (FP Marketing). What the category management system does is allow retailers to coordinate category supply through category leaders like FP Marketing. The intermediary organisation benefits from more business, but must take on an enhanced role and associated responsibilities, and this is becoming increasingly expensive for suppliers. However, FP Marketing does see this as part of a (service-based) value-adding process: We have to provide services; we have to provide more resources. That is our added value to the customer. We supply all that technical (input), the agronomists, the ideas, the trials, the NPD, all of this development. There is not a charge for that (FP Marketing). Multiple retail chains will specify quality assurance through determination of produce from accredited sources. These are normally European baseline production standards, environmental growing conditions, and so forth; and different customers in different countries may require different accredited standards. FP Grower, for example, offers four types of certification, including EUREPGAP, and is trialling a limited acreage of organic certified produce. With respect to further utilising quality and production systems as a means of market differentiation, UK retailers have developed their own further standards, additional to or inclusive of baseline accreditation: [Named UK retailer] have got a (named variety of) Cherry tomato, and we grow that for them. And [a] particular grower has got [additional] standards in his greenhouse. Normally, it is EUREPGAP standard throughout the industry, but [named grower] has gone to the next level, which is [named retailer's standard].
This is the next level in terms of technical excellence (FP Marketing). Production and quality standards are also important to FP Grower, but he sees variations in environmental standards, as well as other areas such as diverse labour laws not controlled by retail customers, as frustrating and undermining: Foreign competitors [i.e. growers in other countries] take advantage of different labour regulations and different pesticide-use regulations, without a real policy of price and quality transparency being carried out by retailers. Product from [named countries] with low food safety standards is arriving [...] and sold in Italian supermarkets without clear information on its origin (FP Grower). The category management role for FP Marketing includes managing the seasonal supply of product that takes in northern European protected crop (as described above), but also that from southern Europe. Equally important is devolved responsibility for product differentiation. Access to southern Italian tomatoes (typified by those produced by FP Grower) allows this differentiation. This region is notable for vine-ripened tomatoes. These are a specific variety, late-harvested (left on the vine until very red, mature, and full-flavoured). This source allows distinct advantages in variety, climatic conditions, and grower expertise not possible in northern Europe, producing a product with distinct taste and flavour advantages: Generally, [the advantage is a] combination of better growing conditions, lower growing costs, and the growth technique, the tomato speciality technique [...] by harvesting something on the vine you can take it to the next stage of maturity; it will give it that extra shelf life, flavour, and life advantage.
The flavours and varieties they [southern Italian growers] are producing are market leaders (FP Marketing). The motivation of FP Marketing is to try to add value to the products it supplies to supermarket customers in order to avoid the "commodity trap" of being in an unbranded business, in which retailer own-label is the predominant identity: In commodity areas supply is far greater than demand and by their nature supermarkets will use that against us. So, we work to try and put identity to products [...] and try to add value to it, and try and raise awareness with our customer. We look at varieties and taste, we try not to be in value and standard (retail lines); our ideal aspiration is to be in "special" and "finest" (retail lines) [...] because you can get a higher value for it (FP Marketing). Remember, all of our products are our customers' [retailers'] own brand; there is no identity of our company. It is a way of promoting the grower, the variety, the techniques they are using and, most importantly, the flavour. The flavours and varieties they are producing are market leaders (FP Marketing). It is interesting to note that FP Grower does not share FP Marketing's emphasis on product specialisation based on regionality. This may be a matter of perspective, where FP Marketing are sourcing produce from many countries, varieties, types, and production methods, and FP Grower sees his produce as simply tomatoes determined by general quality standards and procurement accountability. FP Grower's motivation is to find a wide market for his produce, whilst FP Marketing, with their category management-based interaction with retail customers, identifies opportunities for sub-branding by regional identity: We have now got customers [i.e. UK retailers] who are even putting growers' names on the packs (FP Marketing). I think [that] they [UK retailers] see [sourcing from] Italy as a way of adding value.
It is all a way of trying to sub-brand down to the grower (FP Marketing). In this way, retail customers in the UK (through the expertise and packaging operations of FP Marketing) are keen to differentiate both UK and overseas (e.g. southern Italian) produce as a means of further value-adding. In terms of branding, FP Grower does have a named identity, but as this is mainly used as an identifier on outer cases for wholesale and intermediary customers, brand identity does not appear on-pack at retailer level. If there is pack identity, it is with the retailer's own-label brand. FP Grower's customers collect product (using their own transport arrangements) from them at the farm, which is packed "on demand" to customers' specifications. As a result, FP Grower does not benefit from directly attributed brand identity. FP Grower's principal customer, FP Marketing, is responsible for all of the value-adding in terms of packaging and on-pack marketing for UK retail customers. FP Grower puts loose raw material (tomatoes) into plastic returnable trays. This is collected by FP Marketing's own transport to take the produce to the UK. It is there that further value-adding takes place in terms of consolidation, grading, and packing into punnets to the specification of specific retail customers under their brand identity. So, FP Marketing also does not have brand identity on-pack; value-adding for them is derived from the kind of service elements described above (continual sourcing throughout the seasons, new varietal sourcing, consolidation, packaging, new product development, and so forth). The vertical channel arrangement between FP Grower and FP Marketing does offer FP Grower something that they do not have from other customer sources, and that is a contractual agreement: We have full exclusivity with them [FP Grower] in [for supply to] the UK (FP Marketing). FP Marketing is supplied exclusively with tomatoes on the basis of an annual contract.
The contract is signed in October before planting, and delivery of the product is from March until the following October (FP Grower). As a result, FP Grower is happy with this arrangement, as it provides a security of business that is not forthcoming from other customers, who provide regular business but not price stability: FP Marketing is the only customer who buys through contract. Other customers just order product when they need it. For every order there is a price negotiation. The price is not stable because, when products come from abroad [Spain and Morocco; FP Grower is near to ports in southern Italy], the price falls suddenly, leaving no bargaining power (FP Grower). The arrangement with FP Marketing is much the preferred way of doing business for FP Grower, as they are worried about: The excessive power of retailers who are not interested in collaborative agreements, but only look for lower prices and higher margins (FP Grower). This may be a further reason why FP Grower has not developed customer markets dedicated to varietal type, production method, or regional association, as these things are more difficult to achieve without further contractual/collaborative agreements. However, FP Grower is looking to expand through exploiting seasonal gaps with "UK customers interested in winter production" and to add service value through "further quality and logistic improvement". They also have longer-term thoughts about horizontal and vertical integration of their own, through producer collaboration with other growers to sell direct to the public, via retailing of a producer group's own range of produce. Vertical coordination through a category management type system does have clear advantages for primary producers like FP Grower and intermediaries like FP Marketing, through the consistency of a planned contractual arrangement.
As this further develops, it can allow further market differentiation (through, for example, production method, varietal specialisation, or emphasis on regional identity). However, control remains firmly in the hands of the multiple retailer customers, in whose name and identity value-adding services are conducted: Supermarkets are very cute [clever], they outsource some of their work to us. We do their work for them, whether it is in inventory, in marketing, in procurement. We are continually doing that, so it is a cost that we are bearing (FP Marketing). Fresh produce is still very price sensitive, commodity suppliers/supply chains are substitutable, and it is easy to enter the commodity fresh produce market in supply to retailers: It [category management] is very beneficial, but that does not take away from the fact that we live and work in a very marginal [profit] industry (FP Marketing). But category management-based supply does, in return, provide some security, as it allows intermediaries and primary suppliers to add value through service. Most of the value-adding services are conducted by the intermediary (in this case FP Marketing), and that allows more ownership of the business. FP Marketing do acknowledge that there may be scope for more of these value-adding activities to take place closer to the country and point of production: If they [growers/grower groups] were able to produce a finished article in Italy, pre-packed in a plastic tray, then you [they] could start driving costs out of the business (FP Marketing). From this point Italian growers/grower groups could exploit Italian retail demand for value-added products: If it works for us [value-adding] in northern Europe, why should it not work in the home market?
And it is closer, the costs are lower, they can deliver into those markets [within Italy] a lot cheaper (FP Marketing). However, FP Marketing are quick to point out that, to supply the retail market outside of Italy (for example, the UK), it would be much harder to replace what they do in terms of providing consolidation (and all that requires in carrying a continual, multi-seasonal, and vast range of products and sources) and all of the value-added services that large UK retailers require. The current competitive structure of the food system is such as to give strong incentives to differentiation strategies. Evidence from the economic literature on market differentiation suggests that the degree and kind of differentiation (vertical/quality versus horizontal/feature) in the food marketing channel will depend on several interplaying factors: the form and preference parameters of the demand function; competitive pressure at the vertical and horizontal levels; forms of vertical governance structures; and power asymmetries between upstream and downstream firms in the channel. In any case, differentiation is likely to be associated with a high degree of concentration and market power. A general theoretical finding is that equilibria in differentiated markets are not stable and that a welfare assessment is difficult, given that the net welfare effect of differentiation depends on the degree of market power (and the associated monopoly inefficiencies) held by firms at equilibrium, consumer preferences for differentiated products, and the form of the differentiation cost function. With regard to the market for fresh produce, it has been shown that a differentiation strategy in this sector might benefit retailers more than in other sectors, due to the absence of brand policies (and consequently of conflicting vertical strategies) by suppliers. Results from the presented case study seem to be consistent with the theoretical findings.
In the sketched marketing channel, made up of the vertical channel interface between "FP Grower" and "FP Marketing" and the final retailer, the retailer is the leading actor in the differentiation policy and the one who benefits most from it. In the analysed case, it is identified that higher product differentiation can add value to the channel. As predicted by the theory, the differentiation strategy can be carried out because the power asymmetry in the channel favours the party (the retailer) that possesses the resources (consumer and market segmentation information, economic strength, and managerial skill) required to make the differentiation policy succeed. The theory also predicts that the vertical governance form must be such as to give sufficient incentives to upstream channel partners to comply with the retailer's differentiation policy. In the example, the annual contractual arrangement gives growers a benefit, in terms of sales planning and assurance, which offsets the relationship disadvantages due to the retailer's buying power. The channel organisation also leaves the marketing intermediary (FP Marketing) the right incentives to make the specific investments required for the success of the differentiation policy. A general result of the study is that when retailers engage in product differentiation, it is more likely that the terms of channel relationships shift from collaborative to competitive types, with the power imbalance becoming the disciplinary means by which vertical coordination is achieved and maintained. As a consequence, the relationship marketing idea that channel partners look for equitable collaborative relations seems to be contradicted by the evidence that it may be wise for suppliers to accept some inequity as the cost of doing business (Corsten and Kumar, 2005), especially when smart large retailers successfully carry out competitive strategies with positive spillover effects on upstream firms.
This viewpoint is shared by Hingley (2005b) and Hingley et al. (2005) in their analysis of fresh food chain supplier-supermarket relationships, where acceptance of channel asymmetry is advocated. Following this, the questions to be answered are how much power can be allowed in the system without threatening general social welfare, and how the anticompetitive effects of power imbalance in the channel should be assessed in antitrust contexts.
- The purpose of this article is twofold: first, to review the literature in order to assess the opportunities and the possible welfare effects of differentiation strategies in the food market; and second, to analyse the current structure and organisation of the fresh produce market (fruit, vegetable, and salad) in the light of new product procurement, innovation, and differentiation policies carried out by retailers at the global level.
[SECTION: Method] The current differentiation strategies in the fresh produce (fruit, vegetable, and salad) industry are analysed in the light of new procurement policies carried out by retailers at the global level. These retailers pay growing attention to product differentiation and innovation as a means of putting new value into the supply chain (rather than simply ripping out costs). On a theoretical level, the main findings of the literature on product differentiation and market structure are reviewed in order to assess the opportunities and the possible welfare effects of differentiation strategies in the food market. On an empirical level, the current structure and organisation of the fresh produce market are analysed, using both data at the aggregate level and the findings of a case study. The case study takes a single dyadic case approach, in which a primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers.

The findings of the study indicate that in the fresh produce industry there are good opportunities for successful differentiation strategies. Nevertheless, actors at the different vertical stages of the marketing channel derive very different advantages from them, depending on their "power" to lead the channel. Moreover, the fact that product differentiation tends to foster the oligopolistic structure of the market might have general negative welfare effects.

Differentiation strategies are pervasive in market economies and are a powerful means of obtaining competitive advantages. This is because, as the "master" of competitive advantage puts it, superior value "stems from offering lower prices than competitors for equivalent benefits; or providing unique benefits that more than offset a higher price" (Porter, 1985, p. 3). Firms create value by cost leadership or differentiation (Porter, 1985).
Using the latter strategy, firms differentiate their products to avoid ruinous price competition and seek some form of monopoly rent. Differentiation offers firms market power, naturally resolving the Bertrand paradox (whereby two undifferentiated players reduce price in order to capture the market but find themselves in a state of Nash equilibrium without profit).

The industrial economics literature focuses on the effects of differentiation strategies on market structure, firms' performance, and welfare (Beath and Katsoulacos, 1991). A basic tenet is the distinction made between horizontal and vertical differentiation. Products are said to be horizontally differentiated when, offered at the same price, consumers asked to rank them would do so differently, showing different preferences for different varieties. They are said to be vertically differentiated if, when offered at the same price, all consumers choose to purchase the same one, that of highest quality.

Horizontal and vertical differentiation lead to quite different general results in terms of market structure. Horizontal differentiation is the implicit assumption at the core of models of monopolistic competition and has basically given rise to two classes of models, based on the assumptions of symmetric consumer preferences (or a representative consumer) and of asymmetric preferences. In the case of symmetric preferences, one brand is an equally good substitute for any other and the consumer's actual choice will depend on income and relative prices. When preferences are asymmetric, not all brands are equally good substitutes: if a consumer's ideal brand is i, then the consumer prefers brands that are "near" to i in terms of its specification (i.e. in the space of product characteristics, in the Lancaster lexicon) more than those that are "far" from it.
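The resolution of the Bertrand paradox through differentiation can be illustrated numerically. The following sketch (not from the article) uses a standard linear-demand duopoly with a substitutability parameter g; the parameter values (a, c) are purely hypothetical. As g approaches 1 (homogeneous products), the equilibrium margin collapses towards zero; any degree of differentiation sustains positive profits:

```python
# Differentiated Bertrand duopoly with linear (Singh-Vives style) demand.
# Inverse demand: p_i = a - q_i - g*q_j, with g in [0, 1):
#   g = 0  -> independent products (each firm a monopolist)
#   g -> 1 -> homogeneous products (classic Bertrand paradox, zero margin)
def bertrand_equilibrium(a, c, g):
    """Symmetric Nash equilibrium price, quantity, and per-firm profit."""
    p = (a * (1 - g) + c) / (2 - g)   # first-order condition, solved symmetrically
    q = (a - p) / (1 + g)             # demand at equal prices
    return p, q, (p - c) * q

for g in (0.0, 0.5, 0.9, 0.99):
    p, q, profit = bertrand_equilibrium(a=10, c=2, g=g)
    print(f"g={g:4.2f}  price={p:5.2f}  margin={p - 2:5.2f}  profit={profit:6.3f}")
```

With these illustrative numbers, the margin falls from the monopoly level at g = 0 to nearly zero at g = 0.99, which is exactly the sense in which differentiation "resolves" the paradox.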
Asymmetric preferences are assumed in location models, whereas symmetric preferences are assumed in models grounded in the Chamberlin paradigm. The simplest and seminal location model is the Hotelling model of a spatial duopoly, which established the famous principle of minimum differentiation. Successive studies have shown that the Nash equilibrium in the Hotelling model relies on its restrictive assumptions, such as zero conjectural variation and the prices and number of firms being fixed exogenously. When these assumptions are relaxed, a unique Nash equilibrium does not necessarily occur. D'Aspremont et al. (1979), for example, starting from a different assumption on the initial location of the firms, show that the Hotelling model allows for a solution where the sellers seek to move as far away from each other as possible. In the free-entry circular model of Salop (1979), equilibrium is found where each firm earns zero profits and firms are symmetrically located around the circumference of the circle.

The Chamberlin (1933) large group model leads to the classical long-run monopolistic equilibrium, in which profits are zero and the "dd" curve is tangential to the average cost curve. As long as the "dd" curve that each firm faces still has some negative slope, each firm will produce at a point above the level of minimum average cost. Models postulating horizontal differentiation generally yield equilibria characterised by many firms earning zero profits and prices above marginal costs. These models raise the question of whether the market will produce too many or too few brands as compared with the social optimum, an issue previously addressed by Spence (1976).
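Salop's zero-profit free-entry equilibrium can be made concrete with a small numerical sketch (an illustration with hypothetical parameter values, not part of the original study): with transport cost t, fixed entry cost F, and unit cost c, free entry yields n = sqrt(t/F) symmetric firms, each charging a markup of t/n and earning exactly zero profit:

```python
import math

# Salop (1979) circular-city model: n firms spaced evenly on a circle of
# circumference 1. t is the consumers' "transport" (mismatch) cost, F the
# fixed entry cost, c the unit production cost. Values are illustrative.
def salop_free_entry(t, F, c):
    n = math.sqrt(t / F)        # free entry drives per-firm profit t/n^2 - F to zero
    p = c + t / n               # symmetric equilibrium price: markup t/n over cost
    profit = (p - c) / n - F    # each firm serves a 1/n arc of the circle
    return n, p, profit

n, p, profit = salop_free_entry(t=1.0, F=0.01, c=0.5)
print(n, p, profit)             # 10 firms, price 0.6, zero profit
```

Note how the equilibrium exhibits both hallmarks the text describes: price above marginal cost (p > c) yet zero profits once the entry cost is paid.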
The result is generally a suboptimal number of firms/products, with too many or too few firms in the Chamberlin representative consumer model (depending on the parameters of the model) and unambiguously too much variety in the localised competition circle model.

While a perfect equilibrium is often problematic in the horizontal case, one exists in the vertical case, consistent with the finiteness property, which states that at equilibrium there is a limit to the number of products for which price can exceed unit variable cost and which have a positive share of the market. The finiteness property was introduced by Shaked and Sutton (1983); the markets in which it is a feature of equilibrium are referred to as natural oligopolies. In the model of Shaked and Sutton, the two conditions for finiteness are that the unit variable costs associated with increased quality rise more slowly than consumers' willingness to pay for it, and that the main burden of quality improvement falls on fixed rather than variable costs. An important development of this model is that of Shaked and Sutton (1987), which demonstrates that a weak version of the finiteness property still holds when a mix of horizontal and vertical differentiation is accounted for (the pervasive real-world situation, where product differentiation never falls neatly under the ideal type of purely vertical or purely horizontal).

Summarising, differentiation is always a source of market imperfection and welfare loss. In the case of pure horizontal differentiation these effects are mainly linked to inefficient scales of production or to suboptimal product variety, whereas the market structure approaches the competitive one. In the vertical (or mixed) case the negative welfare effects are linked to the oligopolistic structure emerging as market equilibrium.
The limit theorems describing horizontal differentiation state that, as the market gets large enough, an arbitrarily large number of firms, each with a very small market share, can co-exist in equilibrium. When carrying out differentiation policies, firms will earn supernormal profits even though the competitive game is based on the assumptions of non-cooperative Bertrand behaviour and free entry. This result is in contrast to both the structure-conduct-performance paradigm and entry-deterrence theory, and is an example of a case where structure (the number of firms) and performance are endogenously determined.

The economic literature just mentioned refers to the analysis of one industry at a time, that is, to the analysis of competition and market structure at the horizontal level (inter-brand competition). In the food sector, the set of prices, qualities, and varieties that actually faces final consumers depends on the strategies carried out by different actors at different stages of the distribution channel. These strategies are the result of horizontal as well as vertical competition. Vertical competition has traditionally been addressed by the channel literature, modelling different channel structures in a manufacturer-retailer relationship.
Traditionally, four ideal types of structure have been considered (Choi, 1996): the exclusive dealer channel (one manufacturer supplying one retailer); the monopoly common retailer channel (two manufacturers supplying the same unique retailer); the monopoly manufacturer channel (a unique manufacturer supplying two retailers); and the duopoly common retailer channel (two manufacturers both supplying two retailers).

The topic of the channel literature has been the analysis of channel coordination/control problems between manufacturers and their retailers and the analysis of vertical strategic interaction, the latter defined in terms of "the direction of channel member's reaction to the action of its channel partners within a given demand structure" (Lee and Staelin, 1997, p. 185). Previous literature, taking for granted the bargaining power of manufacturers, has focused on the incentive schemes used by manufacturers to induce retailers to choose the strategies that maximize the channel total profit, while appropriating the largest share of it. Choi (1991), for example, discusses the different forms of governance for the achievement of the maximum channel profit. Because such studies have generally been applied to non-grocery sectors with few national brands and frequent exclusive selling agreements, the problems of channel coordination with regard to differentiating as well as pricing behaviours in a multi-manufacturer, multi-retailer setting (the typical channel setting for the food industry) have received little attention.
Starting with the insights of Choi (1991), successive works have explicitly addressed the problem of channel coordination and differentiation in grocery sectors (Avenel and Caprice, 2006; Choi, 1996; Choi and Coughlan, 2006; Ellickson, 2004; Lee and Staelin, 1997).

Choi (1991) first analyses a channel structure with multiple-brand dealers, called common retailers, which fits well the typical structure of food retailing (department stores, supermarkets, and convenience stores). He studies a duopoly model of manufacturers who sell their products through a common independent retailer. He considers three different rules of the duopoly game, which account for different power balance scenarios within the channel:

1. A manufacturer Stackelberg game (in which markets are characterised by leaders and followers), where the manufacturers play the role of Stackelberg leaders with respect to the retailer by taking the retailer's reaction function into consideration for their respective wholesale price decisions.

2. A vertical Nash game, where neither the manufacturer nor the retailer can influence the counterpart's price decision (i.e. the manufacturer conditions its wholesale price on the retail price and vice-versa).

3. A retailer Stackelberg game, where retailers play the role of Stackelberg leaders.

While the first and the third games apply to situations in which few powerful manufacturers (retailers) supply (buy from) many retailers (manufacturers), the second game fits a situation where power is quite balanced in the relationship. Choi solves these models under the assumptions of both linearity and nonlinearity of the demand function, finding contradictory results. Moreover, he solves the models under different assumptions about the degree of product substitutability between the manufacturers' brands, so as to introduce the analysis of the effect of product differentiation on channel competition.
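The flavour of the three power scenarios can be conveyed with a much simpler sketch than Choi's duopoly: a textbook bilateral monopoly (one manufacturer, one retailer, linear demand q = a - p, zero costs). The closed forms below are standard results under those simplifying assumptions, not Choi's own equations, and the parameter value is hypothetical:

```python
# One-manufacturer, one-retailer channel with demand q = a - p and zero
# costs; w is the wholesale price, m the retail margin, so p = w + m.
# A simplified stand-in for the power scenarios in Choi's (1991) games.
def channel(a, w, m):
    q = a - w - m
    return {"price": w + m, "manuf_profit": w * q, "retail_profit": m * q}

def vertical_nash(a):
    return channel(a, a / 3, a / 3)       # simultaneous margin choices: w = m = a/3

def manufacturer_stackelberg(a):
    w = a / 2                             # leader anticipates retailer reaction m(w) = (a - w)/2
    return channel(a, w, (a - w) / 2)

def retailer_stackelberg(a):
    m = a / 2                             # mirror image: retailer leads
    return channel(a, (a - m) / 2, m)

for game in (vertical_nash, manufacturer_stackelberg, retailer_stackelberg):
    print(game.__name__, game(12))
```

Even this bare-bones version reproduces the qualitative pattern Choi reports for the richer setting: either Stackelberg game yields a higher retail price than the vertical Nash game, the leader captures the larger profit share at the follower's expense, and total channel profit is highest when neither party leads.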
Also in this case the results are affected by the form of the demand function, with some contradictory findings (for instance, he finds that less differentiation leads to increased prices and profits for all the members of the channel).

Choi (1996) extends the previous model by introducing a differentiated duopoly common retailer channel. He analyses the pricing strategies of duopoly manufacturers who produce differentiated products and duopoly retailers who sell both products and carry out store differentiation strategies. Both product and store differentiation are assumed to be horizontal and, as in the previous work, three games are considered (vertical Nash, manufacturer Stackelberg, and retailer Stackelberg). The assumed demand function is adjusted so as to explicitly take into account the two differentiation levels (introducing two parameters, for product and store differentiation) and to overcome the contradictory results of the previous model as regards channel profit and differentiation. The Stackelberg games are quite different from those of the previous article because, besides the vertical competition, two horizontal levels of competition must be modelled: the manufacturer level and the retailer level. Accordingly, the equilibrium concept employed is the subgame-perfect Stackelberg equilibrium. The results attained by the model are summarised in the following seven propositions (Choi, 1996, pp. 125-129):

1. A Stackelberg channel leadership by either manufacturer or retailer results in higher retail prices than those of the Nash game.

2. Given a set of differentiation parameters, a channel member benefits by playing the Stackelberg leader at the expense of the other channel member, who becomes the follower.

3. Total channel profit is larger when there is no channel leadership. However, vertical Nash is not a stable structure, because each channel member has an incentive to become a leader.

4.
Wholesale prices (retail margins) increase as products (stores) become more differentiated. On the other hand, wholesale prices (retail margins) decrease as stores (products) become more differentiated. Overall, retail prices increase as products and stores become more differentiated.

5. Product (store) differentiation benefits manufacturers (retailers), but at the same time hurts retailers (manufacturers). Therefore, manufacturers want more product differentiation and less store differentiation, while retailers want the reverse.

6. Product (store) differentiation and the manufacturer (retailer) Stackelberg leadership have a positive synergy effect on the manufacturer (retailer) profits.

7. The total profit-maximizing combinations of product and store differentiation are not stable because each channel member has an incentive to differentiate unilaterally.

These results are consistent with the general wisdom that differentiation is used to mitigate price competition and that it tends to produce negative welfare effects. In the analysed case, the combined vertical-horizontal competition produces non-stable equilibria that fail to maximise the total channel profit, as a consequence of the conflicting interests of retailers and manufacturers, thereby opening the question of whether a cooperative solution could lead to welfare improvements. Moreover, the sketched channel structure fits the current situation of food marketing channels, both for the double level of differentiation and for the vertical power asymmetry that pushes towards non-cooperative vertical forms of coordination, where both parties seek to take the leadership (and retailers actually seem to accomplish it).

Avenel and Caprice (2006) model a vertical structure with a vertically differentiated duopoly at the manufacturer level and two retailers who differentiate through the chosen product line (i.e. each of them sells one or both of the high and the low quality offered by the manufacturers).
The focus is on the analysis of the effects of different vertical contractual arrangements on product line differentiation, given different settings of vertical strategic interaction and different levels of costs for quality. Even if this model seems to apply better to the non-grocery sector (given the assumption of the manufacturer as channel leader and the kinds of contractual arrangements examined, that is, exclusive dealing, vertical integration, and franchise fees), it can be of some interest for those segments of the food market, such as the new functional and nutraceutical products, that imply a vertical differentiation strategy fed by heavy sunk investments in research and development by powerful food companies.

Choi and Coughlan (2006) investigate the positioning problem of private labels considering the differentiation strategies carried out by national brands, and the consequent product-line pricing strategies carried out by the retailer. They model a manufacturer Stackelberg game where the manufacturer determines the wholesale price and the quality level of his national brand and the retailer chooses:

* the optimal level of vertical differentiation of her store brand from the national brand;
* the degree of substitutability between the national and the store brand;
* the retail margin for the national brands; and
* the price of the store brand.

The equilibrium concept is a subgame-perfect equilibrium in which the second-stage price equilibrium is reached immediately after the differentiation decisions. In order to simultaneously take into account the effects of horizontal and vertical differentiation, the demand function used in the model is derived from a consumer utility function that contains a preference parameter for each product (vertical differentiation) and a parameter measuring the degree of substitutability with respect to other products.
The results of the model for the case of two national brands and one store brand suggest that if the quality levels of the two national brands are equal and they are substantially horizontally differentiated, imitating either brand is optimal for the private label. However, when the national brands are allowed to be vertically differentiated, the private label is better off imitating the higher quality brand. Positioning in between is never an optimal solution. In contrast, when the two national brands are horizontally undifferentiated, the private label's best response is to differentiate horizontally from both national brands. A consequence of these results is that the more the national brands differentiate, the more store brands carry out imitative strategies, leading to head-to-head competition that pushes national brands towards further differentiation and/or advertising investments. Because high differentiation and advertising investment are sources of market power, these findings are consistent with the store brand literature that has suggested that the anticompetitive effects of store brands can be greater than the competitive ones (Cotterill and Putsis, 2000; Kim and Parker, 1997, 1999).

To complete this short review of the main findings attained so far in the literature on differentiation and marketing channels, it is worth quoting a recent study by Ellickson (2004), who empirically applies Sutton's theory of endogenous sunk costs and vertical differentiation to the supermarket industry in the USA. During the 1980s and 1990s, the consolidation process in this industry was driven by the introduction of innovative automated distribution and procurement systems. If one assumes that the level of concentration is determined by the economies of scale and scope associated with these innovations, as markets grow (and these economies are exploited) the level of concentration should decrease.
In contrast, in about 50 spatially defined markets in the USA, the evidence is of a stable small number of firms (three to six) capturing the majority of the market, independent of the population, with a competitive fringe of smaller retailers capturing a minor share of the market (Ellickson, 2004). Ellickson (2004) builds and tests a model demonstrating that such a structure is a real "natural oligopoly" stemming from a competitive game among the leading firms, based on growing vertical differentiation associated with increasing sunk costs. In his model, supermarkets compete by offering a greater variety of products (where variety is considered as a purely vertical form of product differentiation). This implies larger stores and therefore larger sunk costs that discourage entry by other firms. As a consequence, the quality provided by the oligopolists (proxied by store size) should increase with the size of the market. In other words, high concentration and escalation in quality seem to be both characteristic features of the supermarket industry.

The previous section has shown how, in order to maintain their competitive advantage, firms continuously increase their quality effort, both in the horizontal competitive game (manufacturers-to-manufacturers and retailers-to-retailers) and in the vertical competitive game (manufacturers-to-retailers). Once a differentiation strategy has been initiated, it continues through time, especially when quality (vertical) rather than feature (horizontal) differentiation is involved. Consistent with the general findings of economic theory, the channel literature suggests that vertical differentiation, more than horizontal, tends to be associated with a high degree of industry concentration and market power (Hingley and Lindgreen, 2002; White, 2000).
In any case, the equilibria (prices and market structures) at any level of the channel depend on a complex interplay between strategies carried out at the horizontal and the vertical level; power asymmetries between upstream and downstream firms; and the kinds of governance structures along the channel (Hingley, 2005a). With regard to the fresh produce sector, at least three hints can be drawn from the general findings described:

1. The fresh produce sector offers retailers a wide range of possibilities to increase product variety, and therefore it can be a core category in the differentiating efforts carried out by supermarkets in the horizontal competitive arena. Examples of fresh produce variety improvement are: new formats and packaging; standards, such as organic, fair trade, non-GMO, and so on; longer shelf life through bio- and nano-technologies or enhanced storage and handling systems; improved technological foods, such as functional and nutraceutical products; IV Gamma products; de-seasonality (i.e. making seasonal products available throughout the year); typical products with an origin denomination; and ethnic products.

2. Because the main fresh produce suppliers do not generally have their own supplier brand, in their differentiating strategies retailers do not have to take into account strategic reactions by the upstream counterparties, and hence are more able to entirely appropriate the competitive advantage stemming from the differentiation.

3. Because of the general weakness of the fresh produce sector structure, retailers can easily assume the leadership of the channel and therefore impose transaction governance forms that accomplish the following goals: maximizing the channel profit; giving themselves the power to appropriate the larger share of that profit; and leading suppliers to comply with retailers' differentiating strategies without a real vertical contractual integration.
The multiple chain retailers dominate the market for fresh produce in the UK; they have the biggest market share in fresh fruit and vegetables, providing 84 per cent of all UK retail sales (Mintel, 2005). There is steady growth in value sales of fresh produce in the UK, which marks it out against a general decline in most food commodities. This trend partly reflects the changing shopping habits of UK consumers, but is also driven by the proactive role the supermarkets have taken. The multiples are keen to develop their profile as suppliers of healthy eating products, but are also using various strategies to drive interest in the fresh sector, such as introducing exclusive new varieties or new packaging. Mintel (2005) identify "interest in fresh produce source and origin" among consumers, but note that price most often determines purchase decisions, with supermarket competitive pressure forcing prices and margins down. The "everyday low pricing" strategies used by retailers have kept prices down across many basic categories. Such strategies enable the supermarkets to be seen to be offering value for money when compared to competitors. Building value in the fresh produce sector is difficult, and price therefore remains the main differentiator for the consumer. Also, the essential nature of some products means that some fruit and vegetables have been vulnerable to retail pricing strategies. However, branding and product differentiation will be of key importance to growth and to adding value to the market. On this evidence, differentiating foods as being local and/or regional could be beneficial to producers when marketing their produce and should enable them to obtain premium prices.

There is relatively little supplier proprietary branding in the UK fresh produce market. The availability and seasonality of fresh produce make it difficult for supplier-branded produce to retain an on-shelf presence.
Retailer own-branding has been of key importance to the development strategies of the multiples, who have segmented the fruit and vegetable market with, for example, their good/better/best/organic own-label ranges. In the UK, supermarkets (both directly and through their intermediaries) set both the agenda and the price for the rest of the supply chain (Hingley, 2005a). UK growers feel that the price control exerted by dominant multiple retailers is having a profound effect on their industry, and are again looking to both new markets and external agencies for support on this matter. In the UK, differentiation takes place in the vertical competitive context.

The UK fresh produce supply chain has undergone numerous changes in the last decade, with large supermarket retailers becoming increasingly powerful. The implementation of modern business practices has helped improve efficiency in the UK fresh produce supply chain. This has allowed the chain to break out of the commodity trap and take the fresh produce category out of the commodity trading environment (Fearne and Hughes, 2000, p. 120) by means of innovation and value creation (White, 2000). The overall trend is towards the UK fresh produce industry being dominated by a few large corporations operating on a national level, with some even operating on a European or global scale. Most recently, the takeover of Safeway, one of the largest UK food retailers, by Wm. Morrison has resulted in four major supermarket chains (Tesco, Sainsbury, Wal-Mart-Asda, and Morrisons) accounting for three-quarters of retail grocery sales (IGD, 2005). Tesco alone takes a third of the value of UK grocery sales.

A further development has been a change from market transactions to market relationships, networks, and interactions (Bourlakis, 2001; Hingley and Lindgreen, 2002; Lindgreen and Hingley, 2003; Lindgreen, 2003; Kotzab, 2001).
From the retailer perspective (and largely at their initiation), category management has developed as a key managerial tool (Lindgreen et al., 2000). O'Keefe and Fearne (2002), for example, contend that their analysis of the application of category leadership in the fresh produce industry by UK retailer Waitrose shows that it is possible to successfully apply an integrated network-based relationship approach to what was considered to be a commodity sector.

Category management (where a preferred supplier takes greater responsibility for the entire supply chain of a given product category) has become universally applied by retailers. The premise is that category management facilitates greater levels of collaboration in vertical supply channels and underpins relationship development (Barnes et al., 1995). This occurs where a single (lead) supplier organises the supply (from all the suppliers) of a given product category to the retailer. However, such initiatives are seen by some as simply moving risk and cost onto the supplier and away from the retailer (Allen, 2001). This is an argument put forward by Dapiran and Hogarth-Scott (2003), who contend that the development of category management has not necessarily increased cooperation in supply chains and can be used by retailers to reinforce power and control.

Retailers are looking for fewer and larger suppliers who can work with them in vertical "partnership" (Hingley, 2001; White, 2000). This approach delivers considerable advantages for retailers, in that they can influence entire food channels for given products through singular dyadic interfaces with nominated channel-leading intermediaries or "super-middlemen" (Hingley, 2005a). Reducing the number of points of contact for supply not only derives benefits in terms of transaction cost savings, but also relational benefits in dealing with fewer, but closer, "partner" suppliers.
This has resulted in an overriding trend towards supply chain concentration in a market determined by the standards of large-scale retailers.

In Italy, fresh produce accounts for more than 24 per cent of the total value of agricultural production (valued at prices received by farmers), and contributes to the positive part of the food trade balance, with a self-sufficiency rate equal to 114 per cent. Notwithstanding this positive data, the Italian fresh produce industry is in the middle of a deep crisis. In its latest report on the industry, the CIA (2006), the main farmers' union, reported the loss of Italian leadership in the European market. During the last ten years, the Italian share of the total fresh produce markets of European partners (EU-15) has continuously decreased; meanwhile, imports into Italy registered a sharp increase of 56 per cent from the EU-25 and of 112 per cent from outside the EU. The loss of competitiveness has been due to the enduring weakness of the production structure (small firms) and to poor logistic structures compared with the recent consolidation and innovation processes within Italy's traditional competitor (Spain) and in the newly fresh-produce-specialised countries (Egypt, Morocco, Tunisia, and Turkey).
Also, new entrants to the European fresh produce market like China, Chile, Argentina, and Uruguay seem to be stronger on both the structural and the organisational level.

When asked how to overcome this crisis and recover a leading position in the domestic as well as the export market, farmers' associations, experts, and public officers of the Ministry for Agriculture all give three simple answers: horizontal integration at the agricultural level, for achieving network externalities in production and selling activities; quality improvement and better exploitation of the comparative advantages Italian producers have with respect to weather, natural conditions, and product variety; and better relationships with the big retailers, which sell more than 60 per cent of the production and are the only actors in the distribution channel that can actually "persuade" consumers to reward the Italian product.

Differentiation strategies by leading supermarkets, along with a preference for Italian suppliers, could help Italian farmers to exit the crisis. Evidence from both consumers' attitudes and retailers' marketing strategies seems to indicate that this is a practicable way. It is interesting to note also that collaborating growers in Southern Italy are taking the branding initiative in fresh produce: most recently in Sicily, a consortium of Sicilian fruit growers from the Calatino south Simeto District has unveiled a new brand, Puraterra. The name is a reference to the pure soil and the high quality of the organic produce, cultivated on a total area of 100,000 hectares. Blood oranges, grapes, cactus figs, peaches, and artichokes will be supplied under the new brand (Fresh Info, 2007).

A recent survey by INDICOD (www.indicod-ecr.it/) on consumer preferences for fresh produce shows at least five notable attitudes:

1.
As regards product attributes consumers rank this as follows:* sensory attributes (taste, appearance and smell);* price;* convenience (time and energy saving in food shopping, storage and preparation disposal); and* origin and traceability.2. As regards organic products, almost half of the sample bought these at least once in the last month.3. When explicitly asked, 65 per cent of consumers disclose their preference for Italian products.4. Overall, 60 per cent of consumers in the sample are happy with the non-packaged, unbranded display of produce with free service, but would like to receive more information on origin and product characteristics.5. Young women in the sample are strongly interested in convenience attributes of produce, with a high willingness to pay for it.Currently in Italy, the market for produce is led by supermarkets, nevertheless with a still large share (about 38 per cent) covered by traditional trade. Over the past 15 years, supermarkets carried out a price-based competition, enhancing procurement efficiency (mainly by operating their own distribution centres) and shrinking suppliers' margins. This led to the substitution of Italian suppliers (with poor production structure and management capability) with foreign suppliers (mainly Spanish) that better fit buyer organisational and cost needs. Nevertheless, some changes recently occurred with a growing attention for differentiation and local procurement policies.Currently, about 55 per cent of the Italian grocery market is covered by five groups with the following shares: Coop Italia 17.1 per cent; Carrefour Italia (with four different flags/formats Carrefour, GS, Diperdi, Docks Market), 10.4 per cent; Auchan, 9.6 per cent; Conad, 6 per cent; and Esselunga, 8.3 per cent (Dati IRI, 2006). During the last ten years, all these leading groups, except Auchan, launched an own-branded line of high quality fresh produce, and an own-branded line of organic fresh produce. 
Moreover, both Carrefour and Conad started a line of traditional Italian products ("Terre d'Italia" for Carrefour, and "Percorso Italia" for Conad), and all increased their offer of IV gamma products (fresh-cut, prepared, dressed, and ready-to-eat), with a growing range and larger displays. Summarising, the Italian market for fresh produce seems to be split between an unbranded/undifferentiated segment, where sensory attributes and price are the key levers of competition; and a highly differentiated/semi-branded segment, where quality, variety, origin, convenience, and every sort of added value are the key elements for obtaining premium prices and competitive advantages. The second segment, of course, might be the one of interest to Italian growers struggling to maintain their market shares. It was decided to approach the question of product differentiation in vertical channel structures using a single dyadic case approach, in which a primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers. It is believed that this constitutes the most appropriate method to emphasise detail, depth, and insight, as well as understanding and explanation (Patton, 1987; Sayre, 2001). In this research, semi-structured personal interviews were used in order to elicit respondents' thoughts, opinions, attitudes, and motivational ideas. The two organisations, which form the key vertical channel interaction, were selected for their ability to contribute new insights, as well as in the expectation that these insights would be replicated (Perry, 1998). The cases were selected as typical examples (Miles and Huberman, 1994; Patton, 1987) of fresh produce supply (the grower) and fresh produce category management intermediation (the buying and value-adding organisation in "partnership" with multiple retailer customers).
Interview questions were standardised around a number of topics (Dibb et al., 1997). Questions were kept deliberately broad to allow interviewees as much freedom in their answers as possible (Glaser and Strauss, 1967). The findings are taken from the words of the respondents themselves, thereby aiding the aim of the research whilst gaining much more information than would have been available from alternative research methods (Corbin and Strauss, 1998). Within-case analysis involved writing up a summary of each individual case in order to identify important case-level phenomena. The principal areas for exploration identified in the preceding literature are the following two: the impact of vertical competition on channel coordination; and competitive advantage through value-adding in vertical chains (cost leadership and differentiation strategies through branding, production and technological systems, and seasonal variation opportunities). The two halves of the vertical-channel dyad are as follows. FP Marketing (name changed for reasons of anonymity) is the central marketing organisation for its own and associated growers' produce against customer programmes, and is based in the UK, with an annual sales turnover of over 100 million euros. It coordinates crop production and volumes both in the UK and overseas, and supplies consolidated and value-added (packaged) fresh produce to large multiple retailers in the UK under retailers' own-label. Overall, 90 per cent of their business is in supply to UK multiple retailers; the remainder constitutes product that does not meet retailer specifications, and is marketed to UK wholesalers or processors. The group also has its own transport company. The product range is protected (e.g. glasshouse) fresh produce crops (tomatoes, cucumbers, peppers, and so forth) from the UK, northern Europe, and eastern Europe, and the same range from protected/unprotected sources in southern Europe.
The emphasis for this study is on tomato production and marketing, and the vertical relationship of a tomato producer and value-adding intermediary to multiple retailer customers. FP Grower (name changed for reasons of anonymity) is a southern Italy-based family grower business of some 20 types of fresh produce, most notably tomatoes, and has an annual sales turnover of 10 million euros. They have 180 ha (80 ha glasshouse and 100 ha open field, in order to manage demand throughout the year). They grow and undertake primary value-adding functions (washing and basic packing in preparation for delivery). Their principal dedicated and "partner" customer is FP Marketing. A total of 60 per cent of their product goes to intermediaries like FP Marketing and 35 per cent direct to retailers, with the remaining 5 per cent to wholesale markets; 80 per cent of FP Grower's customers are overseas (UK, Austria, Switzerland, and Germany). FP Marketing does invest some funds in varietal and agronomic development in southern Italy, but owns no means of production in the region. The two organisations concerned in this case analysis are, therefore, separately owned and managed. Interviews with FP Marketing concerned the commercial director and development director, and the interview with FP Grower concerned the commercial director. FP Marketing are a category management supplier to UK multiple retailer chains. As the category management process has evolved in the UK, their principal retail customers have pushed FP Marketing to focus on and category manage the supply of fresh produce protected crops; hence they have foregone their interests in other crops (for example, in leafy salads), but have gained business in (notably) tomatoes. This meant that FP Marketing was able to expand their remit, responsibilities, and sourcing of tomatoes on behalf of their predominant retail customers: We have got northern European growers, right the way from Belgium to the UK.
We have now expanded into Poland for new sources, and that covers the UK seasonal supply/demand (FP Marketing). What the category management system does is allow retailers to coordinate category supply through category leaders like FP Marketing. The intermediary organisation benefits from more business, but must take on an enhanced role and associated responsibilities, and this is becoming increasingly expensive for suppliers. However, FP Marketing does see this as part of a (service-based) value-adding process: We have to provide services; we have to provide more resources. That is our added value to the customer. We supply all that technical (input), the agronomists, the ideas, the trials, the NPD, all of this development. There is not a charge for that (FP Marketing). Multiple retail chains will specify quality assurance by stipulating produce from accredited sources. These are normally European baseline production standards, environmental growing conditions, and so forth; different customers in different countries may expect variations under different accreditation standards. FP Grower, for example, offers four types of certification, including EUREPGAP, and is trialling a limited acreage of organically certified produce. With respect to further utilising quality and production systems as a means of market differentiation, UK retailers have developed their own further standards, additional to or inclusive of baseline accreditation: [Named UK retailer] have got a (named variety of) cherry tomato, and we grow that for them. And [a] particular grower has got [additional] standards in his greenhouse. Normally, it is EUREPGAP standard throughout the industry, but [named grower] has gone the next level, which is [named retailer's standard].
This is the next level in terms of technical excellence (FP Marketing). Production and quality standards are also important to FP Grower, but he sees variations in environmental standards, as well as other areas such as diverse labour laws not controlled by retail customers, as frustrating and undermining: Foreign competitors [i.e. growers in other countries] take advantage of different labour regulations and different pesticide-use regulations, without a real policy of price and quality transparency being carried out by retailers. Product from [named countries] with low food safety standards is arriving [...] and sold in Italian supermarkets without clear information on its origin (FP Grower). The category management role for FP Marketing includes managing the seasonal supply of product that takes in northern European protected crop (as described above), but also that from southern Europe. Equally important is devolved responsibility for product differentiation. Access to southern Italian tomatoes (typified by those produced by FP Grower) allows this differentiation. This region is notable for vine-ripened tomatoes. These are of a specific variety, late-harvested (left on the vine until very red, mature, and full-flavoured). This source allows distinct advantages in variety, climatic conditions, and grower expertise not possible in northern Europe, in order to produce a product with distinct taste and flavour advantages: Generally, [the advantage is a] combination of better growing conditions, lower growing costs, and the growth technique, the tomato speciality technique [ ... ] by harvesting something on the vine you can take it to the next stage of maturity; it will give it that extra shelf life, flavour, and life advantage.
The flavours and varieties they [southern Italian growers] are producing are market leaders (FP Marketing). The motivation of FP Marketing is to try to add value to the products it supplies to supermarket customers in order to avoid the "commodity trap" of being in an unbranded business, in which retailer own-label is the predominant identity: In commodity areas supply is far greater than demand and by their nature supermarkets will use that against us. So, we work to try and put identity to products [...] and try to add value to it, and try and raise awareness with our customer. We look at varieties and taste, we try not to be in value and standard (retail lines), our ideal aspiration is to be in "special" and "finest" (retail lines) [...] because you can get a higher value for it (FP Marketing). Remember, all of our products are our customers' [retailers'] own brand, there is no identity of our company. It is a way of promoting the grower, the variety, the techniques they are using and most importantly, the flavour. The flavours and varieties they are producing are market leaders (FP Marketing). It is interesting to note that FP Grower does not share FP Marketing's emphasis on product specialisation based on regionality. This may be a matter of perspective: FP Marketing are sourcing produce from many countries, varieties, types, and production methods, while FP Grower sees his produce simply as tomatoes determined by general quality standards and procurement accountability. FP Grower's motivation is to find a wide market for his produce, whilst FP Marketing, with their category management-based interaction with retail customers, identifies opportunities for sub-branding by regional identity: We have now got customers [i.e. UK retailers] who are even putting growers' names on the packs (FP Marketing). I think [that] they [UK retailers] see [sourcing from] Italy as a way of adding value.
It is all a way of trying to sub-brand down to the grower (FP Marketing). In this way, retail customers in the UK (through the expertise and packaging operations of FP Marketing) are keen to differentiate both UK and overseas (e.g. southern Italian) produce as a means of further value-adding. In terms of branding, FP Grower does have a named identity, but as this is mainly used as an identifier on outer cases for wholesale and intermediary customers, the brand identity does not appear on-pack at retailer level. If there is pack identity, it is with the retailer's own-label brand. FP Grower's customers collect product (using their own transport arrangements) from them at the farm, which is packed "on demand" to customers' specification. As a result, FP Grower does not benefit from directly attributed brand identity. FP Grower's principal customer, FP Marketing, is responsible for all of the value-adding in terms of packaging and on-pack marketing for UK retail customers. FP Grower puts loose raw material (tomatoes) into returnable plastic trays. This is collected by FP Marketing's own transport to take the produce to the UK. It is there that further value-adding takes place in terms of consolidation, grading, and packing into punnets to the specification of specific retail customers under their brand identity. So, FP Marketing also does not have brand identity on-pack; value-adding for them is derived from the kind of service elements described above (continual sourcing throughout the seasons, new varietal sourcing, consolidation, packaging, new product development, and so forth). The vertical channel arrangement between FP Grower and FP Marketing does offer FP Grower something that they do not have from other customer sources, and that is a contractual agreement: We have full exclusivity with them [FP Grower] for supply to the UK (FP Marketing). FP Marketing is supplied exclusively with tomatoes on the basis of an annual contract.
The contract is signed in October before planting, and delivery of the product is from March until the following October (FP Grower). As a result, FP Grower is happy with this arrangement, as it provides a security of business that is not forthcoming from other customers, who provide regular business but not price stability: FP Marketing is the only customer who buys through contract. Other customers just order product when they need it. For every order there is a price negotiation. The price is not stable: when products come from abroad [Spain and Morocco; FP Grower is near to ports in southern Italy], the price falls suddenly, leaving no bargaining power (FP Grower). The arrangement with FP Marketing is much the preferred way of doing business for FP Grower, as they are worried about: The excessive power of retailers who are not interested in collaborative agreements, but only look for lower prices and higher margins (FP Grower). This may be a further reason why FP Grower has not developed customer markets dedicated to varietal type, production method, or regional association, as these things are more difficult to achieve without further contractual/collaborative agreements. However, FP Grower is looking to expand through exploiting seasonal gaps with "UK customers interested in winter production" and to add service value through "further quality and logistic improvement". They also have longer-term thoughts about horizontal and vertical integration of their own, through collaboration with other growers to sell direct to the public, via retailing of a producer group's own range of produce. Vertical coordination through a category management type system does have clear advantages for primary producers like FP Grower and intermediaries like FP Marketing, through the consistency of a planned contractual arrangement.
As this further develops, it can allow further market differentiation (through, for example, production method or varietal specialisation, or emphasis on regional identity). However, control remains firmly in the hands of the multiple retailer customers, in whose name and identity the value-adding services are conducted: Supermarkets are very cute [clever], they outsource some of their work to us. We do their work for them, whether it [is] in inventory, in marketing, in procurement. We are continually doing that, so it is a cost that we are bearing (FP Marketing). Fresh produce is still very price sensitive, and commodity suppliers/supply chains are substitutable; it is easy to enter the commodity fresh produce market in supply to retailers: It [category management] is very beneficial, but that does not take away from the fact that we live and work in a very marginal [profit] industry (FP Marketing). But category management-based supply does, in return, provide some security, as it allows intermediaries and primary suppliers to add value through service. Most of the value-adding services are conducted by the intermediary (in this case FP Marketing), and that allows more ownership of the business. FP Marketing do acknowledge that there may be scope for more of these value-adding activities to take place closer to the country and point of production: If they [growers/grower groups] were able to produce a finished article in Italy, pre-packed in a plastic tray, then they could start driving costs out of the business (FP Marketing). From this point, Italian growers/grower groups could exploit Italian retail demand for value-added products: If it works for us [value-adding] in northern Europe, why should it not work in the home market?
And it is closer, the costs are lower, they can deliver into those markets [within Italy] a lot cheaper (FP Marketing). However, FP Marketing are quick to point out that, to supply the retail market outside of Italy (for example, the UK), it would be much harder to replace what they do in terms of providing consolidation (and all that requires in carrying a continual, multi-seasonal, and vast range of products and sources) and all of the value-added services that large UK retailers require. The current competitive structure of the food system is such as to give strong incentives to differentiation strategies. Evidence from the economic literature on market differentiation suggests that the degree and kind of differentiation (vertical/quality versus horizontal/feature) in the food marketing channel will depend on several interplaying factors: the form and preference parameters of the demand function; competitive pressure at the vertical and horizontal levels; the forms of vertical governance structures; and power asymmetries between upstream and downstream firms in the channel. In any case, differentiation is likely to be associated with a high degree of concentration and market power. A theoretical general finding is that equilibria in differentiated markets are not stable, and that a welfare assessment is difficult, given that the net welfare effect of differentiation depends on the degree of market power (and the associated monopoly inefficiencies) held by firms at equilibrium, consumer preferences for differentiated products, and the form of the differentiation cost function. With regard to the market for fresh produce, it has been shown that a differentiation strategy in this sector might benefit retailers more than in other sectors, due to the absence of brand policies (and consequently of conflicting vertical strategies) by suppliers. Results from the presented case study seem to be consistent with the theoretical findings.
In the sketched marketing channel, made up of the vertical channel interface between "FP Grower" and "FP Marketing" and the final retailer, the retailer is the leading actor of the differentiation policy and the one who mostly benefits from it. In the analysed case, it is identified that higher product differentiation can add value to the channel. As predicted by the theory, the differentiation strategy can be carried out because the power asymmetry in the channel favours the party (the retailer) that possesses the resources (consumer and market segmentation information, economic strength, and managerial skill) required to make the differentiation policy succeed. The theory also predicts that the vertical governance form must be such as to give sufficient incentives to upstream channel partners to comply with the retailer's differentiation policy. In the example, the annual contractual arrangement gives growers a benefit, in terms of sales planning and assurance, which offsets the relationship disadvantages due to the retailer's buying power. The channel organisation also leaves the marketing intermediary (FP Marketing) the right incentives to make the specific investments required for the success of the differentiation policy. A general result of the study is that when retailers engage in product differentiation, it is more likely that the terms of channel relationships shift from collaborative to competitive types, with the power imbalance becoming the disciplinary means by which vertical coordination is achieved and maintained. As a consequence, the relationship marketing idea that channel partners look for equitable collaborative relations seems to be contradicted by the evidence that, for suppliers, it could be wise to agree to some inequity as the cost of doing business (Corsten and Kumar, 2005), especially when smart large retailers successfully carry out competitive strategies with positive spillover effects on upstream firms.
This viewpoint is shared by Hingley (2005b) and Hingley et al. (2005) in their analysis of fresh food chain supplier-supermarket relationships, where acceptance of channel asymmetry is advocated. Following this, the questions to be answered are how much power can be allowed in the system without being a threat to general social welfare, and how to assess the anticompetitive effects of power imbalance in the channel in antitrust contexts?
[SECTION: Design/methodology/approach] The paper used a single dyadic case study across two countries (Italy and the UK): the primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers.
[SECTION: Findings] The current differentiation strategies in the fresh produce (fruit, vegetable, and salad) industry are analysed in the light of the new procurement policies carried out by retailers at the global level. These retailers pay growing attention to product differentiation and innovation as a means of putting new value (rather than simply ripping out costs) into the supply chain. On a theoretical level, the main findings of the literature on product differentiation and market structure are reviewed in order to assess the opportunities and the possible welfare effects of differentiation strategies in the food market. On an empirical level, the current structure and organisation of the fresh produce market are analysed, using both data at the aggregate level and the findings of a case study. The case study takes a single dyadic case approach, in which a primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers. The findings of the study indicate that in the fresh produce industry there are good opportunities for successful differentiation strategies. Nevertheless, actors at the different vertical stages of the marketing channel derive very different advantages from them, depending on their "power" to lead the channel. Moreover, the fact that product differentiation tends to foster an oligopolistic market structure might have generally negative welfare effects. Differentiation strategies are pervasive in market economies and are a powerful means of obtaining competitive advantages. This is because the "master" of competitive advantage offers superior value which "stems from offering lower prices than competitors for equivalent benefits; or providing unique benefits that more than offset a higher price" (Porter, 1985, p. 3). Firms create value by cost leadership or differentiation (Porter, 1985).
Using the latter strategy, firms differentiate their product to avoid ruinous price competition and seek some form of monopoly rent. Differentiation offers firms market power, naturally resolving the Bertrand paradox (whereby two undifferentiated players reduce price in order to capture the market, but find themselves in a Nash equilibrium without profit). The industrial economics literature focuses on the effects of differentiation strategies on market structure, firms' performance, and welfare (Beath and Katsoulacos, 1991). A basic tenet is the distinction made between horizontal and vertical differentiation. Products are said to be horizontally differentiated when, if offered at the same price, consumers, if asked to do so, would rank them differently, showing different preferences for different varieties. Instead, they are said to be vertically differentiated if, when offered at the same price, all consumers choose to purchase the same one, that of highest quality. Horizontal and vertical differentiation lead to quite different general results in terms of market structure. Horizontal differentiation is the implicit assumption at the core of models of monopolistic competition, and has basically given rise to two classes of models, based on the assumptions of symmetric consumer preferences (or a representative consumer) and of asymmetric preferences. In the case of symmetric preferences, one brand is an equally good substitute for any other, and the consumer's actual choice will depend on income and relative prices. When preferences are asymmetric, brands are not all equally substitutable: if a consumer's ideal brand is i, then the consumer prefers brands that are "near" to i in terms of its specification (i.e. in the space of product characteristics, in the Lancasterian lexicon) more than those that are "far" from it.
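The resolution of the Bertrand paradox through differentiation can be illustrated with a minimal numerical sketch (not drawn from the paper; the linear demand form q_i = a - p_i + b·p_j and all parameter values are illustrative assumptions). With substitutability b < 1, best-response iteration converges to a Nash equilibrium price strictly above marginal cost, whereas homogeneous Bertrand competition would drive price down to cost:

```python
# Sketch of a differentiated Bertrand duopoly (illustrative assumptions):
# demand q_i = a - p_i + b * p_j, constant marginal cost c, 0 < b < 1.
# Each firm's best response maximises (p_i - c) * (a - p_i + b * p_j),
# giving p_i = (a + c + b * p_j) / 2; iterating converges to the Nash price.

def bertrand_nash(a=10.0, c=2.0, b=0.5, iters=200):
    """Find the symmetric price equilibrium by best-response iteration."""
    p1 = p2 = c  # start from marginal-cost pricing
    for _ in range(iters):
        p1, p2 = (a + c + b * p2) / 2, (a + c + b * p1) / 2
    return p1, p2

p1, p2 = bertrand_nash()
print(f"equilibrium price = {p1:.2f}, markup over cost = {p1 - 2.0:.2f}")
# closed form: p* = (a + c) / (2 - b) = 12 / 1.5 = 8, so the markup is 6
```

With these parameters both firms price at 8 against a marginal cost of 2; as b rises towards 1 (less differentiation) the markup shrinks, recovering the zero-profit Bertrand outcome in the limit.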
Asymmetric preferences are assumed in location models, whereas symmetric preferences are assumed in models grounded in the Chamberlin paradigm. The simplest and seminal location model is the Hotelling model of a spatial duopoly, establishing the famous principle of minimum differentiation. Successive studies have shown that the Nash equilibrium in the Hotelling model relies on its restrictive assumptions, such as the zero conjectural variation assumption, and prices and the number of firms being fixed exogenously. When these assumptions are relaxed, a unique Nash equilibrium does not necessarily occur. D'Aspremont et al. (1979), for example, starting from a different assumption on the initial location of the firms, show that the Hotelling model allows for a solution where the sellers seek to move as far away from each other as possible. In the free-entry circular model of Salop (1979), equilibrium is found where each firm earns zero profits and firms are symmetrically located around the circumference of the circle. The Chamberlin (1933) large-group model leads to the classical long-run monopolistic equilibrium, in which profits are zero and the "dd" curve is tangential to the average cost curve. As long as the "dd" curve that each firm faces still has some negative slope, each firm will produce at a point above the level of minimum average cost. Models postulating horizontal differentiation generally yield equilibria characterised by many firms earning zero profits and prices above marginal costs. These models raise the question of whether the market will produce too many or too few brands as compared with the social optimum, an issue previously addressed by Spence (1976).
The result is generally a suboptimal number of firms/products, with too many or too few firms in the Chamberlin representative consumer model (depending on the parameters of the model) and unambiguously too much variety in the localised-competition circle model. While a perfect equilibrium is often problematic in the horizontal case, a perfect equilibrium exists in the vertical case, consistent with the finiteness property, which states that at equilibrium there is a limit to the number of products for which price can exceed unit variable cost and which have a positive share of the market. The finiteness property was introduced by Shaked and Sutton (1983). The markets in which this is a feature of equilibrium are referred to as natural oligopolies. In the model of Shaked and Sutton, the two conditions are that the unit variable costs associated with increased quality rise more slowly than consumers' willingness to pay for it, and that the main burden of quality improvement falls on fixed rather than variable costs. An important development of this model is that of Shaked and Sutton (1987), which demonstrates that a weak version of the finiteness property still holds when a mix of horizontal and vertical differentiation is accounted for (the pervasive situation in the real world, where product differentiation never falls neatly under the ideal type of vertical or horizontal). Summarising, differentiation is always a source of market imperfection and welfare loss. In the case of pure horizontal differentiation, these effects are mainly linked to inefficient scales of production or to suboptimal product variety, while the market structure approaches the competitive one. In the vertical (or mixed) case, the negative welfare effects are linked to the oligopolistic structure emerging as the market equilibrium.
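The excess-variety result in the circular model can be seen in a back-of-the-envelope sketch (the standard textbook version of Salop (1979) with unit-mass consumers, transport cost t and entry cost F; the numbers are assumptions, not from the paper). Free entry yields n* = sqrt(t/F) firms, while the welfare-optimal number is sqrt(t/F)/2, so the market delivers exactly twice the optimal variety:

```python
# Salop (1979) circular city, textbook version (illustrative assumptions):
# unit mass of consumers on a circle, linear transport cost t, fixed entry
# cost F, marginal cost c. Equilibrium price is c + t/n; free entry drives
# profit t/n^2 - F to zero, while a planner minimises n*F + t/(4n).
import math

def salop(t=1.0, F=0.0625, c=0.5):
    n_market = math.sqrt(t / F)          # zero-profit (free-entry) firm count
    n_optimal = 0.5 * math.sqrt(t / F)   # minimises entry plus transport costs
    price = c + t / n_market             # equilibrium price: cost plus t/n
    return n_market, n_optimal, price

n_mkt, n_opt, p = salop()
print(f"free-entry firms = {n_mkt:.0f}, optimal = {n_opt:.0f}, price = {p:.2f}")
# with t = 1, F = 0.0625: n* = 4 firms vs 2 optimal; price = 0.5 + 0.25 = 0.75
```

The two-to-one ratio of entrants to the optimum holds for any t and F in this specification, which is what makes the "too much variety" verdict unambiguous in the circle model.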
The limit theorems describing horizontal differentiation state that, in the limit as the market gets large enough, an arbitrarily large number of firms, each with a very small market share, could co-exist in equilibrium. When carrying out differentiation policies, firms will earn supernormal profits even though the competitive game is based on the assumptions of non-cooperative Bertrand behaviour and free entry. This result is in contrast to both the structure-conduct-performance paradigm and entry-deterrence theory, and is an example of a case where structure (the number of firms) and performance are endogenously determined. The economic literature just mentioned refers to the analysis of one industry at a time, that is, to the analysis of competition and market structure at the horizontal level (inter-brand competition). In the food sector, the set of prices, qualities, and varieties that actually faces final consumers depends on the strategies carried out by different actors at different stages of the distribution channel. These strategies are the result of horizontal as well as vertical competition. Vertical competition has traditionally been addressed by the channel literature, modelling different channel structures in a manufacturer-retailer relationship.
Traditionally, four ideal types of structure have been considered (Choi, 1996): the exclusive dealer channel (one manufacturer supplying one retailer); the monopoly common retailer channel (two manufacturers supplying the same unique retailer); the monopoly manufacturer channel (a unique manufacturer supplying two retailers); and the duopoly common retailer channel (two manufacturers both supplying two retailers).

The topic of the channel literature has been the analysis of channel coordination/control problems between a manufacturer and its retailers and the analysis of vertical strategic interaction, the latter defined in terms of "the direction of channel member's reaction to the action of its channel partners within a given demand structure" (Lee and Staelin, 1997, p. 185). Previous literature, taking the bargaining power of manufacturers for granted, has focused on the incentive schemes used by manufacturers to induce retailers to choose the strategies that maximize total channel profit, while appropriating the largest share of it. Choi (1991), for example, discusses the different forms of governance for the achievement of maximum channel profit. Because such studies have generally been applied to non-grocery sectors with few national brands and frequent exclusive selling agreements, the problems of channel coordination with regard to differentiating as well as pricing behaviours in a multi-manufacturer, multi-retailer setting (the typical channel setting for the food industry) have received little attention.
Starting from the insights of Choi (1991), successive works have explicitly addressed the problem of channel coordination and differentiation in grocery sectors (Avenel and Caprice, 2006; Choi, 1996; Choi and Coughlan, 2006; Ellickson, 2004; Lee and Staelin, 1997).

Choi (1991) first analyses a channel structure with multiple-brand dealers, called common retailers, which fits well the typical structure of food retailing (department stores, supermarkets, and convenience stores). He studies a duopoly model of manufacturers who sell their products through a common independent retailer. He considers three different rules of the duopoly game, which account for different power-balance scenarios within the channel:

1. A manufacturer Stackelberg game (in which markets are characterised by leaders and followers), where the manufacturers play the role of Stackelberg leaders with respect to the retailer by taking the retailer's reaction function into consideration for their respective wholesale price decisions.

2. A vertical Nash game, where neither the manufacturer nor the retailer can influence the counterpart's price decision (i.e. the manufacturer conditions its wholesale price on the retail price and vice-versa).

3. A retailer Stackelberg game, where retailers play the role of Stackelberg leaders.

While the first and the third games apply to situations in which few powerful manufacturers (retailers) supply (buy from) many retailers (manufacturers), the second game fits a situation where power is quite balanced in the relationship. Choi solves these models under both the assumption of linearity and of nonlinearity of the demand function, finding contradictory results. Moreover, he solves the models under different assumptions about the degree of product substitutability between the manufacturers' brands, so as to introduce the analysis of the effect of product differentiation on channel competition.
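The logic of these power-balance games can be illustrated in a deliberately stripped-down setting: one manufacturer, one retailer, linear demand q = a - b*p, and zero costs (a simplification for exposition only; Choi's model has duopoly manufacturers and a common retailer, and the parameter values below are assumptions).

```python
# Illustrative demand parameters (assumed).
a, b = 100.0, 1.0

# Integrated-channel benchmark: max p*(a - b*p)  =>  p = a/(2b).
p_int = a / (2 * b)
profit_int = p_int * (a - b * p_int)

# Vertical Nash: each side picks its own margin taking the other's as given.
# With manufacturer margin w and retailer margin m (p = w + m), the symmetric
# first-order conditions give w = m = a/(3b), so p = 2a/(3b).
w_nash = m_nash = a / (3 * b)
p_nash = w_nash + m_nash
profit_nash = p_nash * (a - b * p_nash)   # total channel profit

# Manufacturer Stackelberg: the leader anticipates the retailer's reaction
# p(w) = (a/b + w)/2, giving w = a/(2b) and p = 3a/(4b) (double marginalisation).
w_st = a / (2 * b)
p_st = (a / b + w_st) / 2
profit_st = p_st * (a - b * p_st)         # total channel profit

# Leadership raises the retail price above the Nash level, and total channel
# profit is higher without leadership (the retailer-led game is the mirror image).
print(p_int, p_nash, p_st)
print(profit_int, profit_nash, profit_st)
```

Even this minimal sketch reproduces two qualitative comparisons familiar from this literature: Stackelberg leadership yields a higher retail price than the vertical Nash game, and total channel profit is largest when no one leads (and larger still under integration).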
Also in this case the results are affected by the form of the demand function, with contradictory results (for instance, he finds that less differentiation leads to increased prices and profits for all the members of the channel).

Choi (1996) extends the previous model by introducing a differentiated duopoly common retailer channel. He analyses the pricing strategies of duopoly manufacturers who produce differentiated products and duopoly retailers who sell both products and carry out store differentiation strategies. Both product and store differentiation are assumed to be horizontal and, as in the previous work, three games are considered (vertical Nash, manufacturer Stackelberg, and retailer Stackelberg). The assumed demand function is adjusted so as to explicitly take into account the two differentiation levels (introducing two parameters, for product and store differentiation) and to overcome the contradictory results of the previous model as regards channel profit and differentiation. The Stackelberg games are quite different from the previous article because, besides the vertical competition, two horizontal levels of competition must be modelled: the manufacturer level and the retailer level. Accordingly, the equilibrium concept employed is the subgame-perfect Stackelberg equilibrium. The results attained by the model are summarised in the following seven propositions (Choi, 1996, pp. 125-129):

1. A Stackelberg channel leadership by either manufacturer or retailer results in higher retail prices than those of the Nash game.

2. Given a set of differentiation parameters, a channel member benefits by playing the Stackelberg leader at the expense of the other channel member, who becomes the follower.

3. Total channel profit is larger when there is no channel leadership. However, vertical Nash is not a stable structure, because each channel member has an incentive to become a leader.

4.
Wholesale prices (retail margins) increase as products (stores) are more differentiated. On the other hand, wholesale prices (retail margins) decrease as stores (products) are more differentiated. Overall, retail prices increase as products and stores are more differentiated.

5. Product (store) differentiation benefits manufacturers (retailers), but at the same time hurts retailers (manufacturers). Therefore, manufacturers want more product differentiation and less store differentiation, while retailers want the reverse.

6. Product (store) differentiation and the manufacturer (retailer) Stackelberg leadership have a positive synergy effect on the manufacturer (retailer) profits.

7. The total profit-maximizing combinations of product and store differentiation are not stable because each channel member has an incentive to differentiate unilaterally.

These results are consistent with the general wisdom that differentiation is used to mitigate price competition and that it tends to produce negative welfare effects. In the analysed case the combined vertical-horizontal competition produces non-stable equilibria that fail to maximise total channel profit as a consequence of the conflicting interests of retailers and manufacturers, thereby opening the question of whether a cooperative solution could lead to welfare improvements. Moreover, the sketched channel structure fits the current situation of food marketing channels, both in the double level of differentiation and in the vertical power asymmetry that pushes towards non-cooperative vertical forms of coordination, where both parties seek to take the leadership (and retailers actually seem to accomplish it).

Avenel and Caprice (2006) model a vertical structure with a vertically differentiated duopoly at the manufacturer level and two retailers who differentiate through the chosen product line (i.e. each of them sells one or both of the high and low qualities offered by the manufacturers).
The focus is on the analysis of the effects of different vertical contractual arrangements on product-line differentiation, given different settings of vertical strategic interaction and different levels of costs for quality. Even if this model seems better suited to the non-grocery sector (given the assumption of manufacturer channel leadership and the kinds of contractual arrangements examined, that is, exclusive dealing, vertical integration, and franchise fees), it can be of some interest for those segments of the food market, such as the new functional and nutraceutical products, that imply a vertical differentiation strategy fed by heavy sunk investments in research and development by powerful food companies.

Choi and Coughlan (2006) investigate the positioning problem of private labels considering the differentiation strategies carried out by national brands, and the consequent product-line pricing strategies carried out by the retailer. They model a manufacturer Stackelberg game where the manufacturer determines the wholesale price and the quality level of his national brand and the retailer chooses:

* the optimal level of vertical differentiation of her store brand from the national brand;
* the degree of substitutability between the national and the store brand;
* the retail margin for the national brands; and
* the price of the store brand.

The equilibrium concept is a subgame-perfect equilibrium in which the second-stage price equilibrium is reached immediately after the differentiation decisions. In order to simultaneously take into account the effects of horizontal and vertical differentiation, the demand function used in the model is derived from a consumer utility function that contains a preference parameter for each product (vertical differentiation) and a parameter measuring the degree of substitutability with respect to other products.
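A standard way to write down such a utility-based demand system is the quadratic-utility form, in the spirit of (though not identical to) the specification Choi and Coughlan use: preference parameters a_i carry the vertical (quality) dimension and a single gamma in (0, 1) carries substitutability. The sketch below (illustrative parameter values assumed) inverts the first-order conditions p_i = a_i - q_i - gamma*q_j into direct demands.

```python
# Quadratic utility: U = a1*q1 + a2*q2 - (q1**2 + q2**2 + 2*gamma*q1*q2)/2
# minus expenditure p1*q1 + p2*q2. The FOCs give inverse demand
# p_i = a_i - q_i - gamma*q_j; solving the two equations yields:
def demand(a1, a2, p1, p2, gamma):
    det = 1 - gamma ** 2
    q1 = (a1 - gamma * a2 - p1 + gamma * p2) / det
    q2 = (a2 - gamma * a1 - p2 + gamma * p1) / det
    return q1, q2

# Vertical dimension: at equal prices the higher-preference brand sells more.
print(demand(a1=12, a2=10, p1=4, p2=4, gamma=0.5))

# Horizontal dimension: the closer the substitutes (gamma near 1), the more
# demand a given 0.1 price cut by brand 2 pulls away from brand 1.
for g in (0.2, 0.9):
    print(g, demand(10, 10, 4.0, 3.9, gamma=g))
```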
The results of the model for the case of two national brands and one store brand suggest that if the quality levels of the two national brands are equal and they are substantially horizontally differentiated, imitating either brand is optimal for the private label. However, when the national brands are allowed to be vertically differentiated, the private label is better off imitating the higher-quality brand. Positioning in between is never an optimal solution. In contrast, when the two national brands are horizontally undifferentiated, the private label's best response is to horizontally differentiate from both national brands. A consequence of these results is that the more the national brands differentiate, the more store brands carry out imitative strategies, leading to head-to-head competition that pushes national brands towards further differentiation and/or advertising investments. Because high differentiation and advertising investment are sources of market power, these findings are consistent with the store brand literature that has suggested that the anticompetitive effects of store brands can be greater than the competitive ones (Cotterill and Putsis, 2000; Kim and Parker, 1997, 1999).

To complete this short review of the main findings attained so far in the literature on differentiation and marketing channels, it is worth quoting a recent study by Ellickson (2004), who empirically applies Sutton's theory of endogenous sunk costs and vertical differentiation to the supermarket industry in the USA. During the 1980s and 1990s, the consolidation process in this industry was driven by the introduction of innovative automated distribution and procurement systems. If one assumes that the level of concentration is determined by the economies of scale and scope associated with these innovations, as markets grow (and these economies are exploited) the level of concentration should decrease.
In contrast, in about 50 spatially defined markets in the USA, the evidence is of a stable small number of firms (three to six) capturing the majority of the market, independent of the population, with a competitive fringe of smaller retailers capturing a minor share of the market (Ellickson, 2004). Ellickson (2004) builds and tests a model demonstrating that such a structure is a real "natural oligopoly" stemming from a competitive game among the leader firms based on growing vertical differentiation associated with increasing sunk costs. In his model, supermarkets compete by offering a greater variety of products (where variety is considered a purely vertical form of product differentiation). This implies larger stores and therefore larger sunk costs that discourage entry by other firms. As a consequence, the quality provided by the oligopolists (proxied by store size) should increase with the size of the market. In other terms, high concentration and escalation in quality both seem to be characteristic features of the supermarket industry.

The previous section has shown how, in order to maintain their competitive advantage, firms continuously increase their quality effort, both in the horizontal competitive game (manufacturers-to-manufacturers and retailers-to-retailers) and in the vertical competitive game (manufacturers-to-retailers). Once a differentiation strategy has been initiated, it continues through time, especially when quality (vertical) rather than feature (horizontal) differentiation is involved. Consistent with the general findings of economic theory, the channel literature suggests that vertical differentiation, more than horizontal, tends to be associated with a high degree of industry concentration and market power (Hingley and Lindgreen, 2002; White, 2000).
In any case, the equilibria (prices and market structures) at any level of the channel depend on a complex interplay between strategies carried out at the horizontal and at the vertical level; power asymmetries between upstream and downstream firms; and the kinds of governance structures along the channel (Hingley, 2005a).

With regard to the fresh produce sector, at least three hints can be drawn from the general findings described:

1. The sector of fresh produce offers retailers a wide range of possibilities to increase product variety, and therefore it can be a core category in the differentiating efforts carried out by supermarkets in the horizontal competitive arena. Examples of fresh produce variety improvement are: new formats and packaging; standards, such as organic, fair trade, non-GMO, and so on; longer shelf life through bio- and nano-technologies or enhanced storage and handling systems; improved technological foods, such as functional and nutraceutical products; IV Gamma products; de-seasonality (i.e. making seasonal products available throughout the year); typical products with an origin denomination; and ethnic products.

2. Because the main fresh produce suppliers do not generally have their own supplier brand, in their differentiating strategies retailers do not have to take into account strategic reactions by the upstream counterparties, and hence are better able to entirely appropriate the competitive advantage stemming from the differentiation.

3. Because of the general weakness of the fresh produce sector structure, retailers can easily assume the leadership of the channel and therefore impose transaction governance forms that can accomplish the following goals: maximizing the channel profit; giving themselves the power to appropriate the larger share of the profit; and leading suppliers to comply with retailers' differentiating strategies without a real vertical contractual integration.
The multiple chain retailers dominate the market for fresh produce in the UK; they have the biggest market share in fresh fruit and vegetables, providing 84 per cent of all UK retail sales (Mintel, 2005). There is steady growth in value sales of fresh produce in the UK, which marks it out against a general decline in most food commodities. This trend partly reflects the changing shopping habits of UK consumers, but is also driven by the proactive role the supermarkets have taken. The multiples are keen to develop their profile as suppliers of healthy eating products, but are also using various strategies to drive interest in the fresh sector, such as introducing exclusive new varieties or new packaging. Mintel (2005) identify "interest in fresh produce source and origin" from consumers, but note that price most often determines purchase decisions, with supermarket competitive pressure forcing prices and margins down. The "everyday low pricing" strategies used by retailers have kept prices down across many basic categories. Such strategies enable the supermarkets to be seen to be offering value for money when compared to competitors. Building value in the fresh produce sector is difficult, and price therefore remains the main differentiator for the consumer. Also, the essential nature of some products means that some fruit and vegetables have been vulnerable to retail pricing strategies. However, branding and product differentiation will be of key importance to growth and adding value to the market. On this evidence, differentiating foods as being local and/or regional could be beneficial to producers when marketing their produce and should enable them to obtain premium prices.

There is relatively little supplier proprietary branding in the UK fresh produce market. The availability and seasonality of fresh produce make it difficult for supplier-branded produce to retain an on-shelf presence.
Retailer own-branding has been of key importance to the development strategies of the multiples, who have segmented the fruit and vegetable market with, for example, their good/better/best/organic own-label ranges.

In the UK, supermarkets (both directly and through their intermediaries) set both the agenda and the price for the rest of the supply chain (Hingley, 2005a). UK growers feel that the price control exerted by dominant multiple retailers is having a profound effect on their industry, and are again looking to both new markets and external agencies for support on this matter. In the UK, differentiation takes place in the vertical competitive context.

The UK fresh produce supply chain has undergone numerous changes in the last decade, with large supermarket retailers becoming increasingly powerful. The implementation of modern business practices has helped improve efficiency in the UK fresh produce supply chain. This has allowed the chain to break out of the commodity trap and take the fresh produce category out of the commodity trading environment (Fearne and Hughes, 2000, p. 120) by means of innovation and value creation (White, 2000). The overall trend is towards the UK fresh produce industry being dominated by a few large corporations operating on a national level, with some even operating on a European or global scale. Most recently, the takeover of one of the largest UK food retailers, Safeway, by Wm. Morrison has resulted in four major supermarket chains (Tesco, Sainsbury, Wal-Mart-Asda, and Morrisons) accounting for three-quarters of retail grocery sales (IGD, 2005). Tesco alone take a third of the value of UK grocery sales.

A further development has been a change from market transactions to market relationships, networks, and interactions (Bourlakis, 2001; Hingley and Lindgreen, 2002; Lindgreen and Hingley, 2003; Lindgreen, 2003; Kotzab, 2001).
From the retailer perspective (and largely initiated by them) has come the development of category management as a key managerial tool (Lindgreen et al., 2000). O'Keefe and Fearne (2002), for example, contend that their analysis of the application of category leadership in the fresh produce industry by UK retailer Waitrose shows that it is possible to successfully apply an integrated network-based relationship approach to what was considered to be a commodity sector.

Category management (where a preferred supplier takes greater responsibility for the entire supply chain of a given product category) has become universally applied by retailers. The premise is that category management facilitates greater levels of collaboration in vertical supply channels and underpins relationship development (Barnes et al., 1995). This occurs where a single (lead) supplier organises the supply (from all the suppliers) of a given product category to the retailer. However, such initiatives are seen by some as simply moving risk and cost onto the supplier and away from the retailer (Allen, 2001). This is an argument put forward by Dapiran and Hogarth-Scott (2003), who contend that the development of category management has not necessarily increased cooperation in supply chains and can be used by retailers to reinforce power and control.

Retailers are looking for fewer and larger suppliers who can work with them in vertical "partnership" (Hingley, 2001; White, 2000). This approach delivers considerable advantages for retailers, in that they can influence entire food channels for given products through singular dyadic interfaces with nominated channel-leading intermediaries or "super-middlemen" (Hingley, 2005a). Reducing the number of points of contact for supply not only derives benefits in terms of transaction cost savings, but also relational benefits in dealing with fewer, but closer, "partner" suppliers.
This has resulted in an overriding trend towards supply chain concentration in a market determined by the standards of large-scale retailers.

In Italy, fresh produce accounts for more than 24 per cent of the total value of agricultural production (valued at prices received by farmers), and contributes to the positive part of the food trade balance, with a self-sufficiency rate equal to 114 per cent. Notwithstanding these positive data, the Italian fresh produce industry is in the middle of a deep crisis. In its latest report on the industry, the CIA (2006), the main farmers' union, reported the loss of Italian leadership in the European market. During the last ten years, the Italian share of the total fresh produce markets of European partners (EU-15) has continuously decreased; meanwhile imports into Italy registered a sharp increase of 56 per cent from the EU-25 and of 112 per cent from outside the EU. The loss of competitiveness has been due to the enduring weakness of the production structure (small firms) and to poor logistic structures compared with the recent consolidation and innovation processes within Italy's traditional competitor (Spain) and in the new fresh produce specialised countries (Egypt, Morocco, Tunisia, and Turkey).
Also, new entrants to the European fresh produce market like China, Chile, Argentina, and Uruguay seem to be stronger at both the structural and the organisational level.

When asked how to overcome this crisis and recover a leading position in the domestic as well as the export market, farmers' associations, experts, and public officers of the Ministry for Agriculture all give three simple answers: horizontal integration at the agricultural level for achieving network externalities in production and selling activities; quality improvement and better exploitation of the comparative advantages Italian producers have with respect to weather, natural conditions, and product variety; and better relationships with the big retailers that sell more than 60 per cent of the production and are the only actors in the distribution channel that can actually "persuade" consumers to reward the Italian product.

Differentiation strategies by leading supermarkets, along with a preference for Italian suppliers, could help Italian farmers to exit the crisis. Evidence from both consumers' attitudes and retailers' marketing strategies seems to indicate that this is a practicable way. It is interesting to note also that collaborating growers in Southern Italy are taking the branding initiative in fresh produce, whereby most recently in Sicily a consortium of Sicilian fruit growers from the Calatino south Simeto District have unveiled a new brand, Puraterra. The name is a reference to the pure soil and the high quality of the organic produce, cultivated on a total area of 100,000 hectares. Blood oranges, grapes, cactus figs, peaches, and artichokes will be supplied under the new brand (Fresh Info, 2007).

A recent survey by INDICOD (www.indicod-ecr.it/) on consumer preferences for fresh produce shows at least five notable attitudes:

1.
As regards product attributes, consumers rank these as follows:
* sensory attributes (taste, appearance, and smell);
* price;
* convenience (time and energy saving in food shopping, storage, preparation, and disposal); and
* origin and traceability.

2. As regards organic products, almost half of the sample bought these at least once in the last month.

3. When explicitly asked, 65 per cent of consumers disclose a preference for Italian products.

4. Overall, 60 per cent of consumers in the sample are happy with the non-packaged, unbranded display of produce with free service, but would like to receive more information on origin and product characteristics.

5. Young women in the sample are strongly interested in the convenience attributes of produce, with a high willingness to pay for them.

Currently in Italy, the market for produce is led by supermarkets, though with a still-large share (about 38 per cent) covered by traditional trade. Over the past 15 years, supermarkets carried out price-based competition, enhancing procurement efficiency (mainly by operating their own distribution centres) and shrinking suppliers' margins. This led to the substitution of Italian suppliers (with poor production structures and management capability) with foreign suppliers (mainly Spanish) that better fit buyers' organisational and cost needs. Nevertheless, some changes have recently occurred, with growing attention to differentiation and local procurement policies.

Currently, about 55 per cent of the Italian grocery market is covered by five groups with the following shares: Coop Italia, 17.1 per cent; Carrefour Italia (with four different flags/formats: Carrefour, GS, Diperdi, Docks Market), 10.4 per cent; Auchan, 9.6 per cent; Conad, 6 per cent; and Esselunga, 8.3 per cent (Dati IRI, 2006). During the last ten years, all these leading groups, except Auchan, launched an own-branded line of high-quality fresh produce, and an own-branded line of organic fresh produce.
Moreover, both Carrefour and Conad started a line of Italian traditional products ("Terre d'Italia" for Carrefour, and "Percorso Italia" for Conad) and all increased the offer of IV gamma products (fresh-cut, prepared, dressed, and ready-to-eat), with a growing range and larger displays.

Summarising, the Italian market for fresh produce seems to be split between an unbranded/undifferentiated segment, where sensory attributes and price are the key levers of competition; and a highly differentiated/semi-branded segment, where quality, variety, origin, convenience, and every sort of added value are the key elements for obtaining premium prices and competitive advantages. The second segment, of course, is likely the one of interest to Italian growers struggling to maintain their market shares.

It was decided to approach the question of product differentiation in vertical channel structures using a single dyadic case approach, in which a primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers. It is believed that this constitutes the most appropriate method to emphasise detail, depth, and insight, as well as understanding and explanation (Patton, 1987; Sayre, 2001). In this research, semi-structured personal interviews were used in order to elicit respondents' thoughts, opinions, attitudes, and motivational ideas. The two organisations, which form the key vertical channel interaction, were selected for their ability to contribute new insights, as well as in the expectation that these insights would be replicated (Perry, 1998). The cases were selected for being typical examples (Miles and Huberman, 1994; Patton, 1987) of fresh produce supply (the grower) and fresh produce category management intermediary (the buying and value-adding organisation in "partnership" with multiple retailer customers).
Interview questions were standardised around a number of topics (Dibb et al., 1997). Questions were kept deliberately broad to allow interviewees as much freedom in their answers as possible (Glaser and Strauss, 1967). The findings are taken from the words of the respondents themselves, thereby aiding the aim of the research whilst gaining much more information than would have been available from alternative research methods (Corbin and Strauss, 1998). Within-case analysis involved writing up a summary of each individual case in order to identify important case-level phenomena.

The principal areas for exploration identified in the preceding literature are the following two: the impact of vertical competition on channel coordination; and competitive advantage through value-adding in vertical chains (cost leadership and differentiation strategies through branding, production and technological systems, and seasonal variation opportunities). The two halves of the vertical-channel dyad are as follows.

FP Marketing (name changed for reasons of anonymity) is the central marketing organisation for its own and associated growers' produce against customer programmes and is based in the UK, with an annual sales turnover of over 100 million euros. It coordinates crop production and volumes both in the UK and overseas, and supplies consolidated and value-added (packaged) fresh produce to large multiple retailers in the UK, under retailers' own label. Overall, 90 per cent of their business is in supply to UK multiple retailers; the remainder constitutes product that does not meet retailer specifications, and is marketed to UK wholesalers or processors. The group also has its own transport company. The product range is protected (e.g. glasshouse) fresh produce crops (tomatoes, cucumbers, peppers, and so forth) from the UK, Northern Europe, and Eastern Europe, and the same range from protected/unprotected sources in Southern Europe.
The emphasis for this study is on tomato production and marketing, and the vertical relationship of a tomato producer and value-adding intermediary to multiple retailer customers.

FP Grower (name changed for reasons of anonymity) is a southern Italy-based family grower business of some 20 types of fresh produce, most notably tomatoes, and has an annual sales turnover of 10 million euros. They have 180 ha (80 ha glasshouse and 100 ha open field, in order to manage demand throughout the year). They grow and undertake primary value-adding functions (washing and basic packing in preparation for delivery). Their principal dedicated and "partner" customer is FP Marketing. A total of 60 per cent of their product goes to intermediaries like FP Marketing and 35 per cent direct to retailers, with the remaining 5 per cent to wholesale markets; 80 per cent of FP Grower's customers are overseas (UK, Austria, Switzerland, and Germany). FP Marketing does invest some funds in varietal and agronomic development in southern Italy, but owns no means of production in the region. The two organisations concerned in this case analysis are, therefore, separately owned and managed. Interviews with FP Marketing involved the commercial director and development director, and the interview with FP Grower involved the commercial director.

FP Marketing are a category management supplier to UK multiple retailer chains. As the category management process has evolved in the UK, their principal retail customers have pushed FP Marketing to focus on and category manage the supply of fresh produce protected crops; hence they have foregone their interests in other crops (for example, in leafy salads), but have gained business in (notably) tomatoes. This meant that FP Marketing was able to expand their remit, responsibilities, and sourcing of tomatoes on behalf of their predominant retail customers:

We have got northern European growers, right the way from Belgium to the UK.
We have now expanded into Poland for new sources, and that covers the UK seasonal supply/demand (FP Marketing).

What the category management system does is allow retailers to coordinate category supply through category leaders like FP Marketing. The intermediary organisation benefits from more business, but must take on an enhanced role and associated responsibilities, and this is becoming increasingly expensive for suppliers. However, FP Marketing does see this as part of a (service-based) value-adding process:

We have to provide services; we have to provide more resources. That is our added value to the customer. We supply all that technical (input), the agronomists, the ideas, the trials, the NPD, all of this development. There is not a charge for that (FP Marketing).

Multiple retail chains will specify quality assurance through determination of produce from accredited sources. These are normally European baseline production standards, environmental growing conditions, and so forth; and different customers in different countries may expect variations by different accredited standards. FP Grower, for example, offers four types of certification, including EUREPGAP, and is trialling a limited acreage of organic certified produce. With respect to further utilising quality and production systems as a means of market differentiation, UK retailers have developed their own further standards, additional to or inclusive of baseline accreditation:

[Named UK retailer] have got a (named variety of) Cherry tomato, and we grow that for them. And [a] particular grower has got [additional] standards in his greenhouse. Normally, it is EUREPGAP standard throughout the industry, but [named grower] has gone the next level which is [named retailer's standard].
This is the next level in terms of technical excellence (FP Marketing).

Production and quality standards are also important to FP Grower, but he sees variations in environmental standards, as well as other areas such as diverse labour laws not controlled by retail customers, as frustrating and undermining:

Foreign competitors [i.e. growers in other countries] take advantage from different labour regulations and different pesticide-use regulations, without a real policy of price and quality transparency being carried out by retailers. Product from [named countries] with low food safety standards is arriving [...] and sold in Italian supermarkets without clear information on its origin (FP Grower).

The category management role for FP Marketing includes managing the seasonal supply of product that takes in northern European protected crop (as described above), but also that from southern Europe. Equally important is devolved responsibility for product differentiation. Access to southern Italian tomatoes (typified by those produced by FP Grower) allows this differentiation. This region is notable for vine-ripened tomatoes. These are specific varieties, late-harvested (left on the vine until very red, mature, and full-flavoured). This source allows distinct advantages in variety, climatic conditions, and grower expertise not possible in northern Europe, producing a product with distinct taste and flavour advantages:

Generally, [the advantage is a] combination of better growing conditions, lower growing costs, and the growth technique, the tomato speciality technique [...] by harvesting something on the vine you can take it to the next stage of maturity; it will give it that extra shelf life, flavour, and life advantage.
The flavours and varieties they [southern Italian growers] are producing are market leaders (FP Marketing).

The motivation of FP Marketing is to try to add value to the products it supplies to supermarket customers in order to avoid the "commodity-trap" of being in an unbranded business, in which retailer own-label is the predominant identity:

In commodity areas supply is far greater than demand and by their nature supermarkets will use that against us. So, we work to try and put identity to products [...] and try to add value to it, and try and raise awareness with our customer. We look at varieties and taste, we try not to be in value and standard (retail lines), our ideal aspiration is to be in "special" and "finest" (retail lines) [...] because you can get a higher value for it (FP Marketing).

Remember, all of our products are our customers' [retailers'] own brand, there is no identity of our company. It is a way of promoting the grower, the variety, the techniques they are using and most importantly, the flavour. The flavours and varieties they are producing are market leaders (FP Marketing).

It is interesting to note that FP Grower does not share FP Marketing's emphasis on product specialisation based on regionality. This may be a matter of perspective, where FP Marketing are sourcing produce from many countries, varieties, types, and production methods, and FP Grower sees his produce as simply tomatoes determined by general quality standards and procurement accountability. FP Grower's motivation is to find a wide market for his produce, whilst FP Marketing, with their category management-based interaction with retail customers, identifies opportunities for sub-branding by regional identity:

We have now got customers [i.e. UK retailers] who are even putting grower's names on the packs (FP Marketing).

I think [that] they [UK retailers] see [sourcing from] Italy as a way of adding value.
It is all a way of trying to sub-brand down to the grower (FP Marketing).

In this way, retail customers in the UK (through the expertise and packaging operations of FP Marketing) are keen to differentiate both UK and overseas (e.g. southern Italian) produce as a means of further value-adding.

In terms of branding, FP Grower does have a named identity, but as this is mainly used as an identifier on outer cases for wholesale and intermediary customers, brand identity does not appear on-pack at retailer level. If there is pack identity it is with the retailer's own-label brand. FP Grower's customers collect product (using their own transport arrangements) from the farm, which is packed "on demand" to customers' specification. As a result, FP Grower does not benefit from directly attributed brand identity. FP Grower's principal customer, FP Marketing, is responsible for all of the value-adding in terms of packaging and on-pack marketing for UK retail customers. FP Grower puts loose raw material (tomatoes) into plastic returnable trays. This is collected by FP Marketing's own transport to take the produce to the UK. It is there that further value-adding takes place in terms of consolidation, grading, and packing into punnets to the specification of specific retail customers under their brand identity. So, FP Marketing also does not have brand identity on-pack; value-adding for them is derived from the kind of service elements described above (continual sourcing throughout the seasons, new varietal sourcing, consolidation, packaging, new product development, and so forth).

The vertical channel arrangement between FP Grower and FP Marketing does offer FP Grower something that they do not have from other customer sources, and that is a contractual agreement:

We have full exclusivity with them [FP Grower] in [for supply to] the UK (FP Marketing).

FP Marketing is supplied exclusively with tomatoes on the basis of an annual contract.
The contract is signed in October before planting, and delivery of the product is from March until the following October (FP Grower).

As a result, FP Grower is happy with this arrangement as it provides security of business that is not forthcoming from other customers, who provide regular business, but not price stability:

FP Marketing is the only customer who buys through contract. Other customers just order product when they need. For every order there is a price negotiation. The price is not stable because, when products come from abroad [Spain and Morocco; FP Grower is near to ports in southern Italy], the price falls suddenly, leaving no bargaining power (FP Grower).

The arrangement with FP Marketing is much the preferred way of doing business for FP Grower, as they are worried about:

The excessive power of retailers who are not interested in collaborative agreements, but only look for lower prices and higher margins (FP Grower).

This may be a further reason why FP Grower has not developed customer markets dedicated to varietal type, production method, or regional association, as these things are more difficult to achieve without further contractual/collaborative agreements. However, FP Grower is looking to expand through exploiting seasonal gaps with "UK customers interested in winter production" and to add service value through "further quality and logistic improvement". They also have more long-term thoughts about horizontal and vertical integration of their own, through producer collaboration with other growers to sell direct to the public, via retailing of a producer group's own range of produce.

Vertical coordination through a category management type system does have clear advantages for primary producers like FP Grower and intermediaries like FP Marketing, through the consistency of a planned contractual arrangement.
As this develops further, it can allow greater market differentiation (through, for example, production method or varietal specialisation, or emphasis on regional identity). However, control remains firmly in the hands of the multiple retailer customers, in whose name and identity value-adding services are conducted:

Supermarkets are very cute [clever], they outsource some of their work to us. We do their work for them, whether it [is] in inventory, in marketing, in procurement. We are continually doing that, so it is a cost that we are bearing (FP Marketing).

Fresh produce is still very price sensitive, commodity suppliers/supply chains are substitutable, and it is easy to enter the commodity fresh produce market in supply to retailers:

It [category management] is very beneficial, but that does not take away from the fact that we live and work in a very marginal [profit] industry (FP Marketing).

But category management-based supply does, in return, provide some security, as it allows intermediaries and primary suppliers to add value through service. Most of the value-adding services are conducted by the intermediary (in this case FP Marketing), and that allows more ownership of the business.

FP Marketing do acknowledge that there may be scope for more of these value-adding activities to take place closer to the country and point of production:

If they [growers/grower groups] were able to produce a finished article in Italy, pre-packed in a plastic tray, then you (they could) start driving costs out of the business (FP Marketing).

From this point Italian growers/grower groups could exploit Italian retail demand for value-added products:

If it works for us [value-adding] in northern Europe, why should it not work in the home market?
And it is closer, the costs are lower, they can deliver into those markets [within Italy] a lot cheaper (FP Marketing).

However, FP Marketing are quick to point out that to supply the retail market outside of Italy, for example the UK, it would be much harder to replace what they do in terms of providing the consolidation (and all that requires in carrying a continual, multi-seasonal, and vast range of products and sources) and all of the value-added services that large UK retailers require.

The current competitive structure of the food system is such as to give strong incentives to differentiation strategies. Evidence from the economic literature on market differentiation suggests that the degree and the kind of differentiation (vertical/quality versus horizontal/feature) in the food marketing channel will depend on several interplaying factors: the form and preference parameters of the demand function; competitive pressure at the vertical and horizontal levels; the forms of vertical governance structures; and power asymmetries between upstream and downstream firms in the channel. In any case, differentiation is likely to be associated with a high degree of concentration and market power.

A general theoretical finding is that equilibria in differentiated markets are not stable, and that a welfare assessment is difficult given that the net welfare effect of differentiation depends on the degree of market power (and the associated monopoly inefficiencies) held by firms at equilibrium, the consumer preferences for differentiated products, and the form of the differentiation cost function.

With regard to the market for fresh produce, it has been shown that a differentiation strategy in this sector might benefit retailers more than in other sectors, due to the absence of brand policies (and consequently of conflicting vertical strategies) by suppliers. Results from the presented case study seem to be consistent with these theoretical findings.
In the sketched marketing channel, made up of the vertical channel interface between "FP Grower" and "FP Marketing" and the final retailer, the retailer is the leading actor of the differentiation policy and the one who benefits most from it. In the analysed case, higher product differentiation is identified as adding to channel value. As predicted by the theory, the differentiation strategy can be carried out because the power asymmetry in the channel favours the party (the retailer) that possesses the resources (consumer and market segmentation information, economic strength, and managerial skill) required to make the differentiation policy succeed. The theory also predicts that the vertical governance form must be such as to give sufficient incentives to upstream channel partners to comply with the retailer's differentiation policy. In the example, the annual contractual arrangement gives growers a benefit, in terms of sales planning and assurance, which offsets the relationship disadvantages due to the retailer's buying power. The channel organisation also gives the marketing intermediary (FP Marketing) the right incentives to incur the specific investments required for the success of the differentiation policy.

A general result of the study is that when retailers engage in product differentiation it is more likely that the terms of channel relationships shift from collaborative to competitive types, with the power imbalance becoming the disciplinary means by which vertical coordination is achieved and maintained. As a consequence, the relationship marketing idea that channel partners look for equitable collaborative relations seems to be contradicted by the evidence that for suppliers it could be wise to agree to some inequity as the cost of doing business (Corsten and Kumar, 2005), especially when smart large retailers successfully carry out competitive strategies with positive spillover effects on the upstream firms.
This viewpoint is shared by Hingley (2005b) and Hingley et al. (2005) in analyses of fresh food chain supplier-supermarket relationships, where acceptance of channel asymmetry is advocated. Following this, the questions to be answered are how much power can be allowed in the system without threatening general social welfare, and how to assess the anticompetitive effects of power imbalance in the channel in antitrust contexts.
- First, equilibrium in differentiated markets is not stable, and a welfare assessment is difficult. Second, a differentiation strategy in the market for fresh produce might benefit retailers more than in other sectors, which seems to be consistent with the theoretical findings. Third, when retailers engage in product differentiation it is more likely that channel relationships shift from collaborative to competitive types, with the power imbalance becoming the disciplinary means by which vertical coordination is achieved and maintained.
[SECTION: Value] The current differentiation strategies in the fresh produce (fruit, vegetable, and salad) industry are analysed in the light of new procurement policies carried out by retailers at the global level. These retailers pay growing attention to product differentiation and innovation as a means to put new value (rather than simply ripping out costs) into the supply chain.

Differentiation strategies are analysed on two levels. On a theoretical level, the main findings of the literature on product differentiation and market structure are reviewed in order to assess the opportunities and the possible welfare effects of differentiation strategies in the food market. On an empirical level, the current structure and organisation of the fresh produce market are analysed, using both data at the aggregate level and the findings of a case study. The case study takes a single dyadic case approach, in which a primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers.

The findings of the study indicate that in the fresh produce industry there are good opportunities for successful differentiation strategies. Nevertheless, actors at the different vertical stages of the marketing channel derive very different advantages from them, depending on their "power" to lead the channel. Moreover, the fact that product differentiation tends to foster the oligopolistic structure of the market might have generally negative welfare effects.

Differentiation strategies are pervasive in market economies and are a powerful means of obtaining competitive advantages. This is because the "master" of competitive advantage offers superior value, which "stems from offering lower prices than competitors for equivalent benefits; or providing unique benefits that more than offset a higher price" (Porter, 1985, p. 3). Firms create value by cost leadership or differentiation (Porter, 1985).
Using the latter strategy, firms differentiate their product to avoid ruinous price competition and to seek some form of monopoly rent. Differentiation offers firms market power, naturally resolving the Bertrand paradox (whereby two undifferentiated players reduce price in order to capture the market but find themselves in a state of Nash equilibrium without profit).

The industrial economics literature focuses on the effects of differentiation strategies on market structure, firms' performance, and welfare (Beath and Katsoulacos, 1991). A basic tenet is the distinction made between horizontal and vertical differentiation. Products are said to be horizontally differentiated when, if offered at the same price, consumers, if asked to do so, would rank them differently, showing different preferences for different varieties. They are said to be vertically differentiated if, when offered at the same price, all consumers choose to purchase the same one, that of highest quality.

Horizontal and vertical differentiation lead to quite different general results in terms of market structure. Horizontal differentiation is the implicit assumption at the core of models of monopolistic competition, and has basically given rise to two classes of models, based on the assumptions of symmetric consumer preferences (or a representative consumer) and asymmetric preferences. In the case of symmetric preferences, one brand is an equally good substitute for any other and the consumer's actual choice will depend on income and relative prices. When preferences are asymmetric, brands are not all equally substitutable: if a consumer's ideal brand is i, then the consumer prefers brands that are "near" to i in terms of its specification (i.e. in the space of product characteristics, in the Lancaster lexicon) more than those that are "far" from it.
Asymmetric preferences are assumed in location models, whereas symmetric preferences are assumed in models grounded in the Chamberlin paradigm. The simplest and seminal location model is the Hotelling model of a spatial duopoly, sanctioning the famous principle of minimum differentiation. Successive studies have shown that the Nash equilibrium in the Hotelling model relies on its restrictive assumptions, such as zero conjectural variation and prices and the number of firms being fixed exogenously. When these assumptions are relaxed, a unique Nash equilibrium does not necessarily occur. D'Aspremont et al. (1979), for example, starting from a different assumption on the initial location of the firms, show that the Hotelling model allows for a solution where the sellers seek to move as far away from each other as possible. In the free-entry circular model of Salop (1979), equilibrium is found where each firm earns zero profits and firms are symmetrically located around the circumference of the circle.

The Chamberlin (1933) large group model leads to the classical long-run monopolistic equilibrium, in which profits are zero and the "dd" curve is tangential to the average cost curve. As long as the "dd" curve that each firm faces still has some negative slope, each firm will produce at a point above the level of minimum average cost. Models postulating horizontal differentiation generally yield equilibria characterised by many firms earning zero profits and prices above marginal costs. These models raise the question of whether the market will produce too many or too few brands as compared with the social optimum, an issue previously addressed by Spence (1976).
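The Salop circular model mentioned above admits a closed-form symmetric equilibrium (price p* = c + t/n; free-entry number of firms n* = √(t/F)), which can be checked numerically. A minimal sketch follows; all parameter values are illustrative, not drawn from the case study:

```python
from math import sqrt

def salop_equilibrium(t, c, F):
    """Symmetric free-entry equilibrium of the Salop (1979) circular model.

    Unit circumference, n equally spaced firms, linear transport cost t,
    marginal cost c, fixed entry cost F. At the symmetric price p = c + t/n
    each firm serves 1/n consumers, so profit is t/n**2 - F; free entry
    drives profit to zero.
    """
    n_star = sqrt(t / F)           # zero-profit number of firms
    p_star = c + t / n_star        # equilibrium price = c + sqrt(t * F)
    # The socially optimal n minimises n*F + t/(4n) (entry plus transport costs)
    n_opt = 0.5 * sqrt(t / F)
    return n_star, p_star, n_opt

# Illustrative parameters
n_star, p_star, n_opt = salop_equilibrium(t=1.0, c=0.5, F=0.01)
print(n_star)   # 10 firms enter
print(p_star)   # price 0.6, above marginal cost 0.5
print(n_opt)    # the social optimum is 5 firms
```

Note that free entry yields exactly twice the socially optimal number of firms, which is the sense in which the circular model produces unambiguously too much variety.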
The result is generally a suboptimal number of firms/products, with too many or too few firms in the Chamberlin representative consumer model (depending on the parameters of the model), and unambiguously too much variety in the localised competition circle model.

While a perfect equilibrium is often problematic in the horizontal case, a perfect equilibrium exists in the vertical case, consistent with the finiteness property, which states that at equilibrium there is a limit to the number of products for which price can exceed unit variable cost and which have a positive share of the market. The finiteness property was introduced by Shaked and Sutton (1983); the markets in which it is a feature of equilibrium are referred to as natural oligopolies. In the model of Shaked and Sutton, the two conditions are that the unit variable costs associated with increased quality rise more slowly than consumers' willingness to pay for it, and that the main burden of quality improvement falls on fixed rather than variable costs. An important development of the model by Shaked and Sutton (1987) demonstrates that a weak version of the finiteness property still holds when a mix of horizontal and vertical differentiation is accounted for (the pervasive situation in the real world, where product differentiation never falls purely under the ideal type of vertical or horizontal).

Summarising, differentiation is always a source of market imperfection and welfare loss. In the case of pure horizontal differentiation these effects are mainly linked to inefficient scales of production or to suboptimal product variety, whereas the market structure approaches the competitive one. In the vertical (or mixed) case the negative welfare effects are linked to the oligopolistic structure emerging as market equilibrium.
The limit theorems describing horizontal differentiation state that, as the market gets large enough, an arbitrarily large number of firms, each with a very small market share, could co-exist in equilibrium. When carrying out differentiation policies, firms will be earning supernormal profits even though the competitive game is based on the assumptions of non-cooperative Bertrand behaviour and free entry. This result is in contrast to both the structure-conduct-performance paradigm and entry-deterrence theory, and is an example of a case where structure (the number of firms) and performance are endogenously determined.

The economic literature just mentioned refers to the analysis of one industry at a time, that is, to the analysis of competition and market structure at the horizontal level (inter-brand competition). In the food sector, the set of prices, qualities, and varieties that actually faces the final consumers depends on the strategies carried out by different actors at different stages of the distribution channel. These strategies are the result of horizontal as well as vertical competition. Vertical competition has traditionally been addressed by the channel literature modelling different channel structures in a manufacturer-retailer relationship.
Traditionally, the following ideal types of structure have been considered (Choi, 1996): the exclusive dealer channel (one manufacturer supplying one retailer); the monopoly common retailer channel (two manufacturers supplying the same unique retailer); the monopoly manufacturer channel (a unique manufacturer supplying two retailers); and the duopoly common retailer channel (two manufacturers both supplying two retailers).

The topic of the channel literature has been the analysis of channel coordination/control problems between manufacturers and their retailers, and the analysis of vertical strategic interaction, the latter defined in terms of "the direction of channel member's reaction to the action of its channel partners within a given demand structure" (Lee and Staelin, 1997, p. 185). Previous literature, taking for granted the bargaining power of manufacturers, has focused on the incentive schemes used by manufacturers to induce retailers to choose the strategies that maximise total channel profit, while appropriating the largest share of it. Choi (1991), for example, discusses the different forms of governance for the achievement of maximum channel profit. Because such studies have generally been applied to non-grocery sectors with few national brands and frequent exclusive selling agreements, the problems of channel coordination with regard to differentiating, besides pricing, behaviours in a multi-manufacturer, multi-retailer setting (the typical channel setting for the food industry) have received little attention.
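The channel coordination problem referred to above is commonly illustrated by the double-marginalization result in a simple bilateral monopoly, a deliberate simplification of the multi-firm settings discussed here. A minimal numeric sketch, assuming linear demand q = a − p and constant marginal cost c (all symbols are illustrative):

```python
# Double marginalization in a bilateral monopoly with linear demand q = a - p.
# Closed-form solutions under three governance forms; a and c are illustrative
# parameters, not taken from any model in the text.

def manufacturer_stackelberg(a, c):
    """Manufacturer sets wholesale price w first; retailer then sets p."""
    w = (a + c) / 2              # manufacturer's optimal wholesale price
    p = (a + w) / 2              # retailer's best response: max (p - w)(a - p)
    return p, (p - c) * (a - p)  # retail price, total channel profit

def vertical_nash(a, c):
    """Neither party leads: wholesale price and retail margin set simultaneously."""
    w = (a + 2 * c) / 3          # joint solution of the two reaction functions
    m = (a - c) / 3              # retailer margin
    p = w + m
    return p, (p - c) * (a - p)

def integrated(a, c):
    """Vertically integrated benchmark: a single monopolist sets p."""
    p = (a + c) / 2
    return p, (p - c) * (a - p)

a, c = 1.0, 0.0
p_s, pi_s = manufacturer_stackelberg(a, c)   # p = 0.75, channel profit = 0.1875
p_n, pi_n = vertical_nash(a, c)              # p = 2/3,  channel profit = 2/9
p_i, pi_i = integrated(a, c)                 # p = 0.5,  channel profit = 0.25

# Leadership raises the retail price above the Nash game, and both
# uncoordinated forms leave channel profit below the integrated benchmark.
print(p_s > p_n > p_i)     # True
print(pi_i > pi_n > pi_s)  # True
```

Even in this stripped-down setting, leadership raises retail prices relative to the vertical Nash game and uncoordinated play dissipates total channel profit, consistent with the Stackelberg/Nash comparisons reviewed below.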
Starting with the previous insights of Choi (1991), successive works have explicitly addressed the problem of channel coordination and differentiation in grocery sectors (Avenel and Caprice, 2006; Choi, 1996; Choi and Coughlan, 2006; Ellickson, 2004; Lee and Staelin, 1997).

Choi (1991) first analyses a channel structure with multiple-brand dealers, called common retailers, which fits well the typical structure of food retailing (department stores, supermarkets, and convenience stores). He studies a duopoly model of manufacturers who sell their products through a common independent retailer. He considers three different rules of the duopoly game, which account for different power balance scenarios within the channel:

1. A manufacturer Stackelberg game (in which markets are characterised by leaders and followers), where the manufacturers play the role of Stackelberg leaders with respect to the retailer by taking the retailer's reaction function into consideration for their respective wholesale price decisions.

2. A vertical Nash game, where neither the manufacturer nor the retailer can influence the counterpart's price decision (i.e. the manufacturer conditions its wholesale price on the retail price and vice-versa).

3. A retailer Stackelberg game, where retailers play the role of Stackelberg leaders.

While the first and third games apply to situations in which a few powerful manufacturers (retailers) supply (buy from) many retailers (manufacturers), the second game fits a situation where power is quite balanced in the relationship. Choi solves these models under both linear and nonlinear demand functions, finding contradictory results. Moreover, he solves the models under different assumptions on the degree of product substitutability between the manufacturers' brands, so as to introduce the analysis of the effect of product differentiation on channel competition.
Also in this case the results are affected by the form of the demand function, with contradictory results (for instance, he finds that less differentiation leads to increased prices and profits for all the members of the channel).

Choi (1996) extends the previous model by introducing a differentiated duopoly common retailer channel. He analyses the pricing strategies of duopoly manufacturers who produce differentiated products and duopoly retailers who sell both products and carry out store differentiation strategies. Both product and store differentiation are assumed to be horizontal and, as in the previous work, three games are considered (vertical Nash, manufacturer Stackelberg, and retailer Stackelberg). The assumed demand function is adjusted so as to explicitly take into account the two differentiation levels (introducing two parameters, for product and store differentiation) and to overcome the contradictory results of the previous model as regards channel profit and differentiation. The Stackelberg games are quite different from the previous article because, besides the vertical competition, two horizontal levels of competition must be modelled: the manufacturer level and the retailer level. Accordingly, the equilibrium concept employed is the subgame-perfect Stackelberg equilibrium. Results attained by the model are summarised in the following seven propositions (Choi, 1996, pp. 125-129):

1. A Stackelberg channel leadership by either manufacturer or retailer results in higher retail prices than those of the Nash game.

2. Given a set of differentiation parameters, a channel member benefits by playing the Stackelberg leader at the expense of the other channel member, who becomes the follower.

3. Total channel profit is larger when there is no channel leadership. However, vertical Nash is not a stable structure, because each channel member has an incentive to become a leader.

4. Wholesale prices (retail margins) increase as products (stores) are more differentiated. On the other hand, wholesale prices (retail margins) decrease as stores (products) are more differentiated. Overall, retail prices increase as products and stores are more differentiated.

5. Product (store) differentiation benefits manufacturers (retailers), but at the same time hurts retailers (manufacturers). Therefore, manufacturers want more product differentiation and less store differentiation, while retailers want the reverse.

6. Product (store) differentiation and the manufacturer (retailer) Stackelberg leadership have a positive synergy effect on the manufacturer (retailer) profits.

7. The total profit-maximizing combinations of product and store differentiation are not stable because each channel member has an incentive to differentiate unilaterally.

These results are consistent with the general wisdom that differentiation is used to mitigate price competition and that it tends to produce negative welfare effects. In the analysed case the combined vertical-horizontal competition produces non-stable equilibria that fail to maximise total channel profit, as a consequence of the conflicting interests of retailers and manufacturers, thereby opening the question of whether a cooperative solution could lead to welfare improvements. Moreover, the sketched channel structure fits the current situation of food marketing channels, both for the double level of differentiation and for the vertical power asymmetry that pushes towards non-cooperative forms of vertical coordination, where both parties seek to take the leadership (and retailers actually seem to accomplish it).

Avenel and Caprice (2006) model a vertical structure with a vertically differentiated duopoly at the manufacturer level and two retailers who differentiate through their chosen product lines (i.e. each of them sells one or both of the high and low qualities offered by the manufacturers).
The focus is on the analysis of the effects of different vertical contractual arrangements on product-line differentiation, given different settings of vertical strategic interaction and different levels of costs for quality. Even if this model seems better suited to the non-grocery sector (given its assumption of the manufacturer as channel leader and the kinds of contractual arrangements examined, that is, exclusive dealing, vertical integration, and franchise fees), it can be of some interest for those segments of the food market, such as new functional and nutraceutical products, that imply a vertical differentiation strategy fed by heavy sunk investments in research and development by powerful food companies.

Choi and Coughlan (2006) investigate the positioning problem of private labels considering the differentiation strategies carried out by national brands, and the consequent product-line pricing strategies carried out by the retailer. They model a manufacturer Stackelberg game where the manufacturer determines the wholesale price and the quality level of his national brand and the retailer chooses:

* the optimal level of vertical differentiation of her store brand from the national brand;
* the degree of substitutability between the national and the store brand;
* the retail margin for the national brands; and
* the price of the store brand.

The equilibrium concept is a subgame-perfect equilibrium in which the second-stage price equilibrium is reached immediately after the differentiation decisions. In order to simultaneously take into account the effects of horizontal and vertical differentiation, the demand function used in the model is derived from a consumer utility function that contains a preference parameter for each product (vertical differentiation) and a parameter measuring the degree of substitutability with respect to other products.
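A demand system with exactly these ingredients, a per-product preference parameter and a substitutability parameter, is the standard linear-quadratic one. The sketch below shows how such a demand function is derived from consumer utility; this is the generic textbook form, not necessarily Choi and Coughlan's exact specification, and all parameter values are illustrative:

```python
# Linear-quadratic utility generating demand with a vertical preference
# parameter (a_i) and a horizontal substitutability parameter (gamma).
# Generic textbook specification; parameter values are illustrative.

def inverse_demand(q1, q2, a1, a2, gamma):
    """Prices from the consumer's first-order conditions dU/dq_i = p_i,
    where U = a1*q1 + a2*q2 - (q1**2 + q2**2 + 2*gamma*q1*q2)/2."""
    p1 = a1 - q1 - gamma * q2
    p2 = a2 - q2 - gamma * q1
    return p1, p2

def demand(p1, p2, a1, a2, gamma):
    """Direct demands, obtained by inverting the linear system above."""
    det = 1 - gamma ** 2
    q1 = (a1 - p1 - gamma * (a2 - p2)) / det
    q2 = (a2 - p2 - gamma * (a1 - p1)) / det
    return q1, q2

# Round-trip check: quantities -> prices -> quantities
a1, a2, gamma = 1.0, 0.8, 0.5      # product 1 has the higher quality (a1 > a2)
p1, p2 = inverse_demand(0.3, 0.2, a1, a2, gamma)
q1, q2 = demand(p1, p2, a1, a2, gamma)
print(round(q1, 10), round(q2, 10))  # recovers 0.3 and 0.2
```

Here a1 > a2 captures vertical differentiation (every consumer values product 1 more at equal prices), while gamma between 0 and 1 captures horizontal substitutability: at gamma = 0 the products are independent, and as gamma approaches 1 they become homogeneous.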
The results of the model for the case of two national brands and one store brand suggest that if the quality levels of the two national brands are equal and they are substantially horizontally differentiated, imitating either brand is optimal for the private label. However, when the national brands are allowed to be vertically differentiated, the private label is better off imitating the higher quality brand. Positioning in between is never an optimal solution. In contrast, when the two national brands are horizontally undifferentiated, the private label's best response is to horizontally differentiate from both national brands. A consequence of these results is that the more the national brands differentiate, the more store brands carry out imitative strategies, leading to head-to-head competition that pushes national brands towards further differentiation and/or advertising investments. Because high differentiation and advertising investment are sources of market power, these findings are consistent with the store brand literature that has suggested that the anticompetitive effects of store brands can be greater than the competitive ones (Cotterill and Putsis, 2000; Kim and Parker, 1997, 1999).
To complete this short review of the main findings attained so far in the literature on differentiation and marketing channels, it is worth quoting a recent study by Ellickson (2004), who empirically applies Sutton's theory of endogenous sunk costs and vertical differentiation to the supermarket industry in the USA. During the 1980s and 1990s, the consolidation process in this industry was driven by the introduction of innovative automated distribution and procurement systems. If one assumes that the level of concentration is determined by the economies of scale and scope associated with these innovations, as markets grow (and these economies are exploited) the level of concentration should decrease.
In contrast, in about 50 spatially defined markets in the USA, the evidence is of a stable small number of firms (three to six) capturing the majority of the market, independent of the population, with a competitive fringe of smaller retailers capturing a minor share of the market (Ellickson, 2004). Ellickson (2004) builds and tests a model demonstrating that such a structure is a real "natural oligopoly" stemming from a competitive game among the leader firms based on growing vertical differentiation associated with increasing sunk costs. In his model, supermarkets compete by offering a greater variety of products (where variety is considered a purely vertical form of product differentiation). This implies larger stores and therefore larger sunk costs that discourage entry by other firms. As a consequence, the quality provided by the oligopolists (proxied by store size) should increase with the size of the market. In other words, high concentration and escalation in quality both seem to be characteristic features of the supermarket industry.
The previous section has shown how, in order to maintain their competitive advantage, firms continuously increase their quality effort, both in the horizontal competitive game (manufacturers-to-manufacturers and retailers-to-retailers) and in the vertical competitive game (manufacturers-to-retailers). Once a differentiation strategy has been initiated, it continues through time, especially when quality (vertical) rather than feature (horizontal) differentiation is involved. Consistent with the general findings of economic theory, the channel literature suggests that vertical differentiation, more than horizontal differentiation, tends to be associated with a high degree of industry concentration and market power (Hingley and Lindgreen, 2002; White, 2000).
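The logic of this "natural oligopoly" result can be sketched with a stylised version of Sutton's endogenous sunk cost argument (an illustration under assumed functional forms, not Ellickson's exact model). Let S be market size, let a store's share of variable profit s depend on its quality u relative to rivals', and let quality require a sunk cost σu^γ:

```latex
\pi_i \;=\; S \cdot s\!\left(u_i, u_{-i}\right) \;-\; \sigma u_i^{\gamma},
\qquad \text{with } s \text{ increasing in } u_i .

% Zero-profit (escalation) condition at a symmetric equilibrium:
S \cdot s\!\left(u^{*}\right) \;=\; \sigma \left(u^{*}\right)^{\gamma}
\;\;\Longrightarrow\;\;
u^{*} \;\propto\; S^{1/\gamma}.
```

As S grows, incumbents escalate quality (here proxied by store size and variety) rather than accommodate entry, so sunk costs rise with market size and the number of leading firms stays bounded, consistent with the observed three-to-six firm structure independent of population.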
In any case, the equilibria (prices and market structures) at any level of the channel depend on a complex interplay between strategies carried out at the horizontal and vertical levels; power asymmetries between upstream and downstream firms; and the kinds of governance structures along the channel (Hingley, 2005a). With regard to the fresh produce sector, at least three implications can be drawn from the general findings described:
1. The sector of fresh produce offers retailers a wide range of possibilities to increase product variety, and therefore it can be a core category in the differentiating efforts carried out by supermarkets in the horizontal competitive arena. Examples of fresh produce variety improvement are: new formats and packaging; standards, such as organic, fair trade, non-GMO, and so on; longer shelf life through bio- and nano-technologies or enhanced storage and handling systems; improved technological foods, such as functional and nutraceutical products; IV Gamma products; de-seasonality (i.e. making seasonal products available throughout the year); typical products with an origin denomination; and ethnic products.
2. Because the main fresh produce suppliers do not generally have their own supplier brand, in their differentiating strategies retailers do not have to take into account strategic reactions by the upstream counterparties, and hence are more able to entirely appropriate the competitive advantage stemming from the differentiation.
3. Because of the general weakness of the fresh produce sector structure, retailers can easily assume the leadership of the channel and therefore impose transaction governance forms that can accomplish the following goals: maximizing the channel profit; giving themselves the power to appropriate the larger share of the profit; and leading suppliers to comply with retailers' differentiating strategies without a real vertical contractual integration.
The multiple chain retailers dominate the market for fresh produce in the UK; they have the biggest market share in fresh fruit and vegetables, providing 84 per cent of all UK retail sales (Mintel, 2005). There is steady growth in value sales of fresh produce in the UK, which marks it out against a general decline in most food commodities. This trend partly reflects the changing shopping habits of UK consumers, but is also driven by the proactive role the supermarkets have taken. The multiples are keen to develop their profile as suppliers of healthy eating products, but are also using various strategies to drive interest in the fresh sector, such as introducing exclusive new varieties or new packaging. Mintel (2005) identify "interest in fresh produce source and origin" from consumers, but note that price most often determines purchase decisions, with supermarket competitive pressure forcing prices and margins down. The "everyday low pricing" strategies used by retailers have kept prices down across many basic categories. Such strategies enable the supermarkets to be seen to be offering value for money when compared to competitors. Building value in the fresh produce sector is difficult, and price therefore remains the main differentiator for the consumer. Also, the essential nature of some products means that some fruit and vegetables have been vulnerable to retail pricing strategies. However, branding and product differentiation will be of key importance to growth and adding value to the market. On this evidence, differentiating foods as being local and/or regional could therefore be beneficial to producers when marketing their produce and should enable them to obtain premium prices.
There is relatively little supplier proprietary branding in the UK fresh produce market. The availability and seasonality of fresh produce make it difficult for supplier-branded produce to retain an on-shelf presence.
Retailer own-branding has been of key importance to the development strategies of the multiples, who have segmented the fruit and vegetable market with their own-label ranges (for example, good/better/best/organic). In the UK, supermarkets (both directly and through their intermediaries) set both the agenda and the price for the rest of the supply chain (Hingley, 2005a). UK growers feel that the price control exerted by dominant multiple retailers is having a profound effect on their industry, and are again looking to both new markets and external agencies for support on this matter. In the UK, differentiation takes place in the vertical competitive context.
The UK fresh produce supply chain has undergone numerous changes in the last decade, with large supermarket retailers becoming increasingly powerful. The implementation of modern business practices has helped improve efficiency in the UK fresh produce supply chain. This has allowed the chain to break out of the commodity trap and take the fresh produce category out of the commodity trading environment (Fearne and Hughes, 2000, p. 120) by means of innovation and value creation (White, 2000). The overall trend is towards the UK fresh produce industry being dominated by a few large corporations operating on a national level, with some even operating on a European or global scale. Most recently, the takeover of one of the largest UK food retailers, Safeway, by Wm. Morrison has resulted in four major supermarket chains (Tesco, Sainsbury, Wal-Mart-Asda, and Morrisons) accounting for three-quarters of retail grocery sales (IGD, 2005). Tesco alone takes a third of the value of UK grocery sales.
A further development has been a change from market transactions to market relationships, networks, and interactions (Bourlakis, 2001; Hingley and Lindgreen, 2002; Lindgreen and Hingley, 2003; Lindgreen, 2003; Kotzab, 2001).
From the retailer perspective (and largely initiated by retailers) there has been the development of category management as a key managerial tool (Lindgreen et al., 2000). O'Keefe and Fearne (2002), for example, contend that their analysis of the application of category leadership in the fresh produce industry by UK retailer Waitrose shows that it is possible to successfully apply an integrated network-based relationship approach to what was considered to be a commodity sector.
Category management (where a preferred supplier takes greater responsibility for the entire supply chain of a given product category) has become universally applied by retailers. The premise is that category management facilitates greater levels of collaboration in vertical supply channels and underpins relationship development (Barnes et al., 1995). This occurs where a single (lead) supplier organises the supply (from all the suppliers) of a given product category to the retailer. However, such initiatives are seen by some as simply moving risk and cost onto the supplier and away from the retailer (Allen, 2001). This is an argument put forward by Dapiran and Hogarth-Scott (2003), who contend that the development of category management has not necessarily increased cooperation in supply chains and can be used by retailers to reinforce power and control.
Retailers are looking for fewer and larger suppliers who can work with them in vertical "partnership" (Hingley, 2001; White, 2000). This approach delivers considerable advantages for retailers, in that they can influence entire food channels for given products through singular dyadic interfaces with nominated channel-leading intermediaries or "super-middlemen" (Hingley, 2005a). Reducing the number of points of contact for supply derives not only benefits in terms of transaction cost savings, but also relational benefits in dealing with fewer, but closer, "partner" suppliers.
This has resulted in an overriding trend towards supply chain concentration in a market determined by the standards of large-scale retailers.
In Italy, fresh produce accounts for more than 24 per cent of the total value of agricultural production (valued at prices received by farmers), and contributes to the positive part of the food trade balance sheet, with a self-sufficiency rate equal to 114 per cent. Notwithstanding these positive data, the Italian fresh produce industry is in the middle of a deep crisis. In its latest report on the industry, the CIA (2006), the main farmers' union, reported the loss of Italian leadership in the European market. During the last ten years, the Italian share of the total fresh produce markets of European partners (EU-15) has continuously decreased; meanwhile, imports into Italy registered a sharp increase of 56 per cent from the EU-25 and of 112 per cent from outside the EU. The loss of competitiveness has been due to the enduring weakness of the production structure (small firms) and to poor logistic structures compared with the recent consolidation and innovation processes within Italy's traditional competitor (Spain) and in the new fresh produce specialised countries (Egypt, Morocco, Tunisia, and Turkey).
Also, new entrants to the European fresh produce market like China, Chile, Argentina, and Uruguay seem to be stronger at both the structural and organisational levels.
When asked how to overcome this crisis and recover a leading position in the domestic as well as the export market, farmers' associations, experts, and public officers of the Ministry for Agriculture all give three simple answers: horizontal integration at the agricultural level for achieving network externalities in production and selling activities; quality improvement and better exploitation of the comparative advantages Italian producers have with respect to weather, natural conditions, and product variety; and better relationships with big retailers, which sell more than 60 per cent of the production and are the only actors in the distribution channel that can actually "persuade" consumers to reward the Italian product.
Differentiation strategies by leading supermarkets, along with a preference for Italian suppliers, could help Italian farmers to exit the crisis. Evidence from both consumers' attitudes and retailers' marketing strategies seems to indicate that this is a practicable way. It is interesting to note also that collaborating growers in southern Italy are taking the branding initiative in fresh produce: most recently in Sicily, a consortium of Sicilian fruit growers from the Calatino south Simeto District have unveiled a new brand, Puraterra. The name is a reference to the pure soil and the high quality of the organic produce, cultivated on a total area of 100,000 hectares. Blood oranges, grapes, cactus figs, peaches, and artichokes will be supplied under the new brand (Fresh Info, 2007).
A recent survey by INDICOD (www.indicod-ecr.it/) on consumer preferences for fresh produce shows at least five notable attitudes:
1. As regards product attributes, consumers rank these as follows:
* sensory attributes (taste, appearance, and smell);
* price;
* convenience (time and energy saving in food shopping, storage, preparation, and disposal); and
* origin and traceability.
2. As regards organic products, almost half of the sample bought these at least once in the last month.
3. When explicitly asked, 65 per cent of consumers disclose their preference for Italian products.
4. Overall, 60 per cent of consumers in the sample are happy with the non-packaged, unbranded display of produce with free service, but would like to receive more information on origin and product characteristics.
5. Young women in the sample are strongly interested in the convenience attributes of produce, with a high willingness to pay for them.
Currently in Italy, the market for produce is led by supermarkets, although traditional trade still covers a large share (about 38 per cent). Over the past 15 years, supermarkets carried out price-based competition, enhancing procurement efficiency (mainly by operating their own distribution centres) and shrinking suppliers' margins. This led to the substitution of Italian suppliers (with poor production structures and management capabilities) by foreign suppliers (mainly Spanish) that better fit buyer organisational and cost needs. Nevertheless, some changes have recently occurred, with growing attention to differentiation and local procurement policies.
Currently, about 55 per cent of the Italian grocery market is covered by five groups with the following shares: Coop Italia, 17.1 per cent; Carrefour Italia (with four different flags/formats: Carrefour, GS, Diperdi, Docks Market), 10.4 per cent; Auchan, 9.6 per cent; Conad, 6 per cent; and Esselunga, 8.3 per cent (IRI data, 2006). During the last ten years, all these leading groups, except Auchan, launched an own-branded line of high-quality fresh produce, and an own-branded line of organic fresh produce.
Moreover, both Carrefour and Conad started a line of Italian traditional products ("Terre d'Italia" for Carrefour, and "Percorso Italia" for Conad), and all increased the offer of IV gamma products (fresh cut, prepared, dressed, and ready-to-eat), with a growing range and larger display.
Summarising, the Italian market for fresh produce seems to be split between an unbranded/undifferentiated segment, where sensory attributes and price are the key levers of competition; and a highly differentiated/semi-branded segment, where quality, variety, origin, convenience, and every sort of added value are the key elements for obtaining premium prices and competitive advantages. The second segment, of course, might be the one of interest to Italian growers struggling to maintain their market shares.
It was decided to approach the question of product differentiation in vertical channel structures using a single dyadic case approach, in which a primary producer is engaged in "partner" supply to a principal category management intermediary for channel-leading multiple retailers. It is believed that this constitutes the most appropriate method to emphasise detail, depth, and insight, as well as understanding and explanation (Patton, 1987; Sayre, 2001). In this research, semi-structured, personal interviews were used in order to elicit respondents' thoughts, opinions, attitudes, and motivational ideas. The two organisations, which form the key vertical channel interaction, were selected for their ability to contribute new insights, as well as in the expectation that these insights would be replicated (Perry, 1998). The cases were selected as typical examples (Miles and Huberman, 1994; Patton, 1987) of fresh produce supply (the grower) and fresh produce category management intermediary (the buying and value-adding organisation in "partnership" with multiple retailer customers).
Interview questions were standardised around a number of topics (Dibb et al., 1997). Questions were kept deliberately broad to allow interviewees as much freedom in their answers as possible (Glaser and Strauss, 1967). The findings are taken from the words of the respondents themselves, thereby aiding the aim of the research, whilst gaining much more information than would have been available from alternative research methods (Corbin and Strauss, 1998). Within-case analysis involved writing up a summary of each individual case in order to identify important case-level phenomena.
The principal areas for exploration identified in the preceding literature are the following: the impact of vertical competition on channel coordination; and competitive advantage through value-adding in vertical chains (cost leadership and differentiation strategies through branding, production and technological systems, and seasonal variation opportunities). The two halves of the vertical-channel dyad are as follows.
FP Marketing (name changed for reasons of anonymity) is the central marketing organisation for its own and associated growers' produce against customer programmes and is based in the UK, with an annual sales turnover of over 100 million euros. It coordinates crop production and volumes both in the UK and overseas, and supplies consolidated and value-added (packaged) fresh produce to large multiple retailers in the UK, under retailers' own-label. Overall, 90 per cent of their business is in supply to UK multiple retailers; the remainder constitutes product that does not meet retailer specifications and is marketed to UK wholesalers or processors. The group also has its own transport company. The product range is protected (e.g. glasshouse) fresh produce crops (tomatoes, cucumbers, peppers, and so forth) from the UK, Northern Europe, and Eastern Europe, and the same range from protected/unprotected sources in Southern Europe.
The emphasis for this study is on tomato production and marketing, and the vertical relationship of a tomato producer and value-adding intermediary to multiple retailer customers.
FP Grower (name changed for reasons of anonymity) is a southern Italy-based family grower business of some 20 types of fresh produce, most notably tomatoes, and has an annual sales turnover of 10 million euros. They have 180 ha (80 ha glasshouse and 100 ha open field), in order to manage demand throughout the year. They grow and undertake primary value-adding functions (washing and basic packing in preparation for delivery). Their principal dedicated and "partner" customer is FP Marketing. A total of 60 per cent of their product goes to intermediaries like FP Marketing and 35 per cent direct to retailers, with the remaining 5 per cent to wholesale markets; 80 per cent of FP Grower's customers are overseas (UK, Austria, Switzerland, and Germany). FP Marketing does invest some funds in varietal and agronomic development in southern Italy, but owns no means of production in the region. The two organisations concerned in this case analysis are, therefore, separately owned and managed. Interviews with FP Marketing concerned the commercial director and development director, and the interview with FP Grower concerned the commercial director.
FP Marketing are a category management supplier to UK multiple retailer chains. As the category management process has evolved in the UK, their principal retail customers have pushed FP Marketing to focus on and category manage the supply of fresh produce protected crops; hence they have foregone their interests in other crops (for example, in leafy salads), but have gained business in (notably) tomatoes. This meant that FP Marketing was able to expand their remit, responsibilities, and sourcing of tomatoes on behalf of their predominant retail customers:
We have got northern European growers, right the way from Belgium to the UK.
We have now expanded into Poland for new sources, and that covers the UK seasonal supply/demand (FP Marketing).
What the category management system does is allow retailers to coordinate category supply through category leaders like FP Marketing. The intermediary organisation benefits from more business, but must take on an enhanced role and associated responsibilities, and this is becoming increasingly expensive for suppliers. However, FP Marketing does see this as part of a (service-based) value-adding process:
We have to provide services; we have to provide more resources. That is our added value to the customer. We supply all that technical (input), the agronomists, the ideas, the trials, the NPD, all of this development. There is not a charge for that (FP Marketing).
Multiple retail chains will specify quality assurance by requiring produce from accredited sources. These are normally European baseline production standards, environmental growing conditions, and so forth; and different customers in different countries may expect variations by different accredited standards. FP Grower, for example, offers four types of certification, including EUREPGAP, and is trialling a limited acreage of organic certified produce. With respect to further utilising quality and production systems as a means of market differentiation, UK retailers have developed their own further standards, additional to or inclusive of baseline accreditation:
[Named UK retailer] have got a (named variety of) Cherry tomato, and we grow that for them. And [a] particular grower has got [additional] standards in his greenhouse. Normally, it is EUREPGAP standard throughout the industry, but [named grower] has gone the next level which is [named retailer's standard].
This is the next level in terms of technical excellence (FP Marketing).
Production and quality standards are also important to FP Grower, but he sees variations in environmental standards, as well as other areas such as diverse labour laws not controlled by retail customers, as frustrating and undermining:
Foreign competitors [i.e. growers in other countries] take advantage from different labour regulations and different pesticide-use regulations, without a real policy of price and quality transparency being carried out by retailers. Product from [named countries] with low food safety standards is arriving [...] and sold in Italian supermarkets without clear information on its origin (FP Grower).
The category management role for FP Marketing includes managing the seasonal supply of product that takes in northern European protected crops (as described above), but also those from southern Europe. Equally important is devolved responsibility for product differentiation. Access to southern Italian tomatoes (typified by those produced by FP Grower) allows this differentiation. This region is notable for vine-ripened tomatoes. These are a specific variety, late-harvested (left on the vine until very red, mature, and full-flavoured). This source allows distinct advantages in variety, climatic conditions, and grower expertise not possible in northern Europe, in order to produce a product with distinct taste and flavour advantages:
Generally, [the advantage is a] combination of better growing conditions, lower growing costs, and the growth technique, the tomato speciality technique [...] by harvesting something on the vine you can take it to the next stage of maturity, it will give it that extra shelf life, flavour, and life advantage.
The flavours and varieties they [southern Italian growers] are producing are market leaders (FP Marketing).
The motivation of FP Marketing is to try to add value to the products it supplies to supermarket customers in order to avoid the "commodity trap" of being in an unbranded business, in which retailer own-label is the predominant identity:
In commodity areas supply is far greater than demand and by their nature supermarkets will use that against us. So, we work to try and put identity to products [...] and try to add value to it, and try and raise awareness with our customer. We look at varieties and taste, we try not to be in value and standard (retail lines), our ideal aspiration is to be in "special" and "finest" (retail lines) [...] because you can get a higher value for it (FP Marketing).
Remember, all of our products are our customers' [retailers'] own brand, there is no identity of our company. It is a way of promoting the grower, the variety, the techniques they are using and most importantly, the flavour. The flavours and varieties they are producing are market leaders (FP Marketing).
It is interesting to note that FP Grower does not share FP Marketing's emphasis on product specialisation based on regionality. This may be a matter of perspective: FP Marketing are sourcing produce from many countries, varieties, types, and production methods, while FP Grower sees his produce as simply tomatoes determined by general quality standards and procurement accountability. FP Grower's motivation is to find a wide market for his produce, whilst FP Marketing, with their category management-based interaction with retail customers, identifies opportunities for sub-branding by regional identity:
We have now got customers [i.e. UK retailers] who are even putting growers' names on the packs (FP Marketing).
I think [that] they [UK retailers] see [sourcing from] Italy as a way of adding value.
It is all a way of trying to sub-brand down to the grower (FP Marketing).
In this way, retail customers in the UK (through the expertise and packaging operations of FP Marketing) are keen to differentiate both UK and overseas (e.g. southern Italian) produce as a means of further value-adding.
In terms of branding, FP Grower does have a named identity, but as this is mainly used as an identifier on outer cases for wholesale and intermediary customers, the brand identity does not appear on-pack at retailer level. If there is pack identity, it is with the retailer's own-label brand. FP Grower's customers collect product (using their own transport arrangements) from them at the farm, packed "on demand" to customers' specifications. As a result, FP Grower does not benefit from directly attributed brand identity. FP Grower's principal customer, FP Marketing, is responsible for all of the value-adding in terms of packaging and on-pack marketing for UK retail customers. FP Grower puts loose raw material (tomatoes) into plastic returnable trays. This is collected by FP Marketing's own transport to take the produce to the UK. It is there that further value-adding takes place in terms of consolidation, grading, and packing into punnets to the specification of specific retail customers under their brand identity. So, FP Marketing also does not have brand identity on-pack; value-adding for them is derived from the kind of service elements described above (continual sourcing throughout the seasons, new varietal sourcing, consolidation, packaging, new product development, and so forth).
The vertical channel arrangement between FP Grower and FP Marketing does offer FP Grower something that they do not have from other customer sources, and that is a contractual agreement:
We have full exclusivity with them [FP Grower] in [for supply to] the UK (FP Marketing).
FP Marketing is supplied exclusively with tomatoes on the basis of an annual contract.
The contract is signed in October, before planting, and delivery of the product is from March until the following October (FP Grower).
As a result, FP Grower is happy with this arrangement, as it provides security of business that is not forthcoming from other customers, who provide regular business, but not price stability:
FP Marketing is the only customer who buys through contract. Other customers just order product when they need it. For every order there is a price negotiation. The price is not stable: when products come from abroad [Spain and Morocco; FP Grower is near to ports in southern Italy], the price falls suddenly, leaving no bargaining power (FP Grower).
The arrangement with FP Marketing is much the preferred way of doing business for FP Grower, as they are worried about:
The excessive power of retailers who are not interested in collaborative agreements, but only look for lower prices and higher margins (FP Grower).
This may be a further reason why FP Grower has not developed customer markets dedicated to varietal type, production method, or regional association, as these things are more difficult to achieve without further contractual/collaborative agreements. However, FP Grower is looking to expand through exploiting seasonal gaps with "UK customers interested in winter production" and to add service value through "further quality and logistic improvement". They also have longer-term thoughts about horizontal and vertical integration of their own, through producer collaboration with other growers to sell direct to the public, via retailing of a producer group's own range of produce.
Vertical coordination through a category management-type system does have clear advantages for primary producers like FP Grower and intermediaries like FP Marketing, through the consistency of a planned contractual arrangement.
As this further develops, it can allow further market differentiation (through, for example, production method, varietal specialisation, or emphasis on regional identity). However, control remains firmly in the hands of the multiple retailer customers, in whose name and identity value-adding services are conducted:
Supermarkets are very cute [clever], they outsource some of their work to us. We do their work for them, whether it [is] in inventory, in marketing, in procurement. We are continually doing that, so it is a cost that we are bearing (FP Marketing).
Fresh produce is still very price-sensitive and commodity suppliers/supply chains are substitutable, and it is easy to enter the commodity fresh produce market in supply to retailers:
It [category management] is very beneficial, but that does not take away from the fact that we live and work in a very marginal [profit] industry (FP Marketing).
But category management-based supply does, in return, provide some security, as it allows intermediaries and primary suppliers to add value through service. Most of the value-adding services are conducted by the intermediary (in this case FP Marketing), and that allows more ownership of the business.
FP Marketing do acknowledge that there may be scope for more of these value-adding activities to take place closer to the country and point of production:
If they [growers/grower groups] were able to produce a finished article in Italy, pre-packed in a plastic tray, then you [they] could start driving costs out of the business (FP Marketing).
From this point, Italian growers/grower groups could exploit Italian retail demand for value-added products:
If it works for us [value-adding] in northern Europe, why should it not work in the home market?
And it is closer, the costs are lower, they can deliver into those markets [within Italy] a lot cheaper (FP Marketing). However, FP Marketing are quick to point out that, to supply the retail market outside of Italy (for example, the UK), it would be much harder to replace what they do in terms of providing consolidation (and all that this requires in carrying a continual, multi-seasonal and vast range of products and sources) and all of the value-added services that large UK retailers require. The current competitive structure of the food system is such as to give strong incentives to differentiation strategies. Evidence from the economic literature on market differentiation suggests that the degree and kind of differentiation (vertical/quality versus horizontal/feature) in the food marketing channel will depend on several interplaying factors: the form and preference parameters of the demand function; competitive pressure at the vertical and horizontal level; the form of vertical governance structures; and power asymmetries between upstream and downstream firms in the channel. In any case, differentiation is likely to be associated with a high degree of concentration and market power. A general theoretical finding is that equilibria in differentiated markets are not stable and that a welfare assessment is difficult, given that the net welfare effect of differentiation depends on the degree of market power (and the associated monopoly inefficiencies) held by firms at equilibrium, consumer preferences for differentiated products, and the form of the differentiation cost function. With regard to the market for fresh produce, it has been shown that a differentiation strategy in this sector might benefit retailers more than in other sectors, due to the absence of brand policies (and consequently of vertical conflicting strategies) by suppliers. Results from the presented case study seem to be consistent with the theoretical findings.
In the sketched marketing channel, made of the vertical channel interface between "FP Grower" and "FP Marketing" and the final retailer, the retailer is the leading actor of the differentiation policy and the one who mostly benefits from it. In the analysed case, higher product differentiation is identified as adding value to the channel. As predicted by the theory, the differentiation strategy can be carried out because the power asymmetry in the channel favours the party (the retailer) who possesses the resources (consumer and market segmentation information, economic strength and managerial skill) required to make the differentiation policy succeed. The theory also predicts that the vertical governance form must be such as to give sufficient incentives to upstream channel partners to comply with the retailer's differentiation policy. In the example, the annual contractual arrangement gives growers a benefit, in terms of sales planning and assurance, which offsets the relationship disadvantages due to the retailer's buying power. The channel organisation also gives the marketing intermediary (FP Marketing) the right incentives to incur the specific investment required for the success of the differentiation policy. A general result of the study is that when retailers engage in product differentiation it is more likely that the terms of channel relationships shift from collaborative to competitive types, with the power imbalance becoming the disciplinary means by which vertical coordination is achieved and maintained. As a consequence, the relationship marketing idea that channel partners look for equitable collaborative relations seems to be contradicted by the evidence that, for suppliers, it could be wise to accept some inequity as the cost of doing business (Corsten and Kumar, 2005), especially when smart large retailers successfully carry out competitive strategies with positive spillover effects on the upstream firms.
This viewpoint is shared by Hingley (2005b) and Hingley et al. (2005) in their analysis of fresh food chain supplier-supermarket relationships, where acceptance of channel asymmetry is advocated. Following this, the questions to be answered are how much power can be allowed for in the system without being a threat to general social welfare, and how to assess the anticompetitive effects of power imbalance in the channel in antitrust contexts.
- This article was based on a single case study.
[SECTION: Purpose] According to the resource-based view (Wernerfelt, 1984), to be effective in a competitive market environment an organization requires an important intangible core competence: employee competencies. Employee competencies refer to those traits, skills or attributes that employees need to perform their jobs more effectively (Soderquist et al., 2010; Campion et al., 2011). A competent workforce is believed to produce higher quality products (Ahuja and Khamba, 2008), support innovation (Siguaw et al., 2006) and reduce turnover costs (Joo and Shim, 2010). To develop and maintain employee competencies for present and future requirements, an organization must emphasize human resource development (HRD). Werner and DeSimone (2006) defined HRD as a set of systematic and planned activities designed by an organization to provide its members with the opportunities to learn the skills necessary to meet current and future job demands. According to Werner and DeSimone (2006), "HRD practices are the programs, which are designed to be strategically oriented to the organizational process for managing the development of human resources to contribute to the overall success of the organization" (p. 26). The rationale for using HRD practices to support business objectives is quite straightforward: enhancing or unleashing needed employee expertise (Chermack and Kasshanna, 2007). HRD practices continuously improve employees' expertise and performance through the existing practices of training, performance appraisal and organizational development initiatives (Garavan, 2007). HRD alone is not sufficient to enhance employee competencies to a greater level, because not all knowledge and skills obtained from HRD practices are properly transferred (Froehlich et al., 2014).
Thus, an organization should create a learning culture, so that employees can share, acquire and create knowledge and skills, which can modify their behaviour. Organizational learning culture refers to a set of norms and values about the functioning of an organization that supports systematic organizational learning, so that individual learning, teamwork, collaboration, creativity and knowledge distribution have collective meaning and value (Torres-Coronas and Arias-Oliva, 2008, p. 177). Thus, organizational learning culture could directly or indirectly influence employee competencies. The present study integrates the resource-based view (Wernerfelt, 1984) and the organizational perspective of learning to create a strong theoretical foundation by exploring the effects of team building, employee empowerment and organizational learning culture on employee competencies. The study provides empirical evidence to bridge the knowledge gaps with regard to the relationship between HRD practices, organizational learning culture and employee competencies. Even though HRD practices and organizational learning culture are considered critical concepts and practices, most of the existing literature focuses on the conceptual level and considers commitment, productivity and profitability as primary outcome variables. Few studies have attempted to examine the moderating role of organizational learning culture on individual outcomes such as commitment, engagement and satisfaction. Thus, the significance of the study lies in providing empirical validation of the moderating role of organizational learning culture in the relationship between HRD practices and employee competencies. This research attempts to answer the following research questions: RQ1. Is there any relationship between HRD practices and employee competencies? RQ2. Does organizational learning culture moderate the relationship between HRD practices and employee competencies?
From the above research questions, the following research objectives were derived: to study the impact of HRD practices on the enhancement of competencies of employees of the cement industry; and to assess the moderating role of organizational learning culture in the relationship between HRD practices and employee competencies. Team building Klein et al. (2009) define team building as "the formal and informal team-level practices that focus on improving social relations and clarifying roles as well as solving task and interpersonal problems that affect team functioning". In this intervention, team members experientially learn, by examining their structures, norms, values and interpersonal dynamics, to increase their skills for effective performance (Senecal et al., 2008). In the literature, there is consensus that there are four approaches/components to team building: goal setting; role clarification; interpersonal relations; and problem solving. A brief explanation is presented below. Goal setting: this component is designed specifically to strengthen a team member's motivation to achieve team goals and objectives (Salas et al., 2004). Team members are expected to become involved in action planning to identify ways to achieve those goals (Aga et al., 2016). Role clarification: this entails clarifying individual role expectations, group norms and the shared responsibilities of team members (Klein et al., 2009). Role clarification can be used to improve team and individual characteristics (i.e. by reducing role ambiguity) and work structure by negotiating, defining and adjusting team member roles (Mathieu and Schulze, 2006). Interpersonal relations: this component assumes that teams with fewer interpersonal conflicts function more effectively than teams with a greater number of interpersonal conflicts. It involves an increase in teamwork skills, such as mutual supportiveness, communication and sharing of feelings (Aga et al., 2016).
Problem solving: the fourth component emphasizes the identification of major problems in the team's tasks to enhance task-related skills. It is an intervention in which team members identify major problems, generate relevant information, engage in problem solving and action planning, and implement and evaluate action plans (Aga et al., 2016; Beebe and Masterson, 2014). Effective team building intervention in an organization enhances an individual's cognitive outcomes, like teamwork competencies, and affective outcomes, like trust and team potency, whereas at the team level the outcomes are coordination and effective communication (Tannenbaum et al., 2012). At the organization level, team effort helps to solve various problems of the organization, such as conflict among organizational members, unclear roles and assignments, and lack of innovation in solving problems, which boosts the performance of the organization (Stone, 2010). Employee empowerment An employee empowerment approach is composed of practices aimed at sharing information, job-related knowledge and authority with employees (Fernandez and Moldogaziev, 2013). Baird and Wang (2010) stated, "The basic objective of empowerment is redistribution of power between management and employees - most commonly in the form of increasing employee authority, responsibility, and influencing commitment". In the literature, empowerment is defined from two perspectives: a psychological perspective and a managerial perspective. From a psychological perspective, empowerment is a motivational construct akin to a state of mind or set of cognitions (Fernandez and Moldogaziev, 2013). Dust et al. (2014) described employee empowerment as a four-dimensional motivational construct composed of four cognitions (meaning, competence, self-determination and impact) that reflect an active rather than a passive orientation towards a work role.
From a managerial perspective, employee empowerment is a relational construct that describes how those with power in organizations share power, information, resources and rewards with those lacking them (Gomez and Rosen, 2001; Fernandez and Moldogaziev, 2013). Bowen and Lawler (1995) define empowerment as sharing four organizational ingredients with front-line employees: information about the organization's performance; knowledge that enables employees to understand and contribute to organizational performance; rewards based on the organization's performance; and power to make decisions that influence organizational direction and performance. Torres-Coronas and Arias-Oliva (2008, p. 177) define organizational learning culture as: a set of norms and values about the functioning of an organization that support systematic organizational learning so that individual learning, teamwork, collaboration, creativity, and knowledge distribution have collective meaning and value. Organizational learning culture is a complex process that refers to the development of new knowledge and has the potential to change behaviour (Skerlavaj et al., 2010). According to Kandemir and Hult (2005), organizational learning culture has been viewed as a process by which organizations, as collectives, learn through interaction with their environments, and such learning might result in new and significant insights and awareness. The objective of building an organizational learning culture is to expand people's capacity to create the results they truly desire, to encourage new and expansive patterns of thinking, to set collective aspiration free, and to have employees continually learning how to learn together (Senge, 2009).
According to Marsick and Watkins (2003), organizational learning culture consists of seven interlinked constructs: create continuous learning opportunities; promote inquiry and dialogue; encourage collaboration and team learning; create systems to capture and share learning; empower people toward a collective vision; connect the organization to its environment; and provide strategic leadership for learning, which helps in building the organization's strategic learning culture. Table I summarizes the seven dimensions of organizational learning culture. The word competency was first explained in the book "The Competent Manager" (Boyatzis, 1982, p. 21), which defines the term as "an underlying characteristic of a person that could be a motive, trait, and skill aspect of one's self-image or social role, or a body of knowledge which he or she uses". A competency is a reliably measurable, relatively enduring (stable) characteristic of a person, team or organization that causes and statistically predicts a measurable level of performance (Berger and Berger, 2010). Some definitions of the term competency are shown in Table II. The term "reliably measurable" means that two or more independent observers or methods (tests, surveys) agree statistically that a person demonstrates a competency (Spencer et al., 2008), while "relatively enduring" means that a competency measured at one point in time is statistically likely to be demonstrated at a later point in time (Catano et al., 2007). Competency characteristics are content knowledge, behavioural skills, cognitive processing (IQ), personality traits, values, motives and occasionally other perceptual or sensorimotor capabilities that accurately predict some level of performance. Cardy and Selvarajan (2006) have classified competencies into two categories: employee (personal) and organization (corporate).
Employee competencies are those characteristics or traits acquired by employees, such as knowledge, skills, ability and personality, that differentiate them from average performers (Cardy and Selvarajan, 2006). Organizational competencies are those embedded in the organizational systems and structures that tend to persist within the organization even when an employee leaves (Semeijn et al., 2014). Human capital attributes have been argued to be an important resource of organizational performance, because organizations that are able to generate organization-specific, valuable and unique competencies are thought to be in a superior position that enables them to outperform their rivals and succeed in a dynamic business environment (van Esch et al., 2018). This study selected a set of independent constructs, HRD practices (team building and employee empowerment) and organizational learning culture, and a dependent construct, employee competencies. The independent constructs are considered necessary for influencing employee competencies, which in turn influence organizational effectiveness. Figure 1 illustrates the research model of this study. In the following sections, the relationships between the constructs are discussed. Human resource development practices and employee competencies Researchers (Sung and Choi, 2014) have suggested that organizations should design and implement HRD practices so that individuals can perform effectively and meet performance expectations through improved individual competencies. Kehoe and Wright (2013) argue that HRD is the basic component through which employees acquire competencies that in turn significantly improve organizational performance. In fact, the general purpose of HRD practices is to produce competent and qualified employees who can perform an assigned job and contribute to the organization's business outcomes (Nolan and Garavan, 2016).
Scholars have investigated the outcomes of HRD practices and reported that these practices improve employees' capabilities on the job, productivity and efficiency (Haslinda, 2009; Alagaraja et al., 2015). Yuvaraj and Mulugeta (2013) provided a similar result, explaining that HRD practices continuously improve employees' capability and performance through the existing practices of training, career development, performance appraisal and the organizational development components of HRD. The study examined two practices, team building and employee empowerment, that were being widely implemented in the selected organizations (cement manufacturing units). The associations between the selected HRD practices and employee competencies are revealed in the subsequent reviews. Team building and employee competencies According to LePine et al. (2008), "The practices of team-building components (goal-setting, interpersonal processes, role-clarification, and problem-solving) can lead to improved performance through modification of attitudes, values, problem-solving techniques, and group processes". In the goal-setting component, team members are introduced to a goal-setting framework and are expected to be involved in action planning to identify ways to achieve those goals, which strengthens team members' problem-solving skills and motivation (Aga et al., 2016). Team members exposed to role-clarification activities are expected to achieve a better understanding of their own and others' respective roles and duties within the team (Salas et al., 1999). The interpersonal process component involves an enhancement of team members' interpersonal skills, such as mutual supportiveness, communication and sharing of information (Klein et al., 2009). The fourth component emphasizes the identification of major problems in the team's tasks to enhance task-related skills (Lacerenza et al., 2018).
Team building is an intervention in which team members identify major problems, generate relevant information, engage in problem solving and action planning, and implement and evaluate action plans (Aga et al., 2016; Beebe and Masterson, 2014). Team building intervention enhances an individual's cognitive outcomes, like teamwork competencies, and affective outcomes, like trust and team potency, whereas at the team level the outcomes are coordination and effective communication (Tannenbaum et al., 2012). Shuffler et al. (2011), in their meta-analysis, found that effective team building improves affective outcomes (trust, attitude and confidence) and cognitive outcomes (shared knowledge among team members) in employees. The above discussion provides ample grounds to suggest that: H1a. Team building is positively related to the enhancement of employee competencies. Employee empowerment and employee competencies Fernandez and Moldogaziev (2013) have stated that "employee empowerment is a relational construct that describes how those with power in organizations share power and formal authority with those lacking it". Organizations have implemented empowerment initiatives based on the premise that when individual employees can participate in decision-making and share responsibility for how work is conducted, outcomes such as performance and employee knowledge will be enhanced (Maynard et al., 2012). Organizations that encourage harmonious relationships between superiors and subordinates provide employees with the liberty to express their creative suggestions, which helps in enriching their self-motivation (Fernandez and Moldogaziev, 2012). When employees are empowered and given autonomy and flexibility, they are likely to be more motivated and to take full responsibility for finding new ways and developing new skills to respond to challenges (Luoh et al., 2014).
Kanter (1993) and Laschinger (1996) define structural empowerment as workplace structures that enable employees to carry out work in meaningful ways. These structures empower employees by providing access to the information required to perform the job effectively, support from peer and supervisor feedback, resources such as time and supplies to carry out the job, and opportunities for learning and growth within the organization (Dainty et al., 2002). Liden et al. (2000) found that empowering working conditions are positively linked to employees' positive job attitudes and tolerance of work pressure and ambiguity. When employees are involved in their work with the spirit of vigour and commitment, it makes a significant difference to their self-motivation and positive job attitude (Manojlovich, 2005). Empowerment can enrich individuals' ability to perform their duties successfully, where they have control over their workload, get support from peers, feel more rewarded for their accomplishments and are treated fairly (Janssen, 2004). Fernandez and Moldogaziev (2013), in their empirical study, found a positive relationship between employee empowerment and employees' attitudes and behaviour. Leach et al. (2003) further indicated, through empirical validation, that employee empowerment has a positive impact on job knowledge. Hence, the following premise is expected: H1b. Employee empowerment has a significant and positive relationship with the enhancement of employee competencies. Moderating role of organizational learning culture Organizational learning culture as a moderator is grounded in signaling theory (Spence, 2002) and experiential learning theory (Kolb, 1984).
From the viewpoint of signaling theory, organizations that cultivate a learning culture signal to employees that management values and supports the exchange of the knowledge and skills they have learnt from the HRD programmes provided by their organizations (Bloor and Dawson, 1994; Spence, 2002). Such a culture, which facilitates knowledge transfer and idea sharing, would positively influence employee competencies. According to experiential learning theory (Kolb, 1984), the process of learning is highly affected by two elements: the individual's interaction with different stakeholders, and feedback on one's knowledge from superiors and peers. Accordingly, employees' perception that the organization promotes a sound learning culture through regular feedback and mentorship would motivate them to acquire and exchange skills and knowledge (Clark et al., 1993). Therefore, the learning culture has been identified as one of the vital and appropriate contextual factors for enhancing employee competencies (Jeong et al., 2017). Based on the above discussion, it can be inferred that organizational learning culture plays an important moderating role between HRD practices and employee competencies. Thus, in the present study, the associations between the selected HRD practices, organizational learning culture and employee competencies are revealed in the subsequent reviews. The moderating effect of organizational learning culture between team building and employee competencies Team building practices are based upon an action research model of data collection, feedback and action planning (Whitehead, 2001). Team building activities operate within a particular environmental context. Although groups are often viewed as the context variable for individual behaviours, the organizational environment should also be considered as the context variable for group behaviour (Shuffler et al., 2011). According to Van den Bossche et al.
(2006), "The organizational variable that could influence a team member's knowledge and problem-solving skill is the organization's learning culture". The enhancement of team members' competencies based on organizational learning culture can influence the level of cooperation or performance between team members, which in turn may affect team effectiveness (Hollenbeck et al., 2004). The above discussion provides ample grounds to suggest that: H2a. Organizational learning culture will moderate the relationship between team building and employee competencies, such that the relationship will be stronger when organizational learning culture is high. The moderating effect of organizational learning culture between employee empowerment and employee competencies Employee empowerment involves employees being provided with a greater degree of flexibility and more freedom to make decisions relating to work (Greasley et al., 2005). Empowerment is closely related to people's perceptions of themselves in relation to their work environments (Kuo et al., 2010). Jones et al. (2013) stated that the environment surrounding individuals is important for increasing employee empowerment, because empowerment is not a consistent or enduring personality trait, but rather a set of cognitions shaped by work environments. In a recent study, Joo and Shim (2010) found a positive moderating role of organizational learning culture between empowerment and positive employee behaviour; their results indicated that a high learning culture strengthens the influence of empowerment on employee behaviour. Thus, the following hypothesis is proposed: H2b. Organizational learning culture will moderate the positive relationship between employee empowerment and employee competencies, such that the relationship will be stronger when organizational learning culture is high.
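Hypotheses H2a and H2b each assert a standard statistical moderation: the outcome is regressed on the predictor, the moderator and their product term, and a positive product-term coefficient means the predictor's slope grows with the moderator. A minimal sketch in Python with fully simulated data (all distributions and coefficients here are illustrative assumptions, not the study's data):

```python
import numpy as np

# Toy moderation model: employee competencies (EC) regressed on an HRD
# practice score (x), learning-culture score (m) and their product term.
rng = np.random.default_rng(42)
n = 653                      # mirrors the study's number of complete responses
x = rng.normal(0, 1, n)      # centred HRD practice score (simulated)
m = rng.normal(0, 1, n)      # centred learning-culture score (simulated)
# True coefficients 0.40, 0.30 and 0.12 are arbitrary illustrative choices
ec = 0.40 * x + 0.30 * m + 0.12 * x * m + rng.normal(0, 0.5, n)

# Ordinary least squares on [intercept, x, m, x*m]
design = np.column_stack([np.ones(n), x, m, x * m])
coefs, *_ = np.linalg.lstsq(design, ec, rcond=None)
b0, b1, b2, b3 = coefs
# b3 > 0 recovers the moderation claim: the x -> EC slope rises with m
```

With a sample of this size, the fitted interaction coefficient `b3` lands close to the simulated 0.12, which is the pattern the hypotheses predict for a positive moderator.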
Research design, sampling and data collection A structured questionnaire was developed for the collection of primary data on the basis of a seven-point Likert scale. It consists of two sections: the first section collects general information about the respondents, such as age, gender, designation and experience; the second section includes the items that measure the constructs of team building, employee empowerment, organizational learning culture and employee competencies. The study took place in four medium-sized cement-manufacturing units in India. We communicated personally (through appointments, phone calls and email) with senior executives of the four units and explained the methodology of the study. We gave instructions to executives and supervisors about how to answer specific questions and to distribute the questionnaire to their subordinates and colleagues who had participated in HRD practices in the past two years. The questionnaire was distributed to around 952 employees, of whom 653 gave complete responses, corresponding to a response rate of 68.53 per cent. Team building A six-item scale representing four broad areas of team building practices was developed for this study: goal setting, interpersonal relations, role clarification and problem solving. These items were adapted from Aga et al. (2016), Klein et al. (2009) and Salas et al. (1999), and the scale's reliability is 0.81. Employee empowerment A five-item scale to measure the effectiveness of employee empowerment implemented in the organization was developed by adopting Menon's (2001) and Men and Stacks' (2013) scales of employee empowerment. We modified the items according to the current study, and the scale's reliability is 0.87. Employee competencies. The competencies analysed in the study were technical expertise, adaptability, innovation, teamwork and cooperation, conceptual thinking and self-confidence. For this, we adopted Diaz-Fernandez et al.
(2014) measures of employee competencies and adapted them to the current scenario of the study. The construct consists of six items and its reliability (Cronbach's alpha) is 0.71. Organizational learning culture. The dimensions of the learning organization questionnaire (DLOQ) was developed by Watkins and Marsick (1997) with 21 items measuring seven dimensions, but it was later shortened by Yang et al. (2004) to seven items that still measure all seven DLOQ dimensions. We used Yang et al.'s (2004) scale of organizational learning culture, which measures continuous learning, team learning, dialogue and inquiry, empowerment, system connection, embedded systems and strategic leadership. The seven-item scale shows a reliability of 0.88. The results are described in the order in which the analyses were conducted. First, we performed confirmatory factor analysis (CFA) in AMOS to establish a factor structure and detect any presence of common method bias. Second, we assessed the construct validity of the full measurement model in terms of convergent and discriminant validity. Third, we carried out descriptive statistics, correlation and reliability analysis of the full measurement model in SPSS. Fourth, we performed moderated structural equation modelling (MSEM) in AMOS to test the hypotheses. Comparison of measurement models and Harman's single factor test A full measurement model was tested initially. Team building, employee empowerment, organizational learning culture and employee competencies items were loaded onto their respective factors. All factors were allowed to correlate. The four-factor model showed a good model fit: χ2 = 799.845, df = 337, p < 0.05, CMIN/df = 2.373, goodness of fit index (GFI) = 0.901, comparative fit index (CFI) = 0.950, root mean square error of approximation (RMSEA) = 0.052. A sequential χ2 difference test was carried out, comparing the full measurement model to five alternative nested models, as shown in Table III.
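The sequential chi-square difference test compares nested measurement models: the difference in chi-square values is itself chi-square distributed, with degrees of freedom equal to the difference in model degrees of freedom. A small sketch using the reported four-factor fit; the single-factor figures below are hypothetical placeholders, since the text does not report them:

```python
# Sequential chi-square difference test between nested CFA models.
# Four-factor fit values are taken from the text; the single-factor
# figures are hypothetical, for illustration only.
chi_sq_full, df_full = 799.845, 337    # four-factor measurement model
chi_sq_one, df_one = 2400.0, 350       # hypothetical single-factor model

cmin_df = chi_sq_full / df_full        # relative chi-square (CMIN/df)

delta_chi = chi_sq_one - chi_sq_full   # chi-square difference statistic
delta_df = df_one - df_full            # difference in degrees of freedom

# Critical chi-square value at alpha = 0.05 for 13 degrees of freedom
CRITICAL_05_DF13 = 22.362
# A difference above the critical value means the less constrained
# (four-factor) model fits significantly better than the nested one
four_factor_fits_better = delta_chi > CRITICAL_05_DF13
```

The `CMIN/df` value of roughly 2.37 reproduces the ratio reported in the text; values below about 3 are conventionally read as acceptable fit.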
The result of the full measurement model is significantly better compared to the alternative models, suggesting that the variables in the study are distinct. Cross-sectional and self-reported data are susceptible to common method biases. Following the procedure adopted by several scholars (Ketkar and Sett, 2010; Conway et al., 2015), all items of both the independent and dependent variables were loaded onto a single factor and the fit indices were examined. The single-factor model showed poor fit with the data. Comparison of the single-factor model with the full measurement model showed that the full measurement model had significantly better fit with the data. While this test does not eliminate the possibility of method bias, it provides evidence that inter-item correlations are not driven purely by method bias (Podsakoff et al., 2003). Construct validity of the full measurement model Construct validity was established in the study by assessing convergent validity and discriminant validity. To estimate convergent validity, discriminant validity and goodness of fit statistics, we performed a CFA. Convergent validity is established by estimating the factor loadings (completely standardized loadings), composite reliability and average variance extracted (AVE) from the CFA. Table IV provides the results for convergent validity, which show that the values are in the acceptable region, confirming convergent validity. Discriminant validity is assessed by comparing the AVE with the corresponding inter-dimension squared correlation estimates (Fornell and Larcker, 1981). Table V shows that the square roots of the AVE values of all study factors are greater than the inter-dimension correlations, supporting discriminant validity. The goodness of fit statistics of the measurement model indicated good model fit with the data (χ2 = 799.845, df = 337, p < 0.05, CMIN/df = 2.373, GFI = 0.901, CFI = 0.950, RMSEA = 0.052).
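The Fornell-Larcker check described above is mechanical: for every pair of constructs, the square root of each construct's AVE must exceed their correlation. A minimal sketch with hypothetical AVE and correlation values (the study's actual figures appear in Tables IV and V and are not reproduced here):

```python
import math

# Hypothetical AVE values and inter-construct correlations, for
# illustration only; TB = team building, EE = employee empowerment,
# OLC = organizational learning culture, EC = employee competencies.
ave = {"TB": 0.55, "EE": 0.58, "OLC": 0.60, "EC": 0.52}
corr = {("TB", "EE"): 0.45, ("TB", "OLC"): 0.40, ("TB", "EC"): 0.48,
        ("EE", "OLC"): 0.42, ("EE", "EC"): 0.44, ("OLC", "EC"): 0.50}

def fornell_larcker_ok(ave, corr):
    """Discriminant validity holds if, for every pair of constructs, the
    square root of each construct's AVE exceeds the absolute value of
    their correlation (Fornell and Larcker, 1981)."""
    return all(
        math.sqrt(ave[a]) > abs(r) and math.sqrt(ave[b]) > abs(r)
        for (a, b), r in corr.items()
    )
```

With these illustrative values the criterion passes; a single correlation higher than the smallest square-root AVE (about 0.72 here) would make it fail.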
Thus, the instrument used in the study has good construct validity and psychometric properties. Descriptive statistics, correlations and reliabilities Table VI presents the means, standard deviations and correlations among the four variables. The reliabilities of the individual variables range from 0.77 to 0.93. The correlations among the four variables are significant, supporting all of the hypotheses. Moderated structural equation modelling results We used AMOS 20.0 to test the study's hypotheses through SEM, so that we could explicitly account for measurement error when examining the hypothesized relationships among the study's focal constructs. This approach also allowed us to assess how well our conceptual model as a whole fits the data, as recommended by previous studies that test complex models with a web of hypotheses involving both mediating and moderating effects. Several procedures for testing interaction (moderating) effects in SEM have been proposed (Jaccard et al., 1996; Joreskog and Yang, 1996; Ping, 1995); Cortina et al. (2001) found that all of them produce very similar results. The present study adopted Ping's (1995) approach to moderated SEM using the three steps described by Cortina et al. (2001), which are detailed in Appendix I. A few recent studies (Anning-Dorson, 2017; Harney et al., 2018) have followed the same approach. The goodness-of-fit statistics of the MSEM results (χ2 = 483.728; df = 174; NFI = 0.935; CFI = 0.957; RMSEA = 0.059; SRMR = 0.077) indicate a good model fit. Figure 2 shows the results of the MSEM analysis, including the beta coefficients and adjusted R2. In total, the HRD practices, organizational learning culture and interaction terms explain 59 per cent of the variance in employee competencies.
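A hedged sketch of the first, indicator-construction step of Ping's (1995) single product-indicator approach: the latent interaction is represented by one indicator formed as the product of the summed, mean-centered indicators of the predictor and the moderator. The item data below are simulated placeholders, and the later Ping steps (fixing the product indicator's loading and error variance from first-step estimates, then estimating the structural model in AMOS) are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
x_items = rng.normal(4.0, 1.0, size=(100, 3))  # e.g. team building items (simulated)
z_items = rng.normal(4.5, 1.0, size=(100, 4))  # e.g. learning culture items (simulated)

x_sum = x_items.sum(axis=1)
z_sum = z_items.sum(axis=1)

# Mean-centering before multiplying reduces multicollinearity between
# the product indicator and its component constructs.
xz = (x_sum - x_sum.mean()) * (z_sum - z_sum.mean())
print(xz.shape)  # (100,)
```

In the actual study, this single product indicator would then enter the AMOS model as the indicator of the latent team building x organizational learning culture interaction.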
The results show a positive relationship between HRD practices and employee competencies: team building (b = 0.408, standard error (SE) = 0.031, critical ratio (CR) = 13.161, p < 0.001) and employee empowerment (b = 0.035, SE = 0.004, CR = 8.75, p < 0.05) both positively influence employee competencies, confirming H1a and H1b. H2a proposed that organizational learning culture would moderate the positive relationship between team building and employee competencies, such that the relationship is stronger when organizational learning culture is higher. The MSEM results (Figure 2) show a significant interaction effect of team building and organizational learning culture on employee competencies (b = 0.109, SE = 0.021, CR = 5.190, p < 0.05), supporting H2a. The moderated relationship was also supported by a simple slope test based on one standard deviation above and one standard deviation below the moderator's mean. Figure 3 shows that the effect of team building on employee competencies is stronger in organizations with a higher organizational learning culture (b = 0.515, t = 8.837, p < 0.001) than in those with a lower organizational learning culture (b = 0.401, t = 5.049, p < 0.001). The analysis found similar results for H2b: there is a moderated positive relationship between employee empowerment and employee competencies, which is stronger when organizational learning culture is higher (b = 0.130, SE = 0.020, CR = 6.50, p < 0.05), confirming H2b. The simple slope test (Figure 4) further shows that the effect of employee empowerment is stronger in organizations with a higher organizational learning culture (b = 0.162, t = 2.788, p < 0.05) than in those with a lower organizational learning culture (b = 0.142, t = 1.499). The study has found that team building, employee empowerment and organizational learning culture have a significant and positive influence on employee competencies.
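The simple slope tests behind Figures 3 and 4 can be sketched as follows: fit a moderated regression and evaluate the slope of the predictor at one standard deviation above and below the moderator's mean. The data here are constructed with known coefficients (not the study's data), so the resulting slopes are illustrative only.

```python
import numpy as np

n = 200
x = np.linspace(-2, 2, n)                 # predictor (e.g. team building)
z = np.tile([-1.0, 1.0], n // 2)          # moderator, mean 0 and SD 1 by construction
y = 0.40 * x + 0.20 * z + 0.10 * x * z    # known coefficients, no noise

# Ordinary least squares with an interaction term: y = b0 + b1*x + b2*z + b3*x*z
X = np.column_stack([np.ones(n), x, z, x * z])
b = np.linalg.lstsq(X, y, rcond=None)[0]

sd_z = z.std()
slope_high = b[1] + b[3] * (z.mean() + sd_z)  # slope of x at +1 SD of moderator
slope_low = b[1] + b[3] * (z.mean() - sd_z)   # slope of x at -1 SD of moderator
print(round(slope_high, 3), round(slope_low, 3))  # 0.5 0.3
```

A positive interaction coefficient (b3 = 0.10 here) yields a steeper simple slope at high moderator values, which is exactly the pattern the study reports for high versus low organizational learning culture.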
In addition, the findings confirm that the moderating effect of organizational learning culture on these relationships is significant. Detailed findings are discussed below. The HRD practices (team building and employee empowerment) of the cement manufacturing units showed a significant and positive relationship with employee competencies. In this respect, the finding of a significant and positive influence of team building on employee competencies is congruent with the studies of Aga et al. (2016), Beebe and Masterson (2014) and Braun et al. (2013), which established that effective implementation of team building programmes enhances the knowledge, skills and capabilities of employees. Employee empowerment was also found to have a positive impact on employee competencies, confirming the assumption of researchers such as Fernandez and Moldogaziev (2013), who state that effective implementation of employee empowerment enhances employee competency. Further, the perception of organizational learning culture moderated the relationships of team building and employee empowerment with employee competencies. Employees perceived that an organization with effectively implemented team building practices and a high organizational learning culture yields higher employee competencies. In this respect, the findings of the study empirically validate the hypothesized relation that has been theoretically stated in many studies (Sung and Choi, 2014; Banerjee et al., 2017). A similar result was found for employee empowerment: organizational learning culture moderated the relationship between employee empowerment and employee competencies. This empirical finding validates the theoretical assumptions of researchers (Kim and McLean, 2008; Park, 2010; Moon and Choi, 2017) that employee empowerment facilitated by a learning environment may show a positive effect on the development of employee competencies.
Similarly, H2b proposed that organizational learning culture strengthens the positive relationship between employee empowerment and employee competencies. The findings of the present research corroborate the previous studies by Jones et al. (2013) and Kuo et al. (2010) and establish that organizational learning culture strengthens the relationship between employee empowerment and employee competencies. The findings of this study make several important theoretical contributions. The study provides a deeper understanding of how HRD practices influence the enhancement of employee competencies in the presence of the contextual variable organizational learning culture. The results confirm that the relationship between HRD practices and employee competencies can be strengthened by a positive organizational learning culture. A work context that encourages employees to learn continuously through the acquisition of new knowledge and skills fosters the relationship between HRD practices and employee competencies. This study therefore provides support for a contingency perspective in HRD research: with a positive organizational learning culture, the effect of HRD practices on the development of employees' competencies can be enhanced. By demonstrating that organizational learning culture moderates the relationship between HRD practices and employee competencies, this study builds on a recent stream of research examining the resource-based view (Wernerfelt, 1984) from a contingency perspective. This study also extends the application of HRD practices and organizational learning culture to a new context (i.e. the emerging economy of India). It demonstrates that in emerging economies, characterized by environmental turbulence and uncertainty, the implementation of HRD practices will help organizations perform better by increasing the competency level of their employees.
Bates and Khasawneh (2005) state that, "There is considerable consensus today that a key competitive advantage for organizations lies in their ability to learn and be responsive to challenges from both internal and external environments". Evidently, attention has to be paid to developing an organizational learning culture that enhances employee competencies, builds competitive advantage and improves organizational effectiveness. The outcomes of the paper also offer some suggestions for managers striving for success. First, the moderated SEM results indicate that merely providing team building and employee empowerment initiatives is not enough; it is the organization's responsibility to create an environment of learning that enhances employee competencies. Second, organizations have to take advantage of organizational learning capability by recognizing the importance of managers and their attitudes in effectively implementing learning conditions within the organization. This indicates that managers are the facilitators of a learning culture within the organization, which can be achieved by applying the attributes of a learning organization in such a way that learning orientation becomes the main trigger for learning (Real et al., 2014). Third, a direct inference from the study results is that employees with enhanced competencies are the most vital stakeholder group in any business process aimed at improved organizational effectiveness. Hence, managers striving for effectiveness and efficiency in their processes should put employees first, which supports the opinion of Skerlavaj et al. (2010). The study has a few limitations; however, these pave the way for a new line of future research. Although we collected responses from employees who had participated in team building and employee empowerment initiatives in the past two years, the data were collected at a single point in time (a cross-sectional study).
This might raise issues relating to the direction of causality; we recommend that future researchers conduct longitudinal studies to minimize such issues. The data used in the study are largely the subjective opinions of the employees responding to the survey. As Real et al. (2014) note, although subjective assessments obtained through multi-item scales are in general consistent with objective measures, differences between perceptions and objective data may exist. Future studies might focus on this area using objective measures. Finally, we cannot generalize the results across a wider range of sectors and the global environment, as the study was conducted in the Indian cement industry.
[SECTION: Purpose] The purpose of this paper is to examine the impact of team building and employee empowerment on employee competencies and to examine the moderating role of organizational learning culture in these relationships.
[SECTION: Method] According to the resource-based view (Wernerfelt, 1984), to be effective in a competitive market environment an organization requires an important intangible core competence: employee competencies. Employee competencies refer to those traits, skills or attributes that employees need to perform their jobs effectively (Soderquist et al., 2010; Campion et al., 2011). A competent workforce is believed to produce higher quality products (Ahuja and Khamba, 2008), support innovation (Siguaw et al., 2006) and reduce turnover costs (Joo and Shim, 2010). To develop and maintain employee competencies for both present and future requirements, an organization must emphasize human resource development (HRD). Werner and DeSimone (2006) defined HRD as a set of systematic and planned activities designed by an organization to provide its members with the opportunities to learn the skills necessary to meet current and future job demands. According to Werner and DeSimone (2006), "HRD practices are the programs, which are designed to be strategically oriented to the organizational process for managing the development of human resources to contribute to the overall success of the organization" (p. 26). The rationale for using HRD practices to support business objectives is quite straightforward: enhancing or unleashing needed employee expertise (Chermack and Kasshanna, 2007). HRD practices continuously improve employees' expertise and performance through the existing practices of training, performance appraisal and organizational development initiatives (Garavan, 2007). HRD alone, however, is not sufficient to enhance employee competencies to a greater level, because not all knowledge and skills obtained from HRD practices are properly transferred (Froehlich et al., 2014).
Thus, an organization should create a learning culture so that employees can share, acquire and create knowledge and skills, which can modify their behaviour. Organizational learning culture refers to a set of norms and values about the functioning of an organization that supports systematic organizational learning, so that individual learning, teamwork, collaboration, creativity and knowledge distribution have collective meaning and value (Torres-Coronas and Arias-Oliva, 2008, p. 177). Organizational learning culture could therefore directly or indirectly influence employee competencies. The present study integrates the resource-based view (Wernerfelt, 1984) and the organizational perspective of learning to create a strong theoretical foundation by exploring the effects of team building, employee empowerment and organizational learning culture on employee competencies. The study provides empirical evidence to bridge the knowledge gaps regarding the relationship between HRD practices, organizational learning culture and employee competencies. Even though HRD practices and organizational learning culture are considered critical concepts and practices, most of the existing literature focuses on the conceptual level and considers commitment, productivity and profitability as the primary outcome variables. Few studies have attempted to examine the moderating role of organizational learning culture on individual outcomes such as commitment, engagement and satisfaction. Thus, the significance of the study lies in providing empirical validation of the moderating role of organizational learning culture in the relationship between HRD practices and employee competencies. This research attempts to answer the following questions: RQ1. Is there any relationship between HRD practices and employee competencies? RQ2. Does organizational learning culture moderate the relationship between HRD practices and employee competencies?
From the above research questions, the following research objectives were derived: to study the impact of HRD practices on the enhancement of the competencies of employees in the cement industry; and to assess the moderating role of organizational learning culture in the relationship between HRD practices and employee competencies. Team building Klein et al. (2009) define team building as "the formal and informal team-level practices that focus on improving social relations and clarifying roles as well as solving task and interpersonal problems that affect team functioning". In this intervention, team members learn experientially, by examining their structures, norms, values and interpersonal dynamics, to increase their skills for effective performance (Senecal et al., 2008). In the literature, there is consensus that team building has four approaches/components: goal setting; role clarification; interpersonal relations; and problem solving. A brief explanation is presented below: Goal setting: this component is designed specifically to strengthen a team member's motivation to achieve team goals and objectives (Salas et al., 2004). Team members are expected to become involved in action planning to identify ways to achieve those goals (Aga et al., 2016). Role clarification: this entails clarifying individual role expectations, group norms and the shared responsibilities of team members (Klein et al., 2009). Role clarification can be used to improve team and individual characteristics (i.e. by reducing role ambiguity) and work structure by negotiating, defining and adjusting team member roles (Mathieu and Schulze, 2006). Interpersonal relations: this component assumes that teams with fewer interpersonal conflicts function more effectively than teams with a greater number of interpersonal conflicts. It involves an increase in teamwork skills, such as mutual supportiveness, communication and the sharing of feelings (Aga et al., 2016).
Problem solving: the fourth component emphasizes the identification of major problems in the team's tasks to enhance task-related skills. It is an intervention in which team members identify major problems, generate relevant information, engage in problem solving and action planning, and implement and evaluate action plans (Aga et al., 2016; Beebe and Masterson, 2014). An effective team building intervention enhances an individual's cognitive outcomes, such as teamwork competencies, and affective outcomes, such as trust and team potency, whereas at the team level the outcomes are coordination and effective communication (Tannenbaum et al., 2012). At the organization level, team effort helps to solve various organizational problems, such as conflict among organizational members, unclear roles and assignments and a lack of innovation in solving problems, thereby improving the performance of the organization (Stone, 2010). Employee empowerment An employee empowerment approach is composed of practices aimed at sharing information, job-related knowledge and authority with employees (Fernandez and Moldogaziev, 2013). Baird and Wang (2010) stated, "The basic objective of empowerment is redistribution of power between management and employees - most commonly in the form of increasing employee authority, responsibility, and influencing commitment". In the literature, empowerment is defined from two perspectives: a psychological perspective and a managerial perspective. From the psychological perspective, empowerment is a motivational construct akin to a state of mind or set of cognitions (Fernandez and Moldogaziev, 2013). Dust et al. (2014) described employee empowerment as a four-dimensional motivational construct composed of four cognitions (meaning, competence, self-determination and impact) that reflect an active rather than a passive orientation towards a work role.
From a managerial perspective, employee empowerment is a relational construct that describes how those with power in organizations share power, information, resources and rewards with those lacking them (Gomez and Rosen, 2001; Fernandez and Moldogaziev, 2013). Bowen and Lawler (1995) define empowerment as sharing four organizational ingredients with front-line employees: information about the organization's performance; knowledge that enables employees to understand and contribute to organizational performance; rewards based on the organization's performance; and power to make decisions that influence organizational direction and performance. Organizational learning culture Torres-Coronas and Arias-Oliva (2008, p. 177) define organizational learning culture as: A set of norms and values about the functioning of an organization that support systematic organizational learning so that individual learning, teamwork, collaboration, creativity, and knowledge distribution have collective meaning and value. Organizational learning culture is a complex process that refers to the development of new knowledge and has the potential to change behaviour (Skerlavaj et al., 2010). According to Kandemir and Hult (2005), organizational learning culture has been viewed as a process by which organizations, as collectives, learn through interaction with their environments; such learning might result in new and significant insights and awareness. The objective of building an organizational learning culture is to expand people's capacity to create the results they truly desire, to encourage new and expansive patterns of thinking, to set collective aspiration free and to help employees continually learn how to learn together (Senge, 2009).
According to Marsick and Watkins (2003), organizational learning culture consists of seven interlinked constructs: create continuous learning opportunities; promote inquiry and dialogue; encourage collaboration and team learning; create systems to capture and share learning; empower people toward a collective vision; connect the organization to its environment; and provide strategic leadership for learning, which helps in building the organization's strategic learning culture. Table I summarizes the seven dimensions of organizational learning culture. Employee competencies The word competency was first explained in the book "The Competent Manager" (Boyatzis, 1982, p. 21), which defines the term as "an underlying characteristic of a person that could be a motive, trait, and skill aspect of one's self-image or social role, or a body of knowledge which he or she uses". A competency is a reliably measurable, relatively enduring (stable) characteristic of a person, team or organization that causes and statistically predicts a measurable level of performance (Berger and Berger, 2010). Some definitions of the term competency are shown in Table II. The term "reliably measurable" means that two or more independent observers or methods (tests, surveys) agree statistically that a person demonstrates a competency (Spencer et al., 2008), while "relatively enduring" means that a competency measured at one point in time is statistically likely to be demonstrated at a later point in time (Catano et al., 2007). Competency characteristics include content knowledge, behavioural skills, cognitive processing (IQ), personality traits, values, motives and occasionally other perceptual or sensorimotor capabilities that accurately predict some level of performance. Cardy and Selvarajan (2006) have classified competencies into two categories: employee (personal) and organization (corporate).
Employee competencies are those characteristics or traits acquired by employees, such as knowledge, skills, ability and personality, that differentiate them from average performers (Cardy and Selvarajan, 2006). Organizational competencies are those embedded in the organizational systems and structures that tend to persist within the organization even when an employee leaves (Semeijn et al., 2014). Human capital attributes have been argued to be an important source of organizational performance, because organizations able to generate organization-specific, valuable and unique competencies are thought to be in a superior position that enables them to outperform their rivals and succeed in a dynamic business environment (van Esch et al., 2018). This study selected a set of independent constructs, HRD practices (team building and employee empowerment) and organizational learning culture, and a dependent construct, employee competencies. The independent constructs are considered necessary for influencing employee competencies and, through them, organizational effectiveness. Figure 1 illustrates the research model of this study. In the following sections, the relationships between the constructs are discussed. Human resource development practices and employee competencies Researchers (Sung and Choi, 2014) have suggested that organizations should design and implement HRD practices so that individuals can perform effectively and meet performance expectations through improved individual competencies. Kehoe and Wright (2013) argue that HRD is a basic means for employees to acquire competencies, which in turn significantly improve organizational performance. In fact, the general purpose of HRD practices is to produce competent and qualified employees who can perform an assigned job and contribute to the organization's business outcomes (Nolan and Garavan, 2016).
Scholars have investigated the outcomes of HRD practices and reported that these practices improve employees' on-the-job capabilities, productivity and efficiency (Haslinda, 2009; Alagaraja et al., 2015). Yuvaraj and Mulugeta (2013) provided a similar result, explaining that HRD practices continuously improve employees' capability and performance through the existing practices of training, career development, performance appraisal and the organizational development components of HRD. The study examined two practices, team building and employee empowerment, that were being widely implemented in the selected organizations (cement manufacturing units). The associations between the selected HRD practices and employee competencies are discussed in the subsequent reviews. Team building and employee competencies According to LePine et al. (2008), "The practices of team-building components (goal-setting, interpersonal processes, role-clarification, and problem-solving) can lead to improved performance through modification of attitudes, values, problem-solving techniques, and group processes". In the goal-setting component, team members are introduced to a goal-setting framework and are expected to be involved in action planning to identify ways to achieve those goals, which strengthens team members' problem-solving skills and motivation (Aga et al., 2016). Team members exposed to role-clarification activities are expected to achieve a better understanding of their own and others' respective roles and duties within the team (Salas et al., 1999). The interpersonal process component involves an enhancement of team members' interpersonal skills, such as mutual supportiveness, communication and the sharing of information (Klein et al., 2009). The fourth component emphasizes the identification of major problems in the team's tasks to enhance task-related skills (Lacerenza et al., 2018).
Team building is an intervention in which team members identify major problems, generate relevant information, engage in problem solving and action planning, and implement and evaluate action plans (Aga et al., 2016; Beebe and Masterson, 2014). Team building interventions enhance individuals' cognitive outcomes, such as teamwork competencies, and affective outcomes, such as trust and team potency, whereas at the team level the outcomes are coordination and effective communication (Tannenbaum et al., 2012). Shuffler et al. (2011), in their meta-analysis, found that effective team building improves affective outcomes (trust, attitude and confidence) and cognitive outcomes (shared knowledge among team members) in employees. The above discussion provides ample grounds to suggest that: H1a. Team building is positively related to the enhancement of employee competencies. Employee empowerment and employee competencies Fernandez and Moldogaziev (2013) have stated that "employee empowerment is a relational construct that describes how those with power in organizations share power and formal authority with those lacking it". Organizations have implemented empowerment initiatives on the premise that when individual employees can participate in decision-making and share responsibility for how work is conducted, outcomes such as performance and employees' knowledge will be enhanced (Maynard et al., 2012). Organizations that encourage harmonious relationships between superiors and subordinates give employees the liberty to express their creative suggestions, which helps to enrich their self-motivation (Fernandez and Moldogaziev, 2012). When employees are empowered and given autonomy and flexibility, they are likely to be more motivated and to take full responsibility for finding new ways and developing new skills to respond to challenges (Luoh et al., 2014).
Kanter (1993) and Laschinger (1996) define structural empowerment as workplace structures that enable employees to carry out their work in meaningful ways. These structures empower employees by providing access to the information required to perform the job effectively, support from peer and supervisor feedback, resources such as time and supplies to carry out the job, and opportunities for learning and growth within the organization (Dainty et al., 2002). Liden et al. (2000) found that empowering working conditions are positively linked to employees' positive job attitudes and tolerance of work pressure and ambiguity. When employees are involved in their work with vigour and commitment, it makes a significant difference to their self-motivation and positive job attitude (Manojlovich, 2005). Empowerment can enrich individuals' ability to perform their duties successfully when they have control over their workload, get support from peers, feel rewarded for their accomplishments and are treated fairly (Janssen, 2004). Fernandez and Moldogaziev (2013), in their empirical study, found a positive relationship between employee empowerment and employees' attitudes and behaviour. Leach et al. (2003) further indicated, through empirical validation, that employee empowerment has a positive impact on job knowledge. Hence, the following premise is expected: H1b. Employee empowerment has a significant and positive relationship with the enhancement of employee competencies. Moderating role of organizational learning culture Organizational learning culture as a moderator is grounded in signaling theory (Spence, 2002) and experiential learning theory (Kolb, 1984).
From the viewpoint of signaling theory, organizations that cultivate a learning culture signal to employees that management values and supports the exchange of the knowledge and skills they have learnt from the HRD programmes provided by their organizations (Bloor and Dawson, 1994; Spence, 2002). Such a culture, which facilitates knowledge transfer and idea sharing, would positively influence employee competencies. According to experiential learning theory (Kolb, 1984), the process of learning is strongly affected by two elements: an individual's interaction with different stakeholders, and feedback on one's knowledge from superiors and peers. Accordingly, employees' perception that the organization promotes a sound learning culture through regular feedback and mentorship would motivate them to acquire and exchange skills and knowledge (Clark et al., 1993). Therefore, the learning culture has been identified as one of the vital and appropriate contextual factors for enhancing employee competencies (Jeong et al., 2017). Based on the above discussion, it can be inferred that organizational learning culture plays an important moderating role between HRD practices and employee competencies. Thus, in the present study, the associations between the selected HRD practices, organizational learning culture and employee competencies are discussed in the subsequent reviews. The moderating effect of organizational learning culture between team building and employee competencies Team building practices are based upon an action research model of data collection, feedback and action planning (Whitehead, 2001). Team building activities operate within a particular environmental context. Although groups are often viewed as the context variable for individual behaviours, the organizational environment should also be considered as the context variable for group behaviour (Shuffler et al., 2011). According to Van den Bossche et al.
(2006), "The organizational variable that could influence a team member's knowledge and problem-solving skill is the organization's learning culture". The enhancement of team members' competencies through an organizational learning culture can influence the level of cooperation or performance between team members, which in turn may affect team effectiveness (Hollenbeck et al., 2004). The above discussion provides ample grounds to suggest that: H2a. Organizational learning culture will moderate the relationship between team building and employee competencies, such that the relationship will be stronger when organizational learning culture is high. The moderating effect of organizational learning culture between employee empowerment and employee competencies Employee empowerment involves providing employees with a greater degree of flexibility and more freedom to make decisions relating to their work (Greasley et al., 2005). Empowerment is closely related to people's perceptions about themselves in relation to their work environments (Kuo et al., 2010). Jones et al. (2013) stated that the environment surrounding individuals is important for increasing employee empowerment, because empowerment is not a consistent or enduring personality trait but rather a set of cognitions shaped by work environments. In a recent study, Joo and Shim (2010) found a positive moderating role of organizational learning culture between empowerment and positive employee behaviour; their results indicated that a high learning culture combined with empowerment strongly influences employee behaviour. Thus, the following hypothesis is proposed: H2b. Organizational learning culture will moderate the positive relationship between employee empowerment and employee competencies, such that the relationship will be stronger when organizational learning culture is high.
Research design, sampling and data collection A structured questionnaire based on a seven-point Likert scale was developed for the collection of primary data. It consists of two sections: the first collects general information about the respondents, such as age, gender, designation and experience; the second includes the items that measure the constructs team building, employee empowerment, organizational learning culture and employee competencies. The study took place in four medium-sized cement-manufacturing units in India. We communicated personally (through appointments, phone calls and email) with senior executives of the four units and explained the methodology of the study. We instructed executives and supervisors on how to answer specific questions and asked them to distribute the questionnaire to their subordinates and colleagues who had participated in HRD practices in the past two years. The questionnaire was distributed to around 952 employees, from whom 653 complete responses were obtained, corresponding to a response rate of 68.53 per cent. Team building A six-item scale representing four broad areas of team building practices was developed for this study: goal setting, interpersonal relations, role clarification and problem solving. The items were adapted from Aga et al. (2016), Klein et al. (2009) and Salas et al. (1999), and the scale's reliability is 0.81. Employee empowerment A five-item scale measuring the effectiveness of the employee empowerment implemented in the organization was developed by adapting Menon's (2001) and Men and Stacks' (2013) scales of employee empowerment. We modified the items to suit the current study; the scale's reliability is 0.87. Employee competencies The competencies analysed in the study were technical expertise, adaptability, innovation, teamwork and cooperation, conceptual thinking and self-confidence. For this, we adopted Diaz-Fernandez et al. 
(2014) measures of employee competencies and adapted them to the current scenario of the study. The construct consists of six items and its reliability (Cronbach's alpha) is 0.71. Organizational learning culture The Dimensions of the Learning Organization Questionnaire (DLOQ) was developed by Watkins and Marsick (1997) with 21 items measuring seven dimensions; Yang et al. (2004) later shortened it to seven items covering all seven dimensions. We used Yang et al.'s (2004) scale of organizational learning culture, which measures continuous learning, team learning, dialogue and inquiry, empowerment, system connection, embedded system and strategic leadership. The seven-item scale shows a reliability of 0.88. The results are described in the order in which the analyses were conducted. First, we performed confirmatory factor analysis (CFA) in AMOS to establish the factor structure and to check for common method bias. Second, we assessed the construct validity of the full measurement model in terms of convergent and discriminant validity. Third, we carried out descriptive statistics, correlation and reliability analyses of the full measurement model in SPSS. Fourth, we performed moderated structural equation modelling (MSEM) in AMOS to test the hypotheses. Comparison of measurement models and Harman's single factor test A full measurement model was tested initially. Team building, employee empowerment, organizational learning culture and employee competencies items were loaded onto their respective factors, and all factors were allowed to correlate. The four-factor model showed a good model fit: χ2 = 799.845, df = 337, p < 0.05, CMIN/df = 2.373, goodness of fit index (GFI) = 0.901, comparative fit index (CFI) = 0.950, root mean square error of approximation (RMSEA) = 0.052. A sequential χ2 difference test was then carried out comparing the full measurement model to five alternative nested models, as shown in Table III. 
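Two of the quantities used above, Cronbach's alpha for scale reliability and the sequential chi-square difference test for nested model comparison, can be reproduced with standard formulas. The sketch below is illustrative only: the item responses and the alternative model's fit figures are made up, while the full-model values (χ2 = 799.845, df = 337) are those reported in the text.

```python
import math
import statistics

def cronbach_alpha(rows):
    """Cronbach's alpha: rows are respondents, columns are item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(rows[0])
    item_var = sum(statistics.variance(col) for col in zip(*rows))
    total_var = statistics.variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

def chi2_sf(x, df):
    """Chi-square survival function; closed form valid for even df:
    P(X > x) = exp(-x/2) * sum_{k=0}^{df/2-1} (x/2)^k / k!"""
    assert df % 2 == 0, "closed form shown for even df only"
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k)
                                 for k in range(df // 2))

def fits_better(chi2_alt, df_alt, chi2_full, df_full, alpha=0.05):
    """Sequential chi-square difference test: True when the full model
    fits significantly better than the nested alternative model."""
    return chi2_sf(chi2_alt - chi2_full, df_alt - df_full) < alpha

# Made-up responses on a 3-item scale from five respondents
demo = [[5, 4, 5], [3, 3, 4], [4, 4, 4], [2, 3, 2], [5, 5, 5]]
print(round(cronbach_alpha(demo), 2))

# Full four-factor model as reported: chi2 = 799.845, df = 337.
# The alternative model's chi2 and df below are hypothetical.
print(fits_better(2100.0, 343, 799.845, 337))
```

A large drop in χ2 relative to the lost degrees of freedom is what justifies preferring the four-factor model over the constrained alternatives in Table III.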
The full measurement model fits significantly better than the alternative models, suggesting that the variables in the study are distinct. Cross-sectional and self-reported data are susceptible to common method bias. Following the procedure adopted by several scholars (Ketkar and Sett, 2010; Conway et al., 2015), all items of both the independent and dependent variables were loaded onto a single factor and the fit indices were examined. The single-factor model showed poor fit with the data, and comparison showed that the full measurement model fit the data significantly better than the single-factor model. While this test does not eliminate the possibility of method bias, it provides evidence that inter-item correlations are not driven purely by method bias (Podsakoff et al., 2003). Construct validity of the full measurement model Construct validity was established by assessing convergent and discriminant validity, both estimated from a CFA. Convergent validity was established by estimating the factor loadings (completely standardized loadings), composite reliability and average variance extracted (AVE). The results of the convergent validity assessment, provided in Table IV, show that the values are in the acceptable range, confirming convergent validity. Discriminant validity was assessed by comparing the AVE with the corresponding inter-dimension squared correlation estimates (Fornell and Larcker, 1981). Table V shows that the square root of the AVE for each study factor is greater than its inter-dimension correlations, supporting discriminant validity. The goodness of fit statistics of the measurement model indicated good model fit with the data (χ2 = 799.845, df = 337, p < 0.05, CMIN/df = 2.373, GFI = 0.901, CFI = 0.950, RMSEA = 0.052). 
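The convergent and discriminant validity checks above follow the Fornell-Larcker (1981) procedure, which can be sketched in a few lines. The loadings, AVE values and correlation matrix below are purely illustrative, not the values from Tables IV and V.

```python
import math

def ave_and_cr(loadings):
    """Average variance extracted and composite reliability computed
    from the completely standardized factor loadings of one construct."""
    sq = [l * l for l in loadings]
    ave = sum(sq) / len(loadings)
    s = sum(loadings)
    cr = s * s / (s * s + sum(1 - x for x in sq))
    return ave, cr

def fornell_larcker_ok(ave_values, corr):
    """Fornell-Larcker criterion: sqrt(AVE) of each construct must
    exceed its correlations with every other construct."""
    k = len(ave_values)
    return all(math.sqrt(ave_values[i]) > abs(corr[i][j])
               for i in range(k) for j in range(k) if i != j)

# Illustrative loadings for a single construct (not the paper's values)
ave, cr = ave_and_cr([0.72, 0.75, 0.70, 0.78])

# Illustrative AVEs and inter-construct correlations for four constructs
aves = [0.55, 0.60, 0.52, 0.58]
corr = [
    [1.00, 0.45, 0.38, 0.41],
    [0.45, 1.00, 0.44, 0.39],
    [0.38, 0.44, 1.00, 0.42],
    [0.41, 0.39, 0.42, 1.00],
]
print(round(ave, 3), round(cr, 3), fornell_larcker_ok(aves, corr))
```

Comparing sqrt(AVE) against the correlations is algebraically equivalent to the text's comparison of AVE against squared inter-dimension correlations.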
Thus, the instrument used in the study has good construct validity and psychometric properties. Descriptive statistics, correlations and reliabilities Table VI presents the means, standard deviations and correlations among the four variables. The reliabilities of the individual variables vary from 0.77 to 0.93. The correlations among the four variables are significant, providing preliminary support for the hypotheses. Moderated structural equation modelling results We used AMOS 20.0 to test the study's hypotheses through SEM, so that we could explicitly account for measurement error when examining the hypothesized relationships among the study's focal constructs. This approach also allowed us to assess how well our conceptual model as a whole fit the data, as recommended by previous studies that test complex models with a web of hypotheses involving mediating and moderating effects. Several procedures for testing interaction (moderating) effects in SEM have been put forward (Jaccard et al., 1996; Joreskog and Yang, 1996; Ping, 1995); Cortina et al. (2001) found that all the procedures produce very similar results. The present study adopted Ping's (1995) approach to moderated SEM using the three steps described by Cortina et al. (2001); these steps are detailed in Appendix I. A few recent studies (Anning-Dorson, 2017; Harney et al., 2018) have followed the same approach. The goodness-of-fit statistics of the MSEM results (χ2 = 483.728; df = 174; NFI = 0.935; CFI = 0.957; RMSEA = 0.059; SRMR = 0.077) indicate a good model fit. Figure 2 shows the results of the MSEM analysis, including the beta coefficients and adjusted R2. In total, the HRD practices, organizational learning culture and interaction variables explain 59 per cent of the variance in employee competencies. 
The results show a positive relationship between HRD practices and employee competencies: team building and employee empowerment positively influence employee competencies, confirming H1a (b = 0.408, standard error (SE) = 0.031, critical ratio (CR) = 13.161, p < 0.001) and H1b (b = 0.035, SE = 0.004, CR = 8.75, p < 0.05). H2a proposed that organizational learning culture would moderate the positive relationship between team building and employee competencies, such that the relationship would be stronger when organizational learning culture is higher. The MSEM results (Figure 2) show a significant interaction between team building and organizational learning culture on employee competencies (b = 0.109, SE = 0.021, CR = 5.190, p < 0.05), supporting H2a. The moderated relationship was also supported by a simple slope test at one standard deviation above and one standard deviation below the mean of the moderator. Figure 3 shows that organizations with a higher organizational learning culture (b = 0.515, t = 8.837, p < 0.001) exert a stronger influence on employee competencies than those with a lower organizational learning culture (b = 0.401, t = 5.049, p < 0.001). The analysis found similar results for H2b: there is a moderated positive relationship between employee empowerment and employee competencies, which is stronger when organizational learning culture is higher (b = 0.130, SE = 0.020, CR = 6.50, p < 0.05), confirming H2b. The simple slope test (Figure 4) further shows that organizations with a higher organizational learning culture (b = 0.162, t = 2.788, p < 0.05) exert a stronger influence on employee competencies than those with a lower organizational learning culture (b = 0.142, t = 1.499). The study found that team building, employee empowerment and organizational learning culture have a significant and positive influence on employee competencies. 
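The simple slope test used above evaluates the slope of the predictor at one standard deviation below and above the moderator mean. A minimal sketch follows, using the reported H2a coefficients; the assumption of a mean-centred, standardized moderator (mean 0, SD 1) is mine, since the paper's moderator scaling is not stated, so the recomputed slopes need not match the reported ones exactly.

```python
def simple_slopes(b_x, b_xm, m_mean, m_sd):
    """Simple slopes of predictor X on Y at one SD below and above the
    moderator mean, for the model Y = b0 + b_x*X + b_m*M + b_xm*X*M,
    where slope(M) = b_x + b_xm * M."""
    low = b_x + b_xm * (m_mean - m_sd)
    high = b_x + b_xm * (m_mean + m_sd)
    return low, high

# Reported H2a coefficients: main effect b = 0.408 (team building),
# interaction b = 0.109; moderator scaling assumed (mean 0, SD 1).
low, high = simple_slopes(0.408, 0.109, 0.0, 1.0)
print(round(low, 3), round(high, 3))
```

A positive interaction coefficient means the high-culture slope exceeds the low-culture slope, which is the pattern plotted in Figures 3 and 4.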
In addition, the findings confirm that the moderating effect of organizational learning culture on the above relationships is significant. The detailed findings are discussed below. The HRD practices (team building and employee empowerment) of the cement manufacturing units show a significant and positive relationship with employee competencies. In this respect, the finding of a significant and positive influence of team building on employee competencies is congruent with the studies of Aga et al. (2016), Beebe and Masterson (2014) and Braun et al. (2013), which established that the effective implementation of team building programmes enhances the knowledge, skills and capabilities of employees. It was also found that employee empowerment has a positive impact on employee competencies, which confirms the assumption of researchers such as Fernandez and Moldogaziev (2013) that the effective implementation of employee empowerment enhances employee competency. Further, the perception of organizational learning culture moderated the relationships of team building and employee empowerment with employee competencies. Employees perceived that an organization with effective implementation of team building practices, along with a high organizational learning culture, achieves higher employee competencies. In this respect, the findings of the study empirically validate the hypothesized relation that has been theoretically stated in many studies (Sung and Choi, 2014; Banerjee et al., 2017). A similar result was found for the moderating role of organizational learning culture in the relationship between employee empowerment and employee competencies. This empirical finding validates the theoretical assumptions of researchers (Kim and McLean, 2008; Park, 2010; Moon and Choi, 2017) that employee empowerment facilitated by a learning environment may show a positive effect on the development of employee competencies. 
Similarly, H2b proposed that organizational learning culture strengthens the positive relationship between employee empowerment and employee competencies. The findings of the present research corroborate the previous studies of Jones et al. (2013) and Kuo et al. (2010) and establish that the presence of an organizational learning culture strengthens the relationship between employee empowerment and employee competencies. The findings of this study make several important theoretical contributions. The study provides a deeper understanding of how HRD practices influence the enhancement of employee competencies in the presence of the contextual variable of organizational learning culture. The results confirm that the relationship between HRD practices and employee competencies can be strengthened by a positive organizational learning culture. A work context that encourages employees towards continuous learning through the acquisition of new knowledge and skills would foster the relationship between HRD practices and employee competencies. This study therefore provides support for a contingency perspective in HRD research: with a positive organizational learning culture, the effect of HRD practices on the development of employees' competencies can be enhanced. By demonstrating that organizational learning culture moderates the relationship between HRD practices and employee competencies, this study builds on a recent stream of research examining the resource-based view (Wernerfelt, 1984) from a contingency perspective. This study also extends the application of HRD practices and organizational learning culture to a new context (i.e. the emerging economy of India). It demonstrates that in emerging economies, characterized by environmental turbulence and uncertainty, the implementation of HRD practices helps organizations perform better by increasing the competency level of their employees. 
Bates and Khasawneh (2005) state that, "There is considerable consensus today that a key competitive advantage for organizations lies in their ability to learn and be responsive to challenges from both internal and external environments". Evidently, attention has to be paid to developing an organizational learning culture to enhance employee competencies, build competitive advantage and enhance organizational effectiveness. The outcomes of the paper also offer some suggestions to managers striving for success. First, the moderated SEM results indicate that merely providing team building and employee empowerment initiatives is not enough; it is the organization's responsibility to create an environment of learning to enhance employee competencies. Second, organizations have to take advantage of organizational learning capability by recognizing the importance of managers and their attitudes in the effective implementation of learning conditions within the organization. This indicates that managers are the facilitators of the learning culture within the organization, which can be achieved by applying the attributes of a learning organization in such a way that learning orientation becomes the main trigger for learning (Real et al., 2014). Third, a direct inference from the study results is that employees with enhanced competencies are the most vital stakeholder group in any business process that strives for improved organizational effectiveness. Hence, managers striving for effectiveness and efficiency in their processes should put employees first, which supports the opinion of Skerlavaj et al. (2010). The study has a few limitations; however, these pave the way for new lines of future research. Although we collected responses from employees who had participated in team building and employee empowerment initiatives in the past two years, the data were collected at a single point in time (a cross-sectional study). 
This might raise issues concerning the direction of causality; we recommend that future researchers conduct longitudinal studies to minimize such issues. The data used in the study are largely the subjective opinions of the employees responding to the survey. As per Real et al. (2014), subjective assessments obtained through multi-item scales are in general consistent with objective measures, although differences between perceptions and objective data may exist. Future studies might focus on this area using objective measures. Finally, the results cannot be generalized across a wider range of sectors or to the global environment, as the study was conducted in the Indian cement industry.
[SECTION: Design/methodology/approach] An integrated research model is developed by combining the resource-based view, signalling theory and experiential learning theory. The validity of the model is tested by applying the moderated structural equation modelling (MSEM) approach to data collected from 653 employees working in cement manufacturing companies. The reliability and validity of the dimensions are established through confirmatory factor analysis and the related hypotheses are tested using MSEM.
[SECTION: Findings] As per the resource-based view (Wernerfelt, 1984), to be effective in a competitive market environment an organization requires an important intangible core competence: employee competencies. Employee competencies refer to those traits, skills or attributes that employees need to perform their jobs more effectively (Soderquist et al., 2010; Campion et al., 2011). A competent workforce is believed to produce higher quality products (Ahuja and Khamba, 2008), support innovation (Siguaw et al., 2006) and reduce turnover costs (Joo and Shim, 2010). To develop and maintain employee competencies for present and future requirements, an organization must emphasize human resource development (HRD). Werner and DeSimone (2006) defined HRD as a set of systematic and planned activities designed by an organization to provide its members with the opportunities to learn the skills necessary to meet current and future job demands. According to Werner and DeSimone (2006), "HRD practices are the programs, which are designed to be strategically oriented to the organizational process for managing the development of human resources to contribute to the overall success of the organization" (p. 26). The rationale for using HRD practices to support business objectives is quite straightforward: enhancing or unleashing needed employee expertise (Chermack and Kasshanna, 2007). HRD practices continuously improve employees' expertise and performance through the existing practices of training, performance appraisal and organizational development initiatives (Garavan, 2007). HRD alone, however, is not sufficient to enhance employee competencies to a greater level, because not all of the knowledge and skills obtained from HRD practices are properly transferred (Froehlich et al., 2014). 
Thus, an organization should create a learning culture so that employees can share, acquire and create knowledge and skills, which can modify employee behaviour. Organizational learning culture refers to a set of norms and values about the functioning of an organization that supports systematic organizational learning, so that individual learning, teamwork, collaboration, creativity and knowledge distribution have collective meaning and value (Torres-Coronas and Arias-Oliva, 2008, p. 177). Thus, organizational learning culture could directly or indirectly influence employee competencies. The present study integrates the resource-based view (Wernerfelt, 1984) and the organizational perspective of learning to create a strong theoretical foundation by exploring the effects of team building, employee empowerment and organizational learning culture on employee competencies. The study provides empirical evidence to bridge the knowledge gaps regarding the relationship between HRD practices, organizational learning culture and employee competencies. Even though HRD practices and organizational learning culture are considered critical concepts and practices, most of the existing literature focuses on the conceptual level and considers commitment, productivity and profitability as the primary outcome variables. Few studies have attempted to examine the moderating role of organizational learning culture on individual outcomes such as commitment, engagement and satisfaction. Thus, the significance of the study lies in providing empirical validation of the moderating role of organizational learning culture in the relationship between HRD practices and employee competencies. This research attempts to answer the following structured questions: RQ1. Is there any relationship between HRD practices and employee competencies? RQ2. Does organizational learning culture moderate the relationship between HRD practices and employee competencies? 
From the above research questions, the following research objectives were derived: to study the impact of HRD practices on the enhancement of the competencies of employees in the cement industry; and to assess the moderating role of organizational learning culture in the relationship between HRD practices and employee competencies. Team building Klein et al. (2009) define team building as "the formal and informal team-level practices that focus on improving social relations and clarifying roles as well as solving task and interpersonal problems that affect team functioning". In this intervention, team members learn experientially, by examining their structures, norms, values and interpersonal dynamics, to increase their skills for effective performance (Senecal et al., 2008). In the literature, there is consensus that there are four approaches/components to team building: goal setting; role clarification; interpersonal relations; and problem solving. A brief explanation is presented below: Goal setting: This component is designed specifically to strengthen a team member's motivation to achieve team goals and objectives (Salas et al., 2004). Team members are expected to become involved in action planning to identify ways to achieve those goals (Aga et al., 2016). Role clarification: This entails clarifying individual role expectations, group norms and the shared responsibilities of team members (Klein et al., 2009). Role clarification can be used to improve team and individual characteristics (i.e. by reducing role ambiguity) and work structure by negotiating, defining and adjusting team member roles (Mathieu and Schulze, 2006). Interpersonal relations: This component assumes that teams with fewer interpersonal conflicts function more effectively than teams with a greater number of interpersonal conflicts. It involves an increase in teamwork skills, such as mutual supportiveness, communication and sharing of feelings (Aga et al., 2016). 
Problem solving: The fourth component emphasizes the identification of major problems in the team's tasks to enhance task-related skills. It is an intervention in which team members identify major problems, generate relevant information, engage in problem solving and action planning, and implement and evaluate action plans (Aga et al., 2016; Beebe and Masterson, 2014). Effective team building intervention in an organization enhances an individual's cognitive outcomes, such as teamwork competencies, and affective outcomes, such as trust and team potency, whereas at the team level the outcomes are coordination and effective communication (Tannenbaum et al., 2012). At the organization level, team effort helps to solve various organizational problems, such as conflict among organizational members, unclear roles and assignments, and a lack of innovation in solving problems, thereby boosting the performance of the organization (Stone, 2010). Employee empowerment An employee empowerment approach is composed of practices aimed at sharing information, job-related knowledge and authority with employees (Fernandez and Moldogaziev, 2013). Baird and Wang (2010) stated, "The basic objective of empowerment is redistribution of power between management and employees - most commonly in the form of increasing employee authority, responsibility, and influencing commitment". In the literature, empowerment is defined from two perspectives: the psychological and the managerial. From a psychological perspective, empowerment is a motivational construct akin to a state of mind or set of cognitions (Fernandez and Moldogaziev, 2013). Dust et al. (2014) described employee empowerment as a motivational construct composed of four cognitions (meaning, competence, self-determination and impact) that reflect an active rather than a passive orientation towards a work role. 
From a managerial perspective, employee empowerment is a relational construct that describes how those with power in organizations share power, information, resources and rewards with those lacking them (Gomez and Rosen, 2001; Fernandez and Moldogaziev, 2013). Bowen and Lawler (1995) define empowerment as sharing four organizational ingredients with front-line employees: information about the organization's performance; knowledge that enables employees to understand and contribute to organizational performance; rewards based on the organization's performance; and power to make decisions that influence organizational direction and performance. Torres-Coronas and Arias-Oliva (2008, p. 177) define organizational learning culture as: A set of norms and values about the functioning of an organization that support systematic organizational learning so that individual learning, teamwork, collaboration, creativity, and knowledge distribution have collective meaning and value. Organizational learning culture is a complex process that refers to the development of new knowledge and has the potential to change behaviour (Skerlavaj et al., 2010). According to Kandemir and Hult (2005), organizational learning culture has been viewed as a process by which organizations, as collectives, learn through interaction with their environments; they propose that learning might result in new and significant insights and awareness. The objective of building an organizational learning culture is to expand people's capacity to create the results they truly desire, to encourage new and expansive patterns of thinking, to set collective aspiration free, and to enable employees to continually learn how to learn together (Senge, 2009). 
According to Marsick and Watkins (2003), organizational learning culture consists of seven interlinked constructs: create continuous learning opportunities; promote inquiry and dialogue; encourage collaboration and team learning; create systems to capture and share learning; empower people towards a collective vision; connect the organization to its environment; and provide strategic leadership for learning, which helps in building the organization's strategic learning culture. Table I summarizes the seven dimensions of organizational learning culture. The word competency was first explained in the book "The Competent Manager" (Boyatzis, 1982, p. 21), which defines the term as "an underlying characteristic of a person that could be a motive, trait, and skill aspect of one's self-image or social role, or a body of knowledge which he or she uses". A competency is a reliably measurable, relatively enduring (stable) characteristic of a person, team or organization that causes and statistically predicts a measurable level of performance (Berger and Berger, 2010). Some definitions of the term competency are shown in Table II. The term "reliably measurable" means that two or more independent observers or methods (tests, surveys) agree statistically that a person demonstrates a competency (Spencer et al., 2008), while "relatively enduring" means that a competency measured at one point in time is statistically likely to be demonstrated at a later point in time (Catano et al., 2007). Competency characteristics include content knowledge, behavioural skills, cognitive processing (IQ), personality traits, values, motives and occasionally other perceptual or sensorimotor capabilities that accurately predict some level of performance. Cardy and Selvarajan (2006) have classified competencies into two categories: employee (personal) and organization (corporate). 
Employee competencies are those characteristics or traits acquired by employees, such as knowledge, skills, ability and personality, that differentiate them from average performers (Cardy and Selvarajan, 2006). Organizational competencies are those embedded in the organizational systems and structures that tend to exist within the organization even when an employee leaves (Semeijn et al., 2014). Human capital attributes have been argued to be an important resource for organizational performance, because organizations that are able to generate organization-specific, valuable and unique competencies are thought to be in a superior position that enables them to outperform their rivals and succeed in a dynamic business environment (van Esch et al., 2018). This study selected a set of independent constructs, HRD practices (team building and employee empowerment) and organizational learning culture, and a dependent construct, employee competencies. The independent constructs are considered necessary for influencing employee competencies and, in turn, organizational effectiveness. Figure 1 illustrates the research model of this study. In the following sections, the relationships between the constructs are discussed. Human resource development practices and employee competencies Researchers (Sung and Choi, 2014) have suggested that organizations should design and implement HRD practices so that individuals can perform effectively and meet performance expectations through improved individual competencies. Kehoe and Wright (2013) argue that HRD is the basic component through which employees acquire competencies that in turn significantly improve organizational performance. In fact, the general purpose of HRD practices is to produce competent and qualified employees who can perform an assigned job and contribute to the organization's business outcomes (Nolan and Garavan, 2016). 
Scholars have investigated the outcomes of HRD practices and reported that these practices improve employees' on-the-job capabilities, productivity and efficiency (Haslinda, 2009; Alagaraja et al., 2015). Yuvaraj and Mulugeta (2013) provided a similar result, explaining that HRD practices continuously improve employees' capability and performance through the existing practices of training, career development, performance appraisal and the organizational development components of HRD. The study examined two practices, team building and employee empowerment, which were being widely implemented in the selected organizations (cement manufacturing units). The associations between the selected HRD practices and employee competencies are examined in the subsequent reviews. Team building and employee competencies According to LePine et al. (2008), "The practices of team-building components (goal-setting, interpersonal processes, role-clarification, and problem-solving) can lead to improved performance through modification of attitudes, values, problem-solving techniques, and group processes". In the goal-setting component, team members are introduced to a goal-setting framework and are expected to engage in action planning to identify ways to achieve those goals, which strengthens team members' problem-solving skills and motivation (Aga et al., 2016). Team members exposed to role-clarification activities are expected to achieve a better understanding of their own and others' respective roles and duties within the team (Salas et al., 1999). The interpersonal process component involves an enhancement of team members' interpersonal skills, such as mutual supportiveness, communication and sharing of information (Klein et al., 2009). The fourth component emphasizes the identification of major problems in the team's tasks to enhance task-related skills (Lacerenza et al., 2018). 
Team building is an intervention in which team members identify major problems, generate relevant information, engage in problem solving and action planning, and implement and evaluate action plans (Aga et al., 2016; Beebe and Masterson, 2014). Team building interventions enhance individuals' cognitive outcomes, such as teamwork competencies, and affective outcomes, such as trust and team potency, whereas at the team level the outcomes are coordination and effective communication (Tannenbaum et al., 2012). Shuffler et al. (2011), in their meta-analysis, found that effective team building improves affective outcomes (trust, attitude and confidence) and cognitive outcomes (shared knowledge among team members) in employees. The above discussions provide ample grounds to suggest that: H1a. Team building is positively related to the enhancement of employee competencies. Employee empowerment and employee competencies Fernandez and Moldogaziev (2013) have stated that "employee empowerment is a relational construct that describes how those with power in organizations share power and formal authority with those lacking it". Organizations have implemented empowerment initiatives based on the premise that when individual employees can participate in decision-making and share responsibility for how work is conducted, outcomes such as performance and employees' knowledge will be enhanced (Maynard et al., 2012). Organizations that encourage harmonious relationships between superiors and subordinates provide employees with the liberty to express their creative suggestions, which helps to enrich their self-motivation (Fernandez and Moldogaziev, 2012). When employees are empowered and given autonomy and flexibility, they are likely to be more motivated and to take full responsibility for finding new ways and developing new skills to respond to challenges (Luoh et al., 2014). 
Kanter (1993) and Laschinger (1996) define structural empowerment as workplace structures that enable employees to carry out work in meaningful ways. These structures empower employees by providing access to the information required to perform the job effectively, support in the form of peer and supervisor feedback, resources such as time and supplies to carry out the job, and opportunities for learning and growth within the organization (Dainty et al., 2002). Liden et al. (2000) found that empowering working conditions are positively linked to employees' positive job attitudes and their tolerance of work pressure and ambiguity. When employees engage in their work with vigour and commitment, it makes a significant difference to their self-motivation and positive job attitude (Manojlovich, 2005). Empowerment can enrich individuals' ability to perform their duties successfully when they have control over their workload, receive support from peers, feel rewarded for their accomplishments and are treated fairly (Janssen, 2004). Fernandez and Moldogaziev (2013), in their empirical study, found a positive relationship between employee empowerment and employees' attitudes and behaviour. Leach et al. (2003) further showed, through empirical validation, that employee empowerment has a positive impact on job knowledge. Hence, the following premise is expected: H1b. Employee empowerment is positively related to the enhancement of employee competencies. Moderating role of organizational learning culture Organizational learning culture as a moderator is grounded in signaling theory (Spence, 2002) and experiential learning theory (Kolb, 1984).
From the viewpoint of signaling theory, organizations that cultivate a learning culture signal to employees that management values and supports the exchange of the knowledge and skills they have learnt from the HRD programmes their organizations provide (Bloor and Dawson, 1994; Spence, 2002). Such a culture, which facilitates knowledge transfer and idea sharing, would positively influence employee competencies. According to experiential learning theory (Kolb, 1984), the process of learning is strongly affected by two elements: an individual's interaction with different stakeholders and feedback on one's knowledge from superiors and peers. Accordingly, employees' perception that the organization promotes a sound learning culture through regular feedback and mentorship would motivate them to acquire and exchange skills and knowledge (Clark et al., 1993). Learning culture has therefore been identified as one of the vital and appropriate contextual factors for enhancing employee competencies (Jeong et al., 2017). Based on the above discussion, it can be inferred that organizational learning culture plays an important moderating role between HRD practices and employee competencies. Thus, in the present study, the associations between the selected HRD practices, organizational learning culture and employee competencies are examined in the reviews that follow. The moderating effect of organizational learning culture between team building and employee competencies Team building practices are based on an action research model of data collection, feedback and action planning (Whitehead, 2001). Team building activities operate within a particular environmental context. Although groups are often viewed as the context variable for individual behaviours, the organizational environment should also be considered the context variable for group behaviour (Shuffler et al., 2011). According to Van den Bossche et al.
(2006), "The organizational variable that could influence a team member's knowledge and problem-solving skill is the organization's learning culture". The enhancement of team members' competencies grounded in an organizational learning culture can influence the level of cooperation or performance between team members, which in turn may affect team effectiveness (Hollenbeck et al., 2004). The above discussion provides ample evidence to suggest that: H2a. Organizational learning culture will moderate the relationship between team building and employee competencies, such that the relationship will be stronger when organizational learning culture is high. The moderating effect of organizational learning culture between employee empowerment and employee competencies Employee empowerment involves providing employees with a greater degree of flexibility and more freedom to make decisions relating to their work (Greasley et al., 2005). Empowerment is closely related to people's perceptions of themselves in relation to their work environments (Kuo et al., 2010). Jones et al. (2013) stated that the environment surrounding individuals is important for increasing employee empowerment, because empowerment is not a consistent or enduring personality trait but rather a set of cognitions shaped by work environments. In a recent study, Joo and Shim (2010) found a positive moderating role of organizational learning culture between empowerment and positive employee behaviour; their results indicated that a high learning culture combined with empowerment strongly influences employee behaviour. Thus, the following hypothesis is proposed: H2b. Organizational learning culture will moderate the positive relationship between employee empowerment and employee competencies, such that the relationship will be stronger when organizational learning culture is high.
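The moderation posited in H2a and H2b amounts to an interaction term: the effect of an HRD practice on competencies depends on the level of learning culture. As a minimal illustrative simulation (all coefficients and variable names are hypothetical and unrelated to the study's data), the model y = b0 + b1·practice + b2·culture + b3·practice·culture with b3 > 0 encodes exactly this, and the interaction can be recovered by least squares:

```python
import numpy as np

# Hypothetical data-generating process: a positive interaction coefficient (0.1)
# means the practice's effect on competencies grows with learning culture.
rng = np.random.default_rng(42)
n = 500
practice = rng.normal(size=n)   # e.g. team building exposure (standardized)
culture = rng.normal(size=n)    # organizational learning culture (standardized)
y = 0.4 * practice + 0.3 * culture + 0.1 * practice * culture \
    + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an explicit product term
X = np.column_stack([np.ones(n), practice, culture, practice * culture])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(b3, 2))   # interaction coefficient, recovered as roughly 0.1
```

The sign and size of b3 are what the MSEM analysis later estimates for the latent constructs; this sketch only shows the observed-variable analogue of that logic.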
Research design, sampling and data collection A structured questionnaire based on a seven-point Likert scale was developed to collect primary data. It consists of two sections: the first collects general information about the respondents, such as age, gender, designation and experience; the second includes the items measuring the constructs team building, employee empowerment, organizational learning culture and employee competencies. The study took place in four medium-sized cement manufacturing units in India. We communicated personally (through appointments, phone calls and email) with senior executives of the four units and explained the methodology of the study. We instructed executives and supervisors on how to answer specific questions and asked them to distribute the questionnaire to subordinates and colleagues who had participated in HRD practices in the past two years. The questionnaire was distributed to around 952 employees, from whom 653 complete responses were obtained, a response rate of 68.53 per cent. Team building A six-item scale representing four broad areas of team building practices was developed for this study: goal setting, interpersonal relations, role clarification and problem solving. These items were adapted from Aga et al. (2016), Klein et al. (2009) and Salas et al. (1999), and the scale's reliability is 0.81. Employee empowerment A five-item scale measuring the effectiveness of employee empowerment implemented in the organization was developed by adapting the scales of Menon (2001) and Men and Stacks (2013). We modified the items to fit the current study; the scale's reliability is 0.87. Employee competencies. The competencies analysed in the study were technical expertise, adaptability, innovation, teamwork and cooperation, conceptual thinking and self-confidence. For this, we adopted the Diaz-Fernandez et al.
(2014) measures of employee competencies and adapted them to the current scenario of the study. The construct consists of six items, and its reliability (Cronbach's alpha) is 0.71. Organizational learning culture. The Dimensions of the Learning Organization Questionnaire (DLOQ) was developed by Watkins and Marsick (1997) with 21 items measuring seven dimensions; Yang et al. (2004) later shortened it to seven items that still cover all seven dimensions. We used the Yang et al. (2004) scale of organizational learning culture, which measures continuous learning, team learning, dialogue and inquiry, empowerment, system connection, embedded system and strategic leadership. The seven-item scale shows a reliability of 0.88. The results are described in the order in which the analyses were conducted. First, we performed confirmatory factor analysis (CFA) in AMOS to establish the factor structure and check for the presence of common method bias. Second, we assessed the construct validity of the full measurement model, examining convergent and discriminant validity. Third, we computed descriptive statistics, correlations and reliabilities in SPSS for the full measurement model. Fourth, we performed moderated structural equation modelling (MSEM) in AMOS to test the hypotheses. Comparison of measurement models and Harman's single factor test A full measurement model was tested initially. The team building, employee empowerment, organizational learning culture and employee competencies items were loaded onto their respective factors, and all factors were allowed to correlate. The four-factor model showed a good fit: χ2 = 799.845, df = 337, p < 0.05, CMIN/df = 2.373, goodness of fit index (GFI) = 0.901, comparative fit index (CFI) = 0.950, root mean square error of approximation (RMSEA) = 0.052. Sequential χ2 difference tests compared the full measurement model to five alternative nested models, as shown in Table III.
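The scale reliabilities quoted above (0.71 to 0.88) are Cronbach's alpha coefficients, computed as α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch on invented Likert responses (illustrative only, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented seven-point Likert responses for a six-item scale (not the study's data):
# each respondent has a latent "true score" plus small per-item noise.
rng = np.random.default_rng(0)
true_score = rng.integers(2, 7, size=200)
items = np.clip(true_score[:, None] + rng.integers(-1, 2, size=(200, 6)), 1, 7)
print(round(cronbach_alpha(items), 2))   # high, since items share a true score
```

Because the invented items are driven by one common true score, alpha comes out high; real scales land lower, as the 0.71 to 0.88 values here do.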
The full measurement model fits significantly better than the alternative models, suggesting that the variables in the study are distinct. Cross-sectional, self-reported data are susceptible to common method bias. Following the procedure adopted by several scholars (Ketkar and Sett, 2010; Conway et al., 2015), all items of both the independent and dependent variables were loaded onto a single factor and the fit indices were examined. The single-factor model showed poor fit with the data. Comparing the single-factor model with the full measurement model showed that the full measurement model fit the data significantly better. While this test does not eliminate the possibility of method bias, it provides evidence that inter-item correlations are not driven purely by method bias (Podsakoff et al., 2003). Construct validity of the full measurement model Construct validity was established by assessing convergent and discriminant validity. To estimate convergent validity, discriminant validity and goodness of fit statistics, we performed a CFA. Convergent validity is established by examining the factor loadings (completely standardized loadings), composite reliability and average variance extracted (AVE) from the CFA. Table IV presents the convergent validity results, which show that the values are in the acceptable region, confirming convergent validity. Discriminant validity is assessed by comparing the AVE with the corresponding inter-dimension squared correlation estimates (Fornell and Larcker, 1981). Table V shows that the square root of the AVE of each study factor is greater than its inter-dimension correlations, supporting discriminant validity. The goodness of fit statistics of the measurement model indicated a good fit with the data (χ2 = 799.845, df = 337, p < 0.05, CMIN/df = 2.373, GFI = 0.901, CFI = 0.950, RMSEA = 0.052).
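The Fornell-Larcker check described above compares each construct's AVE against its squared inter-construct correlations, or equivalently √AVE against the raw correlations. A minimal sketch with hypothetical AVE and correlation values (not the figures from the study's Table V):

```python
import numpy as np

# Hypothetical values for four constructs, e.g. TB, EE, OLC, EC (not Table V)
ave = np.array([0.55, 0.60, 0.58, 0.52])
corr = np.array([
    [1.00, 0.45, 0.50, 0.48],
    [0.45, 1.00, 0.42, 0.46],
    [0.50, 0.42, 1.00, 0.44],
    [0.48, 0.46, 0.44, 1.00],
])

def fornell_larcker_ok(ave: np.ndarray, corr: np.ndarray) -> bool:
    """True if sqrt(AVE) of every construct exceeds all of its correlations
    with the other constructs (Fornell-Larcker criterion)."""
    sqrt_ave = np.sqrt(ave)
    off_diag = corr - np.eye(len(ave))      # zero out the diagonal of 1s
    return bool(np.all(sqrt_ave > off_diag.max(axis=1)))

print(fornell_larcker_ok(ave, corr))   # True
```

With these hypothetical numbers the criterion passes, mirroring the pattern the study reports for its own constructs.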
Thus, the instrument used in the study has good construct validity and psychometric properties. Descriptive statistics, correlations and reliabilities Table VI presents the means, standard deviations and correlations among the four variables. The reliabilities of the individual variables vary from 0.77 to 0.93. The correlations among the four variables are significant and in line with the hypotheses. Moderated structural equation modelling results We used AMOS 20.0 to test the study's hypotheses through SEM, so that we could explicitly account for measurement error when examining the hypothesized relationships among the study's focal constructs. This approach also allowed us to assess how well our conceptual model as a whole fit the data, as recommended by previous studies that test complex models with a web of hypotheses involving mediating and moderating effects. Several procedures for testing interaction (moderating) effects in SEM have been put forward (Jaccard et al., 1996; Joreskog and Yang, 1996; Ping, 1995); Cortina et al. (2001) found that they all produce very similar results. The present study adopted Ping's (1995) approach to moderated SEM, using the three steps described by Cortina et al. (2001); these steps are detailed in Appendix I. A few recent studies (Anning-Dorson, 2017; Harney et al., 2018) have followed the same approach. The goodness-of-fit statistics of the MSEM results (χ2 = 483.728; df = 174; NFI = 0.935; CFI = 0.957; RMSEA = 0.059; SRMR = 0.077) indicate a good model fit. Figure 2 shows the results of the MSEM analysis, including the beta coefficients and adjusted R2. In total, the HRD practices, organizational learning culture and interaction variables explain 59 per cent of the variance in employee competencies.
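One common variant of Ping's (1995) approach builds a single observed indicator for the latent interaction from the product of mean-centred sums of each construct's indicators. A hedged sketch of that data-preparation step (variable names and data are invented; the study's estimation itself was run in AMOS):

```python
import numpy as np

def product_indicator(x_items: np.ndarray, m_items: np.ndarray) -> np.ndarray:
    """Single product indicator for a latent interaction (Ping-style):
    the product of mean-centred sums of each construct's indicators."""
    x_c = x_items.sum(axis=1) - x_items.sum(axis=1).mean()
    m_c = m_items.sum(axis=1) - m_items.sum(axis=1).mean()  # centring reduces collinearity
    return x_c * m_c

# Invented Likert responses: 6 team-building items, 7 learning-culture items
rng = np.random.default_rng(1)
tb = rng.integers(1, 8, size=(200, 6)).astype(float)
olc = rng.integers(1, 8, size=(200, 7)).astype(float)
interaction = product_indicator(tb, olc)   # extra observed indicator for the model
print(interaction.shape)   # (200,)
```

This new column is what loads on the latent interaction term whose coefficient the MSEM results below report.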
The results show a positive relationship between HRD practices and employee competencies: team building and employee empowerment positively influence employee competencies, confirming H1a (b = 0.408, standard error (SE) = 0.031, critical ratio (CR) = 13.161, p < 0.001) and H1b (b = 0.035, SE = 0.004, CR = 8.75, p < 0.05). H2a proposed that organizational learning culture would moderate the positive relationship between team building and employee competencies, such that the relationship would be stronger when organizational learning culture is higher. The MSEM results (Figure 2) show a significant interaction effect of team building and organizational learning culture on employee competencies (b = 0.109, SE = 0.021, CR = 5.190, p < 0.05), supporting H2a. The moderated relationship was also supported by a simple slope test at one standard deviation above and below the mean. Figure 3 shows that organizations with a higher organizational learning culture (b = 0.515, t = 8.837, p < 0.001) influence employee competencies more strongly than those with a lower organizational learning culture (b = 0.401, t = 5.049, p < 0.001). The analysis found similar results for H2b: there is a moderated positive relationship between employee empowerment and employee competencies, which is stronger when a higher organizational learning culture is present (b = 0.130, SE = 0.020, CR = 6.50, p < 0.05), confirming H2b. The simple slope test (Figure 4) further shows that organizations with a higher organizational learning culture (b = 0.162, t = 2.788, p < 0.05) influence employee competencies more strongly than those with a lower organizational learning culture (b = 0.142, t = 1.499). The study found that team building, employee empowerment and organizational learning culture have a significant and positive influence on employee competencies.
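The simple slope test above follows directly from the interaction model y = b0 + b1·x + b2·m + b3·x·m: the slope of the practice x at moderator level m is b1 + b3·m, evaluated at m one standard deviation above and below its mean. A sketch with illustrative coefficients (hypothetical, not the study's exact estimates):

```python
def simple_slope(b1: float, b3: float, m: float) -> float:
    """Marginal effect of the predictor at (centred) moderator value m: b1 + b3*m."""
    return b1 + b3 * m

# Illustrative coefficients (hypothetical, not the study's estimates)
b1, b3, sd = 0.40, 0.11, 1.0
high = simple_slope(b1, b3, +sd)   # learning culture 1 SD above the mean
low = simple_slope(b1, b3, -sd)    # learning culture 1 SD below the mean
print(round(high, 2), round(low, 2))   # 0.51 0.29
```

A positive b3 makes the high-culture slope steeper than the low-culture slope, which is the pattern Figures 3 and 4 plot for the study's own estimates.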
In addition, the findings confirm that the moderating effect of organizational learning culture on these relationships is significant. The detailed findings are discussed below. The HRD practices (team building and employee empowerment) of the cement manufacturing units showed a significant and positive relationship with employee competencies. In this respect, the finding of a significant and positive influence of team building on employee competencies is congruent with the studies of Aga et al. (2016), Beebe and Masterson (2014) and Braun et al. (2013), which established that effective implementation of team building programmes enhances the knowledge, skills and capabilities of employees. Employee empowerment was also found to have a positive impact on employee competencies, confirming the assertion of researchers such as Fernandez and Moldogaziev (2013) that effective implementation of employee empowerment enhances employee competency. Further, the perception of organizational learning culture moderated the relationships of team building and employee empowerment with employee competencies. Employees perceived that an organization combining effective implementation of team building practices with a high organizational learning culture achieves higher employee competencies. In this respect, the findings of the study empirically validate the hypothesized relation that has been stated theoretically in many studies (Sung and Choi, 2014; Banerjee et al., 2017). A similar result was found for empowerment: organizational learning culture moderated the relationship between employee empowerment and employee competencies. This empirical finding validates the theoretical assumptions of researchers (Kim and McLean, 2008; Park, 2010; Moon and Choi, 2017) that employee empowerment facilitated by a learning environment may have a positive effect on the development of employee competencies.
Similarly, H2b proposed that organizational learning culture strengthens the positive relationship between employee empowerment and employee competencies. The findings of the present research corroborate previous studies by Jones et al. (2013) and Kuo et al. (2010) and establish that the presence of an organizational learning culture strengthens the relationship between employee empowerment and employee competencies. The findings of this study make several important theoretical contributions. The study provides a deeper understanding of how HRD practices influence the enhancement of employee competencies in the presence of the contextual variable of organizational learning culture. The results confirm that the relationship between HRD practices and employee competencies can be strengthened by a positive organizational learning culture. A work context that encourages employees to learn continuously through the acquisition of new knowledge and skills fosters the relationship between HRD practices and employee competencies. This study therefore supports a contingency perspective in HRD research: with a positive organizational learning culture, the effect of HRD practices on the development of employee competencies can be enhanced. By demonstrating that organizational learning culture moderates the relationship between HRD practices and employee competencies, this study builds on a recent stream of research examining the resource-based view (Wernerfelt, 1984) from a contingency perspective. This study also extends the application of HRD practices and organizational learning culture to a new context, the emerging economy of India. It demonstrates that in emerging economies, characterized by environmental turbulence and uncertainty, the implementation of HRD practices helps organizations perform better by increasing the competency level of their employees.
Bates and Khasawneh (2005) state that, "There is considerable consensus today that a key competitive advantage for organizations lies in their ability to learn and be responsive to challenges from both internal and external environments". Evidently, attention must be paid to developing an organizational learning culture that enhances employee competencies, builds competitive advantage and improves organizational effectiveness. The outcomes of the paper also offer some suggestions for managers striving for success. First, the moderated SEM results show that simply providing team building and employee empowerment initiatives is not enough; it is the organization's responsibility to create an environment of learning to enhance employee competencies. Second, organizations have to take advantage of organizational learning capability by recognizing the importance of managers and their attitudes in effectively implementing learning conditions within the organization. Managers are the facilitators of the learning culture within the organization, and this role can be fulfilled by applying the attributes of a learning organization in such a way that learning orientation becomes the main trigger for learning (Real et al., 2014). Third, a direct inference from the results is that employees with enhanced competencies are the most vital stakeholder group in any business process that strives for improved organizational effectiveness. Hence, managers striving for effectiveness and efficiency should put employees first, which supports the opinion of Skerlavaj et al. (2010). The study has a few limitations; however, these pave the way for new lines of future research. Although we collected responses from employees who had participated in team building and employee empowerment initiatives in the past two years, the data were collected at a single point in time (a cross-sectional study).
This might raise issues concerning the direction of causality; we recommend that future researchers conduct longitudinal studies to minimize them. The data used in the study are largely the subjective opinions of the employees responding to the survey. Although subjective assessments obtained through multi-item scales are in general consistent with objective measures (Real et al., 2014), differences between perceptions and objective data may exist; future studies might address this by using objective measures. Finally, as the study was conducted in the Indian cement industry, the results cannot be generalized across a wider range of sectors or the global environment.
[SECTION: Findings] The findings suggest that organizational learning culture significantly strengthens the relationships of team building and employee empowerment with employee competencies.
[SECTION: Value] According to the resource-based view (Wernerfelt, 1984), to be effective in a competitive market environment an organization requires an important intangible core competence: employee competencies. An employee competency refers to the traits, skills or attributes that employees need to perform their jobs more effectively (Soderquist et al., 2010; Campion et al., 2011). A competent workforce is believed to produce higher quality products (Ahuja and Khamba, 2008), support innovation (Siguaw et al., 2006) and reduce turnover costs (Joo and Shim, 2010). To develop and maintain employee competencies for present and future requirements, an organization must emphasize human resource development (HRD). Werner and DeSimone (2006) defined HRD as a set of systematic and planned activities designed by an organization to provide its members with the opportunities to learn the skills necessary to meet current and future job demands. According to Werner and DeSimone (2006), "HRD practices are the programs, which are designed to be strategically oriented to the organizational process for managing the development of human resources to contribute to the overall success of the organization" (p. 26). The rationale for using HRD practices to support business objectives is quite straightforward: enhancing or unleashing needed employee expertise (Chermack and Kasshanna, 2007). HRD practices continuously improve employees' expertise and performance through the established practices of training, performance appraisal and organizational development initiatives (Garavan, 2007). HRD alone, however, is not sufficient to enhance employee competencies to a greater level, because not all knowledge and skills obtained from HRD practices are properly transferred (Froehlich et al., 2014).
Thus, an organization should create a learning culture so that employees can share, acquire and create knowledge and skills, which can modify employee behaviour. Organizational learning culture refers to a set of norms and values about the functioning of an organization that supports systematic organizational learning, so that individual learning, teamwork, collaboration, creativity and knowledge distribution have collective meaning and value (Torres-Coronas and Arias-Oliva, 2008, p. 177). Thus, organizational learning culture could directly or indirectly influence employee competencies. The present study integrates the resource-based view (Wernerfelt, 1984) and the organizational perspective of learning to create a strong theoretical foundation, exploring the effects of team building, employee empowerment and organizational learning culture on employee competencies. The study provides empirical evidence to bridge the knowledge gaps regarding the relationship between HRD practices, organizational learning culture and employee competencies. Even though HRD practices and organizational learning culture are considered critical concepts and practices, most of the existing literature focuses on the conceptual level and considers commitment, productivity and profitability as the primary outcome variables. Few studies have examined the moderating role of organizational learning culture on individual outcomes such as commitment, engagement and satisfaction. Thus, the significance of the study lies in providing empirical validation of the moderating role of organizational learning culture in the relationship between HRD practices and employee competencies. This research attempts to answer the following research questions: RQ1. Is there any relationship between HRD practices and employee competencies? RQ2. Does organizational learning culture moderate the relationship between HRD practices and employee competencies?
From the above research questions, the following research objectives were derived: to study the impact of HRD practices on the enhancement of the competencies of employees in the cement industry; and to assess the moderating role of organizational learning culture in the relationship between HRD practices and employee competencies. Team building Klein et al. (2009) define team building as "the formal and informal team-level practices that focus on improving social relations and clarifying roles as well as solving task and interpersonal problems that affect team functioning". In this intervention, team members learn experientially, by examining their structures, norms, values and interpersonal dynamics, to increase their skills for effective performance (Senecal et al., 2008). In the literature, there is consensus on four approaches/components of team building: goal setting; role clarification; interpersonal relations; and problem solving. A brief explanation is presented below: Goal setting: This component is designed specifically to strengthen team members' motivation to achieve team goals and objectives (Salas et al., 2004). Team members are expected to become involved in action planning to identify ways to achieve those goals (Aga et al., 2016). Role clarification: This entails clarifying individual role expectations, group norms and the shared responsibilities of team members (Klein et al., 2009). Role clarification can be used to improve team and individual characteristics (i.e. by reducing role ambiguity) and work structure by negotiating, defining and adjusting team member roles (Mathieu and Schulze, 2006). Interpersonal relations: This component assumes that teams with fewer interpersonal conflicts function more effectively than teams with a greater number of interpersonal conflicts. It involves an increase in teamwork skills, such as mutual supportiveness, communication and sharing of feelings (Aga et al., 2016).
Problem solving: The fourth component emphasizes the identification of major problems in the team's tasks to enhance task-related skills. It is an intervention in which team members identify major problems, generate relevant information, engage in problem solving and action planning, and implement and evaluate action plans (Aga et al., 2016; Beebe and Masterson, 2014). An effective team building intervention enhances individuals' cognitive outcomes, such as teamwork competencies, and affective outcomes, such as trust and team potency, whereas at the team level the outcomes are coordination and effective communication (Tannenbaum et al., 2012). At the organization level, team effort helps to solve various organizational problems, such as conflict among organizational members, unclear roles and assignments and a lack of innovation in problem solving, which boosts the performance of the organization (Stone, 2010). Employee empowerment An employee empowerment approach is composed of practices aimed at sharing information, job-related knowledge and authority with employees (Fernandez and Moldogaziev, 2013). Baird and Wang (2010) stated, "The basic objective of empowerment is redistribution of power between management and employees - most commonly in the form of increasing employee authority, responsibility, and influencing commitment". In the literature, empowerment is defined from two perspectives: psychological and managerial. From a psychological perspective, empowerment is a motivational construct akin to a state of mind or a set of cognitions (Fernandez and Moldogaziev, 2013). Dust et al. (2014) described employee empowerment as a four-dimensional motivational construct composed of the cognitions of meaning, competence, self-determination and impact, which reflect an active rather than a passive orientation towards a work role.
From a managerial perspective, employee empowerment is a relational construct that describes how those with power in organizations share power, information, resources and rewards with those lacking them (Gomez and Rosen, 2001; Fernandez and Moldogaziev, 2013). Bowen and Lawler (1995) define empowerment as sharing four organizational ingredients with front-line employees: information about the organization's performance; knowledge that enables employees to understand and contribute to organizational performance; rewards based on the organization's performance; and power to make decisions that influence organizational direction and performance. Torres-Coronas and Arias-Oliva (2008, p. 177) define organizational learning culture as "a set of norms and values about the functioning of an organization that support systematic organizational learning so that individual learning, teamwork, collaboration, creativity, and knowledge distribution have collective meaning and value". Organizational learning culture is a complex process that refers to the development of new knowledge and has the potential to change behaviour (Skerlavaj et al., 2010). According to Kandemir and Hult (2005), organizational learning culture is a process by which organizations, as collectives, learn through interaction with their environments; such learning may result in new and significant insights and awareness. The objective of building an organizational learning culture is to expand people's capacity to create the results they truly desire, to encourage new and expansive patterns of thinking, to set collective aspiration free, and to enable employees to continually learn how to learn together (Senge, 2009).
According to Marsick and Watkins (2003), organizational learning culture consists of seven interlinked constructs: create continuous learning opportunities; promote inquiry and dialogue; encourage collaboration and team learning; create systems to capture and share learning; empower people toward a collective vision; connect the organization to its environment; and provide strategic leadership for learning, which together help build the organization's strategic learning culture. Table I summarizes the seven dimensions of organizational learning culture. The word competency was first explained in the book "The Competent Manager" (Boyatzis, 1982, p. 21), which defines the term as "an underlying characteristic of a person that could be a motive, trait, and skill aspect of one's self-image or social role, or a body of knowledge which he or she uses". A competency is a reliably measurable, relatively enduring (stable) characteristic of a person, team or organization that causes and statistically predicts a measurable level of performance (Berger and Berger, 2010). Some definitions of the term competency are shown in Table II. The term "reliably measurable" means that two or more independent observers or methods (tests, surveys) agree statistically that a person demonstrates a competency (Spencer et al., 2008), while "relatively enduring" means that a competency measured at one point in time is statistically likely to be demonstrated at a later point in time (Catano et al., 2007). Competency characteristics include content knowledge, behavioural skills, cognitive processing (IQ), personality traits, values, motives and occasionally other perceptual or sensorimotor capabilities that accurately predict some level of performance. Cardy and Selvarajan (2006) have classified competencies into two categories: employee (personal) and organization (corporate).
Employee competencies are those characteristics or traits that are acquired by employees, such as knowledge, skills, ability and personality, that differentiate them from average performers (Cardy and Selvarajan, 2006). Organizational competencies are those which are embedded in the organizational systems and structures and tend to persist within the organization even when an employee leaves (Semeijn et al., 2014). Human capital attributes have been argued to be an important resource for organizational performance because organizations that are able to generate organization-specific, valuable and unique competencies are thought to be in a superior position that enables them to outperform their rivals and succeed in a dynamic business environment (van Esch et al., 2018). This study selected a set of independent constructs - HRD practices (team building and employee empowerment) and organizational learning culture - and a dependent construct: employee competencies. The independent constructs are considered necessary for influencing employee competencies, which in turn influence organizational effectiveness. Figure 1 illustrates the research model of this study. In the following sections, the relationships between the constructs are discussed. Human resource development practices and employee competencies Researchers (Sung and Choi, 2014) have suggested that organizations should design and implement HRD practices so that individuals can perform effectively and meet performance expectations through improved individual competencies. Kehoe and Wright (2013) argue that HRD is a basic component through which employees acquire competencies that in turn significantly improve organizational performance. In fact, the general purpose of HRD practices is to produce competent and qualified employees who can perform an assigned job and contribute to the organization's business outcomes (Nolan and Garavan, 2016). 
Scholars have investigated the outcomes of HRD practices and reported that these practices improve employees' capabilities on the job, productivity and efficiency (Haslinda, 2009; Alagaraja et al., 2015). Yuvaraj and Mulugeta (2013) provided a similar result, explaining that HRD practices continuously improve employees' capability and performance through the existing practices of training, career development, performance appraisal and organizational development. The study examined two practices, team building and employee empowerment, that are widely implemented in the selected organizations (cement manufacturing units). The associations between the selected HRD practices and employee competencies are examined in the subsequent reviews. Team building and employee competencies According to LePine et al. (2008), "The practices of team-building components (goal-setting, interpersonal processes, role-clarification, and problem-solving) can lead to improved performance through modification of attitudes, values, problem-solving techniques, and group processes". In the goal-setting component, team members are introduced to a goal-setting framework and are expected to engage in action planning to identify ways to achieve those goals, which strengthens team members' problem-solving skills and motivation (Aga et al., 2016). Team members exposed to role-clarification activities are expected to achieve a better understanding of their own and others' respective roles and duties within the team (Salas et al., 1999). The interpersonal process component involves an enhancement of team members' interpersonal skills, such as mutual supportiveness, communication and sharing of information (Klein et al., 2009). The fourth component emphasizes the identification of major problems in the team's tasks to enhance task-related skills (Lacerenza et al., 2018). 
Team building is an intervention in which team members identify major problems, generate relevant information, engage in problem solving and action planning, and implement and evaluate action plans (Aga et al., 2016; Beebe and Masterson, 2014). Team building interventions enhance individuals' cognitive outcomes, such as teamwork competencies, and affective outcomes, such as trust and team potency, whereas at the team level the outcomes are coordination and effective communication (Tannenbaum et al., 2012). Shuffler et al. (2011), in their meta-analysis, found that effective team building improves affective outcomes (trust, attitude and confidence) and cognitive outcomes (shared knowledge among team members) in employees. The above discussion provides ample grounds to suggest that: H1a. Team building is positively related to the enhancement of employee competencies. Employee empowerment and employee competencies Fernandez and Moldogaziev (2013) have stated that "employee empowerment is a relational construct that describes how those with power in organizations share power and formal authority with those lacking it". Organizations have implemented empowerment initiatives based on the premise that when individual employees can participate in decision-making and share responsibility for how work is conducted, outcomes such as performance and employees' knowledge will be enhanced (Maynard et al., 2012). Organizations that encourage harmonious relationships between superiors and subordinates provide employees with the liberty to express creative suggestions, which helps enrich their self-motivation (Fernandez and Moldogaziev, 2012). When employees are empowered and given autonomy and flexibility, they are likely to be more motivated and to take full responsibility for finding new ways and developing new skills to respond to challenges (Luoh et al., 2014). 
Kanter (1993) and Laschinger (1996) define structural empowerment as workplace structures that enable employees to carry out work in meaningful ways. These structures empower employees by providing access to the information required to perform the job effectively, support from peer and supervisor feedback, resources such as time and supplies to carry out the job, and opportunity for learning and growth within the organization (Dainty et al., 2002). Liden et al. (2000) found that empowering working conditions are positively linked to employees' positive job attitudes and tolerance of work pressure and ambiguity. When employees are involved in their work with a spirit of vigour and commitment, it makes a significant difference to their self-motivation and positive job attitude (Manojlovich, 2005). Empowerment can enrich individuals' ability to perform their duties successfully where they have control over their workload, get support from peers, feel rewarded for their accomplishments and are treated fairly (Janssen, 2004). Fernandez and Moldogaziev (2013), in their empirical study, found a positive relationship between employee empowerment and employees' attitudes and behaviour. Leach et al. (2003) further indicated, through empirical validation, that employee empowerment has a positive impact on job knowledge. Hence, the following premise is expected: H1b. Employee empowerment has a significant and positive relationship with the enhancement of employee competencies. Moderating role of organizational learning culture Organizational learning culture as a moderator is grounded in signaling theory (Spence, 2002) and experiential learning theory (Kolb, 1984). 
Based on the viewpoint of signaling theory, organizations that cultivate a learning culture signal to employees that management values and supports the exchange of the knowledge and skills they have learnt from the HRD programmes provided by their organizations (Bloor and Dawson, 1994; Spence, 2002). Such a culture, which facilitates knowledge transfer and idea sharing, would positively influence employee competencies. According to experiential learning theory (Kolb, 1984), the process of learning is highly affected by two elements: individuals' interaction with different stakeholders and feedback on one's knowledge from superiors and peers. Accordingly, employees' perception that the organization promotes a sound learning culture through regular feedback and mentorship would motivate them to acquire and exchange skills and knowledge (Clark et al., 1993). Therefore, the learning culture has been identified as one of the vital and appropriate contextual factors for enhancing employee competencies (Jeong et al., 2017). Based on the above discussion, it can be inferred that organizational learning culture plays an important moderating role between HRD practices and employee competencies. Thus, in the present study, the associations between the selected HRD practices, organizational learning culture and employee competencies are examined in the subsequent reviews. The moderating effect of organizational learning culture between team building and employee competencies Team building practices are based upon an action research model of data collection, feedback and action planning (Whitehead, 2001). Team building activities operate within a particular environmental context. Although groups are often viewed as the context variable for individual behaviours, the organizational environment should also be considered as the context variable for group behaviour (Shuffler et al., 2011). According to Van den Bossche et al. 
(2006), "The organizational variable that could influence a team member's knowledge and problem-solving skill is the organization's learning culture". The enhancement of team members' competencies through organizational learning culture can influence the level of cooperation or performance between team members, which in turn may affect team effectiveness (Hollenbeck et al., 2004). The above discussion provides ample grounds to suggest that: H2a. Organizational learning culture will moderate the relationship between team building and employee competencies, such that the relationship will be stronger when the organizational learning culture is high. The moderating effect of organizational learning culture between employee empowerment and employee competencies Employee empowerment involves employees being provided with a greater degree of flexibility and more freedom to make decisions relating to work (Greasley et al., 2005). Empowerment is closely related to people's perceptions about themselves in relation to their work environments (Kuo et al., 2010). Jones et al. (2013) stated that: "The environment surrounding individuals is important for increasing employee empowerment because empowerment is not a consistent or enduring personality trait, but rather a set of cognitions shaped by work environments". In a recent study, Joo and Shim (2010) found a positive moderating role of organizational learning culture between empowerment and positive employee behaviour; their results indicated that a high learning culture, combined with empowerment, strongly influences employee behaviour. Thus, the following hypothesis is proposed: H2b. Organizational learning culture will moderate the positive relationship between employee empowerment and employee competencies, such that the relationship will be stronger when the organizational learning culture is high. 
Research design, sampling and data collection A structured questionnaire based on a seven-point Likert scale was developed for the collection of primary data. It consists of two sections: the first section collects general information about the respondents, such as age, gender, designation and experience; the second section includes the items that measure the constructs team building, employee empowerment, organizational learning culture and employee competencies. The study took place in four medium-sized cement manufacturing units in India. We communicated personally (through appointments, phone calls and email) with senior executives of the four units and explained the methodology of the study. We instructed executives and supervisors on how to answer specific questions and asked them to distribute the questionnaire to their subordinates and colleagues who had participated in HRD practices in the past two years. The questionnaire was distributed to 952 employees, of whom 653 returned complete responses, corresponding to a response rate of 68.59 per cent. Team building A six-item scale representing four broad areas of team building practices was developed for this study: goal setting, interpersonal relations, role clarification and problem solving. The items were adapted from Aga et al. (2016), Klein et al. (2009) and Salas et al. (1999); the scale's reliability is 0.81. Employee empowerment A five-item scale measuring the effectiveness of employee empowerment implemented in the organization was developed by adapting the scales of Menon (2001) and Men and Stacks (2013). We modified the items according to the current study; the scale's reliability is 0.87. Employee competencies. The competencies analysed in the study were technical expertise, adaptability, innovation, teamwork and cooperation, conceptual thinking and self-confidence. For this, we adopted Diaz-Fernandez et al. 
(2014) measures of employee competencies and adapted them to the current scenario of the study. The construct consists of six items and its reliability (Cronbach's alpha) is 0.71. Organizational learning culture. The dimensions of the learning organization questionnaire (DLOQ) was developed by Watkins and Marsick (1997) with 21 items measuring seven dimensions; it was later shortened by Yang et al. (2004) to seven items that measure all seven DLOQ dimensions. We used Yang et al.'s (2004) scale of organizational learning culture, which measures continuous learning, team learning, dialogue and inquiry, empowerment, system connection, embedded system and strategic leadership. The seven-item scale shows a reliability of 0.88. The results are described in the order in which the analyses were conducted. First, we conducted confirmatory factor analysis (CFA) in AMOS to establish the factor structure and check for common method bias. Second, we assessed the construct validity of the full measurement model in terms of convergent and discriminant validity. Third, we carried out descriptive statistics, correlation and reliability analyses of the full measurement model in SPSS. Fourth, we performed moderated structural equation modelling (MSEM) in AMOS to test the hypotheses. Comparison of measurement models and Harman's single factor test A full measurement model was tested initially. Team building, employee empowerment, organizational learning culture and employee competencies items were loaded onto their respective factors. All factors were allowed to correlate. The four-factor model showed a good model fit: χ2 = 799.845, df = 337, p < 0.05, CMIN/df = 2.373, goodness of fit index (GFI) = 0.901, comparative fit index (CFI) = 0.950, root mean square error of approximation (RMSEA) = 0.052. A sequential χ2 difference test compared the full measurement model to five alternative nested models, as shown in Table III. 
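The scale reliabilities reported above (0.71 to 0.88) are Cronbach's alpha coefficients. As a minimal illustrative sketch, using simulated Likert responses rather than the study's data, alpha can be computed from an item-response matrix as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated seven-point Likert responses: 200 respondents, 4 correlated items
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(200, 1))
items = np.clip(base + rng.integers(-1, 2, size=(200, 4)), 1, 7).astype(float)
print(round(cronbach_alpha(items), 2))
```

Because the four simulated items share a common component, the resulting alpha is high; truly independent items would drive it toward zero.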
The result of the full measurement model is significantly better compared to the alternative models, suggesting that the variables in the study are distinct. Cross-sectional and self-reported data are susceptible to common method biases. Following the procedure adopted by several scholars (Ketkar and Sett, 2010; Conway et al., 2015), all items of both the independent and dependent variables were loaded onto a single factor and the fit indices were examined. The single-factor model showed poor fit with the data. Comparison of the single-factor model with the full measurement model showed that the full measurement model had significantly better fit with the data. While this test does not eliminate the possibility of method bias, it provides evidence that inter-item correlations are not driven purely by method bias (Podsakoff et al., 2003). Construct validity of the full measurement model Construct validity was established by assessing convergent validity and discriminant validity. To estimate convergent validity, discriminant validity and goodness of fit statistics, we performed a CFA. Convergent validity is established by estimating factor loadings (completely standardized loadings), composite reliability and average variance extracted (AVE) from the CFA. Table IV provides the results for convergent validity, which show that the values are in the acceptable region, confirming convergent validity. Discriminant validity is assessed by comparing the AVE with the corresponding inter-dimension squared correlation estimates (Fornell and Larcker, 1981). Table V shows that the square roots of the AVE values of all study factors are greater than the inter-dimension correlations, supporting discriminant validity. The goodness of fit statistics of the measurement model indicated good model fit with the data (χ2 = 799.845, df = 337, p < 0.05, CMIN/df = 2.373, GFI = 0.901, CFI = 0.950, RMSEA = 0.052). 
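The Fornell and Larcker (1981) criterion described above can be sketched programmatically. The AVE values and correlation matrix below are illustrative placeholders, not the actual figures from Tables IV and V:

```python
import numpy as np

# Illustrative (not the paper's) AVE values and inter-construct correlations
# TB = team building, EE = employee empowerment,
# OLC = organizational learning culture, EC = employee competencies
ave = np.array([0.55, 0.60, 0.58, 0.50])
corr = np.array([
    [1.00, 0.42, 0.38, 0.45],
    [0.42, 1.00, 0.40, 0.41],
    [0.38, 0.40, 1.00, 0.44],
    [0.45, 0.41, 0.44, 1.00],
])

def fornell_larcker_ok(ave: np.ndarray, corr: np.ndarray) -> bool:
    """Discriminant validity holds when the square root of each construct's
    AVE exceeds its correlations with every other construct."""
    root_ave = np.sqrt(ave)
    off_diag = corr - np.eye(len(ave))  # zero out the diagonal (self-correlations)
    return bool(np.all(root_ave > off_diag.max(axis=1)))

print(fornell_larcker_ok(ave, corr))  # True for these illustrative values
```

Equivalently, one can compare AVE directly against squared inter-construct correlations; the two formulations are the same test.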
Thus, the instrument used in the study has good construct validity and psychometric properties. Descriptive statistics, correlations and reliabilities Table VI presents the means, standard deviations and correlations among the four variables. The reliabilities of the individual variables vary from 0.77 to 0.93. The correlations among the four variables are significant, supporting all of the hypotheses. Moderated structural equation modelling results We used AMOS 20.0 to test the study's hypotheses through SEM, such that we could explicitly account for measurement error when examining the hypothesized relationships among the study's focal constructs. This approach also allowed us to assess how well our conceptual model as a whole fit the data, as recommended by previous studies that test complex models with webs of hypotheses involving both mediating and moderating effects. Several procedures for testing interaction (moderating) effects in SEM have been put forward (Jaccard et al., 1996; Joreskog and Yang, 1996; Ping, 1995). Cortina et al. (2001) found that all procedures produce very similar results. The present study adopted Ping's (1995) approach to moderated SEM, using the three steps described by Cortina et al. (2001). These steps are detailed in Appendix I. A few recent studies (Anning-Dorson, 2017; Harney et al., 2018) have also followed the same approach. The goodness-of-fit statistics of the MSEM results (χ2 = 483.728; df = 174; NFI = 0.935; CFI = 0.957; RMSEA = 0.059; SRMR = 0.077) indicate a good model fit. Figure 2 shows the results of the MSEM analysis, including the beta coefficients and adjusted R2. In total, 59 per cent of the variance in employee competencies is explained by HRD practices, organizational learning culture and the interaction variables. 
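Ping's (1995) approach forms the latent interaction from a single product indicator: the product of the mean-centred sums of the indicators of the predictor and the moderator. A minimal sketch of that construction, with simulated item responses and hypothetical variable names (the full approach also fixes the loading and error variance of the product indicator, which is omitted here):

```python
import numpy as np

def ping_product_indicator(x_items: np.ndarray, z_items: np.ndarray) -> np.ndarray:
    """Ping's (1995) single-indicator approach: mean-centre the summed
    indicators of predictor X and moderator Z, then multiply them to
    form one observed indicator for the latent interaction X*Z."""
    x_c = x_items.sum(axis=1) - x_items.sum(axis=1).mean()  # centring reduces
    z_c = z_items.sum(axis=1) - z_items.sum(axis=1).mean()  # collinearity with
    return x_c * z_c                                        # first-order terms

# Simulated seven-point responses: 100 respondents
rng = np.random.default_rng(1)
tb = rng.integers(1, 8, size=(100, 6)).astype(float)   # 6 team building items
olc = rng.integers(1, 8, size=(100, 7)).astype(float)  # 7 learning culture items
xz = ping_product_indicator(tb, olc)
print(xz.shape)  # one interaction indicator per respondent
```

The resulting `xz` column is entered into the structural model as the sole indicator of the interaction construct.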
The results show a positive relationship between HRD practices and employee competencies, indicating that team building and employee empowerment positively influence employee competencies, confirming H1a (b = 0.408, standard error (SE) = 0.031, critical ratio (CR) = 13.161, p < 0.001) and H1b (b = 0.035, SE = 0.004, CR = 8.75, p < 0.05). H2a proposed that organizational learning culture would moderate the positive relationship between team building and employee competencies, such that the relationship would be stronger when organizational learning culture is higher. The results of the MSEM (Figure 2) show a significant interaction effect of team building and organizational learning culture on employee competencies (b = 0.109, SE = 0.021, CR = 5.190, p < 0.05), supporting H2a. The moderated relationship is also supported by a simple slope test based on one standard deviation above and below the mean of the moderator. Figure 3 shows that organizations with a higher organizational learning culture (b = 0.515, t = 8.837, p < 0.001) exert a stronger influence on employee competencies than those with a lower organizational learning culture (b = 0.401, t = 5.049, p < 0.001). The analysis found similar results for H2b, indicating a moderated positive relationship between employee empowerment and employee competencies, which is stronger when a higher organizational learning culture is present (b = 0.130, SE = 0.020, CR = 6.50, p < 0.05), confirming H2b. The simple slope test (Figure 4) further shows that organizations with a higher organizational learning culture (b = 0.162, t = 2.788, p < 0.05) exert a stronger influence on employee competencies than those with a lower organizational learning culture (b = 0.142, t = 1.499). The study found that team building, employee empowerment and organizational learning culture have a significant and positive influence on employee competencies. 
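The simple slope tests above evaluate the slope of the outcome on the predictor at one standard deviation above and below the mean of the moderator. A sketch of the underlying arithmetic, using the coefficient magnitudes reported for H1a and H2a purely as illustrative inputs (the study's actual simple slopes come from the full MSEM solution, so the values below are not expected to reproduce Figure 3 exactly):

```python
def simple_slopes(b_x: float, b_xz: float, sd_z: float = 1.0) -> dict:
    """Slope of Y on X at +/- one SD of a mean-centred moderator Z:
    dY/dX = b_x + b_xz * z, evaluated at z = +sd_z and z = -sd_z."""
    return {"high (+1 SD)": b_x + b_xz * sd_z,
            "low (-1 SD)": b_x - b_xz * sd_z}

# Illustrative inputs: main effect 0.408, interaction 0.109 (cf. H1a/H2a)
slopes = simple_slopes(b_x=0.408, b_xz=0.109)
print(slopes)
```

A positive interaction coefficient makes the high-moderator slope steeper than the low-moderator slope, which is what "the relationship is stronger when organizational learning culture is high" means operationally.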
In addition, the findings confirm that the moderating effect of organizational learning culture on the above relationships is significant. Detailed findings are discussed below. HRD practices (team building and employee empowerment) of the cement manufacturing units show a significant and positive relationship with employee competencies. In this respect, the finding of a significant and positive influence of team building on employee competencies is congruent with the studies of Aga et al. (2016), Beebe and Masterson (2014) and Braun et al. (2013), which established that effective implementation of team building programmes enhances the knowledge, skills and capabilities of employees. It was also found that employee empowerment has a positive impact on employee competencies, which confirms the assumption of researchers such as Fernandez and Moldogaziev (2013), who state that effective implementation of employee empowerment enhances employee competency. Further, the perception of organizational learning culture moderated the relationships of team building and employee empowerment with employee competencies. Employees perceived that an organization with effective implementation of team building practices along with a high organizational learning culture achieves higher employee competencies. In this respect, the findings of the study empirically validate the hypothesized relation that has been theoretically stated in many studies (Sung and Choi, 2014; Banerjee et al., 2017). A similar result was found for organizational learning culture's moderation of the relationship between employee empowerment and employee competencies. This empirical finding validates the theoretical assumptions of researchers (Kim and McLean, 2008; Park, 2010; Moon and Choi, 2017) that employee empowerment facilitated by a learning environment may have a positive effect on the development of employee competencies. 
Similarly, H2b proposed that organizational learning culture strengthens the positive relationship between employee empowerment and employee competencies. The findings of the present research corroborate the previous studies by Jones et al. (2013) and Kuo et al. (2010) and establish that organizational learning culture strengthens the relationship between employee empowerment and employee competencies. The findings of this study make several important theoretical contributions. The study provides a deeper understanding of how HRD practices influence the enhancement of employee competencies in the presence of the contextual variable organizational learning culture. The results confirm that the relationship between HRD practices and employee competencies can be strengthened by a positive organizational learning culture. A work context that is supportive in encouraging employees towards continuous learning through the acquisition of new knowledge and skills would foster the relationship between HRD practices and employee competencies. This study therefore provides support for a contingency perspective in HRD research: with a positive organizational learning culture, the effect of HRD practices on the development of employees' competencies can be enhanced. By demonstrating that organizational learning culture moderates the relationship between HRD practices and employee competencies, this study builds on a recent stream of research examining the resource-based view (Wernerfelt, 1984) from a contingency perspective. This study also extends the application of HRD practices and organizational learning culture to a new context (i.e. the emerging economy of India). It demonstrates that in emerging economies, characterized by environmental turbulence and uncertainties, the implementation of HRD practices helps organizations perform better by increasing the competency level of their employees. 
Bates and Khasawneh (2005) state that, "There is considerable consensus today that a key competitive advantage for organizations lies in their ability to learn and be responsive to challenges from both internal and external environments". Evidently, attention has to be paid to developing an organizational learning culture that enhances employee competencies, builds competitive advantage and improves organizational effectiveness. The outcomes of the paper also offer some suggestions to managers striving for success. First, from the moderated SEM results, managers should recognize that merely providing team building and employee empowerment initiatives is not enough; it is the organization's responsibility to create an environment of learning to enhance employee competencies. Second, organizations have to take advantage of organizational learning capability by recognizing the importance of managers and their attitudes in the effective implementation of learning conditions within the organization. This indicates that managers are the facilitators of learning culture within the organization, which can be achieved by applying the attributes of a learning organization in such a way that learning orientation becomes the main trigger for learning (Real et al., 2014). Third, a direct inference from the study results is that employees with enhanced competencies are the most vital stakeholder group in any business process that strives for improved organizational effectiveness. Hence, managers striving for effectiveness and efficiency in their processes should put employees first, which supports the opinion of Skerlavaj et al. (2010). The study has a few limitations; however, these pave the way for new lines of future research. Although we collected responses from employees who had participated in team building and employee empowerment initiatives in the past two years, the data were collected at a single point in time (a cross-sectional study). 
This might raise issues relating to the direction of causality; we recommend that future researchers conduct longitudinal studies to minimize such issues. The data used in the study are largely the subjective opinions of the employees responding to the survey. As noted by Real et al. (2014), although subjective assessments obtained through multi-item scales are in general consistent with objective measures, differences between perceptions and objective data may exist. Future studies might focus on this area using objective measures. Finally, we cannot generalize the results across a wider range of sectors and the global environment, as the study was conducted in the Indian cement industry.
[SECTION: Purpose] IMP began as an international research project in 1976 and held its first conference in 1984. In 2016, the 33rd annual IMP conference was organised in Poznań. Over these 40 years, IMP expanded substantially, and today a large number of researchers and research groups identify themselves as belonging to the IMP community by applying IMP models and concepts in their research. As shown in this paper, the IMP community's researchers have published numerous books and journal papers that are highly cited. In addition, more than 200 doctoral dissertations are based on the IMP approach. Today, IMP is visible through a journal published by Emerald and through a website (www.impgroup.org). From this website 2,700 papers (mostly from conferences) and 75 books and dissertations can be downloaded. Another sign of increasing interest in the IMP idea is the occurrence of several publications discussing IMP in more or less explicit ways: what IMP is, what it is not, and the characteristics of IMP as a research field. Some of these publications examined papers from specific conferences, such as Gemünden (1997), Easton et al. (2003) and Windischhofer et al. (2004). Others focussed on the researchers in the IMP community and how they are connected through joint publications, for example, Morlacchi et al. (2005) and Henneberg et al. (2007). Some authors, such as Turnbull et al. (1996), Håkansson and Snehota (2000), Wilkinson (2002), Ford and Håkansson (2006a), Mattsson and Johanson (2006), Cova et al. (2009) and Håkansson and Waluszewski (2016) discussed the development of IMP in broader terms. Finally, a number of researchers compared the IMP perspective with other approaches, for example, Johanson and Mattsson (1987), Mattsson (1997), Ford (2004), Ford (2011), Hunt (2013), Olsen (2013), or with the development of industrial marketing in general, such as Backhaus et al. (2011) and Vieira and Brito (2015). 
Some authors were critical of certain aspects of the development of IMP, or of IMP in general. Such publications were presented by Lowe (2001), Harrison (2003), Cova and Salle (2003), Cunningham (2008) and Möller (2013). One particular facet of this criticism is that the creativity that featured in the initial IMP research was replaced by "increasing uniformity, repetition and stereotyping of the IMP style in recent years" (Cunningham, 2008, p. 48). Other publications discussing IMP development are listed in "Further reading" after the reference list. All of these publications indicate that researchers take an interest in analysing IMP and its development. The main reason for this attention is that IMP has generated a substance worthy of discussion due to its clear research focus over time. For example, in an extensive study of the conferences from 1984 to 2012, Wuehrer and Smejkal (2013) conclude that IMP research exhibits quite strong stability over time. Their bibliometric analysis describes a clear and consistent picture over that entire time span. The core research dimensions have been the same over the years, implying that IMP is characterised by a certain continuity regarding substance and identity. Deep, probing analysis of extensive business relationships was a driving force in the first IMP project, and this phenomenon has continued to attract the main attention in research projects and at conferences. When the first project began, business relationships could not be explained with mainstream theories, and the role and the dynamics of this phenomenon continue to represent challenges from a theoretical point of view. From the above description, we can conclude that over four decades there has been a continual evolution of something that can be interpreted as "IMP". Moreover, scholars within and outside the IMP community have identified this phenomenon as interesting enough to analyse and discuss. In this paper, we examine the "IMP substance" in more detail. 
One aspect of this examination is to relate to artefacts representing concrete output from IMP activities in terms of research findings presented in books, journal papers and doctoral dissertations. A second issue of interest is to analyse the features of the processes that created this output. At the outset, IMP was a project involving research groups in five countries. However, during this venture, complementary activities and resources became related to the project. A process was initiated in which many researchers and research groups in several countries became connected through their common research interests and research agendas. Over time, numerous projects - more or less international, but always with international connections - were completed or are currently ongoing. Researchers have met, discussed and found ways to collaborate around an empirical phenomenon related to how companies evolve through relatively cooperative business relationships embedded in network constellations. Thus, the development of IMP can be described and analysed as the development of a specific research network containing a multitude of activities, resources and actors embedded in a larger context. It is this specific network that we describe and analyse in the paper. Historical analyses of research tend to focus on two alternative aspects. One is to emphasise the researchers - the research community represented by individuals and their roles. The other is to concentrate on the development of ideas in terms of knowledge features and connections to other knowledge fields. This paper follows neither of those routes in the analysis of the progress of IMP. Instead, we try to develop a third route by relying on the tools that were generated within IMP to analyse business development. We know that these tools have been useful in creating new and complementary pictures regarding changes in business. 
Therefore, we examine IMP's research development in terms of the evolution of a research network. The analysis is focussed on the interplay and the combinations of activities, resources and actors. Separating these three layers of business reality is a central approach applied in IMP research (Hakansson and Johanson, 1992). This perspective is identified as the ARA model for analysis of industrial networks in terms of the activities undertaken by actors through the use of various resources. The focus is on inter-organisational processes forming the business landscape or, as in this case, the research landscape. The objective of this paper is to provide specific images of the development of IMP as a research network by relying on the ARA model. The main mission is to investigate the features of actors, activities and resources that were connected and combined during the development of IMP. We begin by describing the process leading to the first IMP study - a joint international research project. After this study, the process became much more complex and multifaceted. Therefore, we rely on a classical research strategy to select some aspects of this process to enable more detailed analysis. This approach will be used to provide one potential answer to the basic question: what is IMP? The answer will be formulated in terms of a research stream from a research network community embedded in a wider network of science with three layers - researchers (actors), artefacts such as books and articles (resources) and research projects (activities). Once the ARA model was selected for the framing of the paper, the methodological issues were focussed on identifying relevant dimensions of activities, resources and actors, as well as the connections between them.

2.1 Activities

The initial activity for what later became recognised as IMP was a joint international research project.
The significant results emanating from this study, launched in 1976 and involving researchers in five European countries, are described in Section 3. A second joint research activity of significance for IMP's evolution was carried out during the years before and after 1990. Central features of this globally oriented project are presented in Section 5. There were also some other large research projects organised on a national level in the community of IMP researchers. We bring up those projects that created substantial effects in terms of research results and publications that have been widely cited. The first IMP conference was organised in 1984, in-between the two major IMP projects. The annual conference is the most important continual activity organised by the group. It would be too immense a task to try to illustrate the evolution of IMP over time by describing all the conferences. Therefore, we needed to select some of them to elucidate the IMP development. The first IMP conference, in Manchester in 1984, was the natural starting point. We began working with this paper in the spring of 2013. The most recent conference at that time was the one organised in Rome in 2012; therefore, it was chosen to represent the other end-point of the time scale. Since the Rome conference was the 28th, we selected the 14th IMP conference, organised in Turku, Finland, in 1998, as the natural "mid-point". The three conferences serve as reference points for the activity layer, providing three pictures of IMP's development. The conferences are important meeting points for researchers where research ideas, research issues and research results are presented and discussed.

2.2 Resources

The most important resources in the IMP network are the IMP-developed research frameworks and the research findings presented in various forms of publications. These resources are both produced and used by the researchers belonging to the IMP community.
The most visible and available resources are the publications in books and journals. These resources can be regarded as inspirational sources, but also as documents of research themes and issues that have been significant over time. It might be problematic to assume that the publications most cited are those that were most influential from a knowledge point of view. Despite that, we use the number of citations as one measure in the resource dimension because this factor provides an indication of how the actors in the network have used the publications. To identify these publications we relied on the database "Publish or Perish", which is based on Google Scholar. Our interest is to provide an account of the most cited of the IMP publications and, more importantly, to assess the broader impact of IMP research in general. Therefore, we traced all the publications that had been cited more than 100 times in August 2013, authored by researchers who were identified as belonging to the "IMP community". To illustrate the development of the resource layer over time, we relied on the time periods defined through the selection of the three conferences for analysis of the activity layer. These demarcation lines in time created two time periods of IMP development:

Period 1: covering the development between the 1984 and the 1998 conferences.
Period 2: covering the development between the 1998 and the 2012 conferences.

The phase before the first conference in 1984 is identified as "the start of IMP". Then we grouped the publications into four categories according to the number of citations they had received in 2013. The first one included the top-scoring publications with more than 1,000 citations. The three other categories represented publications in the intervals of 500-999, 200-499 and 100-199. We also analysed the research themes covered in the publications and the development of these themes over time through detailed examination of abstracts and, in some cases, the full text.
2.3 Actors

In historical accounts, individual researchers tend to be identified as the most important actors. These actors are clearly visible through the authorship of books and publications. In "true IMP spirit" we supplemented this aspect of the actor level with analysis of the impact of collective forces, such as the research groups and universities to which the individual actors are connected. This classification caused some problems since some researchers had moved between universities. We used the affiliation registered on the publication for the classification into research groups, although the actual work might have been conducted several years earlier, and in another research setting. In some cases we merged researchers from the same city into one group although they might represent different universities in that city. The significance of the research groups is demonstrated through the number of citations their publications achieved. To identify connections between groups we also analysed co-authorships across research groups and how these joint publications developed over time. Moreover, we provide an account of the number of doctoral dissertations presented by the research groups. The evolution of IMP will, thus, be described as the development of activities, resources and actors over two time periods, where we use the three conferences as reference points (see Figure 1). The description of the two time periods is illustrated by the variables listed above Period 1 in the figure, while the text below Turku shows what aspects of the conferences are presented. It is important to note that the two authors of this paper were involved in the development of IMP - one during the entire period and the other from the first conference. This means that all delimitations and selections were made by two "insiders", leading to both positive and negative consequences. On the positive side, we were able to use our personal insights and experiences.
On the negative side, we cannot claim to be neutral or objective regarding focus and interpretation. We tried to handle these effects of participant observation by using, for example, citations measured by an external search engine and descriptions fetched from other publications as indicators, instead of more personal and soft-quality indicators. The outline of the paper follows the basic structure of Figure 1. We begin by illustrating the start of IMP with a short resume of the initial IMP project, inaugurated in 1976, by describing activities, resources and actors in this phase (Section 3). This is followed by the presentation of the first conference in Manchester 1984 (Section 4), Period 1 ranging between 1985 and 1998 (Section 5), and the conference in Turku 1998 (Section 6). Thereafter, we describe and analyse the development in Period 2 between 1999 and 2013 (Section 7), and finalise the empirical account with the Rome conference (Section 8). Section 9 is devoted to discussion and interpretation of some significant observations regarding the development of the IMP research network. Section 10, the final section, summarises the development of IMP in network terms and brings up aspects related to these dynamics. In addition to the conclusions, some implications and thoughts about IMP's future potential are discussed.

3.1 Network situation at the start

The initiation of the first joint international project can be explained by three factors related to resources, activities and actors, respectively. First, there was dissatisfaction, shared by several European researchers, regarding the availability of resources in the form of realistic and relevant textbooks and research reports covering the B2B field. The existing literature lacked descriptions, conceptualisations and analyses regarding the buying and selling of industrial goods.
Empirical studies conducted during the 1960s and early 1970s indicated that research resources, in terms of the mainstream models and concepts available at that time, were inadequate both for explaining business reality and for providing normative recommendations. Simply speaking, the business landscape showed features that were quite different from what was assumed in the contemporary literature, which called for further exploration of this significant phenomenon. Second, in the early 1970s several attempts were made to initiate international cooperation and coordination of research activities. For example, the European Institute for Advanced Studies in Management in Brussels organised international seminars and workshops where young researchers were provided with opportunities to meet for discussions. Through such support for the promotion of collective research activities, new international contacts were stimulated and meeting spots were created that were beneficial for spontaneous discussions and coordinative attempts. Third, at the time, business administration and marketing faculties expanded substantially owing to the massive increase in the number of students at universities and the growth of management education. The actor level was affected considerably when vast numbers of young and ambitious researchers were employed and new research units were created. These conditions made it difficult for senior researchers to maintain control of research operations. Consequently, there was free space for the young actors to use. These three features contributed considerably to the launch of the first IMP project. The initiators were young people in the early phases of their development as researchers. They were internationally oriented and eager to do something jointly with others. They kept senior researchers outside the project ("no professors" was the rule!) while they actively encouraged young researchers from several countries to get involved.
They also found a common interest in aggressively challenging the mainstream way of investigating the buying and selling of industrial goods. They all had empirical experience from the research field, and this knowledge indicated features that were difficult to explain with the contemporary theoretical models.

3.2 Activities

As described above, several specific circumstances provided opportunities for the establishment of a joint international research programme. The researchers involved in this programme had no ambitions related to more long-term cooperation, but all of them wanted the programme they implemented to become something unique. However, they certainly had a long way to go to carry through such an assignment. The first step was to mobilise a group of researchers representing several countries, and in the end researchers from five European countries became involved. The project was directed toward investigation of the features of the marketing and purchasing of industrial goods in international settings, i.e. to characterise international business exchange. On this basis, an extensive study was designed and carried out in industrial companies in the UK, Sweden, West Germany, France and Italy. Both the buying and the selling sides of firms were investigated. The sample of companies was based on selection matrices distinguishing three types of products (raw materials, components and equipment) and three types of technologies (unit production, mass production and process production). The most important customers and suppliers in the five countries were identified for each company. Thus, for a selling company in Sweden, the most important customers in the UK, West Germany, France and Italy, together with domestic customers in Sweden, were selected for interviews.
For each company, data were collected regarding the handling of individual customers, the history of the relationships, how the business processes had developed over the years and important events in terms of specific adaptations or projects in these processes. The same type of procedure for data collection was used on the buying side. Furthermore, for each interviewee, a personal attitude study was conducted regarding the perceptions related to the country of the counterpart involved in the specific relationship. Thus, data were collected about the company in general, its sales to (and purchases from) the countries involved, the business relationship with the most important counterpart in each (or at least some) of the countries and the attitudes of the manager being interviewed about the relationships with business partners in the specific country. The empirical investigation covering the five countries turned out to be a massive undertaking, requiring huge efforts from the project members. Moreover, the openness of companies in relation to researchers varied across the countries. Consequently, both data collection and analysis were time-consuming processes and required large amounts of research resources. In the first book that reported the study (Hakansson, 1982), an account of the methodological issues is provided in chapter 3. In particular, the group emphasised the processual approach that had been central during the project, as well as the fact that each country study had to find domestic financial resources. Progress, or at least the feeling of progress, was vital. Even when unsolved problems were at hand, the group always tried to continue to the next step in the research process in order to show the members that progress was being made and that the project was evolving. Furthermore, each country group had to report back to its financial sources about progress in the home country. The empirical results were very clear and consistent across firms and countries.
All companies were shown to be working within established long-term business relationships with their most important customers and suppliers. The results also showed that the relationships varied in many dimensions, such as number of persons involved, occurrence, level, and type of adaptations, relationship duration and the handling of monetary terms. These findings strongly contrasted with the contemporary view of marketing and business processes. Therefore, in the introduction to Hakansson (1982, p. 1) the perspective offered in mainstream research and teaching was challenged in four respects:

"We challenge the concentration of the industrial buyer behaviour literature on a narrow analysis of a single discrete purchase".
"We challenge the view of industrial marketing as the manipulation of marketing mix variables to achieve a response from a passive market".
"We challenge the view of an atomistic structure, assuming a large number of buyers and sellers that easily can change business partners".
"We challenge the separation which has occurred in analysing either the process of industrial purchasing or of industrial marketing".

The basis of the contrasting IMP framing was then formulated in the following ways:

"We emphasise the importance of the relationship which exists between buyers and sellers. This relationship is often close and may be long term and involve a complex pattern of interaction".
"We believe it necessary to examine the interaction between individual buying and selling firms where either firm may be taking the more active part in the transaction".
"We stress the stability of industrial market structures, where the parties know each other well and are aware of others' movements".
"We emphasise the similarity of the tasks of the two parties. Industrial marketing can be understood only through simultaneous analysis of both the buying and selling sides of relationships".
The interactive view of industrial marketing and purchasing was formulated in a theoretical model - the interaction model - containing five sets of variables characterising the short-term episodes, the long-term relationship, the involved parties, the interaction atmosphere and the environment (Figure 2). The main part of the 1982 book is devoted to the presentation of 23 company cases. In each of these cases, the most important business relationships of the company are described and analysed from either the marketing or the purchasing side. The cases are organised according to the basic technology of the focal company. The business relationships between companies in the five countries are described in terms of duration in time (on average around 13 years), extensiveness in contact patterns, adaptations of products, adjustments in production or logistics, and the degree of social exchange. The variety among companies, with regard to their basic technologies and the type of business their products represent, provides a broad view of the business reality. In total, a very detailed and extensive picture is presented showing how a whole set of industrial companies behave with regard to their selling and buying activities, in which business relationships are basic ingredients. The cases were used as input to the further analysis of significant themes related to the basic variables of the interaction model. Consequently, the themes concerned variation in the processes of interaction, variation in the features of the parties involved in interaction, variation in interaction environments and the atmosphere in which interaction occurred. Fourteen researchers from the participating countries were involved as authors of cases and themes, individually and in various combinations.
The main contribution of the first IMP book is a very strong, empirically grounded argument for the importance of including interaction and business relationships in any systematic study of industrial settings.

3.3 Resources

Significant resources activated at the start of IMP were rooted in two research areas; at Uppsala University in particular, the cross-fertilisation of the two was favourable. The first concerned studies on the internationalisation of businesses, while the second related to research on marketing and purchasing issues that increasingly devoted attention to business relationships between customers and suppliers. Based on the assumption that the number of citations reflects the use of research publications, the ten most significant resources before the Manchester conference are listed in Table I. Accordingly, in 2013 four publications accounted for more than 1,000 citations, two of which dealt with internationalisation and two with business relationships. In addition, six journal papers had been cited between 200 and 500 times. Furthermore, nine other publications accounted for more than 100 citations, implying that 19 books and papers presented before the Manchester conference had been cited more than 100 times in 2013. The publications from the 1970s can be seen as major inputs into the project, while those from the early 1980s can be regarded as outcomes of the first IMP project. Throughout this paper, we present short summaries of the publications with more than 1,000 citations. Of the four related to the start of IMP, the 1982 book was described above. The paper by Johanson and Wiedersheim-Paul (1975) was based on four Swedish firms' internationalisation processes. Studying how these firms established themselves in foreign markets (year and institutional set-up) led to the identification of a typical internationalisation process. The standard pattern was that firms started in neighbouring countries (at a short psychic distance).
They also tended to begin with entry forms requiring minor investments. Thus, the internationalisation of a company was found to be a gradual process in relation to institutional set-up - from agent to producing subsidiary - and to successive entrance into countries at a larger psychic distance. The empirical material in Johanson and Wiedersheim-Paul (1975) was one important base in the development of the "Uppsala model" of the internationalisation process, formulated in Johanson and Vahlne (1977). In this paper, the gradual process that was shown to be so significant in the empirical cases is explained by a basic model connecting two aspects: one related to the state of internationalisation and one concerned with adjustments to changing conditions. The state of internationalisation was defined through the knowledge of the foreign markets and the level of market commitment (resources devoted to the markets). The change aspects of the model related to the current activities of the firm and the decisions to commit resources. The model prescribes a mutual dependence between the change and state aspects. The paper by Ford (1980) draws attention to buyer-seller relationships which, until then, had received scant interest in the literature on industrial marketing and purchasing. In the paper, a five-stage model of business relationship development is introduced, from the pre-relationship stage to the "final" one. The features of a relationship in these stages are described and analysed regarding: the increasing experience between the parties, the reduction of their uncertainties and the distance between them, the growth of their commitment, the formal and informal adaptations between the two and their mutual investments and savings. It is also of interest to analyse the research themes covered in the publications cited more than 100 times and how they later developed.
Considering the text above, it was natural that the categorisation of themes included "internationalisation" and "business relationships", together with studies of marketing (labelled "customer-side focus") and of purchasing and supply management ("supplier-side focus"). Other themes that appeared significant from the beginning were strategy and technology/innovation (combined into one common category). It goes without saying that this classification was not always easy to apply strictly. For example, sometimes it was unclear whether a specific publication should be categorised as business relationship or customer-side focus. However, in the statistics for the two periods of IMP presented below, each publication represents only one theme.

3.4 Actors

Six research institutions were involved in the first IMP project and in the production of the first IMP book: Uppsala University, UMIST in Manchester, Bath University, Lyon Business School, Munich University and ISVOR-Fiat in Italy. (The researchers representing these institutions are listed in Table AI, Appendix 1). Therefore, it is quite natural that the most cited IMP publications before the 1984 conference emanated from these research groups. Uppsala University contributed the majority of the highly cited publications. Similar conditions characterised the publications with a lower citation rate, some of which were co-authored with researchers at the Stockholm School of Economics. These two groups were connected because some people moved between the universities. The two other significant research communities with highly cited publications were located at Bath and in Manchester. In several cases, the PhD theses of the involved researchers served as important input to the project, which also resulted in several PhD dissertations.
Following the first IMP book, and the joint work that IMP researchers devoted to further publications, the Manchester group invited research colleagues to an international workshop in 1984 under the heading "Research Developments in International Marketing". This invitation resulted in the first annual conference. Over time, the conferences became the most visible features of the IMP research stream. In all, 20 papers were presented at the Manchester conference. Full details of these pioneering papers are provided in Appendix 2. The papers involved 30 authors from eight countries - 21 from Europe and nine from overseas. Thus, already from the beginning, IMP was more than a European affair. Manchester, Uppsala, Lyon and the Stockholm School of Economics contributed the most papers. Among the non-European research groups that later became significant were Penn State from the USA and Sydney in Australia. Japan and Canada were also represented by researchers at the conference. The most important contributions at the Manchester conference came from the two research areas described in the previous section: business relationships and internationalisation. First, business relationships appeared as the main theme, and here the results from the first IMP study provided significant input to the discussions at the seminar. These results were presented in two co-authored books, but also in some journal papers that documented the importance of dyadic business relationships between customers and suppliers[1]. The features and the significance of these relationships, from both marketing and purchasing points of view, were discussed in a number of the papers at the conference, with a particular focus on which new concepts should be used to characterise these relationships and their functions. Second, since the topic of the conference was "Research Developments in International Marketing", it is natural that considerable interest was devoted to internationalisation.
In the debates at the seminar, the Uppsala model of internationalisation[2] played an especially important role. A third theme intensively discussed during the seminar concerned "networks of business relationships" and "network effects". These issues became apparent as several papers and comments addressed the role of connections between relationships. The papers highlighted the significance of relationships and directed attention to how relationships are related. These connections were conceptualised in terms of relationship portfolios and networks. A fourth theme that emerged regarded technological aspects of business relationships. Technology was central in IMP1 and surfaced as significant at the seminar, since almost half of the papers showed some connection to technology. Technological issues appear crucial for both seller and buyer in almost any B2B situation, thus making the technological content of a business relationship substantial and often critical. Moreover, the significant role of business relationships for technical development was shown to be an important conclusion of the IMP1 study. The themes of the papers at the conference are presented in Table II. Regarding the participation of research groups, those involved in the IMP project provided the most attendees at the conference: Manchester 6, Uppsala 5 and Lyon 4. The papers presented by the various research groups are shown in Table III. Here we included the research groups involved in the IMP project, as well as those that later evolved into significant parts of the IMP community.

5.1 Activities

During this period, the most important activity besides the conferences was the joint international study IMP2. This project involved researchers from the five European countries participating in IMP1, as well as scholars from several other countries.
European researchers from The Netherlands, Norway and Poland joined the group, and the involvement of researchers from the USA, Japan and Australia made the project increasingly global. Thus, the empirical base expanded substantially and provided opportunities for analysis of cultural and regional differences. The underlying reason for launching the second IMP study was a feeling of dissatisfaction among some of the researchers regarding the way the business environment, or context, was treated in the first study. The main attention had been directed to the development of dyadic business relationships, involving two active and reflective counterparts. The context of the two had been registered but not given any structural dimension or role. However, in the empirical descriptions there were ample examples showing that a specific customer-supplier relationship was related to other relationships. Thus, the environment of the individual relationship was not diffuse or atomistic, but featured specific other relationships. Consequently, rather than explaining the development of each relationship only through the actions of the two focal firms, there seemed to be reasons to investigate the context in terms of interdependencies in relation to other connected relationships. Five types of interdependencies were analysed, regarding technology and knowledge as well as social, administrative and legal aspects. Relationships were found to be important means for handling these interdependencies, and it was evident that a business relationship had to be seen as an element embedded in a network of relationships. The outcome of the project was presented in another IMP book (Hakansson and Snehota, 1995). The most important theoretical result from the study is an analytical scheme for examining the development effects of relationships, best known as the ARA model (Figure 3). The basic dimensions of this model were used for structuring the empirical observations in the study.
The book contains chapters dealing with the functioning of relationships as activity links, resource ties and actor bonds. Each chapter is based on cases that illustrate the conceptual development. The cases represent joint undertakings and are authored by research groups at Uppsala, Lyon, Poznan, Penn State, Eindhoven, Bath and Chalmers in Gothenburg. The study shows that connections within a relationship enabled enhancement of the internal structures of the involved companies, as well as of the collective activity pattern, resource constellation and web of actors in the entire network. The cases revealed that relationships are costly, but can provide positive effects for (1) the dyad - the two actors seen as a "team", (2) each of the two involved actors individually, and (3) third parties connected to the two. Thus, the research findings suggest that business relationships play a central role in positive economic outcomes for single firms, but even more so in a collective way for networks of companies. These results led to the conclusion that well-functioning, connected business relationships represent important economic phenomena that need to be considered in any business analysis. Thus, the conclusions from the first IMP study were strengthened and further developed. The book also contains contributions from scholars advocating other analytical approaches, which are compared with the IMP view. These sections deal with technological development (University of Groningen) and the transaction cost approach (Norwegian School of Economics and Business in Bergen). Another research project, aiming at a joint publication and involving researchers from the UK and Sweden, was organised in the late 1980s. After four years and three seminars in different countries, a book was published in 1992 (Axelsson and Easton, 1992).
This book contains 13 chapters, written by 11 authors in various combinations, representing Uppsala University, Lancaster University, Stockholm School of Economics, Huddersfield Polytechnic and Chalmers University in Gothenburg. Five of the chapters were co-authored, some across university and country boundaries. The book's second chapter presented "A model of industrial networks" by Hakan Hakansson and Jan Johanson. Furthermore, an important project for research visibility was the establishment of a journal based on IMP research. The journal, Industrial Marketing & Purchasing, was launched in 1986 with Peter Turnbull as the Editor and MCB University Press as the publisher, and lasted for three years. There were also some other collaborative projects initiated with significant future effects. One research programme in Sweden involving Uppsala University and Stockholm School of Economics resulted in important conclusions regarding the technological dimension of business networks. In the middle of the 1990s, collaboration was established between Chalmers University of Technology, Uppsala University, and Trondheim University, with the focus on case studies analysing single companies and the role of their local and global networks. This joint research programme established links among the three universities that still remain in 2018. Regarding conferences, the first Manchester seminar was not intended to be followed by other arrangements. However, the idea of a joint international seminar turned out to be fruitful and from 1985 the IMP conference became an annual event (except for 1987 when no conference was organised). The first conference was arranged by institutions involved in IMP1, but from 1989 other universities entered as hosts. In total, 12 conferences were organised in Period 1, presenting an average of 66 papers (see Table AII).

5.2 Resources

Several books and papers published during this period have been highly cited.
No less than ten publications accounted for more than 1,000 citations in 2013 - see Table IV. The top-cited publication was the book edited by Hakan Hakansson and Ivan Snehota that reported the IMP 2 project. It is noteworthy that no less than six of the ten most cited works are publications in books (Table V). In this period, Routledge was the main publisher of IMP books. Of the most cited publications, the books by Hakansson and Snehota, and Axelsson and Easton, were presented above. In the paper by Anderson et al. (1994), the focus is on the network effects of changes in dyadic relationships. These effects can be both positive (constructive effects) and negative (deleterious effects). The positive effects can appear in other actors' resources, in other actors' activities and in relation to the perception of other actors. The same is the case for the negative effects. Both positive and negative effects are illustrated in two network cases and were shown to be related to cooperation and commitment. The paper by Wilson (1995) presents an integrated model of buyer-seller relationships that "blends the empirical knowledge about successful relationship variables with conceptual process models". The 13 relationship variables include trust, commitment, adaptations, mutual goals and interdependencies among others. The process model involves five stages: Partner selection, definition of purpose, setting relationship boundaries, creating relationship value, and maintaining relationships. The paper provides research directions on the concept and model levels, as well as for process research and concludes with managerial implications. The 1988 publication by Johanson and Mattsson is a book chapter aimed at illustrating the usefulness of the network approach in analysis of internationalisation. 
The authors develop a 2x2 matrix with the degree of internationalisation of the firm and the degree of internationalisation of the market as the two dimensions that both can score low or high. Each cell in the matrix defines one form of internationalisation with specific features and its particular situation regarding advantages and disadvantages. This network approach is then compared with the theory of internalisation and the Uppsala internationalisation model. The paper by Hakansson and Snehota (1989) provides a theoretical discussion of how a company can handle strategic issues within an environment characterised by continuous interactions with counterparts. Three central issues of the mainstream strategic management doctrine are discussed from the viewpoint of the network model: organisational boundaries, determinants of organisational effectiveness and the process of managing business strategy. The main conclusion is that a network context requires alternative assumptions in all three aspects. In a networked environment the focus of management has to move away from focussing on internal resources and structures towards relating the company's own activities and resources to those of important counterparts. Hakansson (ed. 1987) is a joint publication by five authors based on a research programme focussed on technological development in the steel industry network. Chapters are devoted to process and product development, to the importance of supplier relationships and to the role of personal networks among technicians. The basic network is identified as the web of contacts and relationships between suppliers, customers and other parties in the industry. Altogether, it shows that no firm can embark on a technical innovation without carefully considering how such effort may affect all others involved. There is an obvious need for coordination of technical research and development among all involved firms. The book by Ford et al. 
(1998) is an attempt to apply IMP thinking to managerial issues. The basic aim is to illustrate the consequences for management when the business reality is analysed from a network perspective. The book's main focus is on managing relationships with suppliers and customers. Particular attention is directed to the role of technology in these processes and what strategy actually implies when considered from a network view. In later editions of the book, strategy is conceptualised as an interplay between network pictures, networking and network outcome. In their 1987 paper, Johanson and Mattsson compare the industrial network approach with transaction cost economics. The basic characteristics of the two approaches are described and analysed regarding theoretical foundation, problem orientation, basic concepts, system delimitation and the nature of relationships. Furthermore, the authors provide an illustration that contrasts the features of the two approaches in the analysis of the internationalisation of business. Finally, the book edited by Ford (1990, 1997 and 2002) is a volume containing previously published papers by researchers belonging to the IMP community. Looking at the themes of the cited publications, "business relationships" was in first position as the theme accounting for most publications. "Internationalisation" continued to be well represented and "networks" entered the list of highly cited themes (Table VI). The themes with 5-10 publications in Table VI kept their positions from the period before the first conference. Three new themes emerged in this period: services, research methods and knowledge exchange/learning. The significance of the focus on business relationships was also illustrated in the list of the ten top-cited publications (Table V). Business relationships is the theme in four of the ten publications, of which three appear at the top of the list.
The enhanced attention to networks is illustrated by three publications, while technology/innovation and internationalisation are represented by one each.

5.3 Actors

In this period, the research group at the Department of Business Studies at Uppsala University dominated the research arena and accounted for about one-third of the total number of publications cited more than 100 times, and more than half of the top-cited ones. The distribution across research groups of the publications cited more than 100 times is shown in Table VII. Stockholm School of Economics, Manchester and Bath continued to deliver well-cited research and Lyon was now represented on the list. In this period the horizon of IMP publications expanded considerably. Two newcomers accounted for publications with high citation scores: Penn State University in the USA, and Lancaster University in the UK. Chalmers University in Gothenburg entered the list followed by other newcomers from Australia (Sydney), the US (Georgia State), Finland (Helsinki and Turku), and Germany (Karlsruhe). The number of dissertations is another output that describes the activities of research groups. Up to 1998 we identified 57 dissertations related to IMP. The research groups with more than three dissertations are listed in Table VIII. The 14th IMP conference was organised by Turku University in 1998. In total, 108 papers were presented at this conference, which was the second largest number of papers so far. The papers were written by 136 authors from 19 countries. Researchers from Finland were the main contributors, followed by the UK and Sweden. Other countries that were well represented included Germany, France, Norway, The Netherlands, and the USA. These countries together accounted for 82 per cent of all papers. Industrial Marketing Management published a special issue (the first one dealing with IMP) from the 1998 conference (Vol. 28, No. 5), including ten of the papers presented at the conference.
In the editorial, Kristian Moller and Aino Halinen grouped the papers into three interrelated sets. The first set addressed issues related to "network operations and their management". Papers in this group dealt with value generation in business relationships, learning in networks, and the determinants of network competence, i.e. "the skills and qualifications that a firm must master to manage relationships effectively". The papers in the second set examined how "resources are created and managed in buyer and supplier relationships". These papers were concerned with adaptations in business relationships, the role of interfaces with suppliers for productivity and innovation in relationships, and the effects of customer partnering for new product development. The third set focussed on "the organisational and implementation aspects of managing business relationships". The three papers in this group illustrated various aspects of the management of customer relationships: the need for internal coordination in the supplier firm, the functions of a "relationship promoter", and the role of teams and team design in these processes. Business relationships continued to be the main research theme, accounting for one-third of all papers at the conference (Table IX). The table shows that internationalisation and networks also kept their positions as major IMP themes. At this conference, issues related to the customer side of the firm received significant attention, including papers dealing with project marketing, system selling and distribution systems. In a similar vein, the supplier side and purchasing issues were the subjects of several papers. As will be shown later, both the customer and the supplier side accounted for substantial numbers of cited publications in the period following the Turku conference. 
Furthermore, two other themes in Table IX later proved to be significant regarding the number of publications, as well as citations: research methods and knowledge exchange/learning. When it comes to research groups, the domestic ones from Helsinki and Turku accounted for most papers (Table X). Moreover, Oulu University contributed three papers from a newly established research group. The founding research groups at Uppsala, Bath, Manchester and Lyon continued to be well represented, as was the Stockholm School of Economics. The research groups that entered with cited publications in Period 1 also participated with papers at the Turku conference: Lancaster, Chalmers, Karlsruhe and Sydney. The US representatives at Penn State and Georgia State also delivered papers at the conference. So did some research groups that became more established in the period 1999-2012: Copenhagen Business School, Erasmus in Rotterdam, NTNU in Trondheim, and Corvinus in Budapest.

7.1 Activities

The most visible of the important collective efforts in this period was probably the establishment of the IMP Journal. The first issue was launched in February 2006. Three issues were published per year with a total of 100 papers during the first eight years. From 1 January 2015 the journal was taken over by Emerald. To stimulate contributions to the journal, a new activity was introduced in the form of IMP Journal Seminars. The first seminar was organised in Oslo in 2005 and the second one in Gothenburg the year after. In addition to the objective of stimulating submissions to the IMP Journal, these seminars provided researchers with practice and training in formulating and interpreting reviews. Each seminar was devoted to specific themes. Ten IMP Journal Seminars were organised through to 2013: Oslo, Gothenburg (twice), Trondheim, Lancaster, Padova, Lugano, Uppsala, Marseille and Milan. The IMP webpage was launched in this second period.
Among other things, doctoral dissertations and conference papers can be downloaded from the site. More than 2,700 papers are available covering the conferences from 2000 onwards. The papers from the previous conferences are located in the library of Manchester Business School. In the second period, there were several attempts to conduct collective research, but none of them succeeded in organising a major joint data collection effort. One of these efforts, involving researchers from Sweden, Norway, the Netherlands and Italy, focussed on the role of resource interfaces in the furniture industry (Baraldi and Bocconcelli, 2001). Moreover, there were several substantial national studies with distinct influence on later publications. One Swedish project focussed on the interplay between science, technological development and business, organised in Uppsala and reported in Hakansson and Waluszewski (2007). Two other studies were conducted in Norway. One of them applied a network perspective on logistics with particular focus on resource combining (Jahre et al., 2006). Another study investigated the business network of the global fishing industry (Olsen, 2012). Finally, a major project in Finland paved the way for the development of a framework for analysis of strategic nets (Moller et al., 2005). After Turku in 1998, 13 conferences followed between 1999 and 2011 distributed across Europe (see Table AII). The average number of papers per conference in this second period amounted to 164 (compared with 66 in the first period). Besides the annual IMP conferences, some related international conferences and workshops were organised. "The Nordic Workshop on Relationship Dynamics" with its centre in Finland has been organised nine times. Another example is the "IMP Asia Conference" organised seven times from Australia.
7.2 Resources

The number of publications generated during Period 2 that had been cited more than 100 times in 2013 was about the same as in Period 1 - 105 compared with 101 (Table XI). However, in reality the frequency of citations has increased substantially. What needs to be taken into account is that it takes some years before a publication has reached the level of 100 citations. On average, the time period when the publications from Period 2 were available for citing is 15 years shorter than for those in Period 1. Similarly, the figures for highly cited papers are lower than in Period 1, due to the shorter lifetime of the publications. From this period there is one publication that reached the level of 1,000 citations. The ten most cited publications in the second period are listed in Table XII. In this period, eight of the ten most cited publications appeared in journals. One possible explanation for the difference in comparison with Period 1 is that the basic frameworks had been presented in books published previously. The highly cited papers appeared mainly in the Journal of Business Research (4) and Industrial Marketing Management (3). In several cases, they were part of special issues. The paper by Dubois and Gadde (2002) is an attempt to examine methodological challenges in case research that are not addressed in mainstream textbooks on research methodology. The approach, labelled systematic combining, involves the interplay of two simultaneous processes: one dealing with the matching between business reality and theoretical models and concepts, the other with the direction and redirection of a study through the adjustments of the framework and the empirical case that evolves during the process. The paper also suggests alternative ways to evaluate research quality. Regarding the content of the cited publications, business relationships kept the position as the most common theme in this period (Table XIII).
Networks, together with the papers focussing on the supplier and customer sides, ranked high as they did in both Period 1 and at the Turku conference, while internationalisation accounted for fewer publications than previously. Knowledge exchange/learning and research methods became more significant than before. Of the new themes, value creation showed a particularly significant impact when it comes to citations. The three other emerging themes can be expected to become increasingly important in the future: network pictures, accounting and supply chain management.

7.3 Actors

In this period, substantial changes occurred regarding the citation impact of the various research groups. Uppsala lost their dominant position because several of the highly cited researchers had moved to chairs at other universities. The three research groups accounting for most of the cited publications were now BI Norwegian Business School in Oslo, Chalmers in Gothenburg and Copenhagen Business School (Table XIV). All research groups with highly cited publications in Period 1 were also represented on the list in Period 2. Thus there is a core of universities with research groups that continuously contribute publications that become highly cited. Furthermore, Trondheim and Oulu, which presented several papers at the Turku conference, now appear on the list of cited papers. Other newcomers on the list are Copenhagen, Erasmus-Rotterdam and Lugano. In all three cases, these advances were due to established researchers moving to these universities. Another aspect of the actor dimension concerns the dissertations presented by the research groups. The number of dissertations we traced increased from 57 in Period 1 to 92 in this period. Uppsala continued to be the main producer of doctoral dissertations, closely followed by BI, Lancaster and Chalmers. Copenhagen and Western Sydney entered the list, reflecting the presence of these universities at the Turku conference.
The other universities also appeared on the list in Period 1, which accentuates the significance of the core groups. Table XV shows the research groups with more than three dissertations. At the Rome conference, 161 papers were presented, implying a 50 per cent increase compared to the Turku conference. In all, 24 countries delivered papers, representing a 25 per cent increase. Again, Finland was the country contributing the most papers, closely followed by Sweden and the UK. As always, the hosting country was well represented, so Italy was the fifth country in terms of number of papers. France and Norway substantially increased their participation in comparison with the Turku conference. The three top countries continued to account for a substantial proportion of the papers (48 per cent). Together with the contributions from France, Italy and Norway they covered 73 per cent of the total number of papers. IMM's special issue from this conference (volume 42, issue 7) included 16 papers. The editorial team (Chiara Cantu, Daniela Corsaro, Renato Fiocca and Annalisa Tunisini) categorised these publications into five groups. The first contained papers dealing with "network structure and its dynamics". These papers dealt with competition in business networks, initial relationship development in new ventures, strategising in new ventures, and service network features. The second group involved issues related to "understanding interaction". The papers in this group were concerned with the role of contracts, managing conflict, and assessing and reinforcing internal alignment of new marketing units. The third group was labelled "Actors: Identity and role" and included papers dealing with actor identity in networks, how salespeople facilitate buyers' resource availability, and the changing role of middlemen in distribution networks. The fourth group contained papers on "solutions and value creation".
These contributions were concerned with value co-creation, development and implementation of customer solutions, and the transition from products to solutions. The fifth and final group involved papers on "business behaviour in networks". The three papers in this group dealt with enablers and inhibitors of network capability, analysis of organisational networking behaviour, and joint learning in R&D collaboration. The themes of the papers at the conference are presented in Table XVI. Business relationships continued to be a strong theme, but the top position was now overtaken by networks, representing the main theme in around one-third of the papers presented. The position of technology/innovation was considerably improved, while customer and supplier sides, as well as services, were well represented among the themes. The observation from the publications in Period 2 that supply chain management, accounting, and network pictures were receiving increasing interest was confirmed by their representation at the conference. Internationalisation, strategy, and research methods were still on the list, while market making appeared as a new theme. The representation of research groups at the conference is illustrated in Table XVII. A couple of research groups that had not been so visible at IMP previously were observed at the Rome conference. Since they participated with several papers at the conference they might become more influential in the future. In comparison with the Turku conference, Helsinki was again among the top contributors of papers. Manchester had doubled their paper representation and even greater increases were shown by BI and Chalmers. Among the research groups from Finland, Oulu continued to manifest a strong presence. Lappeenranta, Tampere and Vaasa appeared much stronger than in 1998, while Turku had a smaller representation than when they organised the conference. From Italy, Cattolica and Florence contributed the most papers.
Some research groups showed a clearly lower representation than in 1998. The most prominent example was Bath, but similar tendencies were observed for Karlsruhe and Penn State. In all three cases, the reason was that senior researchers had left the universities. Most other research groups in Table XVII had kept their positions as their figures are quite comparable between 1998 and 2012. In relation to the citations in the period between 1998 and 2012, Helsinki, BI, Chalmers and Lancaster seemed to present adequate numbers of papers at the conference with potential to maintain their strong positions. Manchester, Lyon and Oulu can be predicted to improve their positions, considering their conference representation. It will be interesting to observe what will happen with publications and citations from the "new" research groups in Finland and Italy, as well as Bordeaux and Marseille in France. Being the organiser of a previous IMP conference seems to have stimulated participation from research groups from Glasgow and Budapest. It seems more difficult for Bath to keep its position in the near future since this research group has been reduced substantially, indicated by the fact that only one paper was presented at the conference. The description of the development of IMP in the activity, resource and actor dimensions provides a distinct image: IMP has evolved into a well-developed research network around common research themes, of which business relationships and business networks are the most significant. A huge number of research actors have appeared in the network, some for limited time periods, others over several decades. New research groups have entered, while those established in the network have expanded over time, implying that new researchers have advanced within these groups. A substantial body of resources in terms of books and journal papers has been produced and used in a collective way.
Research activities such as conferences, seminars, joint projects and the establishment of its own dedicated journal have been functioning as important network tools. The investigation shows that IMP research activities at various universities and business schools have become increasingly interlinked through joint research programmes, annual conferences and the launch of the specialised journal. The resources dedicated to these research issues, central to IMP, have successively become more substantial in terms of both research input and publications. The number of individuals related to IMP has increased and research groups have been able to raise additional resources to enable enlargement. Furthermore, the relationships among these research groups have evolved and become stronger through joint arrangements and shared resources. The data collected for this paper enables analysis of the enhanced relatedness, both for individual researchers and for research groups, due to the development of the network. Regarding the connections among individual researchers, we examined the development of joint publications between the two periods studied. Here we relied on the statistics regarding publications cited more than 100 times. For these publications we analysed the distribution of authorship and distinguished between papers that were single-authored, those that were written by two authors and those co-authored by three or more persons in the two periods (Table XVIII). These figures indicate a substantial development over time regarding collective authorships. The proportion of publications with three or more authors increased from 13 to 38 per cent. The proportion of papers written by two authors was about the same in both time periods. Consequently, the proportion of single-authored papers decreased substantially - from 34 to 10 per cent. It seems obvious that individual researchers have adapted to the evolving network and the increased cooperation opportunities it provides.
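The authorship classification behind Table XVIII can be sketched as a small computation. This is only an illustration of the counting procedure: the function name and the sample records below are our own assumptions, not data from the study.

```python
from collections import Counter

def authorship_distribution(publications):
    """Classify publications as single-, dual-, or multi-authored and
    return each category's share as a whole-number percentage."""
    def bucket(n):
        return "single" if n == 1 else "dual" if n == 2 else "three_plus"
    counts = Counter(bucket(len(p["authors"])) for p in publications)
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}

# Hypothetical records standing in for the cited-publication lists
period2 = [
    {"authors": ["A"]},
    {"authors": ["A", "B"]},
    {"authors": ["A", "B", "C"]},
    {"authors": ["B", "C", "D"]},
]
print(authorship_distribution(period2))
# → {'single': 25, 'dual': 25, 'three_plus': 50}
```

Applied to the two periods' publication lists, the same tally yields the shifts reported above (single-authored papers falling from 34 to 10 per cent, three-plus-author papers rising from 13 to 38 per cent).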
One important effect of this development is that the research resources in the network become increasingly shared and embedded, in turn implying that the resource interfaces will develop further. The network will benefit, as will the single researcher. Finally, it is an indication of the collective dimension of all knowledge development. In studies of business development in relationships and networks, one significant feature is that the boundaries of companies become increasingly blurred. To analyse these features in the development of a research network we examined the occurrence of co-authorships across research groups. This analysis was based on the assumption that the more joint co-authorships there are, the less clear the boundaries among the research groups will be. Again we used the publications cited more than 100 times and examined which of them were co-authored by representatives of different research groups. The result for Period 1 is presented in Figure 4. In Period 1, the total number of co-authored publications amounted to 41, involving 23 research groups in 11 countries. Considering Uppsala's dominant position regarding publications in this period it is not surprising that this university acts as the spider in this network. The most significant connections relate to the Stockholm School of Economics and Chalmers in Gothenburg, but also involve Bath, Lancaster, Chicago and Bocconi in Italy. The UK connection between Bath and Manchester is linked to the USA through Penn State and through this university to Helsinki Business School. In Finland four other research groups are connected through joint publications but there is as yet no link to Helsinki. Two other national co-authorships are not connected to the rest of the network through joint publications - one in the USA and one in Australia. The corresponding analysis for Period 2 resulted in Figure 5. This picture indicates a substantial expansion of co-authorships.
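The co-authorship analysis behind Figures 4 and 5 amounts to building an undirected graph: each cross-group publication links the research groups of its authors, and the connected components reveal the sub-networks (and any isolated clusters). The sketch below is a minimal illustration of that procedure; the group names and publication records are hypothetical placeholders, not the study's data.

```python
from itertools import combinations

def group_network(publications):
    """Build an undirected co-authorship graph between research groups:
    a publication whose authors come from different groups links those groups."""
    nodes, edges = set(), set()
    for pub in publications:
        groups = set(pub["groups"])
        nodes |= groups
        for a, b in combinations(sorted(groups), 2):
            edges.add((a, b))
    return nodes, edges

def components(nodes, edges):
    """Connected components via iterative depth-first search."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            m = stack.pop()
            if m in comp:
                continue
            comp.add(m)
            stack.extend(adj[m] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Hypothetical cross-group publications (group names for illustration only)
pubs = [
    {"groups": ["Uppsala", "Chalmers"]},
    {"groups": ["Uppsala", "Bath"]},
    {"groups": ["Helsinki", "Turku"]},
]
nodes, edges = group_network(pubs)
print(len(components(nodes, edges)))  # two disconnected clusters
```

In Figure 4 this kind of analysis produces several disconnected clusters; in Figure 5, as reported below, every research group belongs to a single connected component.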
In Period 2, the research groups presented 151 joint publications, compared with 41 in the first period. The authors represented 37 research groups from 14 countries and in this network no research group is unconnected. When it comes to joint publications, the most significant collaborative efforts occur between BI, Chalmers, Lugano, Bath and Marseille. Uppsala is well connected to the five and constitutes a significant link to Helsinki and the other Finnish research groups that are now all interlinked. Helsinki, in turn, is an important connection to the University of South Wales, which is related to Georgia State and Copenhagen Business School. The Copenhagen group is strongly connected to German universities, in particular Berlin. The Stockholm School of Economics is also linked with the group of the 'big five' at the bottom of the figure and provides the connection to researchers in the Netherlands and Belgium through Erasmus. This group is related to Penn State, which in turn connects with Birmingham. Birmingham provides a link to Manchester and Lyon, which are both related to the big five through co-authorships with Bath and Marseille. These results strongly support the idea that the boundaries of the research groups have become more blurred over time and that cooperative efforts across research groups are important network activities. From personal experience, we have also observed that researchers - juniors as well as seniors - have moved between these groups. In this way, the research groups overlap and there are a number of researchers who have been involved in several of them. In some cases this means that, to a certain degree, such groups are more oriented toward research groups at other universities than to groups in their own university. The two figures show a considerable development in the number of co-authorships across research groups. They also suggest that the dynamics of a network involve two different forces.
One of these forces increases the relatedness among the actors and makes them more similar. The other force creates diversity among actors. This differentiation occurs because all actors are simultaneously involved in other networks. Therefore, they also have to relate to actors outside the focal network. Such dynamics can be observed in the two figures in terms of distinct sub-networks that change over time. These conditions are representative of network dynamics in general. The two forces create tensions among the actors that have to be handled. For these reasons, members in a network not only develop similarities that are central features for members in a group. Network actors also have to become diverse through their efforts to relate the focal network to other networks of importance to them. For a network to develop, both forces are important. Therefore, the actors within a focal network will become increasingly differentiated over time, despite the fact that they continue to be involved in a common, central research theme. The illustration of the IMP group's research development is certainly interesting and thought-provoking for those who have been involved. But the description and analysis of this process is also of general relevance as a representative example of how research ideas can develop and become influential in different ways through becoming a distinct research network. The development of the quite substantial IMP network illustrates some important effects for the collective level and for individuals. Starting with the individual researcher, we can rely on earlier network studies to discuss some positive and negative effects identified as the "three network paradoxes" (Hakansson and Ford, 2002).
Transformation of these paradoxes from the situation of a company to the situation of a research actor leads to the following three paradoxes. First, a research network is the basis of a research actor's operations, growth and development, but the same network also restricts the freedom of the researcher and may become a cage that imprisons the actor. Second, the relationships of a research actor are, to some extent, the outcome of its own actions, but the researcher is also the outcome of these relationships and what has happened in them. Third, a research actor aims at influencing (and sometimes controlling) the research network, but the more the actor achieves this ambition to control, the less effective and innovative the research network will become. The first paradox illustrates the situation of a young researcher starting a project in one of the central research groups within the IMP network. The existing network provides lots of ideas and possibilities, and also some obvious limitations. The contemporary network of activities, resources and established research actors creates a very fruitful environment with plenty of new ideas and opportunities for research. At the same time, this environment also tends to drive research in certain directions because of established, within-group ways to formulate and frame research problems. These conditions are not specific to IMP, but a feature of all research traditions. The availability of dedicated journals and other joint publications, as well as an established peer review system, are important ingredients in creating this effect. Given the contemporary network around a researcher, some research alternatives are always much easier and more favourable than others. These network features also provide established researchers with secure positions and improved opportunities to attract external funding. The second paradox concerns what the research actor needs to do in order to develop.
The researcher will benefit from relationships with specific partners by developing new combinations. In these processes, the actor will try to affect the research partners in ways that are favourable to its own ideas and research activities. But the research partners have the same ambitions, implying that the research actor will be affected by them. Therefore, to be able to become involved in the network and really benefit from others, the actor must accept being impacted by them. This has two effects: relationships develop increasingly within the network, as well as in relation to other networks. The network will consequently become more differentiated. Finally, network evolvement is the outcome of the joint actions of network actors. It is the total ambition of all involved actors that drives the development toward both integration and differentiation. Therefore, research actors need to try to influence and control the network. But if one actor becomes too dominant, the development force of the network will weaken, especially in the differentiation dimension. Thus, ambitious actors trying to become influential are needed, but if they are too successful, problems will arise. Well-functioning networks require several centres - thus being multipolar - to keep up the necessary tension. The three paradoxes together provide an interesting illustration of the positive and negative aspects of network dynamics. First, the network certainly facilitates the research operations of the single researcher, who can build on the existing activities, resources and actors. At the same time the network constrains and limits what the single researcher can do. The network promotes opportunities, but these opportunities are attained within the borders of a certain frame. The findings in the study also enable discussion of the role of such a substantive research network in relation to the broader scientific landscape.
The IMP network offers an example of how basic ideas are embedded into the larger context in terms of, for example, stability and identity in combination with tension and variety in the interfaces. Researchers belonging to other research networks can observe these features and relate to them. The existence of these network features can be an explanation for the stability that Wuehrer and Smejkal (2013) found with regard to IMP research themes. The network offers both stability and continuity by providing a research base and an important reference point for those working inside and outside the specific network. However, such networks also offer major opportunities for variation through the substantial number of interfaces to other research networks. There are numerous opportunities to develop the interfaces - to combine the basic ideas with several complementary and rival ideas and concepts, thus increasing the tension between involved actors. Stability in combination with increased tensions creates strong development forces. In summary, the process of an emergent research network, described in this paper, illustrates some features that are very similar to those found in studies of the development of business networks. The "network" is an outcome of a networking process where several actors, individually and jointly through research groups, interact and together create a basic structure that remains fluid and powerful. The observed structures and processes are typical from a network point of view since they include some actors that have been involved for the entire period, while others joined the network over time; some became very stable actors, while others came and left. IMP has been instrumental in building up an impressive empirical base about business relationships in different contexts and with various functions or roles. This empirical base is far from complete - there are always new contexts to investigate.
But the base is already so extensive that it demands further theoretical conceptualisation and model development in order to explain the features and dynamics of the business landscape in more comprehensive ways. Therefore, the empirical base forces the IMP community to continue the research focussed on inter-organisational relationships to explore potential theoretical implications. Future IMP research opportunities reside in the continuous combining and recombining of basic empirical phenomena, such as business relationships and network structures, with empirical fields such as internationalisation, innovation, learning, and value generation, to derive managerial and policy implications. Such analytical combining efforts require additional empirical studies, preferably in international settings, as well as the development of theoretical constructs and new theoretical frames. In these efforts, IMP researchers should consider the network paradoxes discussed above. First, they should rely on established research networks, but be open to innovative reconsideration through new and developed combinations. As shown in the description of the Rome conference, several new research phenomena were evolving, such as value creation, key account management and market making, all with their particular requirements for conceptualisation and modelling. Second, researchers should do their best to affect the research partners in favourable directions, but also accept being affected by their ambitions. The analysis of authorships across research groups showed a substantial increase between Period 1 and Period 2. These joint activities are likely to foster such acceptance in true network spirit. Third, and finally, researchers should strive for influence and control of the network, while at the same time ensuring that no one is allowed to dominate the network.
Considering the roots in the first IMP project, it is quite natural that representatives of these research groups have had a strong impact on the development of IMP. However, the analysis of joint publications showed that several new constellations became established in Period 2. In the current Period 3, it is most likely that many of these connections among research groups, illustrated in Figure 5, will be marked by even "thicker" lines than in Period 2. Moreover, at the Rome conference, several "new" research groups entered the IMP arena with considerable numbers of papers. Hopefully, these newcomers will contribute to the establishment of additional IMP-related centres.
[SECTION: Abstract] The purpose of this paper is to illustrate the development of research based on the IMP approach during the four decades since the inauguration in 1976. The paper presents a network analysis of IMP research based on one of the central IMP frameworks: the ARA model.
[SECTION: Method] IMP began as an international research project in 1976 and held its first conference in 1984. In 2016, the 33rd annual IMP conference was organised in Poznan. Over these 40 years, IMP expanded substantially, and today a large number of researchers and research groups identify themselves as belonging to the IMP community by applying IMP models and concepts in their research. As shown in this paper, the IMP community's researchers have published numerous books and journal papers that are highly cited. In addition, more than 200 doctoral dissertations are based on the IMP approach. Today, IMP is visible through a journal published by Emerald and through a website (www.impgroup.org). From this website 2,700 papers (mostly from conferences) and 75 books and dissertations can be downloaded. Another sign of increasing interest in the IMP idea is the occurrence of several publications discussing IMP in more or less explicit ways: what IMP is, what it is not, and the characteristics of IMP as a research field. Some of these publications examined papers from specific conferences, such as Gemunden (1997), Easton et al. (2003) and Windischhofer et al. (2004). Others focussed on the researchers in the IMP community and how they are connected through joint publications, for example, Morlacchi et al. (2005) and Henneberg et al. (2007). Some authors, such as Turnbull et al. (1996), Hakansson and Snehota (2000), Wilkinson (2002), Ford and Hakansson (2006a), Mattsson and Johanson (2006), Cova et al. (2009) and Hakansson and Waluszewski (2016) discussed the development of IMP in broader terms. Finally, a number of researchers compared the IMP perspective with other approaches, for example, Johanson and Mattsson (1987), Mattsson (1997), Ford (2004), Ford (2011), Hunt (2013), Olsen (2013), or with the development of industrial marketing in general, such as Backhaus et al. (2011) and Vieira and Brito (2015). 
Some authors were critical of certain aspects of the development of IMP, or of IMP in general. Such publications were presented by Lowe (2001), Harrison (2003), Cova and Salle (2003), Cunningham (2008) and Moller (2013). One particular facet of this criticism is that the creativity that featured in the initial IMP research was replaced by "increasing uniformity, repetition and stereotyping of the IMP style in recent years" (Cunningham, 2008, p. 48). Other publications discussing IMP development are listed in "Further reading" after the reference list. All of these publications indicate that researchers take an interest in analysing IMP and its development. The main reason for this attention is that IMP has generated a substance worthy of discussion due to its clear research focus over time. For example, in an extensive study of the conferences from 1984 to 2012, Wuehrer and Smejkal (2013) conclude that IMP research features quite a strong stability over time. Their bibliometric analysis describes a clear and consistent picture over that entire time span. The core research dimensions have been the same over the years, implying that IMP is characterised by a certain continuity regarding substance and identity. Deep probing analysis of extensive business relationships was a driving force in the first IMP project and this phenomenon has continued to attract the main attention in research projects and at conferences. When the first project began, business relationships could not be explained with mainstream theories, and the role and the dynamics of this phenomenon continue to represent challenges from a theoretical point of view. From the above description, we can conclude that over four decades there has been a continual evolvement of something that can be interpreted as "IMP". Moreover, scholars within and outside the IMP community have identified this phenomenon as interesting enough to analyse and discuss. In this paper, we examine the "IMP substance" in more detail.
One aspect of this examination is to relate to artefacts representing concrete output from IMP activities in terms of research findings presented in books, journal papers and doctoral dissertations. A second issue of interest is to analyse the features of the processes that created this output. At the outset, IMP was a project involving research groups in five countries. However, during this venture complementary activities and resources became related to the project. A process was initiated where many researchers and research groups in several countries became connected through their common research interests and research agendas. Over time, numerous projects - more or less international, but always with international connections - were completed or are currently on-going. Researchers have met, discussed and found ways to collaborate around an empirical phenomenon related to how companies evolve through relatively cooperative business relationships embedded in network constellations. Thus, the development of IMP can be described and analysed as the development of a specific research network containing a multitude of activities, resources and actors embedded into the larger context. It is this specific network that we describe and analyse in the paper. Historical analyses of research tend to focus on two alternative aspects. One is to emphasise the researchers - the research community represented by individuals and their roles. The other is to concentrate on the development of ideas in terms of knowledge features and connections to other knowledge fields. This paper follows neither of those routes in the analysis of the progress of IMP. Instead, we try to develop a third route by relying on the tools that were generated within IMP to analyse business development. We know that these tools have been useful in creating new and complementary pictures regarding changes in business.
Therefore, we examine IMP's research development in terms of the evolution of a research network. The analysis is focussed on the interplay and the combinations of activities, resources and actors. Separation of these three layers in the business reality is a central approach applied in IMP research (Hakansson and Johanson, 1992). This perspective is identified as the ARA model for analysis of industrial networks in terms of the activities undertaken by actors through the use of various resources. The focus is on inter-organisational processes forming the business landscape or, as in this case, the research landscape. The objective of this paper is to provide specific images of the development of IMP as a research network through relying on the ARA model. The main mission is to investigate the features of actors, activities and resources that were connected and combined during the development of IMP. We begin by describing the process leading to the first IMP study - a joint international research project. After this study, the process became much more complex and multifaceted. Therefore, we rely on a classical research strategy to select some aspects of this process to enable more detailed analysis. This approach will be used to provide one potential answer to the basic question, what is IMP? The answer will be formulated in terms of a research stream from a research network community embedded in a wider network of science with three layers - researchers (actors), artefacts such as books and articles (resources) and research projects (activities). Once the ARA model was selected for the framing of the paper, the methodological issues were focussed on identifying relevant dimensions of activities, resources and actors, as well as the connections between them.

2.1 Activities

The initial activity for what later became recognised as IMP was a joint international research project.
The significant results emanating from this study, launched in 1976 and involving researchers in five European countries, are described in Section 3. A second joint research activity of significance for IMP's evolution was carried out during the years before and after 1990. Central features of this globally oriented project are presented in Section 5. There are also some other large research projects that were organised on a national level in the community of IMP researchers. We bring up those projects that created substantial effects in terms of research results and publications that have been widely cited. The first IMP conference was organised in 1984, in between the two major IMP projects. The annual conference is the most important continual activity organised by the group. It would be too immense a task to try to illustrate the evolution of IMP over time by describing all the conferences. Therefore, we needed to select some of them to elucidate the IMP development. The first IMP conference, in Manchester in 1984, was the natural starting point. We began working with this paper in the spring of 2013. The most recent conference at that time was the one organised in Rome in 2012, therefore it was chosen to represent the other end-point of the time scale. Since the Rome conference was the 28th, we selected the 14th IMP conference, organised in Turku, Finland, in 1998, as the natural "mid-point". The three conferences serve as reference points for the activity layer, providing three pictures of IMP's development. The conferences are important meeting points for researchers where research ideas, research issues and research results are presented and discussed.

2.2 Resources

The most important resources in the IMP network are the IMP-developed research frameworks and the research findings presented in various forms of publications. These resources are both produced and used by the researchers belonging to the IMP community.
The most visible and available resources are the publications in books and journals. These resources can be regarded as inspirational sources, but also as documents of research themes and issues that have been significant over time. It might be problematic to assume that the publications most cited are those that were most influential from a knowledge point of view. Despite that, we use the number of citations as one measure in the resource dimension because this factor provides an indication of how the actors in the network have used the publications. To identify these publications we relied on the software tool "Publish or Perish", which is based on Google Scholar. Our interest is to provide an account of the most cited of the IMP publications and, more importantly, to assess the broader impact of IMP research in general. Therefore, we traced all the publications that had been cited more than 100 times in August 2013, authored by researchers who were identified as belonging to the "IMP community". To illustrate the development of the resource layer over time, we relied on the time periods defined through the selection of the three conferences for analysis of the activity layer. These demarcation lines in time created two time periods of IMP development:

Period 1: covering the development between the 1984 and the 1998 conferences.

Period 2: covering the development between the 1998 and the 2012 conferences.

The phase before the first conference in 1984 is identified as "the start of IMP". Then we grouped the publications into four categories according to the number of citations they had received in 2013. The first one included the top-scoring publications with more than 1,000 citations. The three other categories represented publications in the intervals of 500-999, 200-499 and 100-199. We also analysed the research themes covered in the publications and the development of these themes over time through detailed examination of abstracts and, in some cases, the full text.
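The grouping described above can be sketched as a small categorisation routine. The citation bands (1,000+, 500-999, 200-499, 100-199) and the period cut-offs defined by the three conferences come from the text; the sample publication records below are invented placeholders, not actual IMP data.

```python
from collections import defaultdict

# Citation bands as defined in the text; publications under 100 citations
# fall outside the analysis.
BANDS = [(1000, "1000+"), (500, "500-999"), (200, "200-499"), (100, "100-199")]

def citation_band(citations):
    """Return the band label for a publication, or None if under 100 citations."""
    for floor, label in BANDS:
        if citations >= floor:
            return label
    return None

def period(year):
    """Map a publication year to the periods demarcated by the conferences."""
    if year <= 1984:
        return "start"
    if year <= 1998:
        return "Period 1"
    return "Period 2"

def group_publications(pubs):
    """Group (title, year, citations) records by period and citation band."""
    grouped = defaultdict(list)
    for title, year, citations in pubs:
        band = citation_band(citations)
        if band is not None:  # drop publications below the 100-citation threshold
            grouped[(period(year), band)].append(title)
    return grouped

# Hypothetical records, for illustration only.
sample = [
    ("Pub A", 1977, 5200),
    ("Pub B", 1990, 640),
    ("Pub C", 2005, 150),
    ("Pub D", 2005, 40),   # excluded: below the threshold
]
grouped = group_publications(sample)
```

The same two-way classification (period by band) underlies the tables of most-cited publications discussed later in the paper.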
2.3 Actors

In historical accounts, individual researchers tend to be identified as the most important actors. These actors are clearly visible through the authorships of books and publications. In "true IMP spirit" we supplemented this aspect of the actor level with an analysis of the impact of collective forces, such as the research groups and universities to which the individual actors are connected. This classification caused some problems since some researchers had moved between universities. We used the affiliation registered on the publication for the classification into research groups, although the actual work might have been conducted several years earlier, and in another research setting. In some cases we merged researchers from the same city into one group although they might represent different universities in this city. The significance of the research groups is demonstrated through the number of citations their publications achieved. To identify connections between groups we also analysed the co-authorships across research groups and how these joint publications developed over time. Moreover, we provide an account of the number of doctoral dissertations presented by the research groups. The evolvement of IMP will, thus, be described as the development of activities, resources and actors over two time periods, where we use the three conferences as reference points (see Figure 1). The description of the two time periods is illustrated by the variables listed above Period 1 in the figure, while the text below Turku shows what aspects of the conferences are presented. It is important to note that the two authors of this paper were involved in the development of IMP - one during the entire period and the other from the first conference. This means that all delimitations and selections have been made by two "insiders", leading to both positive and negative consequences. On the positive side, we were able to use our personal insights and experiences.
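The co-authorship analysis across research groups can be sketched as counting, for each publication, the pairs of distinct groups represented among its authors. The method (classify each author by the affiliation registered on the publication, then count cross-group ties) follows the description above; the author names and the author-to-group mapping are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def cross_group_ties(publications, affiliation):
    """Count co-authorship ties between distinct research groups.

    publications: list of author-name lists, one list per publication.
    affiliation:  maps author name -> research group, as registered
                  on the publication.
    """
    ties = Counter()
    for authors in publications:
        groups = sorted({affiliation[a] for a in authors})
        # Each pair of distinct groups on one paper counts as one tie;
        # a single-group paper contributes nothing.
        for pair in combinations(groups, 2):
            ties[pair] += 1
    return ties

# Invented example data.
affiliation = {"Ann": "Uppsala", "Ben": "Bath", "Cla": "Uppsala", "Dan": "BI"}
papers = [["Ann", "Ben"], ["Ann", "Cla"], ["Ben", "Dan"], ["Ann", "Ben", "Dan"]]
ties = cross_group_ties(papers, affiliation)
# ties[("Bath", "Uppsala")] == 2: two papers join those groups, while
# the all-Uppsala paper produces no cross-group tie.
```

Accumulated over a period, such tie counts correspond to the "thickness" of the lines between research groups in the network figures.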
On the negative side, we cannot claim to be neutral or objective regarding focus and interpretation. We tried to handle these effects of participant observation by using, for example, citations measured by an external search engine and descriptions fetched from other publications as indicators, instead of more personal, soft-quality indicators. The outline of the paper follows the basic structure of Figure 1. We begin by illustrating the start of IMP with a short resume of the initial IMP project, inaugurated in 1976, by describing activities, resources and actors in this phase (Section 3). This is followed by the presentation of the first conference in Manchester 1984 (4), Period 1 ranging between 1985 and 1998 (5), and the conference in Turku 1998 (6). Thereafter, we describe and analyse the development in Period 2 between 1999 and 2013 (7), and finalise the empirical account with the Rome conference (8). Section 9 is devoted to discussion and interpretation of some significant observations regarding the development of the IMP research network. Section 10, the final section, summarises the development of IMP in network terms and brings up aspects related to these dynamics. In addition to the conclusions, some implications and thoughts about IMP's future potential are discussed.

3.1 Network situation at the start

The initiation of the first joint international project can be explained by three factors related to resources, activities and actors, respectively. First, there was dissatisfaction, shared by several European researchers, regarding the availability of research resources in the form of realistic and relevant textbooks and research reports covering the B2B field. The existing literature lacked descriptions, conceptualisations and analyses regarding the buying and selling of industrial goods.
Empirical studies conducted during the 1960s and early 1970s indicated that research resources, in terms of the mainstream models and concepts available at that time, were inadequate for both explaining business reality and providing normative recommendations. Simply speaking, the business landscape showed features that were quite different from what was assumed in the contemporary literature, which called for further exploration of this significant phenomenon. Second, in the early 1970s several attempts were made to initiate international cooperation and coordination of research activities. For example, the European Institute for Advanced Studies in Management in Brussels organised international seminars and workshops where young researchers were provided with opportunities to meet for discussions. Through such support for the promotion of collective research activities, new international contacts were stimulated and meeting spots were designed that were beneficial for spontaneous discussions and coordinative attempts. Third, at the time, business administration and marketing faculties expanded substantially due to the massive increase of students at universities and the growth of management education. The actor level was affected considerably when vast numbers of young and ambitious researchers were employed and new research units were created. These conditions made it difficult for senior researchers to maintain control of the research operations. Consequently, there was free space for the young actors to use. These three features contributed considerably to the launch of the first IMP project. The initiators were young people in the early phases of their development as researchers. They were internationally oriented and eager to do something jointly with others. They kept senior researchers outside the project ("no professors" was the rule!) while they actively encouraged young researchers from several countries to get involved.
They also found a common interest in aggressively challenging the mainstream way of investigating the buying and selling of industrial goods. They all had empirical experience from the research field, and this knowledge indicated features that were difficult to explain given the contemporary theoretical models.

3.2 Activities

As described above, several specific circumstances provided opportunities for the establishment of a joint international research programme. The researchers involved in this programme had no ambitions related to more long-term cooperation. But all of them wanted the programme they implemented to become something unique. However, they certainly had a long way to go to carry through such an assignment. The first step was to mobilise a group of researchers representing several countries, and in the end researchers from five European countries became involved. The project was directed toward investigation of the features of marketing and purchasing of industrial goods in international settings, i.e. to characterise international business exchange. On this basis, an extensive study was designed and carried out in industrial companies in the UK, Sweden, West Germany, France and Italy. Both the buying and the selling sides of firms were investigated. The sample of companies was based on selection matrices distinguishing three types of products (raw materials, components and equipment) and three types of technologies (unit production, mass production and process production). The most important customers and suppliers in the five countries were identified for each company. Thus, for a selling company in Sweden, the most important customers in the UK, West Germany, France and Italy, together with domestic customers in Sweden, were selected for interviews.
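The sampling design described above can be expressed as a simple cross-product of the two dimensions. The cell labels come from the text; the code itself is only an illustrative sketch of how the 3 x 3 selection matrix is spanned.

```python
from itertools import product

# Dimensions of the selection matrix, as described in the text.
PRODUCT_TYPES = ["raw materials", "components", "equipment"]
TECHNOLOGIES = ["unit production", "mass production", "process production"]

# Every (product type, technology) combination defines one cell of the
# selection matrix used to sample companies.
selection_matrix = list(product(PRODUCT_TYPES, TECHNOLOGIES))
assert len(selection_matrix) == 9  # a full 3 x 3 design
```

Each sampled company occupies one cell, which ensured that the study covered the full variety of product types and production technologies.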
For each company, data were collected regarding the handling of individual customers, the history of the relationships, how the business processes had developed over the years and important events in terms of specific adaptations or projects in these processes. The same type of procedure for data collection was used on the buying side. Furthermore, for each interviewee, a personal attitude study was conducted regarding the perceptions related to the country of the counterpart involved in the specific relationship. Thus, data were collected about the company in general, its sales to (and purchases from) the countries involved, the business relationship with the most important counterpart in each (or at least some) of the countries and the attitudes of the manager being interviewed about the relationships with business partners in the specific country. The empirical investigation covering the five countries turned out to be a massive undertaking, requiring huge efforts from the project members. Moreover, the openness of companies in relation to researchers varied over the countries. Consequently, both data collection and analysis were time-consuming processes and required large amounts of research resources. In the first book that reported the study (Hakansson, 1982), an account of the methodological issues is provided in Chapter 3. In particular, the group emphasised the processual approach that had been central during the project, as well as the fact that each country study had to find domestic financial resources. Progress, or at least the feeling of progress, was vital. Even when unsolved problems were at hand, the group always tried to continue to the next step in the research process in order to show the members that progress was being made and that the project was evolving. Furthermore, each country group had to report back to their financial sources about progress in the home country. The empirical results were very distinct across firms and countries.
All companies were shown to be working within established long-term business relationships with their most important customers and suppliers. The results also showed that the relationships varied in many dimensions, such as number of persons involved, occurrence, level, and type of adaptations, relationship duration and the handling of monetary terms. These findings strongly contrasted with the contemporary view of marketing and business processes. Therefore, in the introduction to Hakansson (1982, p. 1) the perspective offered in mainstream research and teaching was challenged in four respects:

"We challenge the concentration of the industrial buyer behaviour literature on a narrow analysis of a single discrete purchase".

"We challenge the view of industrial marketing as the manipulation of marketing mix variables to achieve a response from a passive market".

"We challenge the view of an atomistic structure, assuming a large number of buyers and sellers that easily can change business partners".

"We challenge the separation which has occurred in analysing either the process of industrial purchasing or of industrial marketing".

The basis of the contrasting IMP framing was then formulated in the following ways:

"We emphasise the importance of the relationship which exists between buyers and sellers. This relationship is often close and may be long term and involve a complex pattern of interaction".

"We believe it necessary to examine the interaction between individual buying and selling firms where either firm may be taking the more active part in the transaction".

"We stress the stability of industrial market structures, where the parties know each other well and are aware of others' movements".

"We emphasise the similarity of the tasks of the two parties. Industrial marketing can be understood only through simultaneous analysis of both the buying and selling sides of relationships".
The interactive view of industrial marketing and purchasing was formulated in a theoretical model - the interaction model - containing five sets of variables characterising the short-term episodes, the long-term relationship, the involved parties, the interaction atmosphere and the environment (Figure 2). The main part of the 1982 book is devoted to the presentation of 23 company cases. In each of these cases, the most important business relationships of the company are described and analysed from either the marketing or the purchasing side. The cases are organised according to the basic technology of the focal company. The business relationships between companies in the five countries are described in terms of duration in time (on average around 13 years), extensiveness in contact patterns, adaptations of products, adjustments in production or logistics, and the degree of social exchange. The variety among companies, with regard to their basic technologies and the type of business their products represent, provides a broad view of the business reality. In total, a very detailed and extensive picture is presented showing how a whole set of industrial companies behave with regard to their selling and buying activities, in which business relationships are basic ingredients. The cases were used as input to the further analysis of significant themes related to the basic variables of the interaction model. Consequently, the themes concerned variation in the processes of interaction, variation in the features of the parties involved in interaction, variation in interaction environments and the atmosphere in which interaction occurred. Fourteen researchers from the participating countries were involved as authors of cases and themes, individually and in various combinations.
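The relationship descriptors used in the case analyses can be sketched as a simple record type. The field names paraphrase the dimensions listed above (duration, contact pattern, adaptations, adjustments, social exchange); the example values are invented, not taken from any actual case.

```python
from dataclasses import dataclass

@dataclass
class BusinessRelationship:
    """Descriptors of one buyer-seller relationship, following the
    dimensions reported for the 1982 case studies. Field names are
    our paraphrase, not the book's own terminology."""
    supplier: str
    customer: str
    duration_years: float          # cases averaged around 13 years
    contact_pattern: list          # functions involved on both sides
    product_adaptations: bool
    logistics_adjustments: bool
    social_exchange: str           # e.g. "low", "moderate", "high"

# Invented example, for illustration only.
example = BusinessRelationship(
    supplier="Swedish component maker",
    customer="West German equipment producer",
    duration_years=13.0,
    contact_pattern=["purchasing", "sales", "engineering"],
    product_adaptations=True,
    logistics_adjustments=False,
    social_exchange="high",
)
```

Recording each case relationship in a uniform structure like this is what makes comparison across the 23 company cases and five countries possible.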
The main contribution of the first IMP book is a very strong empirically grounded argument for the importance of including interaction and business relationships in any systematic study of industrial settings.
3.3 Resources
Significant resources activated at the start of IMP were rooted in two research areas, and the cross-fertilisation of the two, particularly at Uppsala University, proved favourable. The first regarded studies on the internationalisation of businesses, while the second related to research on marketing and purchasing issues that increasingly devoted attention to business relationships between customers and suppliers. Based on the assumption that the number of citations reflects the use of research publications, the ten most significant resources before the Manchester conference are listed in Table I. Accordingly, in 2013 four publications accounted for more than 1,000 citations, two of which dealt with internationalisation and two with business relationships. In addition, six journal papers had been cited between 200 and 500 times. Furthermore, nine other publications accounted for more than 100 citations, implying that 19 books and papers presented before the Manchester conference had been cited more than 100 times in 2013. The publications from the 1970s can be seen as major inputs into the project, while those from the early 1980s can be regarded as outcomes of the first IMP project. Throughout this paper, we present short summaries of the publications with more than 1,000 citations. Of the four related to the start of IMP, the 1982 book was described above. The paper by Johanson and Wiedersheim-Paul (1975) was based on four Swedish firms' internationalisation processes. Studying how these firms established themselves in foreign markets (year and institutional set-up) led to the identification of a typical internationalisation process. The standard pattern was that firms started in neighbouring countries (at a short psychic distance).
They also tended to begin with entry forms requiring minor investments. Thus, the internationalisation of a company was found to be a gradual process in relation to institutional set-up - from agent to producing subsidiary - and to successive entry into countries at a larger psychic distance. The empirical material in Johanson and Wiedersheim-Paul (1975) was one important base in the development of the "Uppsala model" of the internationalisation process, formulated in Johanson and Vahlne (1977). In this paper, the gradual process that was shown to be so significant in the empirical cases is explained by a basic model connecting two aspects: one related to the state of internationalisation and one concerned with adjustments to changing conditions. The state of internationalisation was defined through the knowledge of the foreign markets and the level of market commitment (resources devoted to the markets). The change aspects of the model related to the current activities of the firm and the decisions to commit resources. The model prescribes a mutual dependence between the change and state aspects. The paper by Ford (1980) draws attention to buyer-seller relationships that, until then, had received scant interest in the literature on industrial marketing and purchasing. In the paper, a five-stage model of business relationship development is launched, from the pre-relationship stage to the "final" one. The features of a relationship in these stages are described and analysed regarding: the increasing experience between the parties, the reduction of their uncertainties and the distance between them, the growth of their commitment, and the formal and informal adaptations between the two and their mutual investments and savings. It is also of interest to analyse the research themes covered in the publications cited more than 100 times and how they later developed.
Considering the text above, it was natural that the categorisation of themes included "internationalisation" and "business relationships", together with studies of marketing (labelled "customer-side focus") and purchasing and supply management ("supplier-side focus"). Other themes that appeared significant from the beginning were strategy and technology/innovation (combined into one common category). It goes without saying that this classification was not always easy to apply strictly. For example, sometimes it was unclear whether a specific publication should be categorised as business relationship or customer-side focus. However, in the statistics that follow for the two periods of IMP, each publication represents only one theme.
3.4 Actors
Six research institutions were involved in the first IMP project and in the production of the first IMP book: Uppsala University, UMIST in Manchester, Bath University, Lyon Business School, Munich University and ISVOR-Fiat in Italy. (The researchers representing these institutions are listed in Table AI, Appendix 1). Therefore, it is quite natural that the most cited IMP publications before the 1984 conference emanated from these research groups. Uppsala University contributed the majority of the highly cited publications. Similar conditions characterised the publications with a lower citation rate, some of which were co-authored with researchers at the Stockholm School of Economics. These two groups were connected because some people moved between the universities. The two other significant research communities with highly cited publications were located at Bath and in Manchester. In several cases the PhD theses of the involved researchers served as important input to the project, which also resulted in several PhD dissertations.
Following the first IMP book, and the joint work IMP researchers devoted to further publications, the Manchester group invited research colleagues to an international workshop in 1984 under the heading "Research Developments in International Marketing". This invitation resulted in the first annual conference. Over time, the conferences became the most visible features of the IMP research stream. In all, 20 papers were presented at the Manchester conference. Full details of these pioneering papers are provided in Appendix 2. The papers involved 30 authors from eight countries - 21 European and nine from overseas. Thus, already from the beginning, IMP was more than a European affair. Manchester, Uppsala, Lyon and Stockholm School of Economics contributed the most papers. Among non-European research groups that later became significant were Penn State from the USA and Sydney in Australia. Japan and Canada were also represented by researchers at the conference. The most important contributions at the Manchester conference came from the two research areas described in the previous section: business relationships and internationalisation. First, business relationships appeared as the main theme and here the results from the first IMP study provided significant input to the discussions at the seminar. These results were presented in two co-authored books, but also in some journal papers that documented the importance of dyadic business relationships between customers and suppliers[1]. The features and the significance of these relationships, from both marketing and purchasing points of view, were discussed in a number of the papers at the conference, with a particular focus on which new concepts should be used to characterise these relationships and their functions. Second, since the topic of the conference was "Research Developments in International Marketing", it is natural that considerable interest was devoted to internationalisation.
In the debates at the seminar, the Uppsala model of internationalisation[2] played an especially important role. A third theme intensively discussed during the seminar concerned "networks of business relationships" and "network effects". These issues became apparent as several papers and comments addressed the role of connections between relationships. The papers highlighted the significance of relationships and directed attention to how relationships are related. These connections were conceptualised in terms of relationship portfolios and networks. A fourth theme that emerged regarded technological aspects in business relationships. Technology was central in IMP1 and surfaced as significant at the seminar since almost half of the papers showed some connection to technology. Technological issues appear crucial for both seller and buyer in almost any B2B situation, thus making the technological content of a business relationship substantial and often critical. Moreover, the significant role of business relationships for technical development was shown to be an important conclusion of the IMP1 study. The themes of the papers at the conference are presented in Table II. Regarding the participation of research groups, those involved in the IMP project provided the most attendees at the conference: Manchester 6, Uppsala 5, and Lyon 4. The papers presented by various research groups are shown in Table III. Here we included the research groups involved in the IMP project, as well as those that later evolved as significant parts of the IMP community.
5.1 Activities
During this period the most important activity besides the conferences was the joint international study IMP 2. This project involved researchers from the five European countries participating in IMP1, and scholars from several other countries.
European researchers from The Netherlands, Norway and Poland joined the group and the involvement of researchers from the USA, Japan and Australia made the project increasingly global. Thus, the empirical base expanded substantially and provided opportunities for analysis of cultural and regional differences. The underlying reason for launching the second IMP study was a feeling of dissatisfaction among some of the researchers regarding the way the business environment, or context, was treated in the first study. The main attention had been directed to the development of dyadic business relationships, involving two active and reflective counterparts. The context of the two had been registered but not given any structural dimension or role. However, in the empirical descriptions there were ample examples showing that a specific customer-supplier relationship was related to other relationships. Thus, the environment of the individual relationship was not diffuse or atomistic, but featured specific other relationships. Consequently, rather than explaining the development of each relationship only through the actions of the two focal firms, there seemed to be reasons to investigate the context in terms of interdependencies in relation to other connected relationships. Five types of interdependencies were analysed, concerning technology and knowledge, as well as social, administrative and legal aspects. Relationships were found to be important means for handling these interdependencies and it was evident that a business relationship had to be seen as an element embedded in a network of relationships. The outcome of the project was presented in another IMP book (Hakansson and Snehota, 1995). The most important theoretical result from the study is an analytical scheme for examining the development effects of relationships, best known as the ARA model (Figure 3). The basic dimensions of this model were used for structuring the empirical observations in the study.
The book contains chapters dealing with the functioning of relationships as activity links, resource ties, and actor bonds. Each chapter is based on cases that illustrate the conceptual development. The cases represent joint undertakings and are authored by research groups at Uppsala, Lyon, Poznan, Penn State, Eindhoven, Bath and Chalmers in Gothenburg. The study shows that connections within a relationship enabled enhancement of the internal structures of the involved companies, as well as the collective activity pattern, resource constellation and web of actors in the entire network. The cases revealed that relationships are costly, but can provide positive effects for (1) the dyad - the two actors seen as a "team", (2) individually for each of the two involved actors, and (3) third parties, connected to the two. Thus, research findings suggest that business relationships play a central role for positive economic outcomes for single firms, but even more in a collective way for networks of companies. These results led to the conclusion that well-functioning, connected business relationships represent important economic phenomena that need to be considered in any business analysis. Thus, the conclusions from the first IMP study were strengthened and further developed. The book also contains contributions from scholars advocating other analytical approaches, which are compared with the IMP view. These sections deal with technological development (University of Groningen) and the transaction cost approach (Norwegian School of Economics and Business Administration in Bergen). Another research project aiming at a joint publication involving researchers from the UK and Sweden was organised in the late 1980s. After four years and three seminars in different countries, a book was published in 1992 (Axelsson and Easton, 1992).
This book contains 13 chapters, written by 11 authors in various combinations, representing Uppsala University, Lancaster University, Stockholm School of Economics, Huddersfield Polytechnic and Chalmers University in Gothenburg. Five of the chapters were co-authored, some across university and country boundaries. The book's second chapter presented "A model of industrial networks" by Hakan Hakansson and Jan Johanson. Furthermore, an important project for research visibility was the establishment of a journal based on IMP research. The journal, Industrial Marketing & Purchasing, was launched in 1986 with Peter Turnbull as the Editor and MCB University Press as the publisher, and lasted for three years. There were also some other collaborative projects initiated with significant future effects. One research programme in Sweden involving Uppsala University and Stockholm School of Economics resulted in important conclusions regarding the technological dimension of business networks. In the middle of the 1990s, collaboration was established between Chalmers University of Technology, Uppsala University, and Trondheim University, with the focus on case studies analysing single companies and the role of their local and global networks. This joint research programme established links among the three universities that still remain in 2018. Regarding conferences, the first Manchester seminar was not intended to be followed by other arrangements. However, the idea of a joint international seminar turned out to be fruitful and from 1985 the IMP conference became an annual event (except for 1987 when no conference was organised). The first conferences were arranged by institutions involved in IMP1, but from 1989 other universities entered as hosts. In total, 12 conferences were organised in Period 1, with an average of 66 papers per conference (see Table AII).
5.2 Resources
Several books and papers published during this period have been highly cited.
No fewer than ten publications accounted for more than 1,000 citations in 2013 - see Table IV. The top-cited publication was the book edited by Hakan Hakansson and Ivan Snehota that reported the IMP 2 project. It is noteworthy that no fewer than six of the ten most cited works are publications in books (Table V). In this period, Routledge was the main publisher of IMP books. Of the most cited publications, the books by Hakansson and Snehota, and Axelsson and Easton, were presented above. In the paper by Anderson et al. (1994), the focus is on the network effects of changes in dyadic relationships. These effects can be both positive (constructive effects) and negative (deleterious effects). The positive effects can appear in other actors' resources, in other actors' activities and in relation to the perception of other actors. The same is the case for the negative effects. Both positive and negative effects are illustrated in two network cases and were shown to be related to cooperation and commitment. The paper by Wilson (1995) presents an integrated model of buyer-seller relationships that "blends the empirical knowledge about successful relationship variables with conceptual process models". The 13 relationship variables include, among others, trust, commitment, adaptations, mutual goals and interdependencies. The process model involves five stages: partner selection, definition of purpose, setting relationship boundaries, creating relationship value, and maintaining relationships. The paper provides research directions on the concept and model levels, as well as for process research, and concludes with managerial implications. The 1988 publication by Johanson and Mattsson is a book chapter aimed at illustrating the usefulness of the network approach in the analysis of internationalisation.
The authors develop a 2x2 matrix with the degree of internationalisation of the firm and the degree of internationalisation of the market as the two dimensions, each of which can score low or high. Each cell in the matrix defines one form of internationalisation with specific features and its particular situation regarding advantages and disadvantages. This network approach is then compared with the theory of internalisation and the Uppsala internationalisation model. The paper by Hakansson and Snehota (1989) provides a theoretical discussion of how a company can handle strategic issues within an environment characterised by continuous interactions with counterparts. Three central issues of the mainstream strategic management doctrine are discussed from the viewpoint of the network model: organisational boundaries, determinants of organisational effectiveness and the process of managing business strategy. The main conclusion is that a network context requires alternative assumptions in all three aspects. In a networked environment, the focus of management has to shift from internal resources and structures towards relating the company's own activities and resources to those of important counterparts. Hakansson (ed. 1987) is a joint publication by five authors based on a research programme focussed on technological development in the steel industry network. Chapters are devoted to process and product development, to the importance of supplier relationships and to the role of personal networks among technicians. The basic network is identified as the web of contacts and relationships between suppliers, customers and other parties in the industry. Altogether, it shows that no firm can embark on a technical innovation without carefully considering how such an effort may affect all others involved. There is an obvious need for coordination of technical research and development among all involved firms. The book by Ford et al.
(1998) is an attempt to apply IMP thinking to managerial issues. The basic aim is to illustrate the consequences for management when the business reality is analysed from a network perspective. The book's main focus is on managing relationships with suppliers and customers. Particular attention is directed to the role of technology in these processes and what strategy actually implies when considered from a network view. In later editions of the book, strategy is conceptualised as an interplay between network pictures, networking and network outcomes. In their 1987 paper, Johanson and Mattsson compare the industrial network approach with transaction cost economics. The basic characteristics of the two approaches are described and analysed regarding theoretical foundation, problem orientation, basic concepts, system delimitation and the nature of relationships. Furthermore, the authors provide an illustration that contrasts the features of the two approaches in the analysis of the internationalisation of business. Finally, the book edited by Ford (1990, 1997 and 2002) is a volume containing previously published papers by researchers belonging to the IMP community. Looking at the themes of the cited publications, "business relationships" ranked first as the theme accounting for the most publications. "Internationalisation" continued to be well represented and "networks" entered the list of highly cited themes (Table VI). The themes with 5-10 publications in Table VI kept their positions from the period before the first conference. Three new themes emerged in this period: services, research methods and knowledge exchange/learning. The significance of the focus on business relationships was also illustrated in the list of the ten top-cited publications (Table V). Business relationships is the theme in four of the ten publications, of which three appear at the top of the list.
The enhanced attention to networks is illustrated by three publications, while technology/innovation and internationalisation are represented by one each.
5.3 Actors
In this period, the research group at the Department of Business Studies at Uppsala University dominated the research arena and accounted for about one-third of the total number of publications cited more than 100 times, and more than half of the top-cited ones. The distribution of the publications cited more than 100 times across research groups is shown in Table VII. Stockholm School of Economics, Manchester and Bath continued to deliver well-cited research and Lyon was now represented on the list. In this period the horizon of IMP publications expanded considerably. Two newcomers accounted for publications with high citation scores: Penn State University in the USA, and Lancaster University in the UK. Chalmers University in Gothenburg entered the list, followed by other newcomers from Australia (Sydney), the USA (Georgia State), Finland (Helsinki and Turku), and Germany (Karlsruhe). The number of dissertations is another output that describes the activities of research groups. Up to 1998 we identified 57 dissertations related to IMP. The research groups with more than three dissertations are listed in Table VIII. The 14th IMP conference was organised by Turku University in 1998. In total, 108 papers were presented at this conference, which was the second largest number of papers so far. The papers were written by 136 authors from 19 countries. Researchers from Finland were the main contributors, followed by the UK and Sweden. Other countries that were well represented included Germany, France, Norway, The Netherlands, and the USA. These countries together accounted for 82 per cent of all papers. Industrial Marketing Management published a special issue (the first one dealing with IMP) from the 1998 conference (Vol. 28, No. 5), including ten of the papers presented at the conference.
In the editorial, Kristian Moller and Aino Halinen grouped the papers into three interrelated sets. The first set addressed issues related to "network operations and their management". Papers in this group dealt with value generation in business relationships, learning in networks, and the determinants of network competence, i.e. "the skills and qualifications that a firm must master to manage relationships effectively". The papers in the second set examined how "resources are created and managed in buyer and supplier relationships". These papers were concerned with adaptations in business relationships, the role of interfaces with suppliers for productivity and innovation in relationships, and the effects of customer partnering for new product development. The third set focussed on "the organisational and implementation aspects of managing business relationships". The three papers in this group illustrated various aspects of the management of customer relationships: the need for internal coordination in the supplier firm, the functions of a "relationship promoter", and the role of teams and team design in these processes. Business relationships continued to be the main research theme, accounting for one-third of all papers at the conference (Table IX). The table shows that internationalisation and networks also kept their positions as major IMP themes. At this conference, issues related to the customer side of the firm received significant attention, including papers dealing with project marketing, system selling and distribution systems. In a similar vein, the supplier side and purchasing issues were the subjects of several papers. As will be shown later, both the customer and the supplier side accounted for substantial numbers of cited publications in the period following the Turku conference. 
Furthermore, two other themes in Table IX later proved to be significant regarding number of publications, as well as citations: research methods and knowledge exchange/learning. When it comes to research groups, the domestic ones from Helsinki and Turku accounted for the most papers (Table X). Moreover, Oulu University contributed three papers from a newly established research group. The founding research groups at Uppsala, Bath, Manchester and Lyon continued to be well represented, as was the Stockholm School of Economics. The research groups that entered with cited publications in Period 1 also participated with papers at the Turku conference: Lancaster, Chalmers, Karlsruhe and Sydney. The US representatives at Penn State and Georgia State also delivered papers at the conference. So did some research groups that became more established in the period 1999-2012: Copenhagen Business School, Erasmus in Rotterdam, NTNU in Trondheim, and Corvinus in Budapest.
7.1 Activities
The most visible of the important collective efforts in this period was probably the establishment of the IMP Journal. The first issue was launched in February 2006. Three issues were published per year, with a total of 100 papers during the first eight years. From 1 January 2015 the journal was taken over by Emerald. To stimulate contributions to the journal, a new activity was introduced in the form of IMP Journal Seminars. The first seminar was organised in Oslo in 2005 and the second one in Gothenburg the year after. In addition to the objective of stimulating submissions to the IMP Journal, these seminars provided researchers with practice and training in formulating and interpreting reviews. Each seminar was devoted to specific themes. Ten IMP Journal Seminars were organised through to 2013: Oslo, Gothenburg (twice), Trondheim, Lancaster, Padova, Lugano, Uppsala, Marseille and Milan. The IMP webpage was launched in this second period.
Among other things, doctoral dissertations and conference papers can be downloaded from the site. More than 2,700 papers are available, covering the conferences from 2000 onwards. The papers from the previous conferences are located in the library of Manchester Business School. In the second period, there were several attempts to conduct collective research, but none of them succeeded in organising a major joint data collection effort. One of these efforts, involving researchers from Sweden, Norway, the Netherlands and Italy, focussed on the role of resource interfaces in the furniture industry (Baraldi and Bocconcelli, 2001). Moreover, there were several substantial national studies with distinct influence on later publications. One Swedish project focussed on the interplay between science, technological development and business, organised in Uppsala and reported in Hakansson and Waluszewski (2007). Two other studies were conducted in Norway. One of them applied a network perspective on logistics with particular focus on resource combining (Jahre et al., 2006). Another study investigated the business network of the global fishing industry (Olsen, 2012). Finally, a major project in Finland paved the way for the development of a framework for analysis of strategic nets (Moller et al., 2005). After Turku in 1998, 13 conferences followed between 1999 and 2011, distributed across Europe (see Table AII). The average number of papers per conference in this second period amounted to 164 (compared with 66 in the first period). Besides the annual IMP conferences, some related international conferences and workshops were organised. "The Nordic Workshop on Relationship Dynamics", with its centre in Finland, has been organised nine times. Another example is the "IMP Asia Conference", organised seven times and based in Australia.
7.2 Resources
The number of publications generated during Period 2 that had been cited more than 100 times in 2013 was about the same as in Period 1 (105 compared with 101, see Table XI). However, in reality the frequency of citations has increased substantially. What needs to be taken into account is that it takes some years before a publication reaches the level of 100 citations. On average, the time period when the publications from Period 2 were available for citing is 15 years shorter than for those in Period 1. Similarly, the figures for highly cited papers are lower than in Period 1, due to the shorter lifetime of the publications. From this period there is one publication that reached the level of 1,000 citations. The ten most cited publications in the second period are listed in Table XII. In this period, eight of the ten most cited publications appeared in journals. One possible explanation for the difference in comparison with Period 1 is that the basic frameworks had been presented in books published previously. The highly cited papers appeared mainly in the Journal of Business Research (4) and Industrial Marketing Management (3). In several cases, they were part of special issues. The paper by Dubois and Gadde (2002) is an attempt to examine methodological challenges in case research that are not addressed in mainstream textbooks on research methodology. The approach, labelled systematic combining, involves the interplay of two simultaneous processes: one dealing with the matching between business reality and theoretical models and concepts, the other with the direction and redirection of a study through the adjustments of the framework and the empirical case that evolves during the process. The paper also suggests alternative ways to evaluate research quality. Regarding the content of the cited publications, business relationships kept its position as the most common theme in this period (Table XIII).
Networks, together with the papers focussing on the supplier and customer sides, ranked high as they did both in Period 1 and at the Turku conference, while internationalisation accounted for fewer publications than previously. Knowledge exchange/learning and research methods became more significant than before. Of the new themes, value creation showed a particularly significant impact when it comes to citations. The three other emerging themes can be expected to become increasingly important in the future: network pictures, accounting and supply chain management.
7.3 Actors
In this period, substantial changes occurred regarding the citation impact of the various research groups. Uppsala lost its dominant position, because several of the highly cited researchers had moved to chairs at other universities. The three research groups accounting for most of the cited publications were now BI Norwegian Business School in Oslo, Chalmers in Gothenburg and Copenhagen Business School (Table XIV). All research groups with highly cited publications in Period 1 were also represented on the list in Period 2. Thus there is a core of universities with research groups that continuously contribute publications that become highly cited. Furthermore, Trondheim and Oulu, which presented several papers at the Turku conference, now appear on the list of cited papers. Other newcomers on the list are Copenhagen, Erasmus-Rotterdam and Lugano. In all three cases, these advances were due to established researchers moving to these universities. Another aspect of the actor dimension concerns the dissertations presented by the research groups. The number of dissertations we traced increased from 57 in Period 1 to 92 in this period. Uppsala continued to be the main producer of doctoral dissertations, closely followed by BI, Lancaster and Chalmers. Copenhagen and Western Sydney entered the list, reflecting the presence of these universities at the Turku conference.
The other universities also appeared on the list in Period 1, which accentuates the significance of the core groups. Table XV shows the research groups with more than three dissertations. At the Rome conference, 161 papers were presented, a 50 per cent increase compared to the Turku conference. In all, 24 countries delivered papers, a 25 per cent increase. Again, Finland was the country contributing the most papers, closely followed by Sweden and the UK. As always, the hosting country was well represented, so Italy was the fifth country in terms of number of papers. France and Norway substantially increased their participation in comparison with the Turku conference. The three top countries continued to account for a substantial proportion of the papers (48 per cent). Together with the contributions from France, Italy and Norway, they covered 73 per cent of the total number of papers. IMM's special issue from this conference (volume 42, issue 7) included 16 papers. The editorial team (Chiara Cantu, Daniela Corsaro, Renato Fiocca and Annalisa Tunisini) categorised these publications into five groups. The first contained papers dealing with "network structure and its dynamics": competition in business networks, initial relationship development in new ventures, strategising in new ventures, and service network features. The second group involved issues related to "understanding interaction". The papers in this group were concerned with the role of contracts, managing conflict, and assessing and reinforcing internal alignment of new marketing units. The third group was labelled "Actors: Identity and role" and included papers dealing with actor identity in networks, how salespeople facilitate buyers' resource availability, and the changing role of middlemen in distribution networks. The fourth group contained papers on "solutions and value creation."
These contributions were concerned with value co-creation, development and implementation of customer solutions, and the transition from products to solutions. The fifth and final group involved papers on "business behaviour in networks". The three papers in this group dealt with enablers and inhibitors of network capability, analysis of organisational networking behaviour, and joint learning in R&D collaboration. The themes of the papers at the conference are presented in Table XVI. Business relationships continued to be a strong theme, but the top position was now taken by networks, which represented the main theme in around one-third of the papers presented. The position of technology/innovation improved considerably, while the customer and supplier sides, as well as services, were well represented among the themes. The observation from the publications in Period 2 that supply chain management, accounting and network pictures were receiving increasing interest was confirmed by their representation at the conference. Internationalisation, strategy and research methods were still on the list, while market making appeared as a new theme. The representation of research groups at the conference is illustrated in Table XVII. A couple of research groups that had not been so visible at IMP previously were observed at the Rome conference. Since they participated with several papers, they might become more influential in the future. In comparison with the Turku conference, Helsinki was again among the top paper suppliers. Manchester had doubled its paper representation, and even greater increases were shown by BI and Chalmers. Among the research groups from Finland, Oulu continued to manifest a strong presence. Lappeenranta, Tampere and Vasa appeared much stronger than in 1998, while Turku had a smaller representation than when they organised the conference. From Italy, Cattolica and Florence contributed the most papers.
Some research groups clearly showed weaker representation than in 1998. The most prominent example was Bath, but similar tendencies were observed for Karlsruhe and Penn State. In all three cases, the reason was that senior researchers had left the universities. Most other research groups in Table XVII had kept their positions, as their figures are quite comparable between 1998 and 2012. In relation to the citations in the period between 1998 and 2012, Helsinki, BI, Chalmers and Lancaster presented sufficient numbers of papers at the conference to maintain their strong positions. Manchester, Lyon and Oulu can be predicted to improve their positions, considering their conference representation. It will be interesting to observe what happens with publications and citations from the "new" research groups in Finland and Italy, as well as Bordeaux and Marseille in France. Being the organiser of a previous IMP conference seems to have stimulated participation from the research groups in Glasgow and Budapest. It seems more difficult for Bath to keep its position in the near future, since this research group has been reduced substantially, as indicated by the fact that only one paper was presented at the conference. The description of the development of IMP in the activity, resource and actor dimensions provides a distinct image: IMP has evolved into a well-developed research network around common research themes, of which business relationships and business networks are the most significant. A huge number of research actors have appeared in the network, some for limited time periods, others over several decades. New research groups have entered, while those established in the network have expanded over time, implying that new researchers have advanced within these groups. A substantial body of resources in terms of books and journal papers has been produced and used in a collective way.
Research activities such as conferences, seminars, joint projects and the establishment of a dedicated journal have functioned as important network tools. The investigation shows that IMP research activities at various universities and business schools have become increasingly interlinked through joint research programmes, annual conferences and the launch of the specialised journal. The resources dedicated to these research issues, central to IMP, have successively become more substantial in terms of both research input and publications. The number of individuals related to IMP has increased, and research groups have been able to raise additional resources to enable enlargement. Furthermore, the relationships among these research groups have evolved and become stronger through joint arrangements and shared resources. The data collected for this paper enable analysis of this enhanced relatedness, both for individual researchers and for research groups, due to the development of the network. Regarding the connections among individual researchers, we examined the development of joint publications between the two periods studied. Here we relied on the statistics regarding publications cited more than 100 times. For these publications we analysed the distribution of authorship and distinguished between papers that were single-authored, those written by two authors and those co-authored by three or more persons in the two periods (Table XVIII). These figures indicate a substantial development over time regarding collective authorship. The proportion of publications with three or more authors increased from 13 to 38 per cent. The proportion of papers written by two authors was about the same in both periods. Consequently, the proportion of single-authored papers decreased substantially - from 34 to 10 per cent. It seems obvious that individual researchers have adapted to the evolving network and the increased cooperation opportunities it provides.
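The shift in authorship patterns reported in Table XVIII amounts to a simple tally over the highly cited publications. As an illustration only - the function and the sample data below are hypothetical, not the paper's actual records - the shares could be computed as:

```python
from collections import Counter

def authorship_distribution(author_counts):
    """Return the percentage share of single-, dual- and multi-authored
    (three or more) publications, given each publication's author count."""
    def category(n):
        return "single" if n == 1 else "dual" if n == 2 else "three_plus"
    tally = Counter(category(n) for n in author_counts)
    total = len(author_counts)
    return {c: round(100 * tally[c] / total) for c in ("single", "dual", "three_plus")}

# Hypothetical sample: author counts for ten publications
print(authorship_distribution([1, 2, 2, 2, 2, 3, 3, 4, 2, 3]))
# → {'single': 10, 'dual': 50, 'three_plus': 40}
```

Applied to the two sets of publications cited more than 100 times, such a tally yields the per-period proportions compared in the text.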
One important effect of this development is that the research resources in the network become increasingly shared and embedded, in turn implying that the resource interfaces will develop further. The network will benefit, as will the single researcher. Finally, it is an indication of the collective dimension of all knowledge development. In studies of business development in relationships and networks, one significant feature is that the boundaries of companies become increasingly blurred. To analyse these features in the development of a research network, we examined the occurrence of co-authorship across research groups. This analysis was based on the assumption that the more joint co-authorships there are, the less clear the boundaries among the research groups will be. Again we used the publications cited more than 100 times and examined which of them were co-authored by representatives of different research groups. The result for Period 1 is presented in Figure 4. In Period 1, the total number of co-authored publications amounted to 41, involving 23 research groups in 11 countries. Considering Uppsala's dominant position regarding publications in this period, it is not surprising that this university acts as the spider in this network. The most significant connections relate to the Stockholm School of Economics and Chalmers in Gothenburg, but also involve Bath, Lancaster, Chicago and Bocconi in Italy. The UK connection between Bath and Manchester is linked to the USA through Penn State, and through this university to Helsinki Business School. In Finland, four other research groups are connected through joint publications, but there is as yet no link to Helsinki. Two other national co-authorships are not connected to the rest of the network through joint publications - one in the USA and one in Australia. The corresponding analysis for Period 2 resulted in Figure 5. This picture indicates a substantial expansion of co-authorships.
In Period 2, the research groups presented 151 joint publications, compared with 41 in the first period. The authors represented 37 research groups from 14 countries, and in this network no research group is unconnected. When it comes to joint publications, the most significant collaborative efforts occur between BI, Chalmers, Lugano, Bath and Marseille. Uppsala is well connected to these five and constitutes a significant link to Helsinki and the other Finnish research groups, which are now all interlinked. Helsinki, in turn, is an important connection to the University of South Wales, which is related to Georgia State and Copenhagen Business School. The Copenhagen group is strongly connected to German universities, in particular Berlin. The Stockholm School of Economics is also linked with the group of the "big five" at the bottom of the figure and provides the connection to researchers in the Netherlands and Belgium through Erasmus. This group is related to Penn State, which in turn connects with Birmingham. Birmingham provides a link to Manchester and Lyon, which are both related to the big five through co-authorships with Bath and Marseille. These results strongly support the idea that the boundaries of the research groups have become more blurred over time and that cooperative efforts across research groups are important network activities. From personal experience, we have also observed that researchers - juniors as well as seniors - have moved between these groups. In this way, the research groups overlap, and there are a number of researchers who have been involved in several of them. In some cases this means that, to a certain degree, such groups are more oriented toward research groups at other universities than toward groups at their own university. The two figures show a considerable development in the number of co-authorships across research groups. They also suggest that the dynamics of a network involve two different forces.
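Figures 4 and 5 are, in effect, weighted co-authorship graphs: each publication co-authored across groups adds weight to the edge between every pair of groups involved. A minimal sketch of this counting step, using hypothetical publication data rather than the paper's actual records:

```python
from collections import Counter
from itertools import combinations

def group_coauthorship_edges(publications):
    """Count joint publications for every pair of research groups,
    given each publication's set of contributing groups. The counts
    correspond to the line weights in a co-authorship network figure."""
    edges = Counter()
    for groups in publications:
        # Every pair of distinct groups on one publication gains one joint credit
        for pair in combinations(sorted(set(groups)), 2):
            edges[pair] += 1
    return edges

# Hypothetical publications tagged with their authors' research groups
pubs = [
    {"Uppsala", "Chalmers"},
    {"Uppsala", "Stockholm School of Economics"},
    {"Uppsala", "Chalmers", "BI"},
    {"Bath", "Manchester"},
]
print(group_coauthorship_edges(pubs)[("Chalmers", "Uppsala")])  # → 2
```

Groups with no entry in the resulting counter are the unconnected actors, such as the two isolated national co-authorships observed in Figure 4.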
One of these forces increases the relatedness among the actors and makes them more similar. The other creates diversity among actors. This differentiation occurs because all actors are simultaneously involved in other networks and therefore also have to relate to actors outside the focal network. Such dynamics can be observed in the two figures in terms of distinct sub-networks that change over time. These conditions are representative of network dynamics in general. The two forces create tensions among the actors that have to be handled. For these reasons, members of a network do not only develop similarities, which are central features of any group; they also have to become diverse through their efforts to relate the focal network to other networks of importance to them. For a network to develop, both forces are important. Therefore, the actors within a focal network will become increasingly differentiated over time, despite the fact that they continue to be involved in a common, central research theme. The illustration of the IMP group's research development is certainly interesting and thought-provoking for those who have been involved. But the description and analysis of this process is also of general relevance as a representative example of how research ideas can develop and become influential in different ways through becoming a distinct research network. The development of the quite substantive IMP network illustrates some important effects at the collective level and for individuals. Starting with the individual researcher, we can rely on earlier network studies to discuss some positive and negative effects identified as the "three network paradoxes" (Hakansson and Ford, 2002).
Transformation of these paradoxes from the situation of a company to the situation of a research actor leads to the following three paradoxes:

1. A research network is the basis of a research actor's operations, growth and development. But the same network also restricts the freedom of the researcher and may become a cage that imprisons the actor.

2. The relationships of a research actor are, to some extent, the outcome of its own actions. But the researcher is also the outcome of these relationships and what has happened in them.

3. A research actor aims at influencing (and sometimes controlling) the research network. But the more the actor achieves this ambition to control, the less effective and innovative the research network will become.

The first paradox illustrates the situation of a young researcher starting a project in one of the central research groups within the IMP network. The existing network provides many ideas and possibilities, but also some obvious limitations. The contemporary network of activities, resources and established research actors creates a very fruitful environment with plenty of new ideas and opportunities for research. At the same time, this environment also tends to drive research in certain directions because of established, within-group ways of formulating and framing research problems. These conditions are not specific to IMP, but a feature of all research traditions. The availability of dedicated journals and other joint publications, as well as an established peer-review system, are important ingredients in creating this effect. Given the contemporary network around a researcher, there are always research alternatives that are much easier and more favourable than others. These network features also provide established researchers with secure positions and improved opportunities to attract external funding. The second paradox concerns what the research actor needs to do in order to develop.
The researcher will benefit from relationships with specific partners by developing new combinations. In these processes, the actor will try to influence the research partners in ways that are favourable to its own ideas and research activities. But the research partners have the same ambitions, implying that the research actor will be affected by them in turn. Therefore, to become involved in the network and really benefit from others, the actor must accept being influenced by them. This has two effects: developing relationships within the network, as well as in relation to other networks. The network will consequently become more differentiated. Finally, network evolution is the outcome of the joint actions of network actors. It is the combined ambition of all involved actors that drives the development toward both integration and differentiation. Therefore, research actors need to try to influence and control the network. But if one actor becomes too dominant, the developmental force of the network will weaken, especially in the differentiation dimension. Thus, ambitious actors trying to become influential are needed, but if they are too successful, problems arise. Well-functioning networks require several centres - they must be multipolar - to keep up the necessary tension. The three paradoxes together provide an interesting illustration of the positive and negative aspects of network dynamics. First, the network certainly facilitates the research operations of the single researcher, who can build on the existing activities, resources and actors. At the same time, the network constrains and limits what the single researcher can do. The network promotes opportunities, but these opportunities are attained within the borders of a certain frame. The findings of the study also enable discussion of the role of such a substantive research network in relation to the broader scientific landscape.
The IMP network offers an example of how basic ideas are embedded into the larger context in terms of, for example, stability and identity in combination with tension and variety at the interfaces. Researchers belonging to other research networks can observe these features and relate to them. The existence of these network features can be an explanation for the stability that Wuehrer and Smejkal (2013) found with regard to IMP research themes. The network offers both stability and continuity by providing a research base and an important reference point for those working inside and outside the specific network. However, such networks also offer major opportunities for variation through the substantial number of interfaces to other research networks. There are numerous opportunities to develop these interfaces - to combine the basic ideas with complementary and rival ideas and concepts, thus increasing the tension between the actors involved. Stability in combination with increased tension creates strong development forces. In summary, the process of an emergent research network described in this paper illustrates features that are very similar to those found in studies of the development of business networks. The "network" is the outcome of a networking process in which several actors, individually and jointly through research groups, interact and together create a basic structure that remains fluid and powerful. The observed structures and processes are typical from a network point of view, since they include some actors that have been involved for the entire period, while others joined the network over time; some became very stable actors, while others came and left. IMP has been instrumental in building up an impressive empirical base about business relationships in different contexts and with various functions or roles. This empirical base is far from complete - there are always new contexts to investigate.
But the base is already so extensive that it demands further theoretical conceptualisation and model development in order to explain the features and dynamics of the business landscape in more comprehensive ways. Therefore, the empirical base compels the IMP community to continue the research focussed on inter-organisational relationships and to explore its potential theoretical implications. Future IMP research opportunities reside in the continuous combining and recombining of basic empirical phenomena, such as business relationships and network structures, with empirical fields such as internationalisation, innovation, learning and value generation, to derive managerial and policy implications. Such analytical combining efforts require additional empirical studies, preferably in international settings, as well as the development of theoretical constructs and new theoretical frames. In these efforts, IMP researchers should consider the network paradoxes discussed above. First, they should rely on established research networks, but be open to innovative reconsideration through new and further developed combinations. As shown in the description of the Rome conference, several new research phenomena were evolving, such as value creation, key account management and market making, all with their particular requirements for conceptualisation and modelling. Second, researchers should do their best to influence their research partners in favourable directions, but also accept being affected by the partners' ambitions. The analysis of authorship across research groups showed a substantial increase between Period 1 and Period 2. These joint activities are likely to foster such acceptance in a true network spirit. Third, and finally, researchers should strive for influence and control of the network, while at the same time ensuring that no one is allowed to dominate it.
Considering the roots in the first IMP project, it is quite natural that representatives of these research groups have had a strong impact on the development of IMP. However, the analysis of joint publications showed that several new constellations became established in Period 2. In the current Period 3, it is most likely that many of the connections among research groups illustrated in Figure 5 will be marked by even "thicker" lines than in Period 2. Moreover, at the Rome conference, several "new" research groups entered the IMP arena with considerable numbers of papers. Hopefully, these newcomers will contribute to the establishment of additional IMP-related centres.
[SECTION: Design/methodology/approach] The main activity analysed is the annual IMP conference. The development over time is described by comparing three conferences (1984, 1998 and 2012) with regard to the themes of the papers presented. In addition, some joint research projects are described. The most central resources are the research frameworks and findings presented in books and journals. To illustrate this dimension, the authors traced all IMP publications that had been cited more than 100 times in 2013. In the actor layer, the authors investigated the development over time of the distribution of publications and conference presentations across research groups.
[SECTION: Findings] IMP began as an international research project in 1976 and held its first conference in 1984. In 2016, the 33rd annual IMP conference was organised in Poznan. Over these 40 years, IMP expanded substantially, and today a large number of researchers and research groups identify themselves as belonging to the IMP community by applying IMP models and concepts in their research. As shown in this paper, the IMP community's researchers have published numerous books and journal papers that are highly cited. In addition, more than 200 doctoral dissertations are based on the IMP approach. Today, IMP is visible through a journal published by Emerald and through a website (www.impgroup.org). From this website 2,700 papers (mostly from conferences) and 75 books and dissertations can be downloaded. Another sign of increasing interest in the IMP idea is the occurrence of several publications discussing IMP in more or less explicit ways: what IMP is, what it is not, and the characteristics of IMP as a research field. Some of these publications examined papers from specific conferences, such as Gemunden (1997), Easton et al. (2003) and Windischhofer et al. (2004). Others focussed on the researchers in the IMP community and how they are connected through joint publications, for example, Morlacchi et al. (2005) and Henneberg et al. (2007). Some authors, such as Turnbull et al. (1996), Hakansson and Snehota (2000), Wilkinson (2002), Ford and Hakansson (2006a), Mattsson and Johanson (2006), Cova et al. (2009) and Hakansson and Waluszewski (2016) discussed the development of IMP in broader terms. Finally, a number of researchers compared the IMP perspective with other approaches, for example, Johanson and Mattsson (1987), Mattsson (1997), Ford (2004), Ford (2011), Hunt (2013), Olsen (2013), or with the development of industrial marketing in general, such as Backhaus et al. (2011) and Vieira and Brito (2015). 
Some authors were critical of certain aspects of the development of IMP, or of IMP in general. Such publications were presented by Lowe (2001), Harrison (2003), Cova and Salle (2003), Cunningham (2008) and Moller (2013). One particular facet of this criticism is that the creativity characterising the initial IMP research was replaced by "increasing uniformity, repetition and stereotyping of the IMP style in recent years" (Cunningham, 2008, p. 48). Other publications discussing IMP development are listed in "Further reading" after the reference list. All of these publications indicate that researchers take an interest in analysing IMP and its development. The main reason for this attention is that IMP has generated a substance worthy of discussion through its clear research focus over time. For example, in an extensive study of the conferences from 1984 to 2012, Wuehrer and Smejkal (2013) conclude that IMP research features quite strong stability over time. Their bibliometric analysis describes a clear and consistent picture over that entire time span. The core research dimensions have been the same over the years, implying that IMP is characterised by a certain continuity regarding substance and identity. Deep, probing analysis of extensive business relationships was a driving force in the first IMP project, and this phenomenon has continued to attract the main attention in research projects and at conferences. When the first project began, business relationships could not be explained with mainstream theories, and the role and dynamics of this phenomenon continue to represent challenges from a theoretical point of view. From the above description, we can conclude that over four decades there has been a continual evolution of something that can be interpreted as "IMP". Moreover, scholars within and outside the IMP community have identified this phenomenon as interesting enough to analyse and discuss. In this paper, we examine the "IMP substance" in more detail.
One aspect of this examination is to relate to artefacts representing the concrete output of IMP activities: research findings presented in books, journal papers and doctoral dissertations. A second issue of interest is to analyse the features of the processes that created this output. At the outset, IMP was a project involving research groups in five countries. However, during this venture, complementary activities and resources became related to the project. A process was initiated in which many researchers and research groups in several countries became connected through their common research interests and research agendas. Over time, numerous projects - more or less international, but always with international connections - were completed or are currently ongoing. Researchers have met, discussed and found ways to collaborate around an empirical phenomenon: how companies evolve through relatively cooperative business relationships embedded in network constellations. Thus, the development of IMP can be described and analysed as the development of a specific research network containing a multitude of activities, resources and actors embedded in the larger context. It is this specific network that we describe and analyse in the paper. Historical analyses of research tend to focus on one of two aspects. One is to emphasise the researchers - the research community represented by individuals and their roles. The other is to concentrate on the development of ideas in terms of knowledge features and connections to other knowledge fields. This paper follows neither of those routes in the analysis of the progress of IMP. Instead, we try to develop a third route by relying on the tools that were generated within IMP to analyse business development. We know that these tools have been useful in creating new and complementary pictures regarding changes in business.
Therefore, we examine IMP's research development in terms of the evolution of a research network. The analysis is focussed on the interplay and the combinations of activities, resources and actors. Separation of these three layers of business reality is a central approach applied in IMP research (Hakansson and Johanson, 1992). This perspective is identified as the ARA model for the analysis of industrial networks in terms of the activities undertaken by actors through the use of various resources. The focus is on the inter-organisational processes forming the business landscape or, as in this case, the research landscape. The objective of this paper is to provide specific images of the development of IMP as a research network by relying on the ARA model. The main mission is to investigate the features of the actors, activities and resources that were connected and combined during the development of IMP. We begin by describing the process leading to the first IMP study - a joint international research project. After this study, the process became much more complex and multifaceted. Therefore, we rely on a classical research strategy and select some aspects of this process to enable more detailed analysis. This approach will be used to provide one potential answer to the basic question: what is IMP? The answer will be formulated in terms of a research stream from a research network community embedded in a wider network of science, with three layers - researchers (actors), artefacts such as books and articles (resources) and research projects (activities). Once the ARA model was selected as the framing of the paper, the methodological issues focussed on identifying relevant dimensions of activities, resources and actors, as well as the connections between them.

2.1 Activities

The initial activity for what later became recognised as IMP was a joint international research project.
The significant results emanating from this study, launched in 1976 and involving researchers in five European countries, are described in Section 3. A second joint research activity of significance for IMP's evolution was carried out during the years before and after 1990. Central features of this globally oriented project are presented in Section 5. There are also some other large research projects that were organised on a national level within the community of IMP researchers. We bring up those projects that created substantial effects in terms of research results and widely cited publications. The first IMP conference was organised in 1984, in between the two major IMP projects. The annual conference is the most important continual activity organised by the group. It would be too immense a task to illustrate the evolution of IMP over time by describing all the conferences. Therefore, we needed to select some of them to elucidate the IMP development. The first IMP conference, in Manchester in 1984, was the natural starting point. We began working on this paper in the spring of 2013. The most recent conference at that time was the one organised in Rome in 2012; therefore, it was chosen to represent the other end-point of the time scale. Since the Rome conference was the 28th, we selected the 14th IMP conference, organised in Turku, Finland, in 1998, as the natural "mid-point". The three conferences serve as reference points for the activity layer, providing three pictures of IMP's development. The conferences are important meeting points for researchers, where research ideas, research issues and research results are presented and discussed.

2.2 Resources

The most important resources in the IMP network are the IMP-developed research frameworks and the research findings presented in various forms of publications. These resources are both produced and used by the researchers belonging to the IMP community.
The most visible and available resources are the publications in books and journals. These resources can be regarded as inspirational sources, but also as documents of research themes and issues that have been significant over time. It might be problematic to assume that the most cited publications are those that were most influential from a knowledge point of view. Despite that, we use the number of citations as one measure in the resource dimension, because this factor provides an indication of how the actors in the network have used the publications. To identify these publications we relied on the "Publish or Perish" software, which is based on Google Scholar data. Our interest is to provide an account of the most cited of the IMP publications and, more importantly, to assess the broader impact of IMP research in general. Therefore, we traced all the publications that had been cited more than 100 times in August 2013, authored by researchers who were identified as belonging to the "IMP community". To illustrate the development of the resource layer over time, we relied on the time periods defined through the selection of the three conferences for analysis of the activity layer. These demarcation lines created two time periods of IMP development: Period 1, covering the development between the 1984 and the 1998 conferences; and Period 2, covering the development between the 1998 and the 2012 conferences. The phase before the first conference in 1984 is identified as "the start of IMP". We then grouped the publications into four categories according to the number of citations they had received in 2013. The first included the top-scoring publications with more than 1,000 citations. The three other categories represented publications in the intervals of 500-999, 200-499 and 100-199 citations. We also analysed the research themes covered in the publications and the development of these themes over time through detailed examination of abstracts and, in some cases, the full text.
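The two grouping steps described above - assigning each publication to a citation band and to a conference-delimited time period - can be sketched in a few lines of code. This is a minimal illustration of the classification logic only, not the authors' actual procedure; the publication data below are hypothetical placeholders.

```python
def citation_band(citations):
    """Return the citation category used in the analysis (counts as of 2013)."""
    if citations >= 1000:
        return "1,000+"
    if citations >= 500:
        return "500-999"
    if citations >= 200:
        return "200-499"
    if citations >= 100:
        return "100-199"
    return "below threshold"  # fewer than 100 citations: excluded


def period(year):
    """Map a publication year to the conference-based time periods."""
    if year < 1985:
        return "start of IMP"   # before the 1984 Manchester conference
    if year <= 1998:
        return "Period 1"       # between the 1984 and 1998 conferences
    return "Period 2"           # between the 1998 and 2012 conferences


# Illustrative entries only: (title, year, citations in 2013)
publications = [
    ("International Marketing and Purchasing of Industrial Goods", 1982, 2500),
    ("Developing Relationships in Business Networks", 1995, 3000),
    ("Hypothetical mid-period paper", 1991, 350),
]

for title, year, cites in publications:
    print(f"{title}: {period(year)}, band {citation_band(cites)}")
```

Counting publications per band within each period then reduces to tallying the pairs returned by these two functions.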
2.3 Actors
In historical accounts, individual researchers tend to be identified as the most important actors. These actors are clearly visible through the authorship of books and publications. In "true IMP spirit" we supplemented this aspect of the actor level with an analysis of the impact of collective forces, such as the research groups and universities to which the individual actors are connected. This classification caused some problems, since some researchers had moved between universities. We used the affiliation registered on the publication for the classification into research groups, although the actual work might have been conducted several years earlier, and in another research setting. In some cases we merged researchers from the same city into one group, although they might represent different universities in that city. The significance of the research groups is demonstrated through the number of citations their publications achieved. To identify connections between groups, we also analysed co-authorships across research groups and how these joint publications developed over time. Moreover, we provide an account of the number of doctoral dissertations presented by the research groups. The evolution of IMP will, thus, be described as the development of activities, resources and actors over two time periods, where we use the three conferences as reference points (see Figure 1). The description of the two time periods is illustrated by the variables listed above Period 1 in the figure, while the text below Turku shows what aspects of the conferences are presented. It is important to note that the two authors of this paper were involved in the development of IMP - one during the entire period and the other from the first conference. This means that two "insiders" have made all the delimitations and selections, leading to both positive and negative consequences. On the positive side, we were able to use our personal insights and experiences.
The negative consequence is that we cannot claim to be neutral or objective regarding focus and interpretation. We tried to handle these effects of participant observation by using, for example, citations measured by an external search engine and descriptions fetched from other publications as indicators, instead of more personal, softer indicators. The outline of the paper follows the basic structure of Figure 1. We begin by illustrating the start of IMP with a short resume of the initial IMP project, inaugurated in 1976, describing the activities, resources and actors in this phase (Section 3). This is followed by the presentation of the first conference in Manchester 1984 (Section 4), Period 1 ranging between 1985 and 1998 (Section 5), and the conference in Turku 1998 (Section 6). Thereafter, we describe and analyse the development in Period 2 between 1999 and 2013 (Section 7), and finalise the empirical account with the Rome conference (Section 8). Section 9 is devoted to discussion and interpretation of some significant observations regarding the development of the IMP research network. Section 10, the final section, summarises the development of IMP in network terms and brings up aspects related to these dynamics. In addition to the conclusions, some implications and thoughts about IMP's future potential are discussed.

3.1 Network situation at the start
The initiation of the first joint international project can be explained by three factors related to resources, activities and actors, respectively. First, there was dissatisfaction, shared by several European researchers, regarding the availability of research resources in the form of realistic and relevant textbooks and research reports covering the B2B field. The existing literature lacked descriptions, conceptualisations and analyses regarding the buying and selling of industrial goods.
Empirical studies conducted during the 1960s and early 1970s indicated that the research resources available at the time, in terms of mainstream models and concepts, were inadequate both for explaining business reality and for providing normative recommendations. Simply put, the business landscape showed features that were quite different from what was assumed in the contemporary literature, which called for further exploration of this significant phenomenon. Second, in the early 1970s several attempts were made to initiate international cooperation and coordination of research activities. For example, the European Institute for Advanced Studies in Management in Brussels organised international seminars and workshops where young researchers were provided with opportunities to meet for discussions. Through such support for the promotion of collective research activities, new international contacts were stimulated and meeting places were created that were beneficial for spontaneous discussions and coordinative attempts. Third, at the time, business administration and marketing faculties expanded substantially owing to the massive increase in students at universities and the growth of management education. The actor level was affected considerably when vast numbers of young and ambitious researchers were employed and new research units were created. These conditions made it difficult for senior researchers to maintain control of the research operations. Consequently, there was free space for the young actors to use. These three features contributed considerably to the launch of the first IMP project. The initiators were young people in the early phases of their development as researchers. They were internationally oriented and eager to do something jointly with others. They kept senior researchers outside the project ("no professors" was the rule!) while actively encouraging young researchers from several countries to get involved.
They also found a common interest in aggressively challenging the mainstream way of investigating the buying and selling of industrial goods. They all had empirical experience from the research field, and this knowledge indicated features that were difficult to explain with the contemporary theoretical models.

3.2 Activities
As described above, several specific circumstances provided opportunities for the establishment of a joint international research programme. The researchers involved in this programme had no ambitions related to more long-term cooperation, but all of them wanted the programme they implemented to become something unique. However, they certainly had a long way to go to carry through such an undertaking. The first step was to mobilise a group of researchers representing several countries; in the end, researchers from five European countries became involved. The project was directed toward investigating the features of marketing and purchasing of industrial goods in international settings, i.e. characterising international business exchange. On this basis, an extensive study was designed and carried out in industrial companies in the UK, Sweden, West Germany, France and Italy. Both the buying and the selling sides of firms were investigated. The sample of companies was based on selection matrices distinguishing three types of products (raw materials, components and equipment) and three types of technologies (unit production, mass production and process production). For each company, the most important customers and suppliers in the five countries were identified. Thus, for a selling company in Sweden, the most important customers in the UK, West Germany, France and Italy, together with domestic customers in Sweden, were selected for interviews.
For each company, data were collected regarding the handling of individual customers, the history of the relationships, how the business processes had developed over the years, and important events in terms of specific adaptations or projects in these processes. The same type of data-collection procedure was used on the buying side. Furthermore, for each interviewee, a personal attitude study was conducted regarding the perceptions related to the country of the counterpart involved in the specific relationship. Thus, data were collected about the company in general, its sales to (and purchases from) the countries involved, the business relationship with the most important counterpart in each (or at least some) of the countries, and the attitudes of the manager being interviewed about the relationships with business partners in the specific country. The empirical investigation covering the five countries turned out to be a massive undertaking, requiring huge efforts from the project members. Moreover, the openness of companies in relation to researchers varied across the countries. Consequently, both data collection and analysis were time-consuming processes and required large amounts of research resources. In the first book that reported the study (Hakansson, 1982), an account of the methodological issues is provided in Chapter 3. In particular, the group emphasised the processual approach that had been central during the project, as well as the fact that each country study had to find domestic financial resources. Progress, or at least the feeling of progress, was vital. Even when unsolved problems were at hand, the group always tried to continue to the next step in the research process in order to show the members that progress was being made and that the project was evolving. Furthermore, each country group had to report back to their financial sources about progress in the home country. The empirical results were very clear across firms and countries.
All companies were shown to be working within established long-term business relationships with their most important customers and suppliers. The results also showed that the relationships varied in many dimensions, such as the number of persons involved, the occurrence, level and type of adaptations, the duration of the relationship and the handling of monetary terms. These findings strongly contrasted with the contemporary view of marketing and business processes. Therefore, in the introduction to Hakansson (1982, p. 1), the perspective offered in mainstream research and teaching was challenged in four respects: "We challenge the concentration of the industrial buyer behaviour literature on a narrow analysis of a single discrete purchase". "We challenge the view of industrial marketing as the manipulation of marketing mix variables to achieve a response from a passive market". "We challenge the view of an atomistic structure, assuming a large number of buyers and sellers that easily can change business partners". "We challenge the separation which has occurred in analysing either the process of industrial purchasing or of industrial marketing". The basis of the contrasting IMP framing was then formulated in the following ways: "We emphasise the importance of the relationship which exists between buyers and sellers. This relationship is often close and may be long term and involve a complex pattern of interaction". "We believe it necessary to examine the interaction between individual buying and selling firms where either firm may be taking the more active part in the transaction". "We stress the stability of industrial market structures, where the parties know each other well and are aware of others' movements". "We emphasise the similarity of the tasks of the two parties. Industrial marketing can be understood only through simultaneous analysis of both the buying and selling sides of relationships".
The interactive view of industrial marketing and purchasing was formulated in a theoretical model - the interaction model - containing five sets of variables characterising the short-term episodes, the long-term relationship, the involved parties, the interaction atmosphere and the environment (Figure 2). The main part of the 1982 book is devoted to the presentation of 23 company cases. In each of these cases, the most important business relationships of the company are described and analysed from either the marketing or the purchasing side. The cases are organised according to the basic technology of the focal company. The business relationships between companies in the five countries are described in terms of their duration (on average around 13 years), the extensiveness of contact patterns, adaptations of products, adjustments in production or logistics, and the degree of social exchange. The variety among the companies, with regard to their basic technologies and the type of business their products represent, provides a broad view of the business reality. In total, a very detailed and extensive picture is presented, showing how a whole set of industrial companies behave with regard to their selling and buying activities, in which business relationships are basic ingredients. The cases were used as input to the further analysis of significant themes related to the basic variables of the interaction model. Consequently, the themes concerned variation in the processes of interaction, variation in the features of the parties involved in interaction, variation in interaction environments and the atmosphere in which interaction occurred. Fourteen researchers from the participating countries were involved as authors of cases and themes, individually and in various combinations.
The main contribution of the first IMP book is a strong, empirically grounded argument for the importance of including interaction and business relationships in any systematic study of industrial settings.

3.3 Resources
Significant resources activated at the start of IMP were rooted in two research areas, and at Uppsala University in particular the cross-fertilisation of the two was favourable. The first concerned studies of the internationalisation of businesses, while the second related to research on marketing and purchasing issues that increasingly devoted attention to business relationships between customers and suppliers. Based on the assumption that the number of citations reflects the use of research publications, the ten most significant resources before the Manchester conference are listed in Table I. Accordingly, in 2013 four publications accounted for more than 1,000 citations, two of which dealt with internationalisation and two with business relationships. In addition, six journal papers had been cited between 200 and 499 times. Furthermore, nine other publications accounted for more than 100 citations, implying that 19 books and papers presented before the Manchester conference had been cited more than 100 times in 2013. The publications from the 1970s can be seen as major inputs into the project, while those from the early 1980s can be regarded as outcomes of the first IMP project. Throughout this paper, we present short summaries of the publications with more than 1,000 citations. Of the four related to the start of IMP, the 1982 book was described above. The paper by Johanson and Wiedersheim-Paul (1975) was based on four Swedish firms' internationalisation processes. Studying how these firms established themselves in foreign markets (year and institutional set-up) led to the identification of a typical internationalisation process. The standard pattern was that firms started in neighbouring countries (at a short psychic distance).
They also tended to begin with entry forms requiring minor investments. Thus, the internationalisation of a company was found to be a gradual process, both in terms of institutional set-up - from agent to producing subsidiary - and in terms of successive entry into countries at a greater psychic distance. The empirical material in Johanson and Wiedersheim-Paul (1975) was one important base for the development of the "Uppsala model" of the internationalisation process, formulated in Johanson and Vahlne (1977). In this paper, the gradual process that was shown to be so significant in the empirical cases is explained by a basic model connecting two aspects: one related to the state of internationalisation and one concerned with adjustments to changing conditions. The state of internationalisation was defined through the knowledge of the foreign markets and the level of market commitment (resources devoted to the markets). The change aspects of the model related to the current activities of the firm and the decisions to commit resources. The model posits a mutual dependence between the change and state aspects. The paper by Ford (1980) draws attention to buyer-seller relationships, which until then had received scant interest in the literature on industrial marketing and purchasing. In the paper, a five-stage model of business relationship development is launched, from the pre-relationship stage to the "final" one. The features of a relationship in these stages are described and analysed regarding the increasing experience between the parties, the reduction of their uncertainties and of the distance between them, the growth of their commitment, the formal and informal adaptations between the two, and their mutual investments and savings. It is also of interest to analyse the research themes covered in the publications cited more than 100 times and how they later developed.
Considering the text above, it was natural that the categorisation of themes included "internationalisation" and "business relationships", together with studies of marketing (labelled "customer-side focus") and of purchasing and supply management ("supplier-side focus"). Other themes that appeared significant from the beginning were strategy and technology/innovation (combined into one common category). It goes without saying that this classification was not always easy to apply strictly. For example, sometimes it was unclear whether a specific publication should be categorised as business relationship or customer-side focus. However, in the statistics for the two periods of IMP presented below, each publication represents only one theme.

3.4 Actors
Six research institutions were involved in the first IMP project and in the production of the first IMP book: Uppsala University, UMIST in Manchester, Bath University, Lyon Business School, Munich University and ISVOR-Fiat in Italy. (The researchers representing these institutions are listed in Table AI, Appendix 1.) Therefore, it is quite natural that the most cited IMP publications before the 1984 conference emanated from these research groups. Uppsala University contributed the majority of the highly cited publications. Similar conditions characterised the publications with a lower citation rate, some of which were co-authored with researchers at the Stockholm School of Economics. These two groups were connected because some people moved between the universities. The two other significant research communities with highly cited publications were located at Bath and in Manchester. In several cases, the PhD theses of the involved researchers served as important input to the project, which also resulted in several PhD dissertations.
Following the first IMP book, and the joint work IMP researchers devoted to further publications, the Manchester group invited research colleagues to an international workshop in 1984 under the heading "Research Developments in International Marketing". This invitation resulted in the first annual conference. Over time, the conferences became the most visible features of the IMP research stream. In all, 20 papers were presented at the Manchester conference. Full details of these pioneering papers are provided in Appendix 2. The papers involved 30 authors from eight countries - 21 European and nine from overseas. So, already from the beginning, IMP was more than a European affair. Manchester, Uppsala, Lyon and the Stockholm School of Economics contributed the most papers. Among the non-European research groups that later became significant were Penn State in the USA and Sydney in Australia. Japan and Canada were also represented by researchers at the conference. The most important contributions at the Manchester conference came from the two research areas described in the previous section: business relationships and internationalisation. First, business relationships appeared as the main theme, and here the results from the first IMP study provided significant input to the discussions at the seminar. These results were presented in two co-authored books, but also in some journal papers that documented the importance of dyadic business relationships between customers and suppliers[1]. The features and the significance of these relationships, from both marketing and purchasing points of view, were discussed in a number of the papers at the conference, with a particular focus on which new concepts should be used to characterise these relationships and their functions. Second, since the topic of the conference was "Research Developments in International Marketing", it is natural that considerable interest was devoted to internationalisation.
In the debates at the seminar, the Uppsala model of internationalisation[2] played an especially important role. A third theme intensively discussed during the seminar concerned "networks of business relationships" and "network effects". These issues became apparent as several papers and comments addressed the role of connections between relationships. The papers highlighted the significance of relationships and directed attention to how relationships are related. These connections were conceptualised in terms of relationship portfolios and networks. A fourth theme that emerged regarded technological aspects of business relationships. Technology was central in IMP1 and surfaced as significant at the seminar, since almost half of the papers showed some connection to technology. Technological issues appear crucial for both seller and buyer in almost any B2B situation, thus making the technological content of a business relationship substantial and often critical. Moreover, the significant role of business relationships for technical development was shown to be an important conclusion of the IMP1 study. The themes of the papers at the conference are presented in Table II. Regarding the participation of research groups, those involved in the IMP project provided the most attendees at the conference: Manchester 6, Uppsala 5 and Lyon 4. The papers presented by the various research groups are shown in Table III. Here we included the research groups involved in the IMP project, as well as those that later evolved into significant parts of the IMP community.

5.1 Activities
During this period, the most important activity besides the conferences was the joint international study IMP2. This project involved researchers from the five European countries participating in IMP1, as well as scholars from several other countries.
European researchers from The Netherlands, Norway and Poland joined the group, and the involvement of researchers from the USA, Japan and Australia made the project increasingly global. Thus, the empirical base expanded substantially and provided opportunities for analysis of cultural and regional differences. The underlying reason for launching the second IMP study was a feeling of dissatisfaction among some of the researchers regarding the way the business environment, or context, was treated in the first study. The main attention had been directed to the development of dyadic business relationships, involving two active and reflective counterparts. The context of the two had been registered but not given any structural dimension or role. However, in the empirical descriptions there were ample examples showing that a specific customer-supplier relationship was related to other relationships. Thus, the environment of the individual relationship was not diffuse or atomistic, but featured specific other relationships. Consequently, rather than explaining the development of each relationship only through the actions of the two focal firms, there seemed to be reasons to investigate the context in terms of interdependencies in relation to other connected relationships. Five types of interdependencies were analysed, regarding technology and knowledge, as well as social, administrative and legal aspects. Relationships were found to be important means for handling these interdependencies, and it was evident that a business relationship had to be seen as an element embedded in a network of relationships. The outcome of the project was presented in another IMP book (Hakansson and Snehota, 1995). The most important theoretical result from the study is an analytical scheme for examining the development effects of relationships, best known as the ARA model (Figure 3). The basic dimensions of this model were used for structuring the empirical observations in the study.
The book contains chapters dealing with the functioning of relationships as activity links, resource ties and actor bonds. Each chapter is based on cases that illustrate the conceptual development. The cases represent joint undertakings and are authored by research groups at Uppsala, Lyon, Poznan, Penn State, Eindhoven, Bath and Chalmers in Gothenburg. The study shows that connections within a relationship enabled enhancement of the internal structures of the involved companies, as well as of the collective activity pattern, resource constellation and web of actors in the entire network. The cases revealed that relationships are costly, but can provide positive effects for (1) the dyad - the two actors seen as a "team", (2) each of the two involved actors individually, and (3) third parties connected to the two. Thus, the research findings suggest that business relationships play a central role in positive economic outcomes for single firms, but even more so in a collective way for networks of companies. These results led to the conclusion that well-functioning, connected business relationships represent important economic phenomena that need to be considered in any business analysis. Thus, the conclusions from the first IMP study were strengthened and further developed. The book also contains contributions from scholars advocating other analytical approaches, which are compared with the IMP view. These sections deal with technological development (University of Groningen) and the transaction cost approach (Norwegian School of Economics and Business Administration in Bergen). Another research project, aiming at a joint publication involving researchers from the UK and Sweden, was organised in the late 1980s. After four years and three seminars in different countries, a book was published in 1992 (Axelsson and Easton, 1992).
This book contains 13 chapters, written by 11 authors in various combinations, representing Uppsala University, Lancaster University, the Stockholm School of Economics, Huddersfield Polytechnic and Chalmers University in Gothenburg. Five of the chapters were co-authored, some across university and country boundaries. The book's second chapter presented "A model of industrial networks" by Hakan Hakansson and Jan Johanson. Furthermore, an important project for research visibility was the establishment of a journal based on IMP research. The journal, Industrial Marketing & Purchasing, was launched in 1986 with Peter Turnbull as the Editor and MCB University Press as the publisher, and lasted for three years. There were also some other collaborative projects initiated with significant future effects. One research programme in Sweden, involving Uppsala University and the Stockholm School of Economics, resulted in important conclusions regarding the technological dimension of business networks. In the middle of the 1990s, collaboration was established between Chalmers University of Technology, Uppsala University and Trondheim University, with a focus on case studies analysing single companies and the role of their local and global networks. This joint research programme established links among the three universities that still remain in 2018. Regarding conferences, the first Manchester seminar was not intended to be followed by other arrangements. However, the idea of a joint international seminar turned out to be fruitful, and from 1985 the IMP conference became an annual event (except for 1987, when no conference was organised). The first conferences were arranged by institutions involved in IMP1, but from 1989 other universities entered as hosts. In total, 12 conferences were organised in Period 1, presenting an average of 66 papers each (see Table AII).

5.2 Resources
Several books and papers published during this period have been highly cited.
No fewer than ten publications accounted for more than 1,000 citations in 2013 (see Table IV). The top-cited publication was the book edited by Hakan Hakansson and Ivan Snehota that reported the IMP2 project. It is noteworthy that no fewer than six of the ten most cited works were published in books (Table V). In this period, Routledge was the main publisher of IMP books. Of the most cited publications, the books by Hakansson and Snehota, and by Axelsson and Easton, were presented above. In the paper by Anderson et al. (1994), the focus is on the network effects of changes in dyadic relationships. These effects can be both positive (constructive effects) and negative (deleterious effects). The positive effects can appear in other actors' resources, in other actors' activities and in the perception of other actors; the same is the case for the negative effects. Both positive and negative effects are illustrated in two network cases and were shown to be related to cooperation and commitment. The paper by Wilson (1995) presents an integrated model of buyer-seller relationships that "blends the empirical knowledge about successful relationship variables with conceptual process models". The 13 relationship variables include trust, commitment, adaptations, mutual goals and interdependencies, among others. The process model involves five stages: partner selection, definition of purpose, setting relationship boundaries, creating relationship value and maintaining relationships. The paper provides research directions on the concept and model levels, as well as for process research, and concludes with managerial implications. The 1988 publication by Johanson and Mattsson is a book chapter aimed at illustrating the usefulness of the network approach in the analysis of internationalisation.
The authors develop a 2×2 matrix with the degree of internationalisation of the firm and the degree of internationalisation of the market as the two dimensions, each of which can score low or high. Each cell in the matrix defines one form of internationalisation with specific features and its particular situation regarding advantages and disadvantages. This network approach is then compared with the theory of internalisation and the Uppsala internationalisation model. The paper by Hakansson and Snehota (1989) provides a theoretical discussion of how a company can handle strategic issues within an environment characterised by continuous interactions with counterparts. Three central issues of the mainstream strategic management doctrine are discussed from the viewpoint of the network model: organisational boundaries, determinants of organisational effectiveness and the process of managing business strategy. The main conclusion is that a network context requires alternative assumptions in all three aspects. In a networked environment, the focus of management has to shift from internal resources and structures towards relating the company's own activities and resources to those of important counterparts. Hakansson (ed. 1987) is a joint publication by five authors based on a research programme focussed on technological development in the steel industry network. Chapters are devoted to process and product development, to the importance of supplier relationships and to the role of personal networks among technicians. The basic network is identified as the web of contacts and relationships between suppliers, customers and other parties in the industry. Altogether, it shows that no firm can embark on a technical innovation without carefully considering how such an effort may affect all others involved. There is an obvious need for coordination of technical research and development among all involved firms. The book by Ford et al.
(1998) is an attempt to apply IMP thinking to managerial issues. The basic aim is to illustrate the consequences for management when the business reality is analysed from a network perspective. The book's main focus is on managing relationships with suppliers and customers. Particular attention is directed to the role of technology in these processes and what strategy actually implies when considered from a network view. In later editions of the book, strategy is conceptualised as an interplay between network pictures, networking and network outcome. In their 1987 paper, Johanson and Mattsson compare the industrial network approach with transaction cost economics. The basic characteristics of the two approaches are described and analysed regarding theoretical foundation, problem orientation, basic concepts, system delimitation and the nature of relationships. Furthermore, the authors provide an illustration that contrasts the features of the two approaches in the analysis of the internationalisation of business. Finally, the book edited by Ford (1990, 1997 and 2002) is a volume containing previously published papers by researchers belonging to the IMP community. Looking at the themes of the cited publications, "business relationships" was in first position as the theme accounting for the most publications. "Internationalisation" continued to be well represented and "networks" entered the list of highly cited themes (Table VI). The themes with 5-10 publications in Table VI kept their positions from the period before the first conference. Three new themes emerged in this period: services, research methods and knowledge exchange/learning. The significance of the focus on business relationships was also illustrated in the list of the ten top-cited publications (Table V). Business relationships is the theme in four of the ten publications, of which three appear at the top of the list.
The enhanced attention to networks is illustrated by three publications, while technology/innovation and internationalisation are represented by one each.

5.3 Actors

In this period, the research group at the Department of Business Studies at Uppsala University dominated the research arena, accounting for about one-third of the total number of publications cited more than 100 times, and more than half of the top-cited ones. The distribution across research groups of the publications cited more than 100 times is shown in Table VII. Stockholm School of Economics, Manchester and Bath continued to deliver well-cited research, and Lyon was now represented on the list. In this period the horizon of IMP publications expanded considerably. Two newcomers accounted for publications with high citation scores: Penn State University in the USA, and Lancaster University in the UK. Chalmers University in Gothenburg entered the list, followed by other newcomers from Australia (Sydney), the US (Georgia State), Finland (Helsinki and Turku), and Germany (Karlsruhe). The number of dissertations is another output that describes the activities of research groups. Up to 1998 we identified 57 dissertations related to IMP. The research groups with more than three dissertations are listed in Table VIII. The 14th IMP conference was organised by Turku University in 1998. In total, 108 papers were presented at this conference, the second largest number of papers so far. The papers were written by 136 authors from 19 countries. Researchers from Finland were the main contributors, followed by the UK and Sweden. Other countries that were well represented included Germany, France, Norway, The Netherlands, and the USA. These countries together accounted for 82 per cent of all papers. Industrial Marketing Management published a special issue (the first one dealing with IMP) from the 1998 conference (Vol. 28, No. 5), including ten of the papers presented at the conference.
In the editorial, Kristian Moller and Aino Halinen grouped the papers into three interrelated sets. The first set addressed issues related to "network operations and their management". Papers in this group dealt with value generation in business relationships, learning in networks, and the determinants of network competence, i.e. "the skills and qualifications that a firm must master to manage relationships effectively". The papers in the second set examined how "resources are created and managed in buyer and supplier relationships". These papers were concerned with adaptations in business relationships, the role of interfaces with suppliers for productivity and innovation in relationships, and the effects of customer partnering for new product development. The third set focussed on "the organisational and implementation aspects of managing business relationships". The three papers in this group illustrated various aspects of the management of customer relationships: the need for internal coordination in the supplier firm, the functions of a "relationship promoter", and the role of teams and team design in these processes. Business relationships continued to be the main research theme, accounting for one-third of all papers at the conference (Table IX). The table shows that internationalisation and networks also kept their positions as major IMP themes. At this conference, issues related to the customer side of the firm received significant attention, including papers dealing with project marketing, system selling and distribution systems. In a similar vein, the supplier side and purchasing issues were the subjects of several papers. As will be shown later, both the customer and the supplier side accounted for substantial numbers of cited publications in the period following the Turku conference. 
Furthermore, two other themes in Table IX later proved to be significant in terms of both number of publications and citations: research methods and knowledge exchange/learning. When it comes to research groups, the domestic ones from Helsinki and Turku accounted for most papers (Table X). Moreover, Oulu University contributed three papers from a newly established research group. The founding research groups at Uppsala, Bath, Manchester and Lyon continued to be well represented, as was the Stockholm School of Economics. The research groups that entered with cited publications in Period 1 also participated with papers at the Turku conference: Lancaster, Chalmers, Karlsruhe and Sydney. The US representatives at Penn State and Georgia State also delivered papers at the conference. So did some research groups that became more established in the period 1999-2012: Copenhagen Business School, Erasmus in Rotterdam, NTNU in Trondheim, and Corvinus in Budapest.

7.1 Activities

The most visible of the important collective efforts in this period was probably the establishment of the IMP Journal. The first issue was launched in February 2006. Three issues were published per year, totalling 100 papers during the first eight years. From 1 January 2015 the journal was taken over by Emerald. To stimulate contributions to the journal, a new activity was introduced in the form of IMP Journal Seminars. The first seminar was organised in Oslo in 2005 and the second in Gothenburg the year after. In addition to the objective of stimulating submissions to the IMP Journal, these seminars provided researchers with practice and training in formulating and interpreting reviews. Each seminar was devoted to specific themes. Ten IMP Journal Seminars were organised through to 2013: Oslo, Gothenburg (twice), Trondheim, Lancaster, Padova, Lugano, Uppsala, Marseille and Milan. The IMP webpage was launched in this second period.
Among other things, doctoral dissertations and conference papers can be downloaded from the site. More than 2,700 papers are available, covering the conferences from 2000 onwards. The papers from the previous conferences are located in the library of Manchester Business School. In the second period, there were several attempts to conduct collective research, but none of them succeeded in organising a major joint data collection effort. One of these efforts, involving researchers from Sweden, Norway, Holland and Italy, focussed on the role of resource interfaces in the furniture industry (Baraldi and Bocconcelli, 2001). Moreover, there were several substantial national studies with distinct influence on later publications. One Swedish project focussed on the interplay between science, technological development and business, organised in Uppsala and reported in Hakansson and Waluszewski (2007). Two other studies were conducted in Norway. One of them applied a network perspective on logistics, with particular focus on resource combining (Jahre et al., 2006). Another study investigated the business network of the global fishing industry (Olsen, 2012). Finally, a major project in Finland paved the way for the development of a framework for analysis of strategic nets (Moller et al., 2005). After Turku in 1998, 13 conferences followed between 1999 and 2011, distributed across Europe (see Table AII). The average number of papers per conference in this second period amounted to 164 (compared with 66 in the first period). Besides the annual IMP conferences, some related international conferences and workshops were organised. "The Nordic Workshop on Relationship Dynamics", centred in Finland, has been organised nine times. Another example is the "IMP Asia Conference", organised seven times from Australia.
7.2 Resources

The number of publications generated during Period 2 that had been cited more than 100 times in 2013 was about the same as in Period 1 - 105 compared with 101 (Table XI). However, in reality the frequency of citations has increased substantially. What needs to be taken into account is that it takes some years before a publication reaches the level of 100 citations. On average, the publications from Period 2 had been available for citing for 15 years less than those from Period 1. Similarly, the figures for highly cited papers are lower than in Period 1, due to the shorter lifetime of the publications. From this period there is one publication that reached the level of 1,000 citations. The ten most cited publications in the second period are listed in Table XII. In this period, eight of the ten most cited publications appeared in journals. One possible explanation for the difference in comparison with Period 1 is that the basic frameworks had been presented in books published previously. The highly cited papers appeared mainly in the Journal of Business Research (4) and Industrial Marketing Management (3). In several cases, they were part of special issues. The paper by Dubois and Gadde (2002) is an attempt to examine methodological challenges in case research that are not addressed in mainstream textbooks on research methodology. The approach, labelled systematic combining, involves the interplay of two simultaneous processes: one dealing with the matching between business reality and theoretical models and concepts, the other with the direction and redirection of a study through adjustments of the framework and the empirical case that evolves during the process. The paper also suggests alternative ways to evaluate research quality. Regarding the content of the cited publications, business relationships kept its position as the most common theme in this period (Table XIII).
Networks, together with the papers focussing on the supplier and customer sides, ranked high, as they did in Period 1 and at the Turku conference, while internationalisation accounted for fewer publications than previously. Knowledge exchange/learning and research methods became more significant than before. Of the new themes, value creation showed a particularly significant impact when it comes to citations. The three other emerging themes can be expected to become increasingly important in the future: network pictures, accounting and supply chain management.

7.3 Actors

In this period, substantial changes occurred regarding the citation impact of the various research groups. Uppsala lost their dominant position because several of the highly cited researchers had moved to chairs at other universities. The three research groups accounting for most of the cited publications were now BI Norwegian Business School in Oslo, Chalmers in Gothenburg and Copenhagen Business School (Table XIV). All research groups with highly cited publications in Period 1 were also represented on the list in Period 2. Thus there is a core of universities with research groups that continuously contribute publications that become highly cited. Furthermore, Trondheim and Oulu, which presented several papers at the Turku conference, now appear on the list of cited papers. Other newcomers on the list are Copenhagen, Erasmus-Rotterdam and Lugano. In all three cases, these advances were due to established researchers moving to these universities. Another aspect of the actor dimension concerns the dissertations presented by the research groups. The number of dissertations we traced increased from 57 in Period 1 to 92 in this period. Uppsala continued to be the main producer of doctoral dissertations, closely followed by BI, Lancaster and Chalmers. Copenhagen and Western Sydney entered the list, reflecting the presence of these universities at the Turku conference.
The other universities also appeared on the list in Period 1, which accentuates the significance of the core groups. Table XV shows the research groups with more than three dissertations. At the Rome conference, 161 papers were presented, implying a 50 per cent increase compared to the Turku conference. In all, 24 countries delivered papers, representing a 25 per cent increase. Again, Finland was the country contributing the most papers, closely followed by Sweden and the UK. As always, the hosting country was well represented, so Italy was the fifth country in terms of number of papers. France and Norway substantially increased their participation in comparison with the Turku conference. The three top countries continued to account for a substantial proportion of the papers (48 per cent). Together with the contributions from France, Italy and Norway, they covered 73 per cent of the total number of papers. Industrial Marketing Management's special issue from this conference (volume 42, issue 7) included 16 papers. The editorial team (Chiara Cantu, Daniela Corsaro, Renato Fiocca and Annalisa Tunisini) categorised these publications into five groups. The first contained papers dealing with "network structure and its dynamics". These papers addressed competition in business networks, initial relationship development in new ventures, strategising in new ventures, and service network features. The second group involved issues related to "understanding interaction". The papers in this group were concerned with the role of contracts, managing conflict, and assessing and reinforcing internal alignment of new marketing units. The third group was labelled "Actors: Identity and role" and included papers dealing with actor identity in networks, how salespeople facilitate buyers' resource availability, and the changing role of middlemen in distribution networks. The fourth group contained papers on "solutions and value creation".
These contributions were concerned with value co-creation, development and implementation of customer solutions, and the transition from products to solutions. The fifth and final group involved papers on "business behaviour in networks". The three papers in this group dealt with enablers and inhibitors of network capability, analysis of organisational networking behaviour, and joint learning in R&D collaboration. The themes of the papers at the conference are presented in Table XVI. Business relationships continued to be a strong theme, but the top position was now taken by networks, which represented the main theme in around one-third of the papers presented. The position of technology/innovation was considerably improved, while customer and supplier sides, as well as services, were well represented among the themes. The observation from the publications in Period 2 that supply chain management, accounting, and network pictures were receiving increasing interest was confirmed by their representation at the conference. Internationalisation, strategy, and research methods were still on the list, while market making appeared as a new theme. The representation of research groups at the conference is illustrated in Table XVII. A couple of research groups that had not been so visible at IMP previously were observed at the Rome conference. Since they contributed several papers at the conference, they might become more influential in the future. In comparison with the Turku conference, Helsinki was again among the top paper contributors. Manchester had doubled their paper representation, and even greater increases were shown by BI and Chalmers. Among the research groups from Finland, Oulu continued to manifest a strong presence. Lappeenranta, Tampere and Vaasa appeared much stronger than in 1998, while Turku had a smaller representation than when they organised the conference. From Italy, Cattolica and Florence contributed the most papers.
Some research groups were clearly less well represented than in 1998. The most prominent example was Bath, but similar tendencies were observed for Karlsruhe and Penn State. In all three cases, the reason was that senior researchers had left the universities. Most other research groups in Table XVII had kept their positions, as their figures are quite comparable between 1998 and 2012. Judged against their citations in the period between 1998 and 2012, Helsinki, BI, Chalmers and Lancaster presented enough papers at the conference to suggest they can maintain their strong positions. Manchester, Lyon and Oulu can be predicted to improve their positions, considering their conference representation. It will be interesting to observe what happens with publications and citations from the "new" research groups in Finland and Italy, as well as Bordeaux and Marseille in France. Being the organiser of a previous IMP conference seems to have stimulated participation from research groups from Glasgow and Budapest. It seems more difficult for Bath to keep its position in the near future, since this research group has been reduced substantially, as indicated by the fact that only one paper was presented at the conference. The description of the development of IMP in the activity, resource and actor dimensions provides a distinct image: IMP has evolved into a well-developed research network around common research themes, of which business relationships and business networks are the most significant. A large number of research actors have appeared in the network, some for limited time periods, others over several decades. New research groups have entered, while those established in the network have expanded over time, implying that new researchers have advanced within these groups. A substantial body of resources, in terms of books and journal papers, has been produced and used in a collective way.
Research activities such as conferences, seminars, joint projects and the establishment of its own dedicated journal have been functioning as important network tools. The investigation shows that IMP research activities at various universities and business schools have become increasingly interlinked through joint research programmes, annual conferences and the launch of the specialised journal. The resources dedicated to these research issues, central to IMP, have successively become more substantial in terms of both research input and publications. The number of individuals related to IMP has increased, and research groups have been able to raise additional resources to enable enlargement. Furthermore, the relationships among these research groups have evolved and become stronger through joint arrangements and shared resources. The data collected for this paper enables analysis of the enhanced relatedness, both for individual researchers and for research groups, due to the development of the network. Regarding the connections among individual researchers, we examined the development of joint publications between the two periods studied. Here we relied on the statistics regarding publications cited more than 100 times. For these publications we analysed the distribution of authorship and distinguished between papers that were single-authored, those written by two authors and those co-authored by three or more persons in the two periods (Table XVIII). These figures indicate a substantial development over time regarding collective authorship. The proportion of publications with three or more authors increased from 13 to 38 per cent. The proportion of papers written by two authors was about the same in both time periods. Consequently, the proportion of single-authored papers decreased substantially - from 34 to 10 per cent. It seems obvious that individual researchers have adapted to the evolving network and the increased cooperation opportunities it provides.
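The authorship breakdown reported in Table XVIII is a simple percentage tally over publications classified by number of authors. As a minimal sketch of that computation, the following example works from a list of author counts; the data here are purely hypothetical, not the actual Table XVIII records:

```python
from collections import Counter

def authorship_shares(author_counts):
    """Classify publications by number of authors and return percentage shares.

    author_counts: one entry per publication, giving its number of authors
    (hypothetical data, not the actual IMP citation records).
    """
    def category(n):
        if n == 1:
            return "single"
        if n == 2:
            return "two"
        return "three_or_more"

    tally = Counter(category(n) for n in author_counts)
    total = len(author_counts)
    return {cat: round(100 * tally[cat] / total)
            for cat in ("single", "two", "three_or_more")}

# Hypothetical example: ten publications with varying author counts
shares = authorship_shares([1, 2, 2, 3, 3, 4, 2, 1, 3, 2])
# shares -> {"single": 20, "two": 40, "three_or_more": 40}
```

Applied to the actual publication records for each period, the same tally would yield the single-, two- and multi-author percentages reported in Table XVIII.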
One important effect of this development is that the research resources in the network become increasingly shared and embedded, in turn implying that the resource interfaces will develop further. The network will benefit, as will the single researcher. Finally, it is an indication of the collective dimension of all knowledge development. In studies of business development in relationships and networks, one significant feature is that the boundaries of companies become increasingly blurred. To analyse these features in the development of a research network, we examined the occurrence of co-authorships across research groups. This analysis was based on the assumption that the more joint co-authorships there are, the less clear the boundaries among the research groups will be. Again we used the publications cited more than 100 times and examined which of them were co-authored by representatives of different research groups. The result for Period 1 is presented in Figure 4. In Period 1, the total number of co-authored publications amounted to 41, involving 23 research groups in 11 countries. Considering Uppsala's dominant position regarding publications in this period, it is not surprising that this university acts as the spider in this network. The most significant connections relate to the Stockholm School of Economics and Chalmers in Gothenburg, but also involve Bath, Lancaster, Chicago and Bocconi in Italy. The UK connection between Bath and Manchester is linked to the USA through Penn State and through this university to Helsinki Business School. In Finland, four other research groups are connected through joint publications, but there is as yet no link to Helsinki. Two other national co-authorships are not connected to the rest of the network through joint publications - one in the USA and one in Australia. The corresponding analysis for Period 2 resulted in Figure 5. This picture indicates a substantial expansion of co-authorships.
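The mapping behind Figures 4 and 5 amounts to building an undirected graph whose nodes are research groups and whose edges are cross-group co-authorships, then checking which groups remain unconnected. The sketch below illustrates this under invented data; the group names and publications are illustrative only, not the actual IMP co-authorship records:

```python
from itertools import combinations

def coauthorship_graph(publications):
    """Build an undirected co-authorship graph between research groups.

    publications: list of sets, each holding the research groups of one
    publication's authors (illustrative data, not the IMP dataset).
    """
    nodes, edges = set(), set()
    for groups in publications:
        nodes |= groups
        # One edge per pair of groups appearing on the same publication
        for a, b in combinations(sorted(groups), 2):
            edges.add((a, b))
    return nodes, edges

def connected_components(nodes, edges):
    """Return the connected components, used to spot unconnected groups."""
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adjacency[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Illustrative: three cross-group publications leave one cluster separate
pubs = [{"Uppsala", "Chalmers"}, {"Uppsala", "Bath"}, {"Helsinki", "Turku"}]
nodes, edges = coauthorship_graph(pubs)
parts = connected_components(nodes, edges)  # two components
```

For Period 1, such a check would surface the unconnected national clusters noted above; for Period 2 it would return a single component.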
In Period 2, the research groups presented 151 joint publications, compared with 41 in the first period. The authors represented 37 research groups from 14 countries, and in this network no research groups are unconnected. When it comes to joint publications, the most significant collaborative efforts occur between BI, Chalmers, Lugano, Bath and Marseille. Uppsala is well connected to these five and constitutes a significant link to Helsinki and the other Finnish research groups, which are now all interlinked. Helsinki, in turn, is an important connection to the University of South Wales, which is related to Georgia State and Copenhagen Business School. The Copenhagen group is strongly connected to German universities, in particular Berlin. The Stockholm School of Economics is also linked with the group of the 'big five' at the bottom of the figure and provides the connection to researchers in Holland and Belgium through Erasmus. This group is related to Penn State, which in turn connects with Birmingham. Birmingham provides a link to Manchester and Lyon, which are both related to the big five through co-authorships with Bath and Marseille. These results strongly support the idea that the boundaries of the research groups have become more blurred over time and that cooperative efforts across research groups are important network activities. From personal experience, we have also observed that researchers - juniors as well as seniors - have moved between these groups. In this way, the research groups overlap and there are a number of researchers who have been involved in several of them. In some cases this means that, to a certain degree, such groups are more oriented toward research groups at other universities than to groups in their own university. The two figures show a considerable development in the number of co-authorships across research groups. They also suggest that the dynamics of a network involve two different forces.
One of these forces increases the relatedness among the actors and makes them more similar. The other force creates diversity among actors. This differentiation occurs because all actors are simultaneously involved in other networks. Therefore, they also have to relate to actors outside the focal network. Such dynamics can be observed in the two figures in terms of distinct sub-networks that change over time. These conditions are representative of network dynamics in general. The two forces create tensions among the actors that have to be handled. For these reasons, members of a network do not only develop the similarities that are central features of a group; network actors also have to become diverse through their efforts to relate the focal network to other networks of importance to them. For a network to develop, both forces are important. Therefore, the actors within a focal network will become increasingly differentiated over time, despite the fact that they continue to be involved in a common, central research theme. The illustration of the IMP group's research development is certainly interesting and thought-provoking for those who have been involved. But the description and analysis of this process is also of general relevance as a representative example of how research ideas can develop and become influential by forming a distinct research network. The development of the quite substantial IMP network illustrates some important effects at the collective level and for individuals. Starting with the individual researcher, we can rely on earlier network studies to discuss some positive and negative effects, identified as the "three network paradoxes" (Hakansson and Ford, 2002).
Transformation of these paradoxes from the situation of a company to the situation of a research actor leads to the following three paradoxes:

1. A research network is the basis of a research actor's operations, growth and development. But the same network also restricts the freedom of the researcher and may become a cage that imprisons the actor.

2. The relationships of a research actor are, to some extent, the outcome of its own actions. But the researcher is also the outcome of these relationships and what has happened in them.

3. A research actor aims at influencing (and sometimes controlling) the research network. But the more the actor achieves this ambition to control, the less effective and innovative the research network will become.

The first paradox illustrates the situation of a young researcher starting a project in one of the central research groups within the IMP network. The existing network provides many ideas and possibilities, but also some obvious limitations. The contemporary network of activities, resources and established research actors creates a very fruitful environment with plenty of new ideas and opportunities for research. At the same time, this environment also tends to drive research in certain directions because of established, within-group ways of formulating and framing research problems. These conditions are not specific to IMP, but a feature of all research traditions. The availability of dedicated journals and other joint publications, as well as an established peer review system, are important ingredients in creating this effect. Given the contemporary network around a researcher, some research alternatives are always much easier and more favourable than others. These network features also provide established researchers with secure positions and improved opportunities to attract external funding. The second paradox concerns what the research actor needs to do in order to develop.
The researcher will benefit from relationships with specific partners by developing new combinations. In these processes, the actor will try to affect the research partners in ways that are favourable to its own ideas and research activities. But the research partners have the same ambitions, implying that the research actor will be affected by them. Therefore, to be able to become involved in the network and really benefit from others, the actor must accept being influenced by them. This has two effects: relationships develop increasingly within the network, as well as in relation to other networks. The network will consequently become more differentiated. Finally, network evolution is the outcome of the joint actions of network actors. It is the combined ambition of all involved actors that drives the development toward both integration and differentiation. Therefore, research actors need to try to influence and control the network. But if one actor becomes too dominant, the development force of the network will weaken, especially in the differentiation dimension. Thus, ambitious actors trying to become influential are needed, but if they are too successful, problems will arise. Well-functioning networks require several centres - thus being multipolar - to keep up the necessary tension. The three paradoxes together provide an interesting illustration of positive and negative aspects of network dynamics. First, the network certainly facilitates the research operations of the single researcher, who can build on the existing activities, resources and actors. At the same time, the network constrains and limits what the single researcher can do. The network promotes opportunities, but these opportunities are attained within the borders of a certain frame. The findings in the study also enable discussion of the role of such a substantial research network in relation to the broader scientific landscape.
The IMP network offers an example of how basic ideas are embedded into the larger context in terms of, for example, stability and identity in combination with tension and variety in the interfaces. Researchers belonging to other research networks can observe these features and relate to them. The existence of these network features can be an explanation for the stability that Wuehrer and Smejkal (2013) found with regard to IMP research themes. The network offers both stability and continuity by providing a research base and an important reference point for those working inside and outside the specific network. However, such networks also offer major opportunities for variation through the substantial number of interfaces to other research networks. There are numerous opportunities to develop the interfaces - to combine the basic ideas with several complementary and rival ideas and concepts, thus increasing the tension between involved actors. Stability in combination with increased tension creates strong development forces. In summary, the process of an emergent research network, described in this paper, illustrates features that are very similar to those found in studies of the development of business networks. The "network" is an outcome of a networking process in which several actors, individually and jointly through research groups, interact and together create a basic structure that remains fluid and powerful. The observed structures and processes are typical from a network point of view: some actors have been involved for the entire period, while others joined the network over time; some became very stable actors, while others came and left. IMP has been instrumental in building up an impressive empirical base about business relationships in different contexts and with various functions or roles. This empirical base is far from complete - there are always new contexts to investigate.
But the base is already so extensive that it demands further theoretical conceptualisation and model development in order to explain the features and dynamics of the business landscape in more comprehensive ways. Therefore, the empirical base compels the IMP community to continue the research focussed on inter-organisational relationships and to explore potential theoretical implications. Future IMP research opportunities reside in the continuous combining and recombining of basic empirical phenomena, such as business relationships and network structures, with empirical fields such as internationalisation, innovation, learning and value generation, to derive managerial and policy implications. Such analytical combining efforts require additional empirical studies, preferably in international settings, as well as development of theoretical constructs and new theoretical frames. In these efforts, IMP researchers should consider the network paradoxes discussed above. First, they should rely on established research networks, but be open to innovative reconsideration through new and further developed combinations. As shown in the description of the Rome conference, several new research phenomena were evolving, such as value creation, key account management and market making, all with their particular requirements for conceptualisation and modelling. Second, researchers should do their best to affect the research partners in favourable directions, but also accept being affected by their partners' ambitions. The analysis of authorships across research groups showed a substantial increase between Period 1 and Period 2. These joint activities are likely to foster such acceptance in true network spirit. Third, and finally, researchers should strive for influence and control of the network, while at the same time ensuring that no single actor is allowed to dominate the network.
Considering the roots in the first IMP project it is quite natural that representatives of these research groups have had a strong impact on the development of IMP. However, the analysis of joint publications showed that several new constellations became established in Period 2. In the current Period 3 it is most likely that many of these connections among research groups, illustrated in Figure 5, will be marked by even "thicker" lines than in Period 2. Moreover, at the Rome conference, several "new" research groups entered the IMP arena with considerable numbers of papers. Hopefully, these newcomers will contribute to the establishment of additional IMP-related centres.
The paper shows how IMP has evolved into a research network around common themes of which business relationships and networks are the most significant. The activities of various research groups have become increasingly interlinked through joint research programmes, annual conferences and seminars, a website and a dedicated journal.
[SECTION: Value] IMP began as an international research project in 1976 and held its first conference in 1984. In 2016, the 33rd annual IMP conference was organised in Poznan. Over these 40 years, IMP expanded substantially, and today a large number of researchers and research groups identify themselves as belonging to the IMP community by applying IMP models and concepts in their research. As shown in this paper, the IMP community's researchers have published numerous books and journal papers that are highly cited. In addition, more than 200 doctoral dissertations are based on the IMP approach. Today, IMP is visible through a journal published by Emerald and through a website (www.impgroup.org). From this website 2,700 papers (mostly from conferences) and 75 books and dissertations can be downloaded. Another sign of increasing interest in the IMP idea is the occurrence of several publications discussing IMP in more or less explicit ways: what IMP is, what it is not, and the characteristics of IMP as a research field. Some of these publications examined papers from specific conferences, such as Gemunden (1997), Easton et al. (2003) and Windischhofer et al. (2004). Others focussed on the researchers in the IMP community and how they are connected through joint publications, for example, Morlacchi et al. (2005) and Henneberg et al. (2007). Some authors, such as Turnbull et al. (1996), Hakansson and Snehota (2000), Wilkinson (2002), Ford and Hakansson (2006a), Mattsson and Johanson (2006), Cova et al. (2009) and Hakansson and Waluszewski (2016) discussed the development of IMP in broader terms. Finally, a number of researchers compared the IMP perspective with other approaches, for example, Johanson and Mattsson (1987), Mattsson (1997), Ford (2004), Ford (2011), Hunt (2013), Olsen (2013), or with the development of industrial marketing in general, such as Backhaus et al. (2011) and Vieira and Brito (2015). 
Some authors were critical of certain aspects of the development of IMP, or of IMP in general. Such publications were presented by Lowe (2001), Harrison (2003), Cova and Salle (2003), Cunningham (2008) and Moller (2013). One particular facet of this criticism is that the creativity that characterised the initial IMP research was replaced by "increasing uniformity, repetition and stereotyping of the IMP style in recent years" (Cunningham, 2008, p. 48). Other publications discussing IMP development are listed in "Further reading" after the reference list. All of these publications indicate that researchers take an interest in analysing IMP and its development. The main reason for this attention is that IMP has generated a substance worthy of discussion owing to its clear research focus over time. For example, in an extensive study of the conferences from 1984 to 2012, Wuehrer and Smejkal (2013) conclude that IMP research exhibits quite strong stability over time. Their bibliometric analysis describes a clear and consistent picture over that entire time span. The core research dimensions have been the same over the years, implying that IMP is characterised by a certain continuity regarding substance and identity. Deep-probing analysis of extensive business relationships was a driving force in the first IMP project, and this phenomenon has continued to attract the main attention in research projects and at conferences. When the first project began, business relationships could not be explained with mainstream theories, and the role and the dynamics of this phenomenon continue to represent challenges from a theoretical point of view. From the above description, we can conclude that over four decades there has been a continual evolvement of something that can be interpreted as "IMP". Moreover, scholars within and outside the IMP community have identified this phenomenon as interesting enough to analyse and discuss. In this paper, we examine the "IMP substance" in more detail.
One aspect of this examination concerns the artefacts representing concrete output from IMP activities in terms of research findings presented in books, journal papers and doctoral dissertations. A second issue of interest is to analyse the features of the processes that created this output. At the outset, IMP was a project involving research groups in five countries. However, during this venture complementary activities and resources became related to the project. A process was initiated in which many researchers and research groups in several countries became connected through their common research interests and research agendas. Over time, numerous projects - more or less international, but always with international connections - were completed or are currently on-going. Researchers have met, discussed and found ways to collaborate around an empirical phenomenon related to how companies evolve through relatively cooperative business relationships embedded in network constellations. Thus, the development of IMP can be described and analysed as the development of a specific research network containing a multitude of activities, resources and actors embedded into the larger context. It is this specific network that we describe and analyse in the paper. Historical analyses of research tend to focus on two alternative aspects. One is to emphasise the researchers - the research community represented by individuals and their roles. The other is to concentrate on the development of ideas in terms of knowledge features and connections to other knowledge fields. This paper follows neither of those routes in the analysis of the progress of IMP. Instead, we try to develop a third route by relying on the tools that were generated within IMP to analyse business development. We know that these tools have been useful in creating new and complementary pictures regarding changes in business.
Therefore, we examine IMP's research development in terms of the evolution of a research network. The analysis is focussed on the interplay and the combinations of activities, resources and actors. Separation of these three layers of the business reality is a central approach applied in IMP research (Hakansson and Johanson, 1992). This perspective is identified as the ARA model for analysis of industrial networks in terms of the activities undertaken by actors through the use of various resources. The focus is on inter-organisational processes forming the business landscape or, as in this case, the research landscape. The objective of this paper is to provide specific images of the development of IMP as a research network by relying on the ARA model. The main mission is to investigate the features of actors, activities and resources that were connected and combined during the development of IMP. We begin by describing the process leading to the first IMP study - a joint international research project. After this study, the process became much more complex and multifaceted. Therefore, we rely on a classical research strategy to select some aspects of this process to enable more detailed analysis. This approach will be used to provide one potential answer to the basic question, what is IMP? The answer will be formulated in terms of a research stream from a research network community embedded in a wider network of science with three layers - researchers (actors), artefacts such as books and articles (resources) and research projects (activities). Once the ARA model was selected for the framing of the paper, the methodological issues were focussed on identifying relevant dimensions of activities, resources and actors, as well as the connections between them.
2.1 Activities
The initial activity for what later became recognised as IMP was a joint international research project.
The significant results emanating from this study, launched in 1976 and involving researchers in five European countries, are described in Section 3. A second joint research activity of significance for IMP's evolution was carried out during the years before and after 1990. Central features of this globally oriented project are presented in Section 5. There are also some other large research projects that were organised on a national level in the community of IMP researchers. We bring up those projects that created substantial effects in terms of research results and publications that have been widely cited. The first IMP conference was organised in 1984, in-between the two major IMP projects. The annual conference is the most important continual activity organised by the group. It would be too immense a task to try to illustrate the evolution of IMP over time by describing all the conferences. Therefore, we needed to select some of them to elucidate the IMP development. The first IMP conference, in Manchester in 1984, was the natural starting point. We began working with this paper in the spring of 2013. The most recent conference at that time was the one organised in Rome in 2012; therefore, it was chosen to represent the other end-point of the time scale. Since the Rome conference was the 28th, we selected the 14th IMP conference, organised in Turku, Finland, in 1998, as the natural "mid-point". The three conferences serve as reference points for the activity layer, providing three pictures of IMP's development. The conferences are important meeting points for researchers where research ideas, research issues and research results are presented and discussed.
2.2 Resources
The most important resources in the IMP network are the IMP-developed research frameworks and the research findings presented in various forms of publications. These resources are both produced and used by the researchers belonging to the IMP community.
The most visible and available resources are the publications in books and journals. These resources can be regarded as inspirational sources, but also as documents of research themes and issues that have been significant over time. It might be problematic to assume that the publications most cited are those that were most influential from a knowledge point of view. Despite that, we use the number of citations as one measure in the resource dimension because this factor provides an indication of how the actors in the network have used the publications. To identify these publications we relied on the database "Publish or Perish", which is based on Google Scholar. Our interest is to provide an account of the most cited of the IMP publications and, more importantly, to assess the broader impact of IMP research in general. Therefore, we traced all the publications that had been cited more than 100 times in August 2013, authored by researchers who were identified as belonging to the "IMP community". To illustrate the development of the resource layer over time, we relied on the time periods defined through the selection of the three conferences for analysis of the activity layer. These demarcation lines in time created two time periods of IMP development:
Period 1: covering the development between the 1984 and the 1998 conferences.
Period 2: covering the development between the 1998 and the 2012 conferences.
The phase before the first conference in 1984 is identified as "the start of IMP". We then grouped the publications into four categories according to the number of citations they had received in 2013. The first included the top-scoring publications with more than 1,000 citations. The three other categories represented publications in the intervals of 500-999, 200-499 and 100-199 citations. We also analysed the research themes covered in the publications and the development of these themes over time through detailed examination of abstracts and, in some cases, the full text.
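The citation bands and period demarcations described above can be sketched as a small classification routine. This is purely illustrative: the band thresholds and period boundaries follow the text, while the publication records, labels and function names are hypothetical and not the authors' actual procedure.

```python
# Illustrative sketch of the classification described above.
# The citation bands and period boundaries follow the text;
# the publication records below are hypothetical examples.

def citation_band(citations):
    """Return the citation category used in the analysis (counts as of August 2013)."""
    if citations >= 1000:
        return "1,000+"
    if citations >= 500:
        return "500-999"
    if citations >= 200:
        return "200-499"
    if citations >= 100:
        return "100-199"
    return "below threshold"

def period(year):
    """Map a publication year onto the periodisation of the study."""
    if year <= 1984:
        return "start of IMP"   # before the first (Manchester) conference
    if year <= 1998:
        return "Period 1"       # between the 1984 and 1998 conferences
    return "Period 2"           # between the 1998 and 2012 conferences

# Hypothetical records: (label, publication year, citations in August 2013)
publications = [
    ("Book A", 1982, 4000),
    ("Paper B", 1991, 650),
    ("Paper C", 2005, 140),
]

for label, year, cites in publications:
    print(f"{label}: {period(year)}, {citation_band(cites)}")
```

Each publication maps onto exactly one band and one period, which makes the demarcation lines used for the tables of most-cited publications explicit.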
2.3 Actors
In historical accounts, individual researchers tend to be identified as the most important actors. These actors are clearly visible through the authorships of books and publications. In "true IMP spirit" we supplemented this aspect of the actor level with analysis of the impact of collective forces, such as the research groups and universities to which the individual actors are connected. This classification caused some problems since some researchers had moved between universities. We used the affiliation registered on the publication for the classification into research groups, although the actual work might have been conducted several years earlier, and in another research setting. In some cases we merged researchers from the same city into one group, although they might represent different universities in that city. The significance of the research groups is demonstrated through the number of citations their publications achieved. To identify connections between groups we also analysed the co-authorships across research groups and how these joint publications developed over time. Moreover, we provide an account of the number of doctoral dissertations presented by the research groups. The evolvement of IMP will, thus, be described as the development of activities, resources and actors over two time periods, with the three conferences as reference points (see Figure 1). The description of the two time periods is illustrated by the variables listed above Period 1 in the figure, while the text below Turku shows what aspects of the conferences are presented. It is important to note that the two authors of this paper were involved in the development of IMP - one during the entire period and the other from the first conference. This means that two "insiders" have made all the selections and delimitations, with both positive and negative consequences. On the positive side, we were able to use our personal insights and experiences.
On the negative side, we cannot claim to be neutral or objective regarding focus and interpretation. We tried to handle these effects of participant observation by using, for example, citations measured by an external search engine and descriptions fetched from other publications as indicators, instead of more personal and soft-quality indicators. The outline of the paper follows the basic structure of Figure 1. We begin by illustrating the start of IMP with a short resume of the initial IMP project, inaugurated in 1976, by describing activities, resources and actors in this phase (Section 3). This is followed by the presentation of the first conference in Manchester 1984 (Section 4), Period 1 ranging between 1985 and 1998 (Section 5), and the conference in Turku 1998 (Section 6). Thereafter, we describe and analyse the development in Period 2 between 1999 and 2013 (Section 7), and finalise the empirical account with the Rome conference (Section 8). Section 9 is devoted to discussion and interpretation of some significant observations regarding the development of the IMP research network. Section 10, the final section, summarises the development of IMP in network terms and brings up aspects related to these dynamics. In addition to the conclusions, some implications and thoughts about IMP's future potential are discussed.
3.1 Network situation at the start
The initiation of the first joint international project can be explained by three factors related to resources, activities and actors, respectively. First, there was dissatisfaction, shared by several European researchers, regarding the availability of realistic and relevant textbooks and research reports covering the B2B field. The existing literature lacked descriptions, conceptualisations and analyses regarding the buying and selling of industrial goods.
Empirical studies conducted during the 1960s and early 1970s indicated that research resources, in terms of the mainstream models and concepts available at the time, were inadequate both for explaining business reality and for providing normative recommendations. Simply speaking, the business landscape showed features that were quite different from what was assumed in the contemporary literature, which called for further exploration of this significant phenomenon. Second, in the early 1970s several attempts were made to initiate international cooperation and coordination of research activities. For example, the European Institute for Advanced Studies in Management in Brussels organised international seminars and workshops where young researchers were provided with opportunities to meet for discussions. Through such support for the promotion of collective research activities, new international contacts were stimulated and meeting spots were created that were beneficial for spontaneous discussions and coordinative attempts. Third, at the time, business administration and marketing faculties were expanding substantially owing to the massive increase of students at universities and the growth of management education. The actor level was affected considerably when vast numbers of young and ambitious researchers were employed and new research units were created. These conditions made it difficult for senior researchers to maintain control of the research operations. Consequently, there was free space for the young actors to use. These three features contributed considerably to the launch of the first IMP project. The initiators were young people in the early phases of their development as researchers. They were internationally oriented and eager to do something jointly with others. They kept senior researchers outside the project ("no professors" was the rule!) while they actively encouraged young researchers from several countries to get involved.
They also found a common interest in aggressively challenging the mainstream way of investigating the buying and selling of industrial goods. They all had empirical experience from the research field, and this knowledge indicated features that were difficult to explain given the contemporary theoretical models.
3.2 Activities
As described above, several specific circumstances provided opportunities for the establishment of a joint international research programme. The researchers involved in this programme had no ambitions related to more long-term cooperation. But all of them wanted the programme they implemented to become something unique. However, they certainly had a long way to go to carry through such an assignment. The first step was to mobilise a group of researchers representing several countries, and in the end researchers from five European countries became involved. The project was directed toward investigation of the features of marketing and purchasing of industrial goods in international settings, i.e. to characterise international business exchange. On this basis, an extensive study was designed and carried out in industrial companies in the UK, Sweden, West Germany, France and Italy. Both the buying and the selling sides of firms were investigated. The sample of companies was based on selection matrices distinguishing three types of products (raw materials, components and equipment) and three types of technologies (unit production, mass production and process production). The most important customers and suppliers in the five countries were identified for each company. Thus, for a selling company in Sweden, the most important customers in the UK, West Germany, France and Italy, together with domestic customers in Sweden, were selected for interviews.
For each company, data were collected regarding the handling of individual customers, the history of the relationships, how the business processes had developed over the years and important events in terms of specific adaptations or projects in these processes. The same type of procedure for data collection was used on the buying side. Furthermore, for each interviewee, a personal attitude study was conducted regarding the perceptions related to the country of the counterpart involved in the specific relationship. Thus, data were collected about the company in general, its sales to (and purchases from) the countries involved, the business relationship with the most important counterpart in each (or at least some) of the countries and the attitudes of the manager being interviewed about the relationships with business partners in the specific country. The empirical investigation covering the five countries turned out to be a massive undertaking, requiring huge efforts from the project members. Moreover, the openness of companies in relation to researchers varied across the countries. Consequently, both data collection and analysis were time-consuming processes and required large amounts of research resources. In the first book that reported the study (Hakansson, 1982), an account of the methodological issues is provided in Chapter 3. In particular, the group emphasised the processual approach that had been central during the project, as well as the fact that each country study had to find domestic financial resources. Progress, or at least the feeling of progress, was vital. Even when unsolved problems were at hand, the group always tried to continue to the next step in the research process in order to show the members that progress was being made and that the project was evolving. Furthermore, each country group had to report back to their financial sources about progress in the home country. The empirical results were very clear across firms and countries.
All companies were shown to be working within established long-term business relationships with their most important customers and suppliers. The results also showed that the relationships varied in many dimensions, such as number of persons involved, occurrence, level, and type of adaptations, relationship duration and the handling of monetary terms. These findings strongly contrasted with the contemporary view of marketing and business processes. Therefore, in the introduction to Hakansson (1982, p. 1) the perspective offered in mainstream research and teaching was challenged in four respects:
"We challenge the concentration of the industrial buyer behaviour literature on a narrow analysis of a single discrete purchase".
"We challenge the view of industrial marketing as the manipulation of marketing mix variables to achieve a response from a passive market".
"We challenge the view of an atomistic structure, assuming a large number of buyers and sellers that easily can change business partners".
"We challenge the separation which has occurred in analysing either the process of industrial purchasing or of industrial marketing".
The basis of the contrasting IMP framing was then formulated in the following ways:
"We emphasise the importance of the relationship which exists between buyers and sellers. This relationship is often close and may be long term and involve a complex pattern of interaction".
"We believe it necessary to examine the interaction between individual buying and selling firms where either firm may be taking the more active part in the transaction".
"We stress the stability of industrial market structures, where the parties know each other well and are aware of others' movements".
"We emphasise the similarity of the tasks of the two parties. Industrial marketing can be understood only through simultaneous analysis of both the buying and selling sides of relationships".
The interactive view of industrial marketing and purchasing was formulated in a theoretical model - the interaction model - containing five sets of variables characterising the short-term episodes, the long-term relationship, the involved parties, the interaction atmosphere and the environment (Figure 2). The main part of the 1982 book is devoted to the presentation of 23 company cases. In each of these cases, the most important business relationships of the company are described and analysed from either the marketing or the purchasing side. The cases are organised according to the basic technology of the focal company. The business relationships between companies in the five countries are described in terms of duration in time (on average around 13 years), extensiveness in contact patterns, adaptations of products, adjustments in production or logistics, and the degree of social exchange. The variety among companies, with regard to their basic technologies and the type of business their products represent, provides a broad view of the business reality. In total, a very detailed and extensive picture is presented, showing how a whole set of industrial companies behave with regard to their selling and buying activities, in which business relationships are basic ingredients. The cases were used as input to further analysis of significant themes related to the basic variables of the interaction model. Consequently, the themes concerned variation in the processes of interaction, variation in the features of the parties involved in interaction, variation in interaction environments and the atmosphere in which interaction occurred. Fourteen researchers from the participating countries were involved as authors of cases and themes, individually and in various combinations.
The main contribution of the first IMP book is a very strong, empirically grounded argument for the importance of including interaction and business relationships in any systematic study of industrial settings.
3.3 Resources
Significant resources activated at the start of IMP were rooted in two research areas, and the cross-fertilisation of the two was particularly favourable at Uppsala University. The first regarded studies on the internationalisation of businesses, while the second related to research on marketing and purchasing issues that increasingly devoted attention to business relationships between customers and suppliers. Based on the assumption that the number of citations reflects the use of research publications, the ten most significant resources before the Manchester conference are listed in Table I. Accordingly, in 2013 four publications accounted for more than 1,000 citations, two of which dealt with internationalisation and two with business relationships. In addition, six journal papers had been cited between 200 and 500 times. Furthermore, nine other publications accounted for more than 100 citations, implying that 19 books and papers presented before the Manchester conference had been cited more than 100 times in 2013. The publications from the 1970s can be seen as major inputs into the project, while those from the early 1980s can be regarded as outcomes of the first IMP project. Throughout this paper, we present short summaries of the publications with more than 1,000 citations. Of the four related to the start of IMP, the 1982 book was described above. The paper by Johanson and Wiedersheim-Paul (1975) was based on four Swedish firms' internationalisation processes. Studying how these firms established themselves in foreign markets (year and institutional set-up) led to the identification of a typical internationalisation process. The standard pattern was that firms started in neighbouring countries (at a short psychic distance).
They also tended to begin with entry forms requiring minor investments. Thus, the internationalisation of a company was found to be a gradual process in relation to institutional set-up - from agent to producing subsidiary - and to successive entrance in countries at a larger psychic distance. The empirical material in Johanson and Wiedersheim-Paul (1975) was one important base in the development of the "Uppsala model" of the internationalisation process, formulated in Johanson and Vahlne (1977). In this paper, the gradual process that was shown to be so significant in the empirical cases is explained by a basic model connecting two aspects: one related to the state of internationalisation and one concerned with adjustments to changing conditions. The state of internationalisation was defined through the knowledge of the foreign markets and the level of market commitment (resources devoted to the markets). The change aspects of the model related to the current activities of the firm and the decisions to commit resources. The model prescribes a mutual dependence between the change and state aspects. The paper by Ford (1980) draws attention to buyer-seller relationships that, until then, had received scant interest in the literature on industrial marketing and purchasing. In the paper, a five-stage model of business relationship development is launched, from the pre-relationship stage to the "final" one. The features of a relationship in these stages are described and analysed regarding: the increasing experience between the parties, the reduction of their uncertainties and the distance between them, the growth of their commitment, the formal and informal adaptations between the two and their mutual investments and savings. It is also of interest to analyse the research themes covered in the publications cited more than 100 times and how they later developed. 
Considering the text above, it was natural that the categorisation of themes included "internationalisation" and "business relationships", together with studies of marketing (labelled "customer-side focus") and purchasing and supply management ("supplier-side focus"). Other themes that appeared significant from the beginning were strategy and technology/innovation (combined into one common category). It goes without saying that this classification was not always easy to apply strictly. For example, sometimes it was unclear whether a specific publication should be categorised as business relationship or customer-side focus. However, in the statistics presented below for the two periods of IMP, each publication represents only one theme.
3.4 Actors
Six research institutions were involved in the first IMP project and in the production of the first IMP book: Uppsala University, UMIST in Manchester, Bath University, Lyon Business School, Munich University and ISVOR-Fiat in Italy. (The researchers representing these institutions are listed in Table AI, Appendix 1). Therefore, it is quite natural that the most cited IMP publications before the 1984 conference emanated from these research groups. Uppsala University contributed the majority of the highly cited publications. Similar conditions characterised the publications with a lower citation rate, some of which were co-authored with researchers at the Stockholm School of Economics. These two groups were connected because some people moved between the universities. The two other significant research communities with highly cited publications were located at Bath and in Manchester. In several cases the PhD theses of the involved researchers served as important input to the project, which also resulted in several PhD dissertations.
Following the first IMP book, and the joint work IMP researchers devoted to further publications, the Manchester group invited research colleagues to an international workshop in 1984 under the heading "Research Developments in International Marketing". This invitation resulted in the first annual conference. Over time, the conferences became the most visible features of the IMP research stream. In all, 20 papers were presented at the Manchester conference. Full details of these pioneering papers are provided in Appendix 2. The papers involved 30 authors from eight countries - 21 European and nine coming from overseas. So already from the beginning IMP was more than a European affair. Manchester, Uppsala, Lyon and Stockholm School of Economics contributed the most papers. Among non-European research groups that later became significant were Penn State from the USA and Sydney in Australia. Japan and Canada were also represented by researchers at the conference. The most important contributions at the Manchester conference came from the two research areas described in the previous section: business relationships and internationalisation. First, business relationships appeared as the main theme and here the results from the first IMP study provided significant input to the discussions at the seminar. These results were presented in two co-authored books, but also in some journal papers that documented the importance of dyadic business relationships between customers and suppliers[1]. The features and the significance of these relationships, from both marketing and purchasing points of view, were discussed in a number of the papers at the conference with a particular focus on which new concepts should be used to characterise these relationships and their functions. Second, since the topic of the conference was "Research Developments in International Marketing", it is natural that considerable interest was devoted to internationalisation. 
In the debates at the seminar, the Uppsala model of internationalisation[2] played an especially important role. A third theme intensively discussed during the seminar concerned "networks of business relationships" and "network effects". These issues became apparent as several papers and comments addressed the role of connections between relationships. The papers highlighted the significance of relationships and directed attention to how relationships are related. These connections were conceptualised in terms of relationship portfolios and networks. A fourth theme that emerged regarded technological aspects in business relationships. Technology was central in IMP1 and surfaced as significant at the seminar, since almost half of the papers showed some connection to technology. Technological issues appear crucial for both seller and buyer in almost any B2B situation, thus making the technological content of a business relationship substantial and often critical. Moreover, the significant role of business relationships for technical development was shown to be an important conclusion of the IMP1 study. The themes of the papers at the conference are presented in Table II. Regarding the participation of research groups, those involved in the IMP project provided the most attendees at the conference: Manchester 6, Uppsala 5, and Lyon 4. The papers presented by various research groups are shown in Table III. Here we included the research groups involved in the IMP project, as well as those that later evolved as significant parts of the IMP community.
5.1 Activities
During this period the most important activity besides the conferences was the joint international study IMP 2. This project involved researchers from the five European countries participating in IMP1, and scholars from several other countries.
European researchers from The Netherlands, Norway and Poland joined the group, and the involvement of researchers from the USA, Japan and Australia made the project increasingly global. Thus, the empirical base expanded substantially and provided opportunities for analysis of cultural and regional differences. The underlying reason for launching the second IMP study was a feeling of dissatisfaction among some of the researchers regarding the way the business environment, or context, was treated in the first study. The main attention had been directed to the development of dyadic business relationships, involving two active and reflective counterparts. The context of the two had been registered but not given any structural dimension or role. However, in the empirical descriptions there were ample examples showing that a specific customer-supplier relationship was related to other relationships. Thus, the environment of the individual relationship was not diffuse or atomistic, but featured specific other relationships. Consequently, rather than explaining the development of each relationship only through the actions of the two focal firms, there seemed to be reasons to investigate the context in terms of interdependencies in relation to other connected relationships. Five types of interdependencies were analysed: technological, knowledge-related, social, administrative and legal. Relationships were found to be important means for handling these interdependencies and it was evident that a business relationship had to be seen as an element embedded in a network of relationships. The outcome of the project was presented in another IMP book (Hakansson and Snehota, 1995). The most important theoretical result from the study is an analytical scheme for examining the development effects of relationships, best known as the ARA model (Figure 3). The basic dimensions of this model were used for structuring the empirical observations in the study.
The book contains chapters dealing with the functioning of relationships as activity links, resource ties, and actor bonds. Each chapter is based on cases that illustrate the conceptual development. The cases represent joint undertakings and are authored by research groups at Uppsala, Lyon, Poznan, Penn State, Eindhoven, Bath and Chalmers in Gothenburg. The study shows that connections within a relationship enabled enhancement of the internal structures of the involved companies, as well as the collective activity pattern, resource constellation and web of actors in the entire network. The cases revealed that relationships are costly, but can provide positive effects for (1) the dyad - the two actors seen as a "team", (2) each of the two involved actors individually, and (3) third parties connected to the two. Thus, research findings suggest that business relationships play a central role in positive economic outcomes for single firms, but even more in a collective way for networks of companies. These results led to the conclusion that well-functioning, connected business relationships represent important economic phenomena that need to be considered in any business analysis. Thus, the conclusions from the first IMP study were strengthened and further developed. The book also contains contributions from scholars advocating other analytical approaches, which are compared with the IMP view. These sections deal with technological development (University of Groningen) and the transaction cost approach (Norwegian School of Economics and Business Administration in Bergen). Another research project aiming at a joint publication involving researchers from the UK and Sweden was organised in the late 1980s. After four years and three seminars in different countries, a book was published in 1992 (Axelsson and Easton, 1992).
This book contains 13 chapters, written by 11 authors in various combinations, representing Uppsala University, Lancaster University, Stockholm School of Economics, Huddersfield Polytechnic and Chalmers University in Gothenburg. Five of the chapters were co-authored, some across university and country boundaries. The book's second chapter presented "A model of industrial networks" by Hakan Hakansson and Jan Johanson. Furthermore, an important project for research visibility was the establishment of a journal based on IMP research. The journal, Industrial Marketing & Purchasing, was launched in 1986 with Peter Turnbull as Editor and MCB University Press as publisher, and lasted for three years. There were also some other collaborative projects initiated with significant future effects. One research programme in Sweden involving Uppsala University and Stockholm School of Economics resulted in important conclusions regarding the technological dimension of business networks. In the middle of the 1990s, collaboration was established between Chalmers University of Technology, Uppsala University, and Trondheim University, with the focus on case studies analysing single companies and the role of their local and global networks. This joint research programme established links among the three universities that still remain in 2018. Regarding conferences, the first Manchester seminar was not intended to be followed by other arrangements. However, the idea of a joint international seminar turned out to be fruitful and from 1985 the IMP conference became an annual event (except for 1987, when no conference was organised). The first conferences were arranged by institutions involved in IMP1, but from 1989 other universities entered as hosts. In total, 12 conferences were organised in Period 1, with an average of 66 papers per conference (see Table AII).
5.2 Resources
Several books and papers published during this period have been highly cited.
No less than ten publications accounted for more than 1,000 citations in 2013 - see Table IV. The top-cited publication was the book edited by Hakan Hakansson and Ivan Snehota that reported the IMP 2 project. It is noteworthy that no less than six of the ten most cited works are publications in books (Table V). In this period, Routledge was the main publisher of IMP books. Of the most cited publications, the books by Hakansson and Snehota, and Axelsson and Easton, were presented above. In the paper by Anderson et al. (1994), the focus is on the network effects of changes in dyadic relationships. These effects can be both positive (constructive effects) and negative (deleterious effects). The positive effects can appear in other actors' resources, in other actors' activities and in relation to the perception of other actors. The same is the case for the negative effects. Both positive and negative effects are illustrated in two network cases and were shown to be related to cooperation and commitment. The paper by Wilson (1995) presents an integrated model of buyer-seller relationships that "blends the empirical knowledge about successful relationship variables with conceptual process models". The 13 relationship variables include trust, commitment, adaptations, mutual goals and interdependencies among others. The process model involves five stages: Partner selection, definition of purpose, setting relationship boundaries, creating relationship value, and maintaining relationships. The paper provides research directions on the concept and model levels, as well as for process research and concludes with managerial implications. The 1988 publication by Johanson and Mattsson is a book chapter aimed at illustrating the usefulness of the network approach in analysis of internationalisation. 
The authors develop a 2×2 matrix with the degree of internationalisation of the firm and the degree of internationalisation of the market as the two dimensions, both of which can score low or high. Each cell in the matrix defines one form of internationalisation with specific features and its particular situation regarding advantages and disadvantages. This network approach is then compared with the theory of internalisation and the Uppsala internationalisation model. The paper by Hakansson and Snehota (1989) provides a theoretical discussion of how a company can handle strategic issues within an environment characterised by continuous interactions with counterparts. Three central issues of the mainstream strategic management doctrine are discussed from the viewpoint of the network model: organisational boundaries, determinants of organisational effectiveness and the process of managing business strategy. The main conclusion is that a network context requires alternative assumptions in all three aspects. In a networked environment the focus of management has to shift away from internal resources and structures towards relating the company's own activities and resources to those of important counterparts. Hakansson (ed. 1987) is a joint publication by five authors based on a research programme focussed on technological development in the steel industry network. Chapters are devoted to process and product development, to the importance of supplier relationships and to the role of personal networks among technicians. The basic network is identified as the web of contacts and relationships between suppliers, customers and other parties in the industry. Altogether, it shows that no firm can embark on a technical innovation without carefully considering how such an effort may affect all others involved. There is an obvious need for coordination of technical research and development among all involved firms. The book by Ford et al.
(1998) is an attempt to apply IMP thinking to managerial issues. The basic aim is to illustrate the consequences for management when the business reality is analysed from a network perspective. The book's main focus is on managing relationships with suppliers and customers. Particular attention is directed to the role of technology in these processes and to what strategy actually implies when considered from a network view. In later editions of the book, strategy is conceptualised as an interplay between network pictures, networking and network outcome. In their 1987 paper, Johanson and Mattsson compare the industrial network approach with transaction cost economics. The basic characteristics of the two approaches are described and analysed regarding theoretical foundation, problem orientation, basic concepts, system delimitation and the nature of relationships. Furthermore, the authors provide an illustration that contrasts the features of the two approaches in the analysis of the internationalisation of business. Finally, the book edited by Ford (1990, 1997 and 2002) is a volume containing previously published papers by researchers belonging to the IMP community. Looking at the themes of the cited publications, "business relationships" was in first position as the theme accounting for the most publications. "Internationalisation" continued to be well represented and "networks" entered the list of highly cited themes (Table VI). The themes with 5-10 publications in Table VI kept their positions from the period before the first conference. Three new themes emerged in this period: services, research methods and knowledge exchange/learning. The significance of the focus on business relationships was also illustrated in the list of the ten top-cited publications (Table V). Business relationships is the theme in four of the ten publications, of which three appear at the top of the list.
The enhanced attention to networks is illustrated by three publications, while technology/innovation and internationalisation are represented by one each.
5.3 Actors
In this period, the research group at the Department of Business Studies at Uppsala University dominated the research arena and accounted for about one-third of the total number of publications cited more than 100 times, and more than half of the top-cited ones. The distribution across research groups of the publications cited more than 100 times is shown in Table VII. Stockholm School of Economics, Manchester and Bath continued to deliver well-cited research and Lyon was now represented on the list. In this period the horizon of IMP publications expanded considerably. Two newcomers accounted for publications with high citation scores: Penn State University in the USA, and Lancaster University in the UK. Chalmers University in Gothenburg entered the list, followed by other newcomers from Australia (Sydney), the US (Georgia State), Finland (Helsinki and Turku), and Germany (Karlsruhe). The number of dissertations is another output that describes the activities of research groups. Up to 1998 we identified 57 dissertations related to IMP. The research groups with more than three dissertations are listed in Table VIII. The 14th IMP conference was organised by Turku University in 1998. In total, 108 papers were presented at this conference, which was the second largest number of papers so far. The papers were written by 136 authors from 19 countries. Researchers from Finland were the main contributors, followed by the UK and Sweden. Other countries that were well represented included Germany, France, Norway, The Netherlands, and the USA. These countries together accounted for 82 per cent of all papers. Industrial Marketing Management published a special issue (the first one dealing with IMP) from the 1998 conference (Vol. 28, No. 5), including ten of the papers presented at the conference.
In the editorial, Kristian Moller and Aino Halinen grouped the papers into three interrelated sets. The first set addressed issues related to "network operations and their management". Papers in this group dealt with value generation in business relationships, learning in networks, and the determinants of network competence, i.e. "the skills and qualifications that a firm must master to manage relationships effectively". The papers in the second set examined how "resources are created and managed in buyer and supplier relationships". These papers were concerned with adaptations in business relationships, the role of interfaces with suppliers for productivity and innovation in relationships, and the effects of customer partnering for new product development. The third set focussed on "the organisational and implementation aspects of managing business relationships". The three papers in this group illustrated various aspects of the management of customer relationships: the need for internal coordination in the supplier firm, the functions of a "relationship promoter", and the role of teams and team design in these processes. Business relationships continued to be the main research theme, accounting for one-third of all papers at the conference (Table IX). The table shows that internationalisation and networks also kept their positions as major IMP themes. At this conference, issues related to the customer side of the firm received significant attention, including papers dealing with project marketing, system selling and distribution systems. In a similar vein, the supplier side and purchasing issues were the subjects of several papers. As will be shown later, both the customer and the supplier side accounted for substantial numbers of cited publications in the period following the Turku conference. 
Furthermore, two other themes in Table IX later proved to be significant in terms of both number of publications and citations: research methods and knowledge exchange/learning. When it comes to research groups, the domestic ones from Helsinki and Turku accounted for the most papers (Table X). Moreover, Oulu University contributed three papers from a newly established research group. The founding research groups at Uppsala, Bath, Manchester and Lyon continued to be well represented, as was the Stockholm School of Economics. The research groups that entered with cited publications in Period 1 also participated with papers at the Turku conference: Lancaster, Chalmers, Karlsruhe and Sydney. The US representatives at Penn State and Georgia State also delivered papers at the conference. So did some research groups that became more established in the period 1999-2012: Copenhagen Business School, Erasmus in Rotterdam, NTNU in Trondheim, and Corvinus in Budapest.
7.1 Activities
The most visible of the important collective efforts in this period was probably the establishment of the IMP Journal. The first issue was launched in February 2006. Three issues were published per year, with a total of 100 papers during the first eight years. From 1 January 2015 the journal was taken over by Emerald. To stimulate contributions to the journal, a new activity was introduced in the form of IMP Journal Seminars. The first seminar was organised in Oslo in 2005 and the second one in Gothenburg the year after. In addition to the objective of stimulating submissions to the IMP Journal, these seminars provided researchers with practice and training in formulating and interpreting reviews. Each seminar was devoted to specific themes. Ten IMP Journal Seminars were organised through to 2013: Oslo, Gothenburg (twice), Trondheim, Lancaster, Padova, Lugano, Uppsala, Marseille and Milan. The IMP webpage was launched in this second period.
Among other things, doctoral dissertations and conference papers can be downloaded from the site. More than 2,700 papers are available covering the conferences from 2000. The papers from the previous conferences are located in the library of Manchester Business School. In the second period, there were several attempts to conduct collective research, but none of them succeeded in organising a major joint data collection effort. One of these efforts, involving researchers from Sweden, Norway, Holland and Italy, focussed on the role of resource interfaces in the furniture industry (Baraldi and Bocconcelli, 2001). Moreover, there were several substantial national studies with distinct influence on later publications. One Swedish project focussed on the interplay between science, technological development and business, organised in Uppsala and reported in Hakansson and Waluszewski (2007). Two other studies were conducted in Norway. One of them applied a network perspective on logistics with particular focus on resource combining (Jahre et al., 2006). Another study investigated the business network of the global fishing industry (Olsen, 2012). Finally, a major project in Finland paved the way for the development of a framework for analysis of strategic nets (Moller et al., 2005). After Turku in 1998, 13 conferences followed between 1999 and 2011 distributed across Europe (see Table AII). The average number of papers per conference in this second period amounted to 164 (compared with 66 in the first period). Besides the annual IMP conferences, some related international conferences and workshops were organised. "The Nordic Workshop on Relationship Dynamics" with its centre in Finland has been organised nine times. Another example is the "IMP Asia Conference" organised seven times from Australia. 
7.2 Resources
The number of publications generated during Period 2 that had been cited more than 100 times in 2013 was about the same as in Period 1 - 105 compared with 101 (Table XI). However, in reality the frequency of citations has increased substantially. What needs to be taken into account is that it takes some years before a publication reaches the level of 100 citations. On average, the time period when the publications from Period 2 were available for citing is 15 years shorter than for those in Period 1. Similarly, the figures for highly cited papers are lower than in Period 1, due to the shorter lifetime of the publications. From this period there is one publication that reached the level of 1,000 citations. The ten most cited publications in the second period are listed in Table XII. In this period, eight of the ten most cited publications appeared in journals. One possible explanation for the difference in comparison with Period 1 is that the basic frameworks had been presented in books published previously. The highly cited papers appeared mainly in the Journal of Business Research (4) and Industrial Marketing Management (3). In several cases, they were part of special issues. The paper by Dubois and Gadde (2002) is an attempt to examine methodological challenges in case research that are not addressed in mainstream textbooks on research methodology. The approach, labelled systematic combining, involves the interplay of two simultaneous processes: one dealing with the matching between business reality and theoretical models and concepts, the other with the direction and redirection of a study through adjustments of the framework and the empirical case that evolves during the process. The paper also suggests alternative ways to evaluate research quality. Regarding the content of the cited publications, business relationships kept its position as the most common theme in this period (Table XIII).
Networks, together with the papers focussing on the supplier and customer sides, ranked high as they did in both Period 1 and at the Turku conference, while internationalisation accounted for fewer publications than previously. Knowledge exchange/learning and research methods became more significant than before. Of the new themes, value creation showed a particularly significant impact when it comes to citations. The three other emerging themes can be expected to become increasingly important in the future: network pictures, accounting and supply chain management.
7.3 Actors
In this period, substantial changes occurred regarding the citation impact of the various research groups. Uppsala lost its dominant position, because several of the highly cited researchers had moved to chairs at other universities. The three research groups accounting for most of the cited publications were now BI Norwegian Business School in Oslo, Chalmers in Gothenburg and Copenhagen Business School (Table XIV). All research groups with highly cited publications in Period 1 were also represented on the list in Period 2. Thus there is a core of universities with research groups that continuously contribute publications that become highly cited. Furthermore, Trondheim and Oulu, which presented several papers at the Turku conference, now appear on the list of cited papers. Other newcomers on the list are Copenhagen, Erasmus-Rotterdam and Lugano. In all three cases, these advances were due to established researchers moving to these universities. Another aspect of the actor dimension concerns the dissertations presented by the research groups. The number of dissertations we traced increased from 57 in Period 1 to 92 in this period. Uppsala continued to be the main producer of doctoral dissertations, closely followed by BI, Lancaster and Chalmers. Copenhagen and Western Sydney entered the list, reflecting the presence of these universities at the Turku conference.
The other universities also appeared on the list in Period 1 which accentuates the significance of the core groups. Table XV shows the research groups with more than three dissertations. At the Rome conference, 161 papers were presented, implying a 50 per cent increase compared to the Turku conference. In all, 24 countries delivered papers, representing a 25 per cent increase. Again, Finland was the country contributing the most papers, closely followed by Sweden and the UK. As always, the hosting country was well represented, so Italy was the fifth country when it comes to numbers of papers. France and Norway substantially increased their participation in comparison with the Turku conference. The three top countries continued to account for a substantial proportion of the papers (48 per cent). Together with the contributions from France, Italy and Norway they covered 73 per cent of the total number of papers. IMM's special issue from this conference (volume 42, issue 7) included 16 papers. The editorial team (Chiara Cantu, Daniela Corsaro, Renato Fiocca and Annalisa Tunisini), categorised these publications into five groups. The first contained papers dealing with "network structure and its dynamics". These papers dealt with competition in business networks, initial relationship development in new ventures, strategising in new ventures, and service network features. The second group involved issues related to "understanding interaction". The papers in this group were concerned with the role of contracts, managing conflict, and assessing and reinforcing internal alignment of new marketing units. The third group was labelled "Actors: Identity and role" and included papers dealing with actor identity in networks, how salespeople facilitate buyers' resource availability, and the changing role of middlemen in distribution networks. The fourth group contained papers on "solutions and value creation." 
These contributions were concerned with value co-creation, development and implementation of customer solutions, and the transition from products to solutions. The fifth and final group involved papers on "business behaviour in networks". The three papers in this group dealt with enablers and inhibitors of network capability, analysis of organisational networking behaviour, and joint learning in R&D collaboration. The themes of the papers at the conference are presented in Table XVI. Business relationships continued to be a strong theme, but the top position was now taken by networks, which represented the main theme in around one-third of the papers presented. The position of technology/innovation was considerably improved, while the customer and supplier sides, as well as services, were well represented among the themes. The observation from the publications in Period 2 that supply chain management, accounting, and network pictures were receiving increasing interest was confirmed by their representation at the conference. Internationalisation, strategy, and research methods were still on the list, while market making appeared as a new theme. The representation of research groups at the conference is illustrated in Table XVII. A couple of research groups that had not previously been so visible at IMP were observed at the Rome conference. Since they participated with several papers at the conference, they might become more influential in the future. In comparison with the Turku conference, Helsinki was again among the top paper suppliers. Manchester had doubled its paper representation and even greater increases were shown by BI and Chalmers. Among the research groups from Finland, Oulu continued to manifest a strong presence. Lappeenranta, Tampere and Vaasa appeared much stronger than in 1998, while Turku had a smaller representation than when it organised the conference. From Italy, Cattolica and Florence contributed the most papers.
Some research groups clearly showed a smaller representation than in 1998. The most prominent example was Bath, but similar tendencies were observed for Karlsruhe and Penn State. In all three cases, the reason was that senior researchers had left the universities. Most other research groups in Table XVII had kept their positions, as their figures are quite comparable between 1998 and 2012. In relation to the citations in the period between 1998 and 2012, Helsinki, BI, Chalmers and Lancaster seemed to present adequate numbers of papers at the conference with potential to maintain their strong positions. Manchester, Lyon and Oulu can be predicted to improve their positions, considering their conference representation. It will be interesting to observe what will happen with publications and citations from the "new" research groups in Finland and Italy, as well as Bordeaux and Marseille in France. Being the organiser of a previous IMP conference seems to have stimulated participation from research groups from Glasgow and Budapest. It seems more difficult for Bath to keep its position in the near future since this research group has been reduced substantially, indicated by the fact that only one paper was presented at the conference. The description of the development of IMP in the activity, resource and actor dimensions provides a distinct image: IMP has evolved into a well-developed research network around common research themes, of which business relationships and business networks are the most significant. A huge number of research actors have appeared in the network, some for limited time periods, others over several decades. New research groups have entered, while those established in the network have expanded over time, implying that new researchers have advanced within these groups. A substantial body of resources in terms of books and journal papers has been produced and used in a collective way. 
Research activities such as conferences, seminars, joint projects and the establishment of its own dedicated journal have been functioning as important network tools. The investigation shows that IMP research activities at various universities and business schools have become increasingly interlinked through joint research programmes, annual conferences and the launch of the specialised journal. The resources dedicated to these research issues, central to IMP, have successively become more substantial in terms of both research input and publications. The number of individuals related to IMP has increased and research groups have been able to raise additional resources to enable enlargement. Furthermore, the relationships among these research groups have evolved and become stronger through joint arrangements and shared resources. The data collected for this paper enables analysis of the enhanced relatedness, both for individual researchers and for research groups, due to the development of the network. Regarding the connections among individual researchers, we examined the development of joint publications between the two periods studied. Here we relied on the statistics regarding publications cited more than 100 times. For these publications we analysed the distribution of authorship and distinguished between papers that were single authored, those that were written by two authors and those co-authored by three or more persons in the two periods (Table XVIII). These figures indicate a substantial development over time regarding collective authorships. The proportion of publications with three or more authors increased from 13 to 38 per cent. The proportion of papers written by two authors was about the same in both time periods. Consequently, the proportion of single-authored papers decreased substantially - from 34 to 10 per cent. It seems obvious that individual researchers have adapted to the evolving network and the increased cooperation opportunities it provides. 
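The authorship tally behind Table XVIII is a simple bucket count over the number of authors per publication. As an illustrative sketch (the author counts below are made up, not the study's data), it could be computed like this:

```python
from collections import Counter

def authorship_shares(author_counts):
    """Return the share (in whole per cent) of single-, dual-, and
    multi-authored papers (three or more authors)."""
    buckets = Counter()
    for n in author_counts:
        buckets["single" if n == 1 else "dual" if n == 2 else "multi"] += 1
    total = len(author_counts)
    return {k: round(100 * buckets[k] / total)
            for k in ("single", "dual", "multi")}

# Hypothetical author counts for ten highly cited papers in one period
period = [1, 2, 2, 3, 3, 4, 2, 3, 1, 5]
shares = authorship_shares(period)
print(shares)  # {'single': 20, 'dual': 30, 'multi': 50}
```

Running the same tally over each period's set of highly cited publications yields the kind of per-period comparison reported above.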
One important effect of this development is that the research resources in the network become increasingly shared and embedded; in turn implying that the resource interfaces will develop further. The network will benefit, as will the single researcher. Finally, it is an indication of the collective dimension of all knowledge development. In studies of business development in relationships and networks, one significant feature is that the boundaries of companies become increasingly blurred. To analyse these features in the development of a research network we examined the occurrence of co-authorships across research groups. This analysis was based on the assumption that the more joint co-authorships there are, the less clear the boundaries among the research groups will be. Again we used the publications cited more than 100 times and examined which of them were co-authored by representatives of different research groups. The result for Period 1 is presented in Figure 4. In Period 1, the total number of co-authored publications amounted to 41, involving 23 research groups in 11 countries. Considering Uppsala's dominant position regarding publications in this period, it is not surprising that this university acts as the spider in this network. The most significant connections relate to the Stockholm School of Economics and Chalmers in Gothenburg, but also involve Bath, Lancaster, Chicago and Bocconi in Italy. The UK connection between Bath and Manchester is linked to the USA through Penn State and through this university to Helsinki Business School. In Finland four other research groups are connected through joint publications but there is as yet no link to Helsinki. Two other national co-authorships are not connected to the rest of the network through joint publications - one in the USA and one in Australia. The corresponding analysis for Period 2 resulted in Figure 5. This picture indicates a substantial expansion of co-authorships. 
In Period 2, the research groups presented 151 joint publications, compared with 41 in the first period. The authors represented 37 research groups from 14 countries and in this network no research groups are unconnected. When it comes to joint publications, the most significant collaborative efforts occur among BI, Chalmers, Lugano, Bath and Marseille. Uppsala is well connected to these five and constitutes a significant link to Helsinki and the other Finnish research groups that are now all interlinked. Helsinki, in turn, is an important connection to the University of South Wales which is related to Georgia State and Copenhagen Business School. The Copenhagen group is strongly connected to German universities, in particular Berlin. The Stockholm School of Economics is also linked with the group of the 'big five' at the bottom of the figure and provides the connection to researchers in Holland and Belgium through Erasmus. This group is related to Penn State, which in turn connects with Birmingham. Birmingham provides a link to Manchester and Lyon, which are both related to the big five through co-authorships with Bath and Marseille. These results strongly support the idea that the boundaries of the research groups have become more blurred over time and that cooperative efforts across research groups are important network activities. From personal experience, we have also observed that researchers - juniors as well as seniors - have moved between these groups. In this way, the research groups overlap and there are a number of researchers who have been involved in several of them. In some cases this means that, to a certain degree, such groups are more oriented toward research groups at other universities than to groups in their own university. The two figures show a considerable development in the number of co-authorships across research groups. They also suggest that the dynamics of a network involve two different forces. 
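The cross-group mapping underlying Figures 4 and 5 amounts to counting, for each highly cited publication, the unordered pairs of distinct research groups represented among its authors. A minimal sketch of that counting step, using hypothetical affiliation data rather than the study's actual records:

```python
from collections import Counter
from itertools import combinations

def cross_group_links(publications):
    """Count co-authorship links between distinct research groups.

    `publications` is a list of per-paper author-affiliation lists.
    Each unordered pair of different groups on the same paper adds one
    link; within-group co-authorships are ignored."""
    links = Counter()
    for groups in publications:
        for a, b in combinations(sorted(set(groups)), 2):
            links[(a, b)] += 1
    return links

# Hypothetical affiliations of the authors of three highly cited papers
papers = [
    ["Uppsala", "Chalmers"],
    ["Uppsala", "Uppsala", "Bath"],
    ["Chalmers", "Uppsala"],
]
links = cross_group_links(papers)
print(links)  # Counter({('Chalmers', 'Uppsala'): 2, ('Bath', 'Uppsala'): 1})
```

The resulting link counts can then be drawn as a weighted network, with thicker lines for more frequent pairings, which is how the "thicker lines" comparison between the two periods can be read.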
One of these forces increases the relatedness among the actors and makes them more similar. The other force creates diversity among actors. This differentiation occurs because all actors are simultaneously involved in other networks. Therefore, they also have to relate to actors outside the focal network. Such dynamics can be observed in the two figures in terms of distinct sub-networks that change over time. These conditions are representative of network dynamics in general. The two forces create tensions among the actors that have to be handled. For these reasons, members in a network not only develop similarities that are central features for members in a group. Network actors also have to become diverse through their efforts to relate the focal network to other networks of importance to them. Through these processes the actors in a focal network become increasingly differentiated over time. For a network to develop, both forces are important. Therefore, the actors within a focal network will become increasingly differentiated over time, despite the fact that they continue to be involved in a common, central research theme. The illustration of the IMP group's research development is certainly interesting and thought provoking for those who have been involved. But the description and analysis of this process is also of general relevance as a representative example of how research ideas can develop and become influential in different ways through becoming a distinct research network. The development of the quite substantive IMP network illustrates some important effects for the collective level and for individuals. Starting with the individual researcher, we can rely on earlier network studies to discuss some positive and negative effects identified as the "three network paradoxes" (Hakansson and Ford, 2002). 
Transformation of these paradoxes from the situation of a company to the situation of a research actor leads to the following three paradoxes: (1) a research network is the basis of a research actor's operations, growth and development, but the same network also restricts the freedom of the researcher and may become a cage that imprisons the actor; (2) the relationships of a research actor are, to some extent, the outcome of its own actions, but the researcher is also the outcome of these relationships and what has happened in them; (3) a research actor aims at influencing (and sometimes controlling) the research network, but the more the actor achieves this ambition to control, the less effective and innovative the research network will become. The first paradox illustrates the situation of a young researcher starting a project in one of the central research groups within the IMP network. The existing network provides lots of ideas and possibilities, and also some obvious limitations. The contemporary network of activities, resources and established research actors creates a very fruitful environment with plenty of new ideas and opportunities for research. At the same time, this environment also tends to drive research in certain directions because of established, within-group, ways to formulate and frame research problems. These conditions are not specific to IMP, but a feature of all research traditions. Availability of dedicated journals and other joint publications, as well as an established peer review system are important ingredients in creating this effect. Given the contemporary network around a researcher, there are always alternatives that are much easier and more favourable to research than others. These network features also provide established researchers with secure positions and improved opportunities to attract external funding. The second paradox concerns what the research actor needs to do in order to develop. 
The researcher will benefit from relationships with specific partners by developing new combinations. In these processes, the actor will try to affect the research partners in ways that are favourable to its own ideas and research activities. But the research partners have the same ambitions, implying that the research actor will be affected by them. Therefore, to be able to become involved in the network and really benefit from others, the actor must accept being influenced by them. This will have two effects: relationships develop increasingly within the network, as well as in relation to other networks. The network will consequently become more differentiated. Finally, network evolution is the outcome of joint actions of network actors. It is the combined ambitions of all involved actors that drive the development toward both integration and differentiation. Therefore, research actors need to try to influence and control the network. But if one actor becomes too dominant, the development force of the network will weaken, especially in the differentiation dimension. Thus, ambitious actors trying to become influential are needed, but if they are too successful, problems will arise. Well-functioning networks require several centres - thus being multipolar - to keep up the necessary tension. The three paradoxes together provide an interesting illustration of positive and negative aspects of network dynamics. First, the network certainly facilitates the research operations of the single researcher who can build on the existing activities, resources and actors. At the same time the network constrains and limits what the single researcher can do. The network promotes opportunities, but these opportunities are attained within the borders of a certain frame. The findings in the study also enable discussion of the role of such a substantive research network in relation to the broader scientific landscape. 
The IMP network offers an example of how basic ideas are embedded into the larger context in terms of, for example, stability and identity in combination with tension and variety in the interfaces. Researchers belonging to other research networks can observe these features and relate to them. The existence of these network features can be an explanation for the stability that Wuehrer and Smejkal (2013) found with regard to IMP research themes. The network offers both stability and continuity by providing a research base and an important reference point for those working inside and outside the specific network. However, such networks also offer major opportunities for variation through the substantial number of interfaces to other research networks. There are numerous opportunities to develop the interfaces - to combine the basic ideas with several complementary and rival ideas and concepts, thus increasing the tension between involved actors. Stability in combination with increased tensions creates strong development forces. In summary, the process of an emergent research network, described in this paper, illustrates some features that are very similar to those found in studies of the development of business networks. The "network" is an outcome of a networking process where several actors, individually and jointly through research groups, interact and together create a basic structure that remains fluid and powerful. The observed structures and processes are typical from a network point of view since they include some actors that have been involved for the entire period, while others joined the network over time; some becoming very stable actors, while others came and left. IMP has been instrumental in building up an impressive empirical base about business relationships in different contexts and with various functions or roles. This empirical base is far from complete - there are always new contexts to investigate. 
But the base is already so extensive that it demands further theoretical conceptualisation and model development in order to explain the features and dynamics of the business landscape in more comprehensive ways. Therefore, the empirical base forces the IMP community to continue the research focussed on inter-organisational relationships to explore potential theoretical implications. Future IMP research opportunities reside in the continuous combining and recombining of basic empirical phenomena, such as business relationships and network structures, with empirical fields such as internationalisation, innovation, learning, and value generation, to derive managerial and policy implications. Such analytical combining efforts require additional empirical studies, preferably in international settings, as well as development of theoretical constructs and new theoretical frames. In these efforts, IMP researchers should consider the network paradoxes discussed above. First, they should rely on established research networks, but remain open to innovative reconsideration through new and further developed combinations. As shown in the description of the Rome conference, several new research phenomena were evolving, such as value creation, key account management and market making, all with their particular requirements for conceptualisation and modelling. Second, researchers should do their best to influence their research partners in favourable directions, but also accept being affected by their ambitions. The analysis of authorships across research groups showed a substantial increase between Period 1 and Period 2. These joint activities are likely to foster such acceptance in true network spirit. Third, and finally, researchers should strive for influence and control of the network, and at the same time ensure that no one is allowed to dominate the network. 
Considering the roots in the first IMP project it is quite natural that representatives of these research groups have had a strong impact on the development of IMP. However, the analysis of joint publications showed that several new constellations became established in Period 2. In the current Period 3 it is most likely that many of these connections among research groups, illustrated in Figure 5, will be marked by even "thicker" lines than in Period 2. Moreover, at the Rome conference, several "new" research groups entered the IMP arena with considerable numbers of papers. Hopefully, these newcomers will contribute to the establishment of additional IMP-related centres.
|
The paper provides a detailed illustration of the development of the IMP network. The description of this process is of general relevance as an example of how research ideas can develop and become established in terms of a distinct research network.
|
[SECTION: Purpose] The oldest and strongest emotion of mankind is fear (H.P. Lovecraft). The dawn of the internet era has led several companies to explore new and radical marketing channels. In fact, multi-channel use has become the norm rather than the exception (Frazier, 1999). By adding internet channels, companies hope to increase overall performance, consolidate existing markets and expand into new markets (Geyskens et al., 2002). Unfortunately, internet channels are not without potential problems: internet channels are likely to increase uncertainty about market allegiance, generating real risks for long-term business performance (Geyskens et al., 2002), as well as destroying the value of past investments (Chandy and Tellis, 1998). As Porter (2001, p. 73) promulgates, "it is widely assumed that the internet is cannibalistic [and] will replace [or supplement] all conventional ways of doing business." Furthermore, Trembly (2001) suggests that the ubiquity of the internet will result in lower commissions for sales agents and gradual attrition of sales agents. In concurrence, Stucker (1999) found that "60 percent of the carriers surveyed view the web as at least a moderate threat to the agent distribution system" (p. 8). This finding is corroborated by Hagerty (2005), who suggests that web-based brokerage firms have completely shaken up the real estate industry, thus creating paranoia among traditional real estate agents. Hagerty (2005) describes the traditional real estate agents' perceptions of the internet channel as exasperating, demotivating, and cannibalistic in nature. In their study of the insurance industry, Eastman et al. (2002) found that insurance agents experiencing the addition of an internet channel felt insecure about their jobs. While the internet provides easier communication and increases interactions with customers, the internet also provides multiple options for insurance purchasers, thus increasing the chances of sales cannibalization. 
This trend has been confirmed by extant research, which suggests that the internet will continue to attract consumers, breed uncertainty, and undoubtedly change sales agents' perceptions of job security as well as their job performance (Frazier, 1999; Greenhalgh and Rosenblatt, 1984). Primarily working with economic and financial terms, marketing researchers have conceptualized cannibalization and assert that, in fact, the addition of internet channels may not generate significant cannibalization (Biyalogorsky and Naik, 2003; Chandy and Tellis, 1998; Deleersnyder et al., 2002; Ward and Morganosky, 2000). Others, however, contend that even if the financial impact is inconsequential, the psychological impact of cannibalization can influence marketers' performance outcomes (Geyskens et al., 2002). In this regard, Porter (2001) posits that salespeople fear internet channel additions in anticipation that the internet will cannibalize their sales. In addition, salespeople worry that internet channels will make them outmoded and eventually replace them. Elsewhere, Frazier (1999) contends that, regardless of the extent of actual cannibalization, sales agents' fears concerning cannibalization and the security of their jobs can subdue their efforts, break down long-standing relationships, reduce their commitment, and make them fearful of an uncertain future (see also Ashford et al., 1989; Gerstner and Hess, 1995; Greenhalgh and Rosenblatt, 1984; Jeuland and Shugan, 1983; Lal et al., 1996; McGuire and Staelin, 1983). This negative influence of perceived cannibalization on sales agents' job outcomes and relationships often offsets the potential gains (e.g., increased market penetration and decreased distribution cost) from adding internet channels (Porter, 2001). 
Although past researchers have emphasized the importance of perceived cannibalization as a determinant of the benefits and risks of adding an internet channel and the consequences of job insecurity, empirical research on cannibalization in the marketing literature remains scant, apart from the study by Gulati et al. (2002), which focused on the fear of disintermediation felt by sales agents. Gulati et al. (2002) position "fear of disintermediation" as an endogenous variable. In contrast, the current study positions sales agents' perceived cannibalization (SPC) as an exogenous variable that influences the sales agent's psychological outcomes. The organizational behavior literature has found similar outcome variables to be significantly related to job insecurity - the perceived "powerlessness to maintain desired continuity in a threatening job situation" (Greenhalgh and Rosenblatt, 1984, p. 438). According to Greenhalgh and Rosenblatt (1984), one of the key reasons for job insecurity is the potential shrinkage of jobs due to external changes. Within the context of our study, individual perception of shrinkage (cannibalization) is attributed to an external change brought about by the internet. Gulati et al. (2002) conceptualized the fear of disintermediation as a perception of complete loss of business. Our conceptualization of perceived cannibalization captures the subjective threat, which is the consequence of an objective threat. The individual's perceptual processes capture the subjective threat from the addition of an internet channel. This study aims to develop a scale that can be used to measure sales agents' perceived cannibalization of their earnings due to the addition of an internet channel. We establish the conceptual foundation for the scale and the scale's ability to demonstrate acceptable psychometric properties. The scale is then applied in the context of sales agents' perception of the addition of internet channels. 
By doing so, the downstream outcomes of SPC on commitment and alienation from work can be modeled. While a job insecurity scale could have been used to capture the perceived powerlessness of the situation, existing job insecurity scales assess multiple features of job insecurity that are not relevant to our study. Job insecurity scales also do not tap into perceived threats from competing channels (cf. Ashford et al., 1989; Mauno et al., 2001). Lastly, this research study also argues that the influence of perceived cannibalization on sales agent job outcomes may be contingent upon the state of environmental munificence, which refers to the extent of growth opportunities available in the market. Uncertainty reduction theory (Berger, 1979, 1986; Planalp and Honeycutt, 1985; Shannon and Weaver, 1949), the expectancy theory of motivation (Vroom, 1964), and congruity theory (Osgood and Tannenbaum, 1955) are used to ground the research. Uncertainty reduction theory explicates how the addition of alternatives in a given situation results in increased uncertainty and how these additions may be perceived as cannibalistic (cf. Berger and Calabrese, 1975; Heider, 1958). The expectancy theory of motivation argues that a high level of perceived cannibalization is likely to decrease a sales agent's belief that expending a given amount of effort will result in corresponding reward, thus resulting in alienation from work (Vroom, 1964). Congruity theory suggests that the lack of expectation that effort will result in appropriate rewards could result in reduced commitment and increased alienation from work (Osgood and Tannenbaum, 1955). Specifically, this research offers a deeper understanding of the influence of the addition of the internet channel on sales agents' motivation. 
While recognizing that the internet is here to stay and that strategic channel decisions are unlikely to be made based on the views or psychological reactions of sales agents alone, incorporating the sales agent perspective does allow organizations to take a holistic view of their distribution system and make market-focused improvements that coordinate all customer touch points (Vargo and Lusch, 2004). In other words, understanding a sales agent's perceived cannibalization can aid managers in developing strategies to keep the sales agents motivated when employing multiple competing marketing channels to reach customers. Although there is surprisingly little empirical work on channel cannibalization, several researchers have expressed their concerns about the hazards of internet disintermediation (e.g. Garven, 2002; Gulati et al., 2002; Narayandas et al., 2002); a situation in which internet channels are added to entrenched channels (e.g. Alba et al., 1997; Brynjolfsson and Smith, 2000; Coughlan et al., 2001). Internet disintermediation results in the replacement of traditional channel partners with the internet (Narayandas et al., 2002). This phenomenon fortifies the common belief that internet channels potentially cannibalize the sales of entrenched channels (Porter, 2001). First, sales may shift from entrenched channels to new internet channels when the latter provides features that are more appealing to a target audience, such as a substantial amount of information on the products' characteristics, their possible customization and consistent time savings (Alba et al., 1997). Second, the internet is likely to increase competition since consumers have better and quicker access to efficient shopping comparison websites. The resulting increase in price competition may explain why online prices for homogenous products are often found to be lower than those of conventional outlets (Brynjolfsson and Smith, 2000). 
Consequently, sales may shift from conventional to internet channels. Third, total sales may also decrease should impulse purchases be reduced (Machlis, 1998). Indeed, not only are sales of existing channels cannibalized, but aggregate sales over all channels may also suffer because of the internet channel. Fourth, existing channels may view new internet channels as unwelcome competition. Consequently, the former may lose motivation and reduce support for the firm's products. This may, in turn, also result in brand switching towards the firm's competitors and, hence, decreased total sales (Coughlan et al., 2001). In spite of the above observations and much current debate about disintermediation and insecurity generated by internet channels (Mattila, 2002; Useem, 1999), only a handful of empirical studies have tangentially focused on the potentially cannibalizing consequences of adding the internet to the distribution chain. As shown in Table I, previous empirical studies have viewed cannibalization in terms of its effect on the total sales or overall financial value of the firm. What has yet to be investigated is the impact that perceptions of internet cannibalization have on commitment and work alienation. This research study proffers the construct of sales agent's perceived cannibalization, which refers to salespeople's perceptions of the extent to which sales opportunities are lost to an internet channel. SPC reflects an attitudinal reaction of a sales agent to challenges that may occur due to the addition of the internet channel. If sales agents view the addition of an internet channel by the firm as a high threat to their current and future sales outcomes, perceived cannibalization should be high. Conversely, if agents do not consider the addition of an internet channel by their firm as a menace to their current and future sales, perceived cannibalization should be low. 
Scholars have contended that when a firm begins selling through the internet, sales agents selling through existing channels are likely to perceive losing market share and customers to online sales (Frazier, 1999; Narayandas et al., 2002; Porter, 2001). [SECTION: Relationship commitment and its relationship with sales agent's perceived cannibalization] Commitment is an important component of marketing relationships (Morgan and Hunt, 1994). Relationship commitment is defined in the literature as "an enduring desire to maintain a valued relationship" (Morgan and Hunt, 1994, p. 23). In other words, relationship commitment captures sales agents' willingness to maintain a relationship with their company. Plausibly, it can be argued that a channel member who is satisfied with the economic dimension of the relationship is also committed to the relationship (Morgan and Hunt, 1994). Concurrent with the expectancy theory of motivation argument, this research contends a negative influence of SPC on sales agents' relationship commitment. According to expectancy theory, individuals will be motivated if they believe that "expending a given amount of effort on a task will lead to an improved level of performance on some dimensions" (Futrell, 2001, p. 278). In contrast, if individuals believe their efforts will not produce expected results, motivation will suffer (Vroom, 1964) along with these relationships. Thus, perceptions of a high level of uncertainty in the outcome of the sales agents' efforts result in a lack of interest in maintaining a long-term relationship. Thus, the following is hypothesized: H1. SPC negatively influences sales agents' relationship commitment. [SECTION: Alienation from work and its relationship with sales agent's perceived cannibalization] Alienation from work refers to an attitude in which an employee expresses lack of concern about work and works with low enthusiasm (Moch, 1980). 
Particularly, alienation from work is a psychological separation from work due to a perceived mismatch between effort and outcome. In concurrence with past research, this study contends that agents' perception of sales cannibalization may lead to psychological severance from work. This psychological severance results from the perception that work will bring sub-optimal outcomes (Moch, 1980). Furthermore, past research indicates that sales agents' perceived cannibalization, due to the addition of an internet channel, increases uncertainty (Berger, 1979, 1986; Planalp and Honeycutt, 1985; Shannon and Weaver, 1949). The increased uncertainty related to work or role in the organization may result in psychological detachment from work (Allen and LaFollette, 1977). Additionally, past research indicates that job insecurity influences work attitudes (Hellgren et al., 1999). In other words, individuals will not work optimally when their job is in jeopardy. The introduction of an internet channel should breed insecurity for sales agents who perceive that the internet channel reduces their earning potential (Porter, 2001). This view is supported by extant literature, suggesting that employees working in an environment that engenders high job insecurity will experience a high level of work alienation (Blauner, 1964; Shepard, 1971). Consequently, sales agents' perception of cannibalization may be positively linked to alienation from work: H2. SPC positively influences sales agents' alienation from work. [SECTION: Environmental munificence and its moderating influence] Environmental munificence is expected to play a contingency role in relation to perceived cannibalization's influence on salesperson commitment (Celly and Frazier, 1996). Our contention is rooted in past research on environmental munificence. Veliyath (1996) contends that it is easier for channel members to achieve their sales goals in a high munificence environment rather than in a low munificence environment. 
This is because business risk decreases and industry performance increases as environmental munificence increases. Munificence expands the opportunities for performance by providing many avenues for growth. These opportunities can reduce the perceived challenges associated with multiple channels serving the same market. Consequently, salespeople working in a high munificence environment may anticipate higher sales growth, which would compensate for the sales lost to the introduction of an internet channel. In a low munificence environment, by contrast, growth opportunities are limited; hence, the addition of an internet channel can further exacerbate the duress of salespersons. Consequently, the effect of perceived cannibalization on commitment and alienation will be limited in a highly munificent environment, while in a minimally munificent environment SPC's impact on commitment and alienation from work will be strong. This view concurs with Chisholm and Cummings (1979), who suggest that environmental conditions may moderate the relationship between work characteristics and outcomes. Therefore:

H3a. The relationship between SPC and commitment will be moderated by environmental munificence such that when environmental munificence is low, the relationship between perceived cannibalization and commitment will be stronger.

H3b. The relationship between SPC and alienation from work will be moderated by environmental munificence such that when environmental munificence is low, the relationship between perceived cannibalization and alienation from work will be stronger.

Scale development
To operationalize SPC, a pool of eight items was initially generated as the potential measure (Churchill, 1979; DeVellis, 1991). These items were reviewed in an interview format by 20 insurance agents, who confirmed anxiety about the introduction of an online sales channel that would increase competition and decrease their earnings.
A panel of three research survey experts who conduct research in this domain also reviewed the items. Based on the transcribed interviews with the insurance agents and the comments of the research experts, the number of items was reduced to five. These items reflect the impact that the introduction of online channels might have on insurance agents' perception of their current and future sales performance. Specifically, the items relate to how the addition of an internet channel will be perceived by insurance agents as a likely driver of reductions in clientele and profits. In other words, the measures are intended to capture insurance agents' psychological perception that their profits and growth opportunities are cannibalized by the addition of internet channels. After discussing the sample, the psychometric properties of the newly developed perceived cannibalization scale are described. The scales used for measuring commitment and environmental munificence are well established in the literature. Specifically, a three-item relationship commitment scale developed by Kumar et al. (1995) is used in this study. The scale measures the affective commitment of sales agents to their insurance company. A four-item scale was used to measure alienation from work (Miller, 1967; Agarwal, 1993). We used Dwyer and Oh's (1987) five-item scale to measure environmental munificence. The scale measures an insurance salesperson's perception of market opportunities for growth and profit. The respondents were presented with a factual scenario pertaining to the future of the insurance industry. After considering the scenario, respondents were asked to rate their perception of growth opportunities.

Sample
A list of insurance agents in North Texas, obtained through a private vendor, was compared with the insurance agent listings in the yellow pages of the greater Dallas-Fort Worth area and other major towns in North Texas.
After this comparison, a contact pool of 2,108 insurance sales agents was developed. All 2,108 sales agents were contacted to request their participation in the survey. The questionnaires and a letter elucidating the nature and purpose of the study were mailed to the contact pool of insurance agents by the primary researcher of this study. Of the 578 returned questionnaires, 67 were incomplete, resulting in 511 usable questionnaires (response rate 24.2 percent). There were 370 male and 141 female respondents, with an average of 7 years of work experience in the insurance industry. The valid responses indicate that the respondents have sufficient experience in the insurance industry to respond to the survey. Participation was completely voluntary. Valid respondent characteristics are reported in Table II.

Measurement assessment procedures
For the purpose of cross-validation, data were randomly divided into two sub-samples, training and validation. Thirty percent of randomly drawn cases formed the training sample (n=150) and the remaining 70 percent constituted the validation sample (n=361). Initially, scale properties were assessed with the training sample and then cross-validated with the validation sample. In addition to the SPC scale, three scales of theoretically related constructs, relationship commitment, alienation from work, and munificence, were included in the analyses. The analysis began with an exploratory factor analysis on the training sample, followed by a confirmatory factor analysis on the validation sample.

Exploratory factor analysis
Using the principal component method and varimax rotation, all 17 items belonging to the 4 constructs were factor analyzed with the training sample (n=150). As expected, four factors emerged with eigenvalues of 4.652, 3.640, 3.552, and 2.884, together accounting for 81.14 percent of the variance.
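The exploratory step just described (principal components on the item correlation matrix, varimax rotation, eigenvalue inspection) can be sketched as follows. This is an illustrative sketch on simulated data, not the authors' analysis: the item responses are randomly generated, and the varimax routine is the standard Kaiser algorithm implemented directly in NumPy.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion
        tmp = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ tmp)
        rotation = u @ vt
        new_var = s.sum()
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ rotation

# Hypothetical training sample: n=150 responses to 17 items (simulated)
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 17))
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize items
corr = np.corrcoef(X, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(corr)           # principal components
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 4                                             # retain four factors, as in the study
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # principal-component loadings
rotated = varimax(loadings)
explained = eigvals[:k].sum() / eigvals.sum()     # proportion of variance explained
```

Because varimax is an orthogonal rotation, it redistributes variance across factors without changing each item's communality, which is why the retained-variance figure is unaffected by the rotation.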
All five items of perceived cannibalization, three items of commitment, four items of alienation from work, and five items of environmental munificence loaded clearly on their respective constructs. Factor loadings ranged from 0.612 to 0.966.

Confirmatory factor analysis
Next, a confirmatory factor analysis (CFA) was conducted with the validation sample (n=361). The CFA model had 17 items: 5 for perceived cannibalization, 3 for commitment, 4 for alienation from work, and 5 for environmental munificence. The initial model fit was not optimal. Based on low factor loadings (lower than 0.40), high residuals (normalized residual >2.58), and modification indices, one item from the SPC scale and one item from the environmental munificence scale were deleted. The essence of the deleted items was retained by the other items of their respective scales; that is, content validity was not significantly reduced. The resulting fit indices demonstrated a good fit: χ2=109.409 (p=0.028; 62.4 percent of the variance explained), df=83, GFI=0.962, AGFI=0.944, CFI=0.99, RMSEA=0.029, PCLOSE=0.993, and Hoelter's 0.05 and 0.01 values were 347 and 382, respectively. GFI and AGFI values above 0.90 were indicative of a good fit, as was an RMSEA below 0.05. The PCLOSE value above 0.50 suggests the RMSEA is acceptable. Lastly, Hoelter's 0.05 and 0.01 indexes exceeded 200, indicating that the validation sample size was adequate.

Convergent and discriminant validity, AVE, and composite reliability
Results for the validation sample showed that the critical ratios of all the indicators were significant (critical ratios >1.96, p < 0.05), ranging from 6.764 to 96.716. These results were taken as evidence of acceptable convergent validity (Gerbing and Anderson, 1988).
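As a side check, the reported CFA RMSEA can be reproduced from the chi-square, degrees of freedom, and sample size using the standard point-estimate formula RMSEA = sqrt(max((χ2 − df)/(df·(N − 1)), 0)). This is a sketch of the textbook formula, not necessarily the exact routine of the software the authors used (packages differ slightly, e.g. N versus N − 1).

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of RMSEA from the model chi-square (textbook formula)."""
    return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

# Values reported for the CFA on the validation sample (n = 361)
print(round(rmsea(109.409, 83, 361), 4))  # → 0.0297, matching the reported 0.029
```

A model whose chi-square does not exceed its degrees of freedom yields an RMSEA of exactly zero, which is why the formula clamps the ratio at zero.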
The composite reliabilities were 0.905, 0.834, 0.930, and 0.864, while the average variance extracted (AVE) for the constructs of SPC, commitment, alienation from work, and environmental munificence was 0.706, 0.627, 0.772, and 0.615, respectively (Bagozzi and Yi, 1988) (see Table III). The AVE for each factor exceeded the squared correlations between that factor and all other factors, indicating acceptable discriminant validity. Additionally, none of the confidence intervals (± two standard errors) for the estimated correlations between the constructs included 1.0, providing further support for adequate discriminant validity (Anderson and Gerbing, 1988).

Common method bias
Since the data for this study were obtained from a single survey, common method variance was possible. Following Podsakoff and Organ (1986), Harman's one-factor test, in which all variables were hypothesized to load on a single factor representing the common method, was employed. The principal component factor analysis revealed four factors, each with an eigenvalue greater than 1.0. Together the factors accounted for over 81 percent of the total variance in both the training and validation samples, and the first factor accounted for 28 percent of the variance. Additionally, no correlation between constructs exceeded 0.90. Bagozzi et al. (1991) indicate that the presence of common method bias usually results in extremely high correlations between variables. Hence, common method bias does not appear to be a serious concern in this study.

Non-response bias
To examine non-response bias, mean responses from early respondents and late respondents were compared (Armstrong and Overton, 1977).
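The composite reliability and AVE figures above follow the standard computations from standardized loadings, and the discriminant check is the usual Fornell-Larcker comparison. A small sketch with hypothetical loadings (the paper's item-level loadings are not reproduced here, so both the loadings and the inter-construct correlation below are made up for illustration):

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum λ)^2 / ((sum λ)^2 + sum(1 - λ^2)) for standardized loadings λ."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical standardized loadings for a four-item construct
lam = [0.85, 0.88, 0.90, 0.86]
cr = composite_reliability(lam)   # ≈ 0.927
v = ave(lam)                      # ≈ 0.762

# Fornell-Larcker discriminant check: AVE must exceed the squared
# inter-construct correlation (here a hypothetical r = 0.45)
assert v > 0.45 ** 2
```

The same two functions, applied per construct to the CFA's standardized loadings, would yield the reliability and AVE columns of a table like Table III.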
Independent t-tests of the mean responses on all four constructs showed no statistically significant differences (at the 0.05 level); thus, non-response bias should not be a problem.

Hypotheses test
To test the hypotheses, a structural model with four constructs was estimated: SPC, the interaction term of SPC and environmental munificence (SPC*EM), alienation from work, and commitment. We followed the procedure recommended by Aiken and West (1991) and Ping (1995) for creating and using the interaction term: the items of SPC and environmental munificence were mean centered and cross-multiplied. In the proposed model, commitment and alienation are endogenous, whereas the remaining two constructs are exogenous. The structural model was analyzed with the combined sample of 511 cases. The fit indices demonstrated a reasonable fit: χ2=288.556 (p < 0.001; 60.2 percent of the variance explained), df=86, GFI=0.931, AGFI=0.903, CFI=0.96, NFI=0.944, RMSEA=0.058. Although the direction of the path was negative, as predicted by H1, the SPC-commitment link was not significant (estimate=-0.031, t=-0.877, p-value=0.381); that is, H1 was not supported, and no main effect of SPC on commitment was found. The SPC-alienation from work link was significant (estimate=0.068, t=1.981, p-value=0.045); thus, H2 was supported. Further, the SPC*EM-commitment link (estimate=0.067, t=2.475, p-value=0.013) and the SPC*EM-alienation from work link (estimate=-0.064, t=-2.767, p-value=0.006) were both significant, supporting H3a and H3b. Consequently, the influence of SPC on commitment and alienation from work is moderated by environmental munificence. The results are summarized in Table IV. In this study, the concept of perceived cannibalization is tested in a sales context. Many past studies have suggested the possibility of sales agent perceived cannibalization arising from the addition of internet channels (Hagerty, 2005; Porter, 2001).
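The mean-centering and cross-multiplication step used to build the interaction term in the hypotheses test can be sketched as follows. This is an illustrative sketch on simulated item data, showing one common variant of the Ping (1995) approach in which a single product indicator is formed from the centered item composites; the item counts and data here are assumptions, not the study's actual measures.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 511                                # combined sample size, as in the study
spc_items = rng.normal(size=(n, 4))    # hypothetical SPC items (simulated)
em_items = rng.normal(size=(n, 4))     # hypothetical munificence items (simulated)

# Mean-center each item before forming the product term (Aiken and West, 1991),
# which reduces collinearity between the main effects and the interaction
spc_c = spc_items - spc_items.mean(axis=0)
em_c = em_items - em_items.mean(axis=0)

# Ping-style single product indicator: cross-multiply the centered composites
spc_x_em = spc_c.sum(axis=1) * em_c.sum(axis=1)
```

The resulting `spc_x_em` vector serves as the indicator of the latent SPC*EM construct when the structural model is estimated.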
Warnings are evident with respect to the potential negative outcomes of sales agent's perceived cannibalization on various job aspects. Anecdotal evidence has surfaced showing that salespersons perceive internet channels as cannibalistic to their current and future sales. Yet no attempt had been made to either conceptualize or operationalize the construct of sales agent's perceived cannibalization. Previous conceptualizations of inter-channel cannibalization were all based on economic terms (Geyskens et al., 2002; Ward and Morganosky, 2000; Deleersnyder et al., 2002) and, hence, were considered myopic by Porter (2001). Indeed, Porter (2001) argued that the psychological impact of the addition of new channels on existing channels had to be taken into account to fully understand the impact of sales cannibalization. Our contention follows Porter's proposition and extends his conceptualization of cannibalization by examining the impact of the addition of an internet channel in the sales domain. After conceptualizing the construct, a multi-item scale was developed for measuring sales agent's perceived cannibalization. The properties of the scale were assessed following procedures recommended by Churchill (1979), Anderson and Gerbing (1988), Gerbing and Anderson (1988), and Bagozzi and Yi (1988). This study utilizes both exploratory and confirmatory factor analysis to assess the reliability and validity of the SPC scale. Our scale opens a window of opportunity for empirical research on channel cannibalization in general and sales channels in particular. In addition to advancing the scale, four hypotheses are developed, centering on the principal construct, sales agent's perceived cannibalization. By demonstrating that environmental munificence moderates the effect of perceived cannibalization on commitment, this study shows that perceived cannibalization is not universally damaging to relationship commitment.
Rather, only in a low munificence environment will perceived cannibalization significantly reduce salespersons' relationship commitment. Furthermore, perceived cannibalization increases alienation from work, and its effect is more severe in a low munificence environment. This study demonstrated that the perception of cannibalization can reduce sales agents' commitment to the relationship with their company in a low munificence environment. In addition, it can increase alienation from work, which may result in low performance outcomes in the long run. Consequently, the view that the internet channel will not cannibalize sales (Deleersnyder et al., 2002) is myopic. It may be possible for salespersons to transform their negative perceptions of cannibalization into motivators. Specifically, while internet channels provide customers easy access to core business processes such as quotations, policy issuance, and claims, salespersons may be able to develop high levels of competency in areas that internet channels cannot easily "learn". For example, a sales agent may be able to respond to requests (e.g. from brokers for a quote) and make requests (e.g. to reinsurers for a confirmation of coverage) using electronic channels as tools to carry out decision-making and communication tasks that are unique to the agent's skill set. Thereby, even in a low munificence environment, salespersons may overcome the stress of perceived cannibalization by internet channels. Leonard-Barton (1995) contends that sales agents who are committed to an existing business structure may show a more anxious reaction to the introduction of a new channel such as the internet. Consequently, sales agents who have high commitment to the existing format could have a stronger perception of cannibalization than those who have less commitment to the existing business format.
Thus, future researchers may explore the moderating effect of commitment to the existing business format on the relationship between SPC and indicators of salesperson performance. One objective of this study is to offer prescriptive and descriptive insights to firms regarding perceived cannibalization and its possible consequences for sales agent motivation. Specifically, when firms operate in environments that characteristically offer few sales opportunities, appropriate measures should be taken to counteract salespersons' perceptions of cannibalization. For example, a firm could design effective incentive systems to reduce negative feelings towards competing internet channels. One way of doing so is to provide incentives to sales agents who serve clients who have purchased insurance online. Another option is to train salespersons on how to make the internet beneficial to their own sales operations. Since change often necessitates the acquisition of new skills, alterations in salespersons' repertoire and their adaptability to cope (Ingram et al., 2006; Schuler and Huber, 1993) will become key determinants of whether the new channel helps or hinders the goals of sales agents. Therefore, training salespersons to adapt to change may be critical to the overall success of both new and entrenched channels. Further research may investigate the role of salesperson training and salesperson collaboration in the development of internet channels as they relate to sales agent's perceived cannibalization and its impact. In view of the multi-channel business models followed by a large number of companies today, the jury is still out regarding the efficiency of integrating off-line and on-line selling activities. Against this backdrop, the role of sales agent's perceived cannibalization in the integration of off-line and on-line channels needs further exploration.
The SPC scale proffered in this study should facilitate empirical research on this important phenomenon. The findings should also encourage further investigation of the antecedents and consequences of the SPC construct. One major limitation of this study is that the data used to develop and validate the scale were from a single sample of insurance agents. Typically, separate samples are required for examining the psychometric properties of a new scale. Bollen (1989) contends that a new data set can be obtained by randomly dividing the initial data pool into two; nevertheless, future research may validate the scale using a new sample. Additionally, the results may lack external validity because the sample used in this study is industry specific. Future research may also examine the relationships posited in this study using data from a different industry. Our failure to find a direct influence of SPC on commitment presents another research opportunity, namely the possibility of other intervening variables. Past research indicates that salespersons' skills and competencies are critical determinants of salesperson performance outcomes (Churchill et al., 1985). Consequently, future research may explore the intervening role of salesperson skills and competencies between SPC and performance outcomes. There may also be differences across sex, age, and experience in the relationships posited in this study.

Table I Extant research in cannibalization
Table II Sample descriptive statistics
Table III Reliability and validity statistics
Table IV Structural model results
[SECTION: Purpose] The purpose of this paper is multifold. First, this study aims to proffer a psychometric scale to measure sales agents' perception of sales cannibalization due to the addition of an internet channel. Second, the study seeks to estimate the downstream impact of sales agents' perceived cannibalization (SPC) on two outcomes, namely commitment and alienation from work. Third, it aims to examine the moderating role of environmental munificence in the relationship between SPC and the two outcomes.
[SECTION: Method] The oldest and strongest emotion of mankind is fear (H.P. Lovecraft). The dawn of the internet era has led several companies to explore new and radical marketing channels. In fact, multi-channel use has become the norm rather than the exception (Frazier, 1999). By adding internet channels, companies hope to increase overall performance, consolidate existing markets, and expand into new markets (Geyskens et al., 2002). Unfortunately, internet channels are not without potential problems: they are likely to increase uncertainty about market allegiance, generating real risks for long-term business performance (Geyskens et al., 2002), as well as destroying the value of past investments (Chandy and Tellis, 1998). As Porter (2001, p. 73) observes, "it is widely assumed that the internet is cannibalistic [and] will replace [or supplement] all conventional ways of doing business." Furthermore, Trembly (2001) suggests that the ubiquity of the internet will result in lower commissions for sales agents and their gradual attrition. In concurrence, Stucker (1999) found that "60 percent of the carriers surveyed view the web as at least a moderate threat to the agent distribution system" (p. 8). This finding is corroborated by Hagerty (2005), who suggests that web-based brokerage firms have completely shaken up the real estate industry, creating paranoia among traditional real estate agents. Hagerty (2005) describes traditional real estate agents' perceptions of the internet channel as exasperating, demotivating, and cannibalistic in nature. In their study of the insurance industry, Eastman et al. (2002) found that insurance agents experiencing the addition of an internet channel felt insecure about their jobs. While the internet provides easier communication and increases interactions with customers, it also provides multiple options for insurance purchasers, thus increasing the chances of sales cannibalization.
This trend has been confirmed by extant research, which suggests that the internet will continue to attract consumers, breed uncertainty, and undoubtedly change sales agents' perceptions of job security as well as their job performance (Frazier, 1999; Greenhalgh and Rosenblatt, 1984). Working primarily in economic and financial terms, marketing researchers have conceptualized cannibalization and assert that the addition of internet channels may not, in fact, generate significant cannibalization (Biyalogorsky and Naik, 2003; Chandy and Tellis, 1998; Deleersnyder et al., 2002; Ward and Morganosky, 2000). Others, however, contend that even if the financial impact is inconsequential, the psychological impact of cannibalization can influence marketers' performance outcomes (Geyskens et al., 2002). In this regard, Porter (2001) posits that salespeople fear internet channel additions in anticipation that the internet will cannibalize their sales. In addition, salespeople worry that internet channels will make them outmoded and eventually replace them. Elsewhere, Frazier (1999) contends that, regardless of the extent of actual cannibalization, sales agents' fears concerning cannibalization and the security of their jobs can subdue their efforts, break down long-standing relationships, reduce their commitment, and make them fearful of an uncertain future (see also Ashford et al., 1989; Gerstner and Hess, 1995; Greenhalgh and Rosenblatt, 1984; Jeuland and Shugan, 1983; Lal et al., 1996; McGuire and Staelin, 1983). This negative influence of perceived cannibalization on sales agents' job outcomes and relationships often offsets the potential gains (e.g., increased market penetration and decreased distribution cost) from adding internet channels (Porter, 2001).
Although past researchers have emphasized the importance of perceived cannibalization as a determinant of the benefits and risks of adding an internet channel, and the consequences of job insecurity, empirical research on cannibalization in the marketing literature remains scant, apart from the study by Gulati et al. (2002), which focused on the fear of disintermediation felt by sales agents. Gulati et al. (2002) position "fear of disintermediation" as an endogenous variable. In contrast, the current study positions SPC as an exogenous variable that influences the sales agent's psychological outcomes. The organizational behavior literature has found similar outcome variables to be significantly related to job insecurity: the perceived "powerlessness to maintain desired continuity in a threatening job situation" (Greenhalgh and Rosenblatt, 1984, p. 438). According to Greenhalgh and Rosenblatt (1984), one of the key reasons for job insecurity is the potential shrinkage of jobs due to external changes. Within the context of our study, the individual perception of shrinkage (cannibalization) is attributed to an external change brought about by the internet. Gulati et al. (2002) conceptualized the fear of disintermediation as a perception of complete loss of business. Our conceptualization of perceived cannibalization captures the subjective threat that follows from an objective threat: the individual's perceptual processes translate the addition of an internet channel into a subjective threat. This study aims to develop a scale that can be used to measure sales agents' perceived cannibalization of their earnings due to the addition of an internet channel. We establish the conceptual foundation for the scale and demonstrate its acceptable psychometric properties. The scale is then applied in the context of sales agents' perception of the addition of internet channels.
By doing so, the downstream outcomes of SPC on commitment and alienation from work can be modeled. While a job insecurity scale could have been used to capture the perceived powerlessness of the situation, existing job insecurity scales assess multiple features of job insecurity that are not relevant to our study, and they do not tap into perceived threats from competing channels (cf. Ashford et al., 1989; Mauno et al., 2001). Lastly, this study also argues that the influence of perceived cannibalization on sales agents' job outcomes may be contingent upon the state of environmental munificence, which refers to the extent of growth opportunities available in the market. Uncertainty reduction theory (Berger, 1979, 1986; Planalp and Honeycutt, 1985; Shannon and Weaver, 1949), the expectancy theory of motivation (Vroom, 1964), and congruity theory (Osgood and Tannenbaum, 1955) are used to ground the research. Uncertainty reduction theory explicates how the addition of alternatives in a given situation increases uncertainty and how these additions may be perceived as cannibalistic (cf. Berger and Calabrese, 1975; Heider, 1958). The expectancy theory of motivation argues that a high level of perceived cannibalization is likely to decrease a sales agent's belief that expending a given amount of effort will result in a corresponding reward, thus resulting in alienation from work (Vroom, 1964). Congruity theory suggests that the lack of expectation that effort will result in appropriate rewards could result in reduced commitment and increased alienation from work (Osgood and Tannenbaum, 1955). Specifically, this research offers a deeper understanding of the influence of the addition of the internet channel on sales agents' motivation.
While recognizing that the internet is here to stay and that strategic channel decisions are unlikely to be made based on the views or psychological reactions of sales agents alone, incorporating the sales agent perspective does allow organizations to take a holistic view of their distribution system and make market-focused improvements that coordinate all customer touch points (Vargo and Lusch, 2004). In other words, understanding a sales agent's perceived cannibalization can help managers develop strategies to keep sales agents motivated when employing multiple competing marketing channels to reach customers. Although there is surprisingly little empirical work on channel cannibalization, several researchers have expressed concerns about the hazards of internet disintermediation (e.g. Garven, 2002; Gulati et al., 2002; Narayandas et al., 2002), a situation in which internet channels are added to entrenched channels (e.g. Alba et al., 1997; Brynjolfsson and Smith, 2000; Coughlan et al., 2001). Internet disintermediation results in the replacement of traditional channel partners with the internet (Narayandas et al., 2002). This phenomenon fortifies the common belief that internet channels potentially cannibalize the sales of entrenched channels (Porter, 2001). First, sales may shift from entrenched channels to new internet channels when the latter provide features that are more appealing to a target audience, such as a substantial amount of information on product characteristics, opportunities for customization, and considerable time savings (Alba et al., 1997). Second, the internet is likely to increase competition since consumers have better and quicker access to efficient shopping comparison websites. The resulting increase in price competition may explain why online prices for homogeneous products are often found to be lower than those of conventional outlets (Brynjolfsson and Smith, 2000).
Consequently, sales may shift from conventional to internet channels. Third, total sales may also decrease should impulse purchases be reduced (Machlis, 1998). Indeed, not only are the sales of existing channels cannibalized, but aggregate sales over all channels may also suffer because of the internet channel. Fourth, existing channels may view new internet channels as unwelcome competition. Consequently, the former may lose motivation and reduce support for the firm's products. This may, in turn, result in brand switching towards the firm's competitors and, hence, decreased total sales (Coughlan et al., 2001). In spite of the above observations and much current debate about the disintermediation and insecurity generated by internet channels (Mattila, 2002; Useem, 1999), only a handful of empirical studies have tangentially focused on the potentially cannibalizing consequences of adding the internet to the distribution chain. As shown in Table I, previous empirical studies have viewed cannibalization in terms of its effect on the total sales or overall financial value of the firm. What has yet to be investigated is the impact that perceptions of internet cannibalization have on commitment and work alienation. This study proffers the construct of sales agent's perceived cannibalization, which refers to salespeople's perceptions of the extent to which sales opportunities are lost to an internet channel. SPC reflects an attitudinal reaction of a sales agent to challenges that may occur due to the addition of the internet channel. If sales agents view the addition of an internet channel by the firm as a serious threat to their current and future sales outcomes, perceived cannibalization should be high. Conversely, if agents do not consider the addition of an internet channel by their firm a menace to their current and future sales, perceived cannibalization should be low.
Scholars have contended that when a firm begins selling through the Internet, sales agents selling through existing channels are likely to perceive losing market share and customers to online sales (Frazier, 1999; Narayandas et al., 2002; Porter, 2001).Relationship commitment and its relationship with sales agent's perceived cannibalization Commitment is an important component of marketing relationships (Morgan and Hunt, 1994). Relationship commitment is defined in the literature as "an enduring desire to maintain a valued relationship" (Morgan and Hunt, 1994, p. 23). In other words, relationship commitment captures sales agents' willingness to maintain a relationship with their company. Plausibly, it can be argued that a channel member who is satisfied with the economical dimension of the relationship is also committed to the relationship (Morgan and Hunt, 1994).Concurrent with the expectancy theory of motivation argument, this research contends a negative influence of SPC on sales agents' relationship commitment. According to expectancy theory, individuals will be motivated if they believe that "expending a given amount of effort on a task will lead to an improved level of performance on some dimensions" (Futrell, 2001, p. 278). In contrast, if individuals believe their efforts will not produce expected results, motivation will suffer (Vroom, 1964) along with these relationships. Thus, perceptions of high level of uncertainty in the outcome of the sales agents' effort result in a lack of interest in maintaining a long-term relationship. Thus, the following is hypothesized:H1. SPC negatively influences sales agents' relationship commitment.Alienation from work and its relationship with sales agent's perceived cannibalization Alienation from work refers to an attitude in which an employee expresses lack of concern about work and works with low enthusiasm (Moch, 1980). 
Particularly, alienation from work is a psychological separation from work due to a perceived mismatch between effort and outcome. In concurrence with past research, this study contends that agents' perception of sales cannibalization may lead to psychological severance from work. This psychological severance results from the perception that work will bring sub-optimal outcomes (Moch, 1980). Furthermore, past research indicates that sales agents' perceived cannibalization, due to the addition of an internet channel, increases uncertainty (Berger, 1979, 1986; Planalp and Honeycutt, 1985; Shannon and Weaver, 1949). The increased uncertainty related to work or role in the organization may result in psychological detachment from work (Allen and LaFollette, 1977). Additionally, past research indicates that job insecurity influences work attitudes (Hellgren et al., 1999). In other words, individuals will not work optimally when their jobs are in jeopardy. The introduction of an internet channel should breed insecurity for sales agents who perceive that the internet channel reduces their earning potential (Porter, 2001). This view is supported by extant literature suggesting that employees working in an environment that engenders high job insecurity will experience a high level of work alienation (Blauner, 1964; Shepard, 1971). Consequently, sales agents' perception of cannibalization may be positively linked to alienation from work:

H2. SPC positively influences sales agents' alienation from work.

Environmental munificence and its moderating influence

Environmental munificence is expected to play a contingency role in relation to perceived cannibalization's influence on salesperson commitment (Celly and Frazier, 1996). Our contention is rooted in past research on environmental munificence. Veliyath (1996) contends that it is easier for channel members to achieve their sales goals in a high munificence environment than in a low munificence environment.
This is because business risk decreases and industry performance increases with an increase in environmental munificence. Munificence increases the opportunities for performance by providing many growth opportunities. These opportunities can reduce the perceived challenges associated with multiple channels serving the same market. Consequently, salespeople working in an environment with high munificence may anticipate higher sales growth, which would make up for the loss of sales due to the introduction of an internet channel. However, in a low munificence environment, growth opportunities are limited; hence, the addition of an internet channel can further exacerbate the duress of salespersons. Consequently, the effect of perceived cannibalization on commitment and alienation will be limited in a highly munificent environment, while in a minimally munificent environment SPC's impact on commitment and alienation from work will be strong. This view concurs with Chisholm and Cummings (1979), who suggest that environmental conditions may moderate the relationship between work characteristics and outcomes. Therefore:

H3a. The relationship between SPC and commitment will be moderated by environmental munificence such that when environmental munificence is low, the relationship between perceived cannibalization and commitment will be stronger.

H3b. The relationship between SPC and alienation from work will be moderated by environmental munificence such that when environmental munificence is low, the relationship between perceived cannibalization and alienation from work will be stronger.

Scale development

To operationalize SPC, a pool of eight items was initially generated as the potential measure (Churchill, 1979; DeVellis, 1991). These items were reviewed in an interview format by 20 insurance agents, who confirmed anxiety that the introduction of an online sales channel would increase competition and decrease their earnings.
A panel of three research survey experts who engage in research in this domain also reviewed the items. Considering the transcribed interviews of the insurance agents and the comments of the research experts, the number of items was reduced to five. These items reflect the impact that the introduction of online channels might have on the insurance agents' perception of their current and future sales performance. Specifically, the items relate to how the addition of an internet channel will be perceived by insurance agents as a likely driver of reductions in clientele and profits. In other words, the measures are intended to describe insurance agents' psychological perception that their profits and growth opportunities are cannibalized by the addition of internet channels. After discussing the sample, the psychometric properties of the newly developed perceived cannibalization scale are described. The scales used for measuring commitment and environmental munificence are well established in the literature. Specifically, a three-item relationship commitment scale developed by Kumar et al. (1995) is used in this study. The scale measures the affective commitment of sales agents to their insurance company. A four-item scale was used to measure alienation from work (Miller, 1967; Agarwal, 1993). We used Dwyer and Oh's (1987) five-item scale to measure environmental munificence. The scale measures an insurance salesperson's perception of market opportunities for growth and profit. The respondents were presented with a factual scenario pertaining to the future of the insurance industry. After considering the scenario, respondents were asked to rate their perception of growth opportunities.

Sample

A list of insurance agents in North Texas, which had been obtained through a private vendor, was compared with the insurance agent listings in the yellow pages of the greater Dallas-Fort Worth area and other major towns in North Texas.
After this comparison, a contact pool of 2,108 insurance sales agents was developed. All 2,108 sales agents were contacted to request their participation in the survey. The questionnaires and a letter elucidating the nature and purpose of the study were mailed to the contact pool of insurance agents by the primary researcher of this study. Of the 578 returned questionnaires, 67 were incomplete, resulting in 511 usable questionnaires (a response rate of 24.2 percent). There were 370 male and 141 female respondents, with an average work experience in the insurance industry of seven years. The resulting valid responses indicate that the respondents have sufficient experience in the insurance industry to provide a basis for responding to the survey. Participation was completely voluntary. Valid respondent characteristics are reported in Table II.

Measurement assessment procedures

For the purpose of cross-validation, data were randomly divided into two sub-samples, training and validation. Thirty percent of randomly drawn cases formed the training sample (n=150) and the remaining 70 percent constituted the validation sample (n=361). Initially, scale properties were assessed with the training sample and then cross-validated with the validation sample. In addition to the SPC scale, three other scales of theoretically related constructs, relationship commitment, alienation from work, and munificence, were included in the analyses. The analysis began with exploratory factor analysis on the training sample, followed by confirmatory factor analysis on the validation sample.

Exploratory factor analysis

Using the principal component method and varimax rotation, all 17 items belonging to the 4 constructs were factor analyzed with the training sample (n=150). As expected, four factors emerged with eigenvalues of 4.652, 3.640, 3.552, and 2.884, together accounting for 81.14 percent of the variance.
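The cross-validation split described above (150 randomly drawn training cases and 361 validation cases from the 511 usable responses) can be sketched in Python. This is a minimal illustration; the function name and seed are ours, not the study's.

```python
import numpy as np

def split_train_validation(n_cases, n_train, seed=0):
    """Randomly partition case indices into a training subsample (used here
    for exploratory factor analysis) and a validation subsample (used for
    confirmatory factor analysis), as described in the text."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(n_cases)
    return shuffled[:n_train], shuffled[n_train:]

# The study's split: 511 usable cases, 150 for training, 361 for validation
train_idx, valid_idx = split_train_validation(511, 150)
```

Because the partition is index-based, both subsamples can be drawn from the same response matrix without duplicating data.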
All five items of perceived cannibalization, three items of commitment, four items of alienation from work, and five items of environmental munificence loaded clearly on their respective constructs. Factor loadings ranged from 0.612 to 0.966.

Confirmatory factor analysis

Next, confirmatory factor analysis (CFA) was conducted with the validation sample (n=361). The CFA model had 17 items: 5 for perceived cannibalization, 3 for commitment, 4 for alienation from work, and 5 for environmental munificence. The initial model fit was not optimal. Based on a low factor loading (lower than 0.40), a high residual (normalized residual >2.58), and modification indices, one item from the SPC scale and one item from the environmental munificence scale were deleted. The essence of the deleted items was retained by the other items of their respective scales; that is, content validity was not significantly reduced. The resulting fit indices demonstrated a good fit: χ2=109.409 (p=0.028; 62.4 percent of the variance explained), df=83, GFI=0.962, AGFI=0.944, CFI=0.99, RMSEA=0.029, PCLOSE=0.993, and HOELTER's 0.05 and 0.01 indices were 347 and 382, respectively. GFI and AGFI values ≥0.90 are indicative of a good fit. Also, an RMSEA <0.05 shows a good fit. A PCLOSE >0.50 suggests that the RMSEA is good. Lastly, Hoelter's 0.05 and 0.01 indices exceeded 200, indicating that the validation sample size was adequate.

Convergent and discriminant validity, AVE, and composite reliability

Results for the validation sample showed that the critical ratios of all the indicators were significant (critical ratios >1.96, p<0.05) and ranged from 6.764 to 96.716. These results were taken as evidence of acceptable convergent validity (Gerbing and Anderson, 1988).
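The item-deletion rule applied during the CFA (drop an item whose standardized loading falls below 0.40 or that shows a normalized residual above |2.58|) can be expressed as a simple filter. The item names and values below are hypothetical, chosen only to illustrate the rule.

```python
def retain_items(loadings, flagged_residuals=frozenset(), min_loading=0.40):
    """Keep items whose standardized loading is at least `min_loading` and
    that have not been flagged for a normalized residual above |2.58|.
    `loadings` maps item name -> standardized loading."""
    return {item: lam for item, lam in loadings.items()
            if lam >= min_loading and item not in flagged_residuals}

# Hypothetical example: one low-loading item and one high-residual item drop out
kept = retain_items(
    {"spc1": 0.82, "spc2": 0.91, "spc3": 0.33, "em1": 0.75, "em2": 0.68},
    flagged_residuals={"em2"},
)
```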
The composite reliabilities for SPC, commitment, alienation from work, and environmental munificence were 0.905, 0.834, 0.930, and 0.864, respectively, while the average variance extracted (AVE) values for the same constructs were 0.706, 0.627, 0.772, and 0.615, respectively (Bagozzi and Yi, 1988) (see Table III). The AVE for each factor exceeded the squared correlations between that factor and all other factors, indicating acceptable discriminant validity. Additionally, none of the confidence intervals (± two standard errors) for the estimated correlations between the constructs included 1.0, providing further support for adequate discriminant validity (Anderson and Gerbing, 1988).

Common method bias

Since the data for this study were obtained from a single survey, common method variance was possible. Following Podsakoff and Organ (1986), Harman's one-factor test, in which all variables were hypothesized to load on a single factor representing the common method, was employed. The principal component factor analysis revealed four factors, each with an eigenvalue greater than 1.0. All factors together accounted for over 81 percent of the total variance, and the first factor accounted for 28 percent of the variance. Additionally, no high correlation (r>0.90) was found between any pair of constructs. Bagozzi et al. (1991) indicate that the presence of common method bias usually results in extremely high correlations between variables. Hence, common method bias does not appear to be a serious concern in this study.

Non-response bias

To examine non-response bias, mean responses from early respondents and late respondents were compared (Armstrong and Overton, 1977).
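The composite reliability and AVE figures reported above follow the standard formulas for standardized loadings (Bagozzi and Yi, 1988), and the discriminant validity check compares each AVE against the squared inter-construct correlations. A sketch, with hypothetical loadings for illustration only:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)).
    With standardized loadings, each item's error variance is 1 - lambda^2."""
    lam = np.asarray(loadings, dtype=float)
    s = lam.sum()
    return float(s**2 / (s**2 + (1.0 - lam**2).sum()))

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam**2).mean())

def discriminant_ok(ave_a, ave_b, corr_ab):
    """The check described in the text: each construct's AVE must exceed
    the squared correlation between the pair of constructs."""
    return min(ave_a, ave_b) > corr_ab**2

# Hypothetical standardized loadings for a four-item construct
lams = [0.80, 0.85, 0.78, 0.82]
cr = composite_reliability(lams)
ave = average_variance_extracted(lams)
```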
Independent t-tests of the mean responses on all four constructs showed no statistical differences (at the 0.05 level); thus, non-response bias should not be a problem.

Hypotheses test

In order to test the hypotheses, a structural model with four constructs was estimated: SPC, the interaction term of SPC and environmental munificence (SPC*EM), alienation from work, and commitment. We followed the procedure recommended by Aiken and West (1991) and Ping (1995) for creating and using the interaction term. The items of SPC and environmental munificence were mean centered and cross-multiplied to create the interaction term. In the proposed model, commitment and alienation are endogenous, whereas the remaining two constructs are exogenous. The structural model was analyzed with the combined sample of 511 cases. The fit indices demonstrated a reasonable fit: χ2=288.556 (p<0.001; 60.2 percent of the variance explained), df=86, GFI=0.931, AGFI=0.903, CFI=0.96, NFI=0.944, RMSEA=0.058. Although the direction of the path was negative, as predicted by H1, the SPC-commitment link was not significant (estimate=-0.031, t=-0.877, p=0.381). That is, H1 was not supported; no main effect of SPC on commitment was found. The SPC-alienation from work link was significant (estimate=0.068, t=1.981, p=0.045). Thus, H2 was supported. Further, the SPC*EM-commitment link (estimate=0.067, t=2.475, p=0.013) and the SPC*EM-alienation from work link (estimate=-0.064, t=-2.767, p=0.006) were both significant, supporting H3a and H3b. Consequently, the influence of SPC on commitment and alienation from work is moderated by environmental munificence. The results are summarized in Table IV.

In this study, the concept of perceived cannibalization is tested in a sales context. Many past studies have suggested the possibility of sales agent perceived cannibalization arising from the addition of internet channels (Hagerty, 2005; Porter, 2001).
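The construction of the latent interaction term described above (mean centering the SPC and munificence items, then cross-multiplying, following Aiken and West, 1991, and Ping, 1995) can be sketched as follows. Here a single product indicator is formed from the centered item composites; the simulated data and dimensions (4 retained items per construct, 511 cases) are for illustration only.

```python
import numpy as np

def product_indicator(x_items, z_items):
    """Mean-center each construct's items, sum them into composites, and
    multiply the composites to obtain a single product indicator for the
    latent interaction term (SPC*EM in the text)."""
    xc = (x_items - x_items.mean(axis=0)).sum(axis=1)
    zc = (z_items - z_items.mean(axis=0)).sum(axis=1)
    return xc * zc

# Simulated responses: 511 agents, 4 retained SPC items, 4 retained EM items
rng = np.random.default_rng(1)
spc_items = rng.normal(5.0, 1.0, size=(511, 4))
em_items = rng.normal(4.0, 1.0, size=(511, 4))
spc_x_em = product_indicator(spc_items, em_items)
```

Mean centering before cross-multiplication reduces the collinearity between the product term and its component constructs, which is why both cited procedures require it.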
Some warnings are evident with respect to the potential negative outcomes of sales agent's perceived cannibalization on various job aspects. Anecdotal evidence has surfaced showing that salespersons perceive internet channels as cannibalistic to their current and future sales. Yet no attempt had been made to either conceptualize or operationalize the construct of sales agent's perceived cannibalization. Previous conceptualizations of inter-channel cannibalization were all based on economic terms (Geyskens et al., 2002; Ward and Morganosky, 2000; Deleersnyder et al., 2002) and, hence, were considered myopic by Porter (2001). Indeed, Porter (2001) argued that the psychological impact of the addition of new channels on existing channels had to be taken into account to understand fully the impact of sales cannibalization. Our contention follows Porter's proposition and extends his conceptualization of cannibalization by examining the impact of the addition of an internet channel in the sales domain. After conceptualizing the construct, a multi-item scale was developed for measuring sales agent's perceived cannibalization. The properties of the scale were assessed following procedures recommended by Churchill (1979), Anderson and Gerbing (1988), Gerbing and Anderson (1988), and Bagozzi and Yi (1988). This study utilizes both exploratory and confirmatory factor analysis to assess the reliability and validity of the SPC scale. Our scale opens a window of opportunity for empirical research on channel cannibalization in general and sales channels in particular. In addition to advancing the scale, four hypotheses are developed, centering on the principal construct, sales agent's perceived cannibalization. By demonstrating that environmental munificence moderates the effect of perceived cannibalization on commitment, this research study demonstrates that perceived cannibalization is not universally damaging to relationship commitment.
Rather, only in a low munificence environment will perceived cannibalization significantly reduce salespersons' relationship commitment. Furthermore, perceived cannibalization increases alienation from work, but its effect is more severe in a low munificence environment. This research study demonstrated that the perception of cannibalization can reduce sales agents' commitment to relationships with their company in a low munificence environment. In addition, it can enhance alienation from work, which may result in low performance outcomes in the long run. Consequently, the view that the internet channel will not cannibalize sales (Deleersnyder et al., 2002) is myopic. It may be possible for salespersons to transform their negative perceptions of cannibalization into motivators. Specifically, while internet channels provide customers easy access to core business processes such as quotations, policy issuance, and claims, salespersons may be able to develop high levels of competency in areas that internet channels cannot easily "learn". For example, a sales agent may be able to respond to requests (e.g. from brokers for a quote) and make requests (e.g. to reinsurers for a confirmation of coverage) using electronic channels as tools to carry out decision-making and communication tasks that are unique to the agent's skill set. Thereby, even in a low munificence environment, salespersons may overcome the stress of perceived cannibalization from internet channels. Leonard-Barton (1995) contends that sales agents who are committed to an existing business structure may show a more anxious reaction towards the introduction of a new channel such as the internet. Consequently, sales agents who have high commitment to the existing format could have a stronger perception of cannibalization than those who have less commitment to the existing business format.
Thus, future researchers may explore the moderating effect of commitment to the existing business format on the relationship between SPC and indicators of salesperson performance. One of the objectives of this study is to offer prescriptive and descriptive insights to firms regarding perceived cannibalization and its possible consequences for sales agents' motivation. Specifically, when firms operate in environments that characteristically offer few sales opportunities, appropriate measures should be taken to counteract salespersons' perceptions of cannibalization. For example, a firm could design effective incentive systems to reduce negative feelings towards competing internet channels. One way of doing so is to provide incentives to sales agents who serve clients who have purchased insurance online. Another option is to train salespersons on how to make the internet beneficial to their own sales operations. Since changes often necessitate the acquisition of new skills, alterations in salespersons' repertoire and their adaptability to cope (Ingram et al., 2006; Schuler and Huber, 1993) will become key determinants of whether the new channel helps or hinders the goals of sales agents. Therefore, training salespersons to adapt to change may be critical to the overall success of both new and entrenched channels. Further research may investigate the role of salesperson training and salesperson collaboration in the development of internet channels as they relate to sales agent's perceived cannibalization and its impact. In view of the multi-channel business models followed by a large number of companies today, the jury is still out regarding the efficiency of integrating off-line and on-line selling activities. Against this backdrop, the role of sales agent's perceived cannibalization in the integration of off-line and on-line channels needs further exploration.
The SPC scale proffered in this study should facilitate empirical research on this important phenomenon. The findings of this study should provide encouragement for further investigation of the antecedents and consequences of the SPC construct. One major limitation of this study is that the data used to develop and validate the scale were from a single sample of insurance agents. Typically, separate samples are required for examining the psychometric properties of a new scale. Bollen (1989) contends that a new data set can be obtained by randomly dividing the initial data pool into two. However, future research may validate the scale using a new sample. Additionally, the results of this study may lack external validity because the sample used in this study is industry specific. Future research may also examine the relationships posited in this study using data from a different industry. Our failure to find a direct influence of SPC on commitment presents another research opportunity: the possibility of other intervening variables. Past research indicates that salespersons' skills and competencies are critical determinants of salesperson performance outcomes (Churchill et al., 1985). Consequently, future research may explore the intervening role of salesperson's skills and competencies between SPC and performance outcomes. Additionally, there may also be differences across sex, age, and experience for the relationships posited in this study.

Table I Extant research in cannibalization
Table II Sample descriptive statistics
Table III Reliability and validity statistics
Table IV Structural model results
[SECTION: Findings] The oldest and strongest emotion of mankind is fear (H.P. Lovecraft). The dawn of the internet era has led several companies to explore new and radical marketing channels. In fact, multi-channel use has become the norm rather than the exception (Frazier, 1999). By adding internet channels, companies hope to increase overall performance, consolidate existing markets and expand into new markets (Geyskens et al., 2002). Unfortunately, internet channels are not without potential problems: internet channels are likely to increase uncertainty about market allegiance, generating real risks for long-term business performance (Geyskens et al., 2002), as well as destroying the value of past investments (Chandy and Tellis, 1998). As Porter (2001, p. 73) promulgates, "it is widely assumed that the internet is cannibalistic [and] will replace [or supplement] all conventional ways of doing business." Furthermore, Trembly (2001) suggests that the ubiquity of the internet will result in lower commissions for sales agents and gradual attrition of sales agents. In concurrence, Stucker (1999) found that "60 percent of the carriers surveyed view the web as at least a moderate threat to the agent distribution system" (p. 8). This finding is corroborated by Hagerty (2005), who suggests that web-based brokerage firms have completely shaken up the real estate industry, thus, creating paranoia for the traditional real-estate agents. Hagerty (2005) describes the traditional real estate agents' perceptions of the internet channel as exasperating, demotivating, and cannibalistic in nature. In their study of the insurance industry, Eastman et al. (2002) found that insurance agents experiencing the addition of an internet channel felt insecure about their job. While the internet provides easier communication and increases interactions with customers, the internet also provides multiple options for insurance purchasers, thus increasing the chances of sales cannibalization. 
This trend has been confirmed by extant research, which suggests that the internet will continue to attract consumers, breed uncertainty, and undoubtedly change sales agents' perceptions of job security as well as their job performance (Frazier, 1999; Greenhalgh and Rosenblatt, 1984). Working primarily in economic and financial terms, marketing researchers have conceptualized cannibalization and assert that, in fact, the addition of internet channels may not generate significant cannibalization (Biyalogorsky and Naik, 2003; Chandy and Tellis, 1998; Deleersnyder et al., 2002; Ward and Morganosky, 2000). Others, however, contend that even if the financial impact is inconsequential, the psychological impact of cannibalization can influence marketers' performance outcomes (Geyskens et al., 2002). In this regard, Porter (2001) posits that salespeople fear internet channel additions in anticipation that the internet will cannibalize their sales. In addition, salespeople worry that internet channels will make them outmoded and eventually replace them. Elsewhere, Frazier (1999) contends that, regardless of the extent of actual cannibalization, sales agents' fears concerning cannibalization and the security of their jobs can subdue their efforts, break down long-standing relationships, reduce their commitment, and make them fearful of an uncertain future (see also Ashford et al., 1989; Gerstner and Hess, 1995; Greenhalgh and Rosenblatt, 1984; Jeuland and Shugan, 1983; Lal et al., 1996; McGuire and Staelin, 1983). This negative influence of perceived cannibalization on sales agents' job outcomes and relationships often offsets the potential gains (e.g., increased market penetration and decreased distribution cost) from adding internet channels (Porter, 2001).
Although past researchers have emphasized the importance of perceived cannibalization as a determinant of the benefits and risks of adding an internet channel, and the consequences of job insecurity, empirical research on cannibalization in the marketing literature remains scant, apart from the study by Gulati et al. (2002), which focused on the fear of disintermediation felt by sales agents. Gulati et al. (2002) position "fear of disintermediation" as an endogenous variable. In contrast, the current study positions SPC as an exogenous variable that influences the sales agent's psychological outcomes. The organizational behavior literature has found similar outcome variables to be significantly related to job insecurity - the perceived "powerlessness to maintain desired continuity in a threatening job situation" (Greenhalgh and Rosenblatt, 1984, p. 438). According to Greenhalgh and Rosenblatt (1984), one of the key reasons for job insecurity is the potential shrinkage of jobs due to external changes. Within the context of our study, the individual perception of shrinkage (cannibalization) is attributed to an external change brought about by the internet. Gulati et al. (2002) conceptualized the fear of disintermediation as a perception of complete loss of business. Our conceptualization of perceived cannibalization captures the subjective threat, which is the consequence of an objective threat. The individual's perceptual processes capture the subjective threat from the addition of an internet channel. This study aims to develop a scale that can be used to measure sales agents' perceived cannibalization of their earnings due to the addition of an internet channel. We establish the conceptual foundation for the scale and the scale's ability to demonstrate acceptable psychometric properties. The scale is then applied in the context of sales agents' perception of the addition of internet channels.
By doing so, the downstream outcomes of SPC on commitment and alienation from work can be modeled. While a job insecurity scale may have been used to capture the perceived powerlessness of the situation, existing job insecurity scales assess multiple features of job insecurity that are not relevant to our study. Job insecurity scales also do not tap into perceived threats from competing channels (cf. Ashford et al., 1989; Mauno et al., 2001). Lastly, this research study also argues that the influence of perceived cannibalization on sales agent job outcomes may also be contingent upon the state of environmental munificence, which refers to the extent of growth opportunities available in the market. Uncertainty reduction theory (Berger, 1979, 1986; Planalp and Honeycutt, 1985; Shannon and Weaver, 1949), the expectancy theory of motivation (Vroom, 1964), and congruity theory (Osgood and Tannenbaum, 1955) are used to ground the research. Uncertainty reduction theory explicates how the addition of alternatives in a given situation results in increased uncertainty and how these additions may be perceived as cannibalistic (cf. Berger and Calabrese, 1975; Heider, 1958). The expectancy theory of motivation argues that a high level of perceived cannibalization is likely to decrease a sales agent's belief that expending a given amount of effort will result in corresponding reward, thus, resulting in alienation from work (Vroom, 1964). Congruity theory suggests that the lack of expectation that effort will result in appropriate rewards could result in reduced commitment and increased alienation from work (Osgood and Tannenbaum, 1955). Specifically, this research offers a deeper understanding of the influence of the addition of the internet channel on sales agents' motivation. 
While recognizing that the internet is here to stay and that strategic channel decisions are unlikely to be made based on the views or psychological reactions of sales agents alone, incorporating the sales agent perspective does allow organizations to take a holistic view of their distribution system and make market-focused improvements that coordinate all customer touch points (Vargo and Lusch, 2004). In other words, understanding a sales agent's perceived cannibalization can aid managers in developing strategies to keep sales agents motivated when employing multiple competing marketing channels to reach customers. Although there is surprisingly little empirical work on channel cannibalization, several researchers have expressed concerns about the hazards of internet disintermediation (e.g. Garven, 2002; Gulati et al., 2002; Narayandas et al., 2002), a situation in which internet channels are added to entrenched channels (e.g. Alba et al., 1997; Brynjolfsson and Smith, 2000; Coughlan et al., 2001). Internet disintermediation results in the replacement of traditional channel partners with the internet (Narayandas et al., 2002). This phenomenon fortifies the common belief that internet channels potentially cannibalize the sales of entrenched channels (Porter, 2001). First, sales may shift from entrenched channels to new internet channels when the latter provide features that are more appealing to a target audience, such as a substantial amount of information on product characteristics, the possibility of customization, and considerable time savings (Alba et al., 1997). Second, the internet is likely to increase competition since consumers have better and quicker access to efficient shopping comparison websites. The resulting increase in price competition may explain why online prices for homogenous products are often found to be lower than those of conventional outlets (Brynjolfsson and Smith, 2000).
Consequently, sales may shift from conventional to internet channels. Third, total sales may also decrease should impulse purchases be reduced (Machlis, 1998). Indeed, not only are sales of existing channels cannibalized, but aggregate sales over all channels may also suffer because of the internet channel. Fourth, existing channels may view new internet channels as unwelcome competition. Consequently, the former may lose motivation and reduce support for the firm's products. This may, in turn, also result in brand switching towards the firm's competitors and, hence, decreased total sales (Coughlan et al., 2001).In spite of the above observations and much current debate about disintermediation and insecurity generated by internet channels (Mattila, 2002; Useem, 1999), only a handful of empirical studies have tangentially focused on the potentially cannibalizing consequences of adding the internet to the distribution chain. As shown in Table I, previous empirical studies have viewed cannibalization in terms of its effect on the total sales or overall financial value of the firm. What has yet to be investigated is the impact that perceptions of internet cannibalization have on the commitment and work alienation.This research study proffers the construct of sales agent's perceived cannibalization, which refers to salespeople's perceptions of the extent to which sales opportunities are lost to an internet channel. SPC reflects an attitudinal reaction of a sales agent to challenges that may occur due to the addition of the internet channel. If sales agents view the addition of an internet channel by the firm as a high threat to their current and future sales outcomes, perceived cannibalization should be high. Contrarily, if agents do not consider the addition of an internet channel by their firm as a menace to their current and future sales, perceived cannibalization should be low. 
Scholars have contended that when a firm begins selling through the internet, sales agents selling through existing channels are likely to perceive losing market share and customers to online sales (Frazier, 1999; Narayandas et al., 2002; Porter, 2001).

Relationship commitment and its relationship with sales agent's perceived cannibalization
Commitment is an important component of marketing relationships (Morgan and Hunt, 1994). Relationship commitment is defined in the literature as "an enduring desire to maintain a valued relationship" (Morgan and Hunt, 1994, p. 23). In other words, relationship commitment captures sales agents' willingness to maintain a relationship with their company. Plausibly, it can be argued that a channel member who is satisfied with the economic dimension of the relationship is also committed to the relationship (Morgan and Hunt, 1994). Consistent with the expectancy theory of motivation, this research contends that SPC negatively influences sales agents' relationship commitment. According to expectancy theory, individuals will be motivated if they believe that "expending a given amount of effort on a task will lead to an improved level of performance on some dimensions" (Futrell, 2001, p. 278). In contrast, if individuals believe their efforts will not produce expected results, motivation will suffer (Vroom, 1964), and these relationships will suffer with it. Perceptions of a high level of uncertainty in the outcome of sales agents' effort thus result in a lack of interest in maintaining a long-term relationship. Hence, the following is hypothesized:

H1. SPC negatively influences sales agents' relationship commitment.

Alienation from work and its relationship with sales agent's perceived cannibalization
Alienation from work refers to an attitude in which an employee expresses a lack of concern about work and works with low enthusiasm (Moch, 1980).
Specifically, alienation from work is a psychological separation from work due to a perceived mismatch between effort and outcome. In concurrence with past research, this study contends that agents' perception of sales cannibalization may lead to psychological severance from work. This psychological severance results from the perception that work will bring sub-optimal outcomes (Moch, 1980). Furthermore, past research indicates that sales agents' perceived cannibalization, due to the addition of an internet channel, increases uncertainty (Berger, 1979, 1986; Planalp and Honeycutt, 1985; Shannon and Weaver, 1949). Increased uncertainty related to work or one's role in the organization may result in psychological detachment from work (Allen and LaFollette, 1977). Additionally, past research indicates that job insecurity influences work attitudes (Hellgren et al., 1999). In other words, individuals will not work optimally when their job is in jeopardy. The introduction of an internet channel should breed insecurity among sales agents who perceive that the internet channel reduces their earning potential (Porter, 2001). This view is supported by extant literature suggesting that employees working in an environment that engenders high job insecurity will experience a high level of work alienation (Blauner, 1964; Shepard, 1971). Consequently, sales agents' perception of cannibalization may be positively linked to alienation from work:

H2. SPC positively influences sales agents' alienation from work.

Environmental munificence and its moderating influence
Environmental munificence is expected to play a contingency role in relation to perceived cannibalization's influence on salesperson commitment (Celly and Frazier, 1996). Our contention is rooted in past research on environmental munificence. Veliyath (1996) contends that it is easier for channel members to achieve their sales goals in a high-munificence environment than in a low-munificence environment.
This is because business risk decreases and industry performance increases with an increase in environmental munificence. Munificence increases the opportunities for performance by providing many growth opportunities. These opportunities can reduce the perceived challenges associated with multiple channels serving the same market. Consequently, salespeople working in a high-munificence environment may anticipate higher sales growth, which would make up for the loss of sales due to the introduction of an internet channel. In a low-munificence environment, however, growth opportunities are limited; hence, the addition of an internet channel can further exacerbate the distress of salespersons. Consequently, the effect of perceived cannibalization on commitment and alienation will be limited in a highly munificent environment, while in a minimally munificent environment, SPC's impact on commitment and alienation from work will be strong. This view concurs with Chisholm and Cummings (1979), who suggest that environmental conditions may moderate the relationship between work characteristics and outcomes. Therefore:

H3a. The relationship between SPC and commitment will be moderated by environmental munificence such that when environmental munificence is low, the relationship between perceived cannibalization and commitment will be stronger.

H3b. The relationship between SPC and alienation from work will be moderated by environmental munificence such that when environmental munificence is low, the relationship between perceived cannibalization and alienation from work will be stronger.

Scale development
To operationalize SPC, a pool of eight items was initially generated as the potential measure (Churchill, 1979; DeVellis, 1991). These items were reviewed in an interview format by 20 insurance agents, who confirmed anxiety towards the introduction of an online sales channel that would increase competition and decrease their earnings.
A panel of three research survey experts who conduct research in this domain also reviewed the items. Based on the transcribed interviews of the insurance agents and the comments of the research experts, the number of items was reduced to five. These items reflect the impact that the introduction of online channels might have on the insurance agents' perception of their current and future sales performance. Specifically, the items relate to how the addition of an internet channel will be perceived by insurance agents as a likely driver of reduction in clientele and profits. In other words, the measures are intended to capture insurance agents' perception that their profits and growth opportunities are cannibalized by the addition of internet channels. After discussing the sample, the psychometric properties of the newly developed perceived cannibalization scale are described. The scales used for measuring commitment and environmental munificence are well established in the literature. Specifically, a three-item relationship commitment scale developed by Kumar et al. (1995) is used in this study. The scale measures the affective commitment of sales agents to their insurance company. A four-item scale was used to measure alienation from work (Miller, 1967; Agarwal, 1993). We used Dwyer and Oh's (1987) five-item scale to measure environmental munificence. The scale measures an insurance salesperson's perception of market opportunities for growth and profit. The respondents were presented with a factual scenario pertaining to the future of the insurance industry. After considering the scenario, respondents were asked to rate their perception of growth opportunities.

Sample
A list of insurance agents in North Texas, obtained through a private vendor, was compared with the insurance agent listings in the yellow pages of the greater Dallas-Fort Worth area and other major towns in North Texas.
After this comparison, a contact pool of 2,108 insurance sales agents was developed, and all 2,108 were contacted to request their participation in the survey. The questionnaires and a letter elucidating the nature and purpose of the study were mailed to the contact pool of insurance agents by the primary researcher of this study. Of the 578 returned questionnaires, 67 were incomplete, resulting in 511 usable questionnaires (a response rate of 24.2 percent). There were 370 male and 141 female respondents, with an average work experience in the insurance industry of seven years, indicating that the respondents had sufficient industry experience to provide a sound basis for responding to the survey. Participation was completely voluntary. Valid respondent characteristics are reported in Table II.

Measurement assessment procedures
For the purpose of cross-validation, the data were randomly divided into two sub-samples: training and validation. Thirty percent of randomly drawn cases formed the training sample (n=150) and the remaining 70 percent constituted the validation sample (n=361). Scale properties were first assessed with the training sample and then cross-validated with the validation sample. In addition to the SPC scale, scales for three theoretically related constructs (relationship commitment, alienation from work, and munificence) were included in the analyses. The analysis began with exploratory factor analysis on the training sample, followed by confirmatory factor analysis on the validation sample.

Exploratory factor analysis
Using the principal component method and varimax rotation, all 17 items belonging to the 4 constructs were factor analyzed with the training sample (n=150). As expected, four factors emerged with eigenvalues of 4.652, 3.640, 3.552, and 2.884, together accounting for 81.14 percent of the variance.
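The extraction and retention logic described above can be sketched in code. The snippet below is an illustrative reconstruction on synthetic data, not the study's data: it mimics the 17-item/4-construct layout, applies the eigenvalue > 1 retention rule to the item correlation matrix, and varimax-rotates the principal component loadings.

```python
# Illustrative sketch of the reported EFA step on synthetic stand-in data:
# principal components of the item correlation matrix, the eigenvalue > 1
# retention rule, and varimax rotation. Item counts mirror the study's scales.
import numpy as np

def varimax(L, max_iter=100, tol=1e-6):
    """Rotate a loading matrix L (items x factors) towards varimax simple structure."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < d * (1 + tol):
            break
        d = s.sum()
    return L @ R

rng = np.random.default_rng(0)
n, items = 150, [5, 3, 4, 5]   # SPC, commitment, alienation, munificence item counts
F = rng.normal(size=(n, 4))    # four latent factors
X = np.column_stack([0.8 * F[:, f] + 0.4 * rng.normal(size=n)
                     for f, k in enumerate(items) for _ in range(k)])

eigvals, eigvecs = np.linalg.eigh(np.corrcoef(X.T))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_keep = int((eigvals > 1.0).sum())                 # eigenvalue > 1 rule
explained = eigvals[:n_keep].sum() / eigvals.sum()  # cumulative variance share
loadings = varimax(eigvecs[:, :n_keep] * np.sqrt(eigvals[:n_keep]))
print(n_keep, round(explained, 3))
```

With clean synthetic data the four-factor structure is recovered, analogous to the four factors and high explained variance the study reports.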
All five items of perceived cannibalization, three items of commitment, four items of alienation from work, and five items of environmental munificence loaded clearly on their respective constructs. Factor loadings ranged from 0.612 to 0.966.

Confirmatory factor analysis
Next, confirmatory factor analysis (CFA) was conducted with the validation sample (n=361). The CFA model had 17 items: 5 for perceived cannibalization, 3 for commitment, 4 for alienation from work, and 5 for environmental munificence. The initial model fit was not optimal. Based on low factor loadings (lower than 0.40), high residuals (normalized residual >2.58), and modification indices, one item from the SPC scale and one item from the environmental munificence scale were deleted. The essence of the deleted items was retained by the other items of their respective scales, so content validity was not significantly reduced. The resulting fit indices demonstrated a good fit: χ2=109.409 (p=0.028; 62.4 percent of the variance explained), df=83, GFI=0.962, AGFI=0.944, CFI=0.99, RMSEA=0.029, PCLOSE=0.993, and Hoelter's 0.05 and 0.01 indices of 347 and 382, respectively. GFI and AGFI values above 0.90 are indicative of a good fit, as is an RMSEA below 0.05, and a PCLOSE above 0.50 suggests the RMSEA is acceptable. Lastly, Hoelter's 0.05 and 0.01 indices exceeded 200, indicating that the validation sample size was adequate.

Convergent and discriminant validity, AVE, and composite reliability
Results for the validation sample showed that the critical ratios of all the indicators were significant (critical ratios >1.96, p<0.05), ranging from 6.764 to 96.716. These results were taken as evidence of acceptable convergent validity (Gerbing and Anderson, 1988).
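The reported RMSEA value can be cross-checked from the other reported fit statistics. The sketch below applies the standard RMSEA point-estimate formula to the χ2, df, and sample size given above; it is a plausibility check, not the authors' software output.

```python
# Checking the reported CFA fit statistic: the RMSEA point estimate is a
# simple function of the model chi-square, degrees of freedom, and sample size.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported for the validation-sample CFA (n = 361).
value = rmsea(109.409, 83, 361)
print(round(value, 4))   # ~0.0297, in line with the reported RMSEA of 0.029
```

That the formula reproduces the reported 0.029 (to rounding) suggests the published χ2, df, and n are internally consistent.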
The composite reliabilities were 0.905, 0.834, 0.930, and 0.864, while the average variance extracted (AVE) for the constructs of SPC, commitment, alienation from work, and environmental munificence was 0.706, 0.627, 0.772, and 0.615, respectively (Bagozzi and Yi, 1988) (see Table III). The AVE for each factor exceeded the squared correlations between that factor and all other factors, indicating acceptable discriminant validity. Additionally, none of the confidence intervals (± two standard errors) for the estimated correlations between the constructs included 1.0, providing further support for adequate discriminant validity (Anderson and Gerbing, 1988).

Common method bias
Since the data for this study were obtained from a single survey, common method variance was possible. Following Podsakoff and Organ (1986), Harman's one-factor test, in which all variables are hypothesized to load on a single factor representing the common method, was employed. The principal component factor analysis revealed four factors, each with an eigenvalue greater than 1.0, together accounting for over 81 percent of the total variance; the first factor alone accounted for 28 percent of the variance. Additionally, no high correlation (r>0.90) was found between any pair of constructs. Bagozzi et al. (1991) indicate that the presence of common method bias usually results in extremely high correlations between variables. Hence, common method bias does not appear to be a serious concern in this study.

Non-response bias
To examine non-response bias, the mean responses of early and late respondents were compared (Armstrong and Overton, 1977).
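The composite reliability and AVE figures reported above are direct functions of standardized factor loadings. The sketch below shows the standard formulas; the loadings and the inter-construct correlation used here are hypothetical illustrations, not the study's CFA estimates.

```python
# Composite reliability and average variance extracted (AVE) computed from
# standardized loadings. Loadings below are hypothetical, for illustration only.
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + error)

def ave(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

spc_loadings = [0.85, 0.82, 0.88, 0.80]   # hypothetical 4-item SPC loadings
cr, v = composite_reliability(spc_loadings), ave(spc_loadings)
print(round(cr, 3), round(v, 3))

# Fornell-Larcker style discriminant check: AVE should exceed the squared
# correlation with every other construct (r = 0.45 here is hypothetical).
assert v > 0.45 ** 2
```

Loadings in the 0.80-0.88 range yield CR near 0.90 and AVE near 0.70, comparable in magnitude to the SPC values the study reports.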
Independent t-tests of the mean responses on all four constructs showed no statistically significant differences (at the 0.05 level); thus, non-response bias should not be a problem.

Hypotheses test
To test the hypotheses, a structural model with four constructs was estimated: SPC, the interaction term of SPC and environmental munificence (SPC*EM), alienation from work, and commitment. We followed the procedure recommended by Aiken and West (1991) and Ping (1995) for creating and using the interaction term: the items of SPC and environmental munificence were mean centered and cross-multiplied to create the interaction term. In the proposed model, commitment and alienation are endogenous, whereas the remaining two constructs are exogenous. The structural model was analyzed with the combined sample of 511 cases. The fit indices demonstrated a reasonable fit: χ2=288.556 (p<0.001; 60.2 percent of the variance explained), df=86, GFI=0.931, AGFI=0.903, CFI=0.96, NFI=0.944, RMSEA=0.058. Even though the direction of the path was negative, as per H1, the SPC-commitment link was not significant (estimate=-0.031, t=-0.877, p=0.381); that is, H1 was not supported, and no main effect of SPC on commitment was found. The SPC-alienation from work link was significant (estimate=0.068, t=1.981, p=0.045); thus, H2 was supported. Further, the SPC*EM-commitment link (estimate=0.067, t=2.475, p=0.013) and the SPC*EM-alienation from work link (estimate=-0.064, t=-2.767, p=0.006) were both significant, supporting H3a and H3b. Consequently, the influence of SPC on commitment and alienation from work is moderated by environmental munificence. The results are summarized in Table IV. In this study, the concept of perceived cannibalization is tested in a sales context. Many past studies have suggested that sales agents' perceived cannibalization may arise from the addition of internet channels (Hagerty, 2005; Porter, 2001).
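The mean-centering and cross-multiplication step used to build the SPC*EM interaction term can be sketched as below. The data are simulated stand-ins for the SPC and environmental munificence (EM) items, and the single product indicator is in the spirit of Ping (1995) and Aiken and West (1991), not the authors' exact estimation setup.

```python
# Sketch of building the SPC x environmental-munificence (SPC*EM) interaction
# term: mean-center the indicators, then cross-multiply the centered composites.
# All data here are simulated stand-ins, not the study's survey responses.
import numpy as np

rng = np.random.default_rng(1)
n = 511                                      # combined sample size in the study
spc_items = rng.normal(3.5, 1.0, size=(n, 4))  # hypothetical Likert-type SPC items
em_items = rng.normal(3.0, 1.0, size=(n, 4))   # hypothetical EM items

spc_c = spc_items - spc_items.mean(axis=0)   # mean-center each indicator
em_c = em_items - em_items.mean(axis=0)

# Single product indicator: (sum of centered SPC items) x (sum of centered EM items)
interaction = spc_c.sum(axis=1) * em_c.sum(axis=1)

# Centering gives each indicator a (numerically) zero mean, which reduces
# collinearity between the product term and its component constructs.
print(interaction.shape, bool(np.allclose(spc_c.mean(axis=0), 0.0)))
```

The resulting vector is what enters the structural model as the SPC*EM exogenous term alongside SPC itself.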
Some warnings are evident with respect to the potential negative outcomes of sales agent's perceived cannibalization on various job aspects. Anecdotal evidence has surfaced showing that salespersons perceive internet channels as cannibalistic to their current and future sales. Yet no attempt had been made to either conceptualize or operationalize the construct of sales agent's perceived cannibalization. Previous conceptualizations of inter-channel cannibalization were all based on economic terms (Geyskens et al., 2002; Ward and Morganosky, 2000; Deleersnyder et al., 2002) and, hence, were considered myopic by Porter (2001). Indeed, Porter (2001) argued that the psychological impact of the addition of new channels on existing channels has to be taken into account to understand fully the impact of sales cannibalization. Our contention follows Porter's proposition and extends his conceptualization of cannibalization by examining the impact of the addition of an internet channel in the sales domain. After conceptualizing the construct, a multi-item scale was developed for measuring sales agent's perceived cannibalization. The properties of the scale were assessed following procedures recommended by Churchill (1979), Anderson and Gerbing (1988), Gerbing and Anderson (1988), and Bagozzi and Yi (1988). This study utilizes both exploratory and confirmatory factor analysis to assess the reliability and validity of the SPC scale. Our scale opens a window of opportunity for empirical research on channel cannibalization in general and sales channels in particular. In addition to advancing the scale, four hypotheses are developed, centering on the principal construct of sales agent's perceived cannibalization. By demonstrating that environmental munificence moderates the effect of perceived cannibalization on commitment, this research study demonstrates that perceived cannibalization is not universally damaging to relationship commitment.
Rather, only under a low-munificence environment does perceived cannibalization significantly reduce salespersons' relationship commitment. Furthermore, perceived cannibalization increases alienation from work, and its effect is more severe in a low-munificence environment. This research study demonstrated that the perception of cannibalization can reduce sales agents' commitment to relationships with their company in a low-munificence environment. In addition, it can enhance alienation from work, which may result in low performance outcomes in the long run. Consequently, the view that the internet channel will not cannibalize sales (Deleersnyder et al., 2002) is myopic. It may be possible for salespersons to transform their negative perceptions of cannibalization into motivators. Specifically, while internet channels provide customers easy access to core business processes such as quotations, policy issuance, and claims, salespersons may be able to develop high levels of competence in areas that internet channels cannot easily "learn". For example, a sales agent may be able to respond to requests (e.g. from brokers for a quote) and make requests (e.g. to reinsurers for a confirmation of coverage) using electronic channels as tools to carry out decision-making and communication tasks that are unique to the agent's skill set. Thereby, even in a low-munificence environment, salespersons may overcome the stress of the perceived cannibalization of internet channels. Leonard-Barton (1995) contends that sales agents who are committed to an existing business structure may show a more anxious reaction towards the introduction of a new channel such as the internet. Consequently, sales agents who have high commitment to the existing format could have a stronger perception of cannibalization than those who have less commitment to the existing business format.
Thus, future researchers may explore the moderating effect of commitment to the existing business format on the relationship between SPC and indicators of salesperson performance. One of the objectives of this study is to offer prescriptive and descriptive insights to firms regarding perceived cannibalization and its possible consequences for sales agents' motivation. Specifically, when firms operate in environments that characteristically offer few sales opportunities, appropriate measures should be taken to counteract salespersons' perceptions of cannibalization. For example, a firm could design effective incentive systems to reduce negative feelings towards competing internet channels. One way of doing so is to provide incentives to sales agents who serve clients who have purchased insurance online. Another option is to train salespersons on how to make the internet beneficial to their own sales operations. Since change often necessitates the acquisition of new skills, alterations in salespersons' repertoires and adaptability to cope (Ingram et al., 2006; Schuler and Huber, 1993) will become key determinants of whether the new channel helps or hinders the goals of sales agents. Therefore, training salespersons to adapt to change may be critical to the overall success of both new and entrenched channels. Further research may investigate the role of salesperson training and salesperson collaboration in the development of internet channels as they relate to sales agent's perceived cannibalization and its impact. In view of the multi-channel business models followed by a large number of companies today, the jury is still out regarding the efficiency of integrating off-line and on-line selling activities. Against this backdrop, the role of sales agent's perceived cannibalization in the integration of off-line and on-line channels needs further exploration.
The SPC scale proffered in this study should facilitate empirical research on this important phenomenon. The findings of this study should provide encouragement for further investigation of the antecedents and consequences of the SPC construct. One major limitation of this study is that the data used to develop and validate the scale were from a single sample of insurance agents. Typically, separate samples are required for examining the psychometric properties of a new scale. Bollen (1989) contends that a new data set can be obtained by randomly dividing the initial data pool into two; nevertheless, future research may validate the scale using a new sample. Additionally, the results of this study may lack external validity because the sample used is industry specific. Future research may also examine the relationships posited in this study using data from a different industry. Our failure to find a direct influence of SPC on commitment presents another research opportunity: the possibility of other intervening variables. Past research indicates that salespersons' skills and competencies are critical determinants of salesperson performance outcomes (Churchill et al., 1985). Consequently, future research may explore the intervening role of salesperson's skills and competencies between SPC and performance outcomes. Additionally, there may also be differences across sex, age, and experience in the relationships posited in this study.

Table I Extant research in cannibalization
Table II Sample descriptive statistics
Table III Reliability and validity statistics
Table IV Structural model results
- First, a multi-item scale was conceptualized and developed for measuring SPC. Second, the properties of the scale were assessed following procedures recommended by Churchill, Anderson, Gerbing, Bagozzi, and Yi; the scale demonstrated satisfactory reliability and validity. Third, SPC was shown to be not universally damaging to commitment: only under a low-munificence environment does perceived cannibalization significantly reduce salespersons' commitment. Additionally, the severity of the influence of SPC on alienation from work increases in a low-munificence environment.
[SECTION: Value] The oldest and strongest emotion of mankind is fear (H.P. Lovecraft). The dawn of the internet era has led several companies to explore new and radical marketing channels. In fact, multi-channel use has become the norm rather than the exception (Frazier, 1999). By adding internet channels, companies hope to increase overall performance, consolidate existing markets and expand into new markets (Geyskens et al., 2002). Unfortunately, internet channels are not without potential problems: they are likely to increase uncertainty about market allegiance, generating real risks for long-term business performance (Geyskens et al., 2002), as well as destroying the value of past investments (Chandy and Tellis, 1998). As Porter (2001, p. 73) observes, "it is widely assumed that the internet is cannibalistic [and] will replace [or supplement] all conventional ways of doing business." Furthermore, Trembly (2001) suggests that the ubiquity of the internet will result in lower commissions for sales agents and gradual attrition of sales agents. In concurrence, Stucker (1999) found that "60 percent of the carriers surveyed view the web as at least a moderate threat to the agent distribution system" (p. 8). This finding is corroborated by Hagerty (2005), who suggests that web-based brokerage firms have completely shaken up the real estate industry, creating paranoia among traditional real-estate agents. Hagerty (2005) describes the traditional real estate agents' perceptions of the internet channel as exasperating, demotivating, and cannibalistic in nature. In their study of the insurance industry, Eastman et al. (2002) found that insurance agents experiencing the addition of an internet channel felt insecure about their jobs. While the internet provides easier communication and increases interactions with customers, it also provides multiple options for insurance purchasers, thus increasing the chances of sales cannibalization.
This trend has been confirmed by extant research, which suggests that the internet will continue to attract consumers, breed uncertainty, and undoubtedly change sales agents' perceptions of job security as well as their job performance (Frazier, 1999; Greenhalgh and Rosenblatt, 1984). Working primarily in economic and financial terms, marketing researchers have conceptualized cannibalization and asserted that the addition of internet channels may, in fact, not generate significant cannibalization (Biyalogorsky and Naik, 2003; Chandy and Tellis, 1998; Deleersnyder et al., 2002; Ward and Morganosky, 2000). Others, however, contend that even if the financial impact is inconsequential, the psychological impact of cannibalization can influence marketers' performance outcomes (Geyskens et al., 2002). In this regard, Porter (2001) posits that salespeople fear internet channel additions in anticipation that the internet will cannibalize their sales. In addition, salespeople worry that internet channels will make them outmoded and eventually replace them. Elsewhere, Frazier (1999) contends that, regardless of the extent of actual cannibalization, sales agents' fears concerning cannibalization and the security of their jobs can subdue their efforts, break down long-standing relationships, reduce their commitment, and make them fearful of an uncertain future (see also Ashford et al., 1989; Gerstner and Hess, 1995; Greenhalgh and Rosenblatt, 1984; Jeuland and Shugan, 1983; Lal et al., 1996; McGuire and Staelin, 1983). This negative influence of perceived cannibalization on sales agents' job outcomes and relationships often offsets the potential gains (e.g. increased market penetration and decreased distribution cost) from adding internet channels (Porter, 2001).
Although past researchers have emphasized the importance of perceived cannibalization as a determinant of the benefits and risks of adding an internet channel and the consequences of job insecurity, empirical research on cannibalization in the marketing literature remains scant, apart from the study by Gulati et al. (2002), which focused on the fear of disintermediation felt by sales agents. Gulati et al. (2002) position "fear of disintermediation" as an endogenous variable. In contrast, the current study positions SPC as an exogenous variable that influences the sales agent's psychological outcomes. The organizational behavior literature has found similar outcome variables to be significantly related to job insecurity, the perceived "powerlessness to maintain desired continuity in a threatening job situation" (Greenhalgh and Rosenblatt, 1984, p. 438). According to Greenhalgh and Rosenblatt (1984), one of the key reasons for job insecurity is the potential shrinkage of jobs due to external changes. Within the context of our study, the individual's perception of shrinkage (cannibalization) is attributed to an external change brought about by the internet. Gulati et al. (2002) conceptualized the fear of disintermediation as a perception of complete loss of business. Our conceptualization of perceived cannibalization captures the subjective threat, which is the consequence of an objective threat; the individual's perceptual processes translate the addition of an internet channel into this subjective threat. This study aims to develop a scale that can be used to measure sales agents' perceived cannibalization of their earnings due to the addition of an internet channel. We establish the conceptual foundation for the scale and demonstrate its acceptable psychometric properties. The scale is then applied in the context of sales agents' perception of the addition of internet channels.
By doing so, the downstream outcomes of SPC on commitment and alienation from work can be modeled. While a job insecurity scale might have been used to capture the perceived powerlessness of the situation, existing job insecurity scales assess multiple features of job insecurity that are not relevant to our study, and they do not tap into perceived threats from competing channels (cf. Ashford et al., 1989; Mauno et al., 2001). Lastly, this research study also argues that the influence of perceived cannibalization on sales agents' job outcomes may be contingent upon the state of environmental munificence, which refers to the extent of growth opportunities available in the market. Uncertainty reduction theory (Berger, 1979, 1986; Planalp and Honeycutt, 1985; Shannon and Weaver, 1949), the expectancy theory of motivation (Vroom, 1964), and congruity theory (Osgood and Tannenbaum, 1955) are used to ground the research. Uncertainty reduction theory explicates how the addition of alternatives in a given situation results in increased uncertainty and how these additions may be perceived as cannibalistic (cf. Berger and Calabrese, 1975; Heider, 1958). The expectancy theory of motivation argues that a high level of perceived cannibalization is likely to decrease a sales agent's belief that expending a given amount of effort will result in a corresponding reward, thus resulting in alienation from work (Vroom, 1964). Congruity theory suggests that the lack of an expectation that effort will result in appropriate rewards could result in reduced commitment and increased alienation from work (Osgood and Tannenbaum, 1955). Specifically, this research offers a deeper understanding of the influence of the addition of the internet channel on sales agents' motivation.
While recognizing that the internet is here to stay and that strategic channel decisions will unlikely be made based on the views or psychological reactions of sales agents alone, incorporating the sales agent perspective does allow organizations to take a holistic view of their distribution system and make market-focused improvements that coordinate all customer touch points (Vargo and Lusch, 2004). In other words, understanding a sales agent's perceived cannibalization can aid managers in developing strategies to keep the sales agents motivated when employing multiple competing marketing channels to reach customers. Although there is surprisingly little empirical work on channel cannibalization, several researchers have expressed their concerns for the hazards of internet disintermediation (e.g. Garven, 2002; Gulati et al., 2002; Narayandas et al., 2002); a situation in which internet channels are added to entrenched channels (e.g. Alba et al., 1997; Brynjolfsson and Smith, 2000; Coughlan et al., 2001). Internet disintermediation results in the replacement of traditional channel partners with the Internet (Narayandas et al., 2002). This phenomenon fortifies the common belief that internet channels potentially cannibalize the sales of entrenched channels (Porter, 2001).First, sales may shift from entrenched channels to new internet channels when the latter provides features that are more appealing to a target audience, such as a substantial amount of information on the products' characteristics, their possible customization and consistent time savings (Alba et al., 1997). Second, the Internet is likely to increase competition since the consumers have better and quicker access to efficient shopping comparison websites. The resulting increase in price competition may explain why online prices for homogenous products are often found to be lower than those of conventional outlets (Brynjolfsson and Smith, 2000). 
Consequently, sales may shift from conventional to internet channels. Third, total sales may also decrease should impulse purchases be reduced (Machlis, 1998). Indeed, not only are sales of existing channels cannibalized, but aggregate sales over all channels may also suffer because of the internet channel. Fourth, existing channels may view new internet channels as unwelcome competition. Consequently, the former may lose motivation and reduce support for the firm's products. This may, in turn, also result in brand switching towards the firm's competitors and, hence, decreased total sales (Coughlan et al., 2001). In spite of the above observations and much current debate about disintermediation and insecurity generated by internet channels (Mattila, 2002; Useem, 1999), only a handful of empirical studies have tangentially focused on the potentially cannibalizing consequences of adding the internet to the distribution chain. As shown in Table I, previous empirical studies have viewed cannibalization in terms of its effect on the total sales or overall financial value of the firm. What has yet to be investigated is the impact that perceptions of internet cannibalization have on commitment and work alienation. This research study proffers the construct of sales agent's perceived cannibalization, which refers to salespeople's perceptions of the extent to which sales opportunities are lost to an internet channel. SPC reflects an attitudinal reaction of a sales agent to challenges that may occur due to the addition of the internet channel. If sales agents view the addition of an internet channel by the firm as a strong threat to their current and future sales outcomes, perceived cannibalization should be high. Conversely, if agents do not consider the addition of an internet channel by their firm a menace to their current and future sales, perceived cannibalization should be low.
Scholars have contended that when a firm begins selling through the internet, sales agents selling through existing channels are likely to perceive losing market share and customers to online sales (Frazier, 1999; Narayandas et al., 2002; Porter, 2001).

Relationship commitment and its relationship with sales agent's perceived cannibalization
Commitment is an important component of marketing relationships (Morgan and Hunt, 1994). Relationship commitment is defined in the literature as "an enduring desire to maintain a valued relationship" (Morgan and Hunt, 1994, p. 23). In other words, relationship commitment captures sales agents' willingness to maintain a relationship with their company. Plausibly, it can be argued that a channel member who is satisfied with the economic dimension of the relationship is also committed to the relationship (Morgan and Hunt, 1994). Concurrent with the expectancy theory of motivation argument, this research contends that SPC negatively influences sales agents' relationship commitment. According to expectancy theory, individuals will be motivated if they believe that "expending a given amount of effort on a task will lead to an improved level of performance on some dimensions" (Futrell, 2001, p. 278). In contrast, if individuals believe their efforts will not produce expected results, motivation will suffer (Vroom, 1964), along with these relationships. Hence, perceptions of a high level of uncertainty in the outcome of sales agents' effort result in a lack of interest in maintaining a long-term relationship. Thus, the following is hypothesized:

H1. SPC negatively influences sales agents' relationship commitment.

Alienation from work and its relationship with sales agent's perceived cannibalization
Alienation from work refers to an attitude in which an employee expresses lack of concern about work and works with low enthusiasm (Moch, 1980).
Particularly, alienation from work is a psychological separation from work due to a perceived mismatch between effort and outcome. In concurrence with past research, this study contends that agents' perception of sales cannibalization may lead to psychological severance from work. This psychological severance results from the perception that work will bring sub-optimal outcomes (Moch, 1980). Furthermore, past research indicates that sales agents' perceived cannibalization, due to the addition of an internet channel, increases uncertainty (Berger, 1979, 1986; Planalp and Honeycutt, 1985; Shannon and Weaver, 1949). The increased uncertainty related to work or role in the organization may result in psychological detachment from work (Allen and LaFollette, 1977). Additionally, past research indicates that job insecurity influences work attitudes (Hellgren et al., 1999). In other words, individuals will not work optimally when their job is in jeopardy. The introduction of an internet channel should breed insecurity for sales agents who perceive that the internet channel reduces their earning potential (Porter, 2001). This view is supported by extant literature suggesting that employees working in an environment that engenders high job insecurity will experience a high level of work alienation (Blauner, 1964; Shepard, 1971). Consequently, sales agents' perception of cannibalization may be positively linked to alienation from work:

H2. SPC positively influences sales agents' alienation from work.

Environmental munificence and its moderating influence
Environmental munificence is expected to play a contingency role in relation to perceived cannibalization's influence on salesperson commitment (Celly and Frazier, 1996). Our contention is rooted in past research on environmental munificence. Veliyath (1996) contends that it is easier for channel members to achieve their sales goals in a high munificence environment than in a low munificence environment.
This is because business risk decreases and industry performance increases as environmental munificence increases. Munificence increases the opportunities for performance by providing many growth opportunities. These opportunities can reduce the perceived challenges associated with multiple channels serving the same market. Consequently, salespeople working in a high munificence environment may anticipate higher sales growth, which would make up for the loss of sales due to the introduction of an internet channel. However, in a low munificence environment, growth opportunities are limited; hence, the addition of an internet channel can further exacerbate the duress of salespersons. Consequently, the effect of perceived cannibalization on commitment and alienation will be limited in a highly munificent environment, while in a minimally munificent environment, SPC's impact on commitment and alienation from work will be strong. This view concurs with Chisholm and Cummings (1979), who suggest that environmental conditions may moderate the relationship between work characteristics and outcomes. Therefore:

H3a. The relationship between SPC and commitment will be moderated by environmental munificence such that when environmental munificence is low the relationship between perceived cannibalization and commitment will be stronger.

H3b. The relationship between SPC and alienation from work will be moderated by environmental munificence such that when environmental munificence is low the relationship between perceived cannibalization and alienation from work will be stronger.

Scale development
To operationalize SPC, a pool of eight items was initially generated as the potential measure (Churchill, 1979; DeVellis, 1991). These items were reviewed in an interview format by 20 insurance agents, who confirmed anxiety about the introduction of an online sales channel that would increase competition and decrease their earnings.
A panel of three research survey experts who engage in research in this domain also reviewed the items. Considering the transcribed interviews of the insurance agents and the comments of the research experts, the number of items was reduced to five. These items reflect the impact that the introduction of online channels might have on the insurance agents' perception of their current and future sales performance. Specifically, the items relate to how the addition of an internet channel will be perceived by insurance agents as a likely driver of reduction in clientele and profits. In other words, the measures are intended to describe insurance agents' perception that their profits and growth opportunities are cannibalized by the addition of internet channels. After discussing the sample, the psychometric properties of the newly developed perceived cannibalization scale are described. The scales used for measuring commitment and environmental munificence are well established in the literature. Specifically, a three-item relationship commitment scale developed by Kumar et al. (1995) is used in this study. The scale measures the affective commitment of sales agents to their insurance company. A four-item scale was used to measure alienation from work (Miller, 1967; Agarwal, 1993). We used Dwyer and Oh's (1987) five-item scale to measure environmental munificence. The scale measures an insurance salesperson's perception of market opportunities for growth and profit. The respondents were presented with a factual scenario pertaining to the future of the insurance industry. After considering the scenario, respondents were asked to rate their perception of growth opportunities.

Sample
A list of insurance agents in North Texas, which had been obtained through a private vendor, was compared with the insurance agent listings in the yellow pages of the greater Dallas-Fort Worth area and other major towns in North Texas.
After this comparison, a contact pool of 2,108 insurance sales agents was developed. All 2,108 sales agents were contacted to request their participation in the survey. The questionnaires and a letter elucidating the nature and purpose of the study were mailed to the contact pool of insurance agents by the primary researcher of this study. Of the 578 returned questionnaires, 67 were incomplete, resulting in 511 usable questionnaires (response rate 24.2 percent). There were 370 male and 141 female respondents, with an average work experience in the insurance industry of seven years. The resulting valid responses indicate that the respondents have the experience in the insurance industry to provide a basis for responding to the survey. Participation was completely voluntary. Valid respondent characteristics are reported in Table II.

Measurement assessment procedures
For the purpose of cross-validation, the data were randomly divided into two sub-samples, training and validation. Thirty percent of randomly drawn cases formed the training sample (n=150) and the remaining 70 percent constituted the validation sample (n=361). Initially, scale properties were assessed with the training sample and then cross-validated with the validation sample. In addition to the SPC scale, three other scales of theoretically related constructs, relationship commitment, alienation from work, and munificence, were included in the analyses. The analysis began with exploratory factor analysis on the training sample. Thereafter, confirmatory factor analysis was performed on the validation sample.

Exploratory factor analysis
Using the principal component method and varimax rotation, all 17 items belonging to the 4 constructs were factor analyzed with the training sample (n=150). As expected, four factors emerged with eigenvalues of 4.652, 3.640, 3.552, and 2.884, together accounting for 81.14 percent of the variance.
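The random training/validation split described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name, seed, and use of NumPy are our own assumptions, and the sub-sample sizes are passed explicitly to match the reported n=150/361.

```python
import numpy as np

def split_sample(n_cases, n_train, seed=0):
    """Randomly partition case indices into a training sub-sample of
    n_train cases and a validation sub-sample of the remainder.
    The seed is illustrative; any fixed seed makes the split reproducible."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_cases)          # shuffled indices 0..n_cases-1
    return idx[:n_train], idx[n_train:]

# The study's split: 150 of the 511 usable cases for training,
# the remaining 361 for validation.
train_idx, valid_idx = split_sample(511, 150)
```

Scale properties would then be assessed on the cases indexed by `train_idx` (exploratory factor analysis) and cross-validated on those indexed by `valid_idx` (confirmatory factor analysis).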
All five items of perceived cannibalization, three items of commitment, four items of alienation from work, and five items of environmental munificence loaded clearly on their respective constructs. Factor loadings ranged from 0.612 to 0.966.

Confirmatory factor analysis
Next, confirmatory factor analysis (CFA) was conducted with the validation sample (n=361). The CFA model had 17 items: 5 for perceived cannibalization, 3 for commitment, 4 for alienation from work, and 5 for environmental munificence. The initial model fit was not optimal. Based on low factor loadings (lower than 0.40), high residuals (normalized residual >2.58) and modification indices, one item from the SPC scale and one item from the environmental munificence scale were deleted. The essence of the deleted items was retained by the other items of their respective scales, that is, content validity was not significantly reduced. The resulting fit indices demonstrated a good fit: χ2=109.409 (p=0.028; 62.4 percent of the variance explained), df=83, GFI=0.962, AGFI=0.944, CFI=0.99, RMSEA=0.029, PCLOSE=0.993, and Hoelter's 0.05 and 0.01 values were 347 and 382, respectively. The GFI and AGFI values of >0.90 were indicative of a good fit. Also, an RMSEA <0.05 shows a good fit. The PCLOSE >0.50 suggests the RMSEA is good. Lastly, Hoelter's 0.05 and 0.01 indexes were >200, indicating that the validation sample size was adequate.

Convergent and discriminant validity, AVE, and composite reliability
Results for the validation sample showed that the critical ratios of all the indicators were significant (critical ratios >1.96, p < 0.05) and ranged from 6.764 to 96.716. These results were taken as evidence of acceptable convergent validity (Gerbing and Anderson, 1988).
The composite reliabilities were 0.905, 0.834, 0.930, and 0.864, respectively, while the average variance extracted (AVE) for the constructs of SPC, commitment, alienation from work, and environmental munificence were 0.706, 0.627, 0.772, and 0.615, respectively (Bagozzi and Yi, 1988) (see Table III). The AVE for each factor exceeded the squared correlations between that factor and all other factors, indicating acceptable discriminant validity. Additionally, none of the confidence intervals (± two standard errors) for the estimated correlations between the constructs included 1.0, providing further support for adequate discriminant validity (Anderson and Gerbing, 1988).

Common method bias
Since the data for this study were obtained from a single survey, common method variance was possible. Following Podsakoff and Organ (1986), Harman's one-factor test, in which all variables were hypothesized to load on a single factor representing the common method, was employed. The principal component factor analysis revealed four factors, each with an eigenvalue greater than 1.0. All factors together accounted for over 81 percent of the total variance, and the first factor accounted for 28 percent of the variance. Additionally, no high correlation (r > 0.90) was found between any pair of constructs. Bagozzi et al. (1991) indicate that the presence of common method bias usually results in extremely high correlations between variables. Hence, common method bias may not be a serious concern in this study.

Non-response bias
To examine non-response bias, the mean responses of early respondents and late respondents were compared (Armstrong and Overton, 1977).
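The Harman's one-factor test described in the common method bias paragraph above can be sketched as follows. This is a hedged illustration, not the authors' analysis: the item data are simulated with a four-block structure loosely mimicking the study's four constructs, and scikit-learn's PCA on standardized items stands in for the principal component factor analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def harman_single_factor_test(X):
    """Harman's one-factor test (Podsakoff and Organ, 1986): factor all
    items together and inspect (a) how many components have an eigenvalue
    above 1 and (b) the variance share of the first unrotated component.
    Severe common method variance is suggested when a single component
    dominates (e.g. explains the majority of the variance)."""
    Z = StandardScaler().fit_transform(X)
    pca = PCA().fit(Z)
    eigenvalues = pca.explained_variance_
    first_share = pca.explained_variance_ratio_[0]
    return int((eigenvalues > 1.0).sum()), first_share

# Simulated data: 17 items in four correlated blocks (5, 3, 4, 5 items),
# each block sharing one construct score plus item-level noise.
rng = np.random.default_rng(1)
n = 500
cols = []
for size in (5, 3, 4, 5):
    construct = rng.normal(size=(n, 1))                 # shared score
    cols.append(construct + 0.6 * rng.normal(size=(n, size)))
X = np.hstack(cols)

n_big, first_share = harman_single_factor_test(X)
# With this block structure we expect four eigenvalues above 1 and a
# first-component share well below 50 percent, the pattern the study
# reports as evidence against severe method bias.
```

Because the data are simulated, the exact eigenvalues differ from the study's; only the diagnostic pattern (several factors, no dominant first factor) is the point of the sketch.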
Independent t-tests of the mean responses on all four constructs showed no statistical differences (at the 0.05 level); thus, non-response bias should not be a problem.

Hypotheses test
In order to test the hypotheses, a structural model with four constructs was estimated: SPC, the interaction term of SPC and environmental munificence (SPC*EM), alienation from work, and commitment. We followed the procedure recommended by Aiken and West (1991) and Ping (1995) for creating and using the interaction term. The items of SPC and environmental munificence were mean centered and cross-multiplied to create the interaction term. In the proposed model, commitment and alienation are endogenous, whereas the remaining two constructs are exogenous. The structural model was analyzed with the combined sample of 511 cases. The fit indices demonstrated a reasonable fit: χ2=288.556 (p < 0.001; 60.2 percent of the variance explained), df=86, GFI=0.931, AGFI=0.903, CFI=0.96, NFI=0.944, RMSEA=0.058. Even though the direction of the path was negative, as per H1, the SPC-commitment link was not significant (estimate=-0.031, t=-0.877, p-value=0.381). That is, H1 was not supported; thus, no main effect of SPC on commitment was found. The SPC-alienation from work link was significant (estimate=0.068, t=1.981, p-value=0.045). Thus, H2 was supported. Further, the SPC*EM-commitment link (estimate=0.067, t=2.475, p-value=0.013) and the SPC*EM-alienation from work link (estimate=-0.064, t=-2.767, p-value=0.006) were both significant, thus supporting H3a and H3b. Consequently, the influence of SPC on commitment and alienation from work is moderated by environmental munificence. The results are summarized in Table IV. In this study, the concept of perceived cannibalization is tested in a sales context. Many past studies have suggested the possibility of the rise of sales agent perceived cannibalization due to the addition of internet channels (Hagerty, 2005; Porter, 2001).
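The mean-centering and cross-multiplication step used above to build the SPC*EM interaction term can be sketched as follows. This is a simplified illustration with simulated data: using the product of the two centered scale sums as a single indicant is one common reading of Ping's (1995) approach, and the four items per scale reflect the item counts retained after the CFA deletions.

```python
import numpy as np

def interaction_indicant(spc_items, em_items):
    """Mean-center the SPC and environmental munificence items and
    cross-multiply them to form an indicant of the SPC*EM interaction
    term (Aiken and West, 1991; Ping, 1995). Simplified sketch: the
    product of the two mean-centered scale sums is used."""
    spc_c = spc_items - spc_items.mean(axis=0)   # center each SPC item
    em_c = em_items - em_items.mean(axis=0)      # center each EM item
    return spc_c.sum(axis=1) * em_c.sum(axis=1)  # one product indicant

# Illustrative data: 511 respondents rating 4 SPC items and 4 EM items
# on 7-point scales (the ratings here are random, not the study's data).
rng = np.random.default_rng(0)
spc = rng.integers(1, 8, size=(511, 4)).astype(float)
em = rng.integers(1, 8, size=(511, 4)).astype(float)
spc_x_em = interaction_indicant(spc, em)
```

Centering before multiplying reduces the correlation between the product term and its components, which is the rationale for the Aiken and West procedure the study cites.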
Some warnings are evident with respect to the potential negative outcomes of sales agent's perceived cannibalization on various job aspects. Anecdotal evidence has surfaced showing that salespersons perceive internet channels as cannibalistic to their current and future sales. Yet no attempt had been made to either conceptualize or operationalize the construct of sales agent's perceived cannibalization. Previous conceptualizations of inter-channel cannibalization were all based on economic terms (Geyskens et al., 2002; Ward and Morganosky, 2000; Deleersnyder et al., 2002) and, hence, were considered myopic by Porter (2001). Indeed, Porter (2001) argued that the psychological impact of the addition of new channels on existing channels had to be taken into account to fully understand the impact of sales cannibalization. Our contention follows Porter's proposition and extends his conceptualization of cannibalization by examining the impact of the addition of an internet channel in the sales domain. After conceptualizing the construct, a multi-item scale was developed for measuring sales agent's perceived cannibalization. The properties of the scale were assessed following procedures recommended by Churchill (1979), Anderson and Gerbing (1988), Gerbing and Anderson (1988), and Bagozzi and Yi (1988). This study utilizes both exploratory and confirmatory factor analysis to assess the reliability and validity of the SPC scale. Our scale opens a window of opportunity for empirical research on channel cannibalization in general and sales channels in particular. In addition to advancing the scale, four hypotheses are developed, centering on the principal construct: sales agent's perceived cannibalization. By demonstrating that environmental munificence moderates the effect of perceived cannibalization on commitment, this research study demonstrates that perceived cannibalization is not universally damaging to relationship commitment.
Rather, only in a low munificence environment will perceived cannibalization significantly reduce salespersons' relationship commitment. Furthermore, perceived cannibalization increases alienation from work, but its effect is more severe in a low munificence environment. This research study demonstrated that the perception of cannibalization can reduce sales agents' commitment to relationships with their company in a low munificence environment. In addition, it can enhance alienation from work, which may result in low performance outcomes in the long run. Consequently, the view that the internet channel will not cannibalize sales (Deleersnyder et al., 2002) is myopic. It may be possible for salespersons to transform their negative perceptions of cannibalization into motivators. Specifically, while internet channels provide customers easy access to core business processes such as quotations, policy issuance, and claims, salespersons may be able to develop high levels of competency in areas that internet channels cannot easily "learn". For example, a sales agent may be able to respond to requests (e.g. from brokers for a quote) and make requests (e.g. to reinsurers for a confirmation of coverage) using electronic channels as tools to carry out decision-making and communication tasks that are unique to the agent's skill set. Thereby, even in a low munificence environment, salespersons may overcome the stress of perceived cannibalization from internet channels. Leonard-Barton (1995) contends that sales agents who are committed to an existing business structure may show a more anxious reaction towards the introduction of a new channel such as the internet. Consequently, sales agents who have high commitment to the existing format could have a stronger perception of cannibalization than those who have less commitment to the existing business format.
Thus, future researchers may explore the moderating effect of commitment to the existing business format on the relationship between SPC and indicators of salesperson performance. One of the objectives of this study is to offer prescriptive and descriptive insights to firms regarding perceived cannibalization and its possible consequences for sales agent motivation. Specifically, when firms operate in environments that characteristically offer few sales opportunities, appropriate measures should be taken to counteract salespersons' perceptions of cannibalization. For example, a firm could design effective incentive systems to reduce negative feelings towards competing internet channels. One way of doing so is to provide incentives to sales agents who provide service to clients who have purchased insurance online. Another option is to train salespersons on how to make the internet beneficial to their own sales operations. Since changes often necessitate the acquisition of new skills, alteration of salespersons' repertoires and adaptability to cope (Ingram et al., 2006; Schuler and Huber, 1993) will become key determinants of whether the new channel helps or hinders the goals of sales agents. Therefore, training salespersons to adapt to change may be critical to the overall success of both new and entrenched channels. Further research may investigate the role of salesperson training and salesperson collaboration in the development of internet channels as they relate to sales agent's perceived cannibalization and its impact. In view of the multi-channel business models followed by a large number of companies today, the jury is still out regarding the efficiency of the integration of off-line and on-line selling activities. Against this backdrop, the role of sales agent's perceived cannibalization in the integration of off-line and on-line channels needs further exploration.
The SPC scale proffered in this study should facilitate empirical research on this important phenomenon. The findings of this study should provide encouragement for further investigation of the antecedents and consequences of the SPC construct. One major limitation of this study is that the data used to develop and validate the scale were from a single sample of insurance agents. Typically, separate samples are required for examining the psychometric properties of a new scale. Bollen (1989) contends that a new data set can be obtained by randomly dividing the initial data pool into two. However, future research may validate the scale using a new sample. Additionally, the results of this study may lack external validity because the sample used is industry specific. Future research may also examine the relationships posited in this study using data from a different industry. Our failure to find a direct influence of SPC on commitment presents another research opportunity, namely the possibility of other intervening variables. Past research indicates that salespersons' skills and competencies are critical determinants of salesperson performance outcomes (Churchill et al., 1985). Consequently, future research may explore the intervening role of salespersons' skills and competencies between SPC and performance outcomes. Additionally, there may also be differences across sex, age, and experience in the relationships posited in this study. Table I Extant research in cannibalization. Table II Sample descriptive statistics. Table III Reliability and validity statistics. Table IV Structural model results
|
- The data for this study were collected using a single survey of insurance agents. Future researchers should attempt to examine the relationships posited in this study using a sample from a different industry.
|
[SECTION: Purpose] In recent decades, the Scandinavian countries have swung from their previously characteristic social democratic regimes to neoliberal policy regimes. The swing has been underpinned by institutions such as the Organisation for Economic Co-operation and Development (OECD). In educational contexts, since the beginning of the 2000s the shift has included an increasing focus on school leadership, with demands from the state for school leaders to ensure that student results (which have been declining in international comparisons) improve via scientific goal- and result-oriented management (Blossing et al., 2014; Lundahl, 2005). The OECD's Programme for International Student Assessment (PISA) has strongly promoted these demands, and data from PISA evaluations have been analysed from various perspectives to test hypotheses that may explain variations in student performance (e.g. Baumann and Krskova, 2016). A recent report by the OECD (2015) focused strongly on school leaders' management in explanations of the declining student performance in Sweden, and (inter alia) recommended the establishment of a national institute to raise school leader quality. School leaders in other European countries face similar requirements (Hall et al., 2017). However, several studies have shown that school leaders have difficulties in meeting requirements for goal- and result-oriented management in practice (Holmgren et al., 2013; Moos et al., 2011; Moller, 2009), and instead pay most attention to relational and social aspects of their work (Hult et al., 2016; Tornsen, 2009). Social aspects are essential in working situations involving meetings with numerous teachers, students, parents and officials, which inevitably create a complex web of relations.
Clearly, this raises questions about whether and how such situations, and school leaders' relational and social orientation, may influence key elements of the scientific management of their schools, for example: goal-setting, evaluation of teaching and learning, and establishment of development plans. To address these questions, in this paper we use the notions of techno- and socio-structure, drawn from work by Dalin (1994) on organisational profiles, in an attempt to capture scientific goal and management aspects (in contrast to relational and social aspects) of school leaders' work orientation. Dalin understands the techno-structure of organisations as consisting of the two aspects of object (goal-setting) and formal (working routines), and the socio-structure as consisting of person and symbol aspects. We investigate possible reasons for the school leaders' difficulties mentioned above, using Dalin's understanding as a lens to focus on views expressed in interviews by 26 K-9 school leaders in a Swedish municipality. The aim is to contribute to the understanding of school leaders' relational and management work orientation in an organisational perspective, in terms of techno- and socio-structure dimensions. School leaders' work orientation is captured here by the interviewees' descriptions of its most prominent aspects. Inter alia, we test the following two hypotheses: first, that the school leaders strongly attend to socio-structural aspects of their work, indicating that their orientation is predominantly relational; second, that they have low degrees of techno-structure orientation, indicating that they may pay too little attention to the attainment of goals and results in their management activities.

School leaders' work
Managers need various skills depending on the leadership situation (Armstrong, 2012).
According to organisational leadership and human resource literature, an important aspect of the situation is a manager's position in the authority hierarchy of the organisation. Being in the middle of the hierarchy requires a mix of technical, interpersonal and conceptual skills (Yukl, 2013, p. 162). An apparently common perception is that school leaders can be regarded as middle managers, implying that they are exposed to pressure from both senior managers and subordinates. Teachers seek collegiality, and want school leaders who can protect them from external pressure and thus orientate their efforts towards social aspects of their organisations. In contrast, senior managers feel responsible for budgets, and demand efficiency and control connected to organisations' management aspects (Liljenberg, 2015; Moos et al., 2004). Several researchers have shown that school leaders have difficulties meeting the expectations placed on them in the contemporary results- and goal-oriented school system (Holmgren et al., 2013; Moller, 2009; Spillane et al., 2002). Kelchtermans et al. (2011) conclude that school leaders act at a junction of interests and agendas, where they must not only react to external pressure and ideas but also proactively ensure that ongoing processes initiated within the organisation can develop without being jeopardised by conflicting interference from outside. From the teachers' perspective, the school leader is one of them. However, school leaders are intermediaries who must promote both internal and external agendas, interests and aspirations (Honig and Hatch, 2004; Hult et al., 2016), and can thus be regarded as both poachers and gamekeepers. Managerial knowledge and skills alone are not enough to cope with this situation. Inevitably, school leadership involves relational, moral and emotional agendas. Additionally, school leaders have to handle these agendas in parallel, as many aspects of school leadership are intertwined (Arlestig and Tornsen, 2014).
Orientation towards social aspects of the organisation is considered essential for success as a school leader (Leo, 2015; Northfield, 2014; Sugrue, 2015; Tornsen, 2009). However, an excessive focus on social aspects, with an "open door policy", makes it difficult for school leaders to attend sufficiently to techno-structural aspects, such as long-term planning and systematic quality work. Thus, for instance, Scherp (1998) found that the development of teaching and learning practice tends to be relatively weak in schools where principals are strongly service and relation oriented. Similarly, Hoog et al. (2005, 2009) argue that attention to structural and cultural aspects must be balanced with attention to techno- and socio-aspects for successful school leadership. Nelson et al. (2008) studied novice school leaders' experiences of their new jobs, and concluded that they faced challenges connected to both technical and relational aspects of leadership, linking these to Sergiovanni's (2004) work on the systemworld and lifeworld of school leadership. Systemworld and lifeworld are interconnected, and a proper balance between the two is essential for a successful school and successful school leadership, according to Nelson et al. (2008). In line with this argument, Tornsen (2011) found that principals in successful schools had a more versatile leadership repertoire than those in less successful schools, including elements (in Dalin's terms) of both techno- and socio-orientation. Additionally, Demski and Racherbaumer (2015) conclude that successful principal leadership is "data wise", as successful principals use externally and internally generated data to improve practice. Lindberg (2014) concludes that school leaders lack knowledge about how to use goal-setting theory and management by objectives in their strategic work.
Consequently, as goals for long-term school development are not properly established, it becomes more or less impossible for the school leaders to steer activities in directions that lead to high quality and lastingly good student performance. In summary, previous research indicates that school leaders require both relational and managerial knowledge and skills to be effective (e.g. Garza et al., 2014; Moos et al., 2011). It also shows that they face strong pressure to orient towards social and relational aspects in their daily social practice with teachers, but policy trends towards accountability impose pressure to orient towards more managerial or techno structural aspects. However, there is evidence that school leaders' focus is often biased towards the relational aspects, and they have difficulties in meeting managerial demands to pay more attention to goals and results.

Theoretical framework

For the analysis of school leaders' relational and management work we rely on Dalin's (1994) work on organisational profiles, and the two main elements of such profiles: techno and socio structure. Researchers have used various notions to distinguish between relational and management aspects of school leaders' work. Hoog et al. (2005, 2009) use the notions culture and structure. Nelson et al. (2008) use the terms relational and technical, and also refer to the notions lifeworld and systemworld introduced by Sergiovanni (2004). These notions are derived from more common categorisations of aspects of general social life. However, we have chosen Dalin's concepts to frame our analysis because they facilitate consideration of the school leaders' work on prominent aspects of organisational functions that are important for forging improvements. Moreover, object and formal aspects of techno structure can be distinguished, as well as person and symbol aspects of socio structure. 
All four of these sets of aspects are potentially important concerns, and thus important components or dimensions of leaders' or managers' work orientations, so the concepts allow substantially richer analysis than the dichotomized categories mentioned above. In our use of Dalin's terminology, the object dimension of orientation refers to attention to objective or factual content, and rational allocation of tasks to people with specific positions or roles in the organisation. Formalisation implies a prescribed, and often written, distribution of tasks, responsibilities and working processes. The combination of these aspects is called techno structure, which, according to Dalin, is rooted in classical organisation theory that regards hierarchy, position and function as key building blocks for an effective organisation. A shift towards this type of orientation is clearly evident in the goal- and result-focused management trend that has influenced school work in the neoliberal policy era (Blossing et al., 2014). The socio structural person aspects are related to individuals' motivation, knowledge, skills and learning potentials, which are regarded here and by Dalin as critical foundations for an organisation. How the people in the organisation cooperate, especially how the leaders collaborate with the other staff, is also regarded as critical. The person aspects are strongly associated with symbolic aspects, i.e. sense-making, prized values and norms. For successful leadership, these must be communicated to staff, so symbols are more socio-structurally important than formal descriptions of working processes. According to Dalin, this is also aligned with a humanistic organisation perspective, in which organisational culture and sense-making are core features. Furthermore, it is manifested in many schools' efforts to establish a vision and a profile that signal not only what each school is, but also what it means to work there as a teacher. 
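As a compact summary of the framework just described, the two main elements and their four dimensions can be sketched as a simple data structure. This is an illustrative sketch only; the indicator phrases are our paraphrases of the text above, not Dalin's own operationalisations.

```python
# Illustrative sketch (not from the source) of Dalin's (1994) organisational
# profile: two main elements, each comprising two dimensions of work
# orientation. The descriptions paraphrase the surrounding text.
DALIN_PROFILE = {
    "techno": {
        "object": "factual content; rational allocation of tasks to positions",
        "formal": "prescribed, often written, tasks, responsibilities, processes",
    },
    "socio": {
        "person": "motivation, knowledge, skills, learning and cooperation",
        "symbol": "sense-making; prized values and norms communicated to staff",
    },
}

def main_element(dimension: str) -> str:
    """Return the main element (techno or socio) a dimension belongs to."""
    for element, dims in DALIN_PROFILE.items():
        if dimension in dims:
            return element
    raise KeyError(f"unknown dimension: {dimension}")
```

Grouping the dimensions this way makes explicit that, for example, a "formal" statement counts towards the techno structure and a "symbol" statement towards the socio structure.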
The 26 interviewed K-9 school leaders (17 women, 9 men) were sampled ad hoc from a Swedish municipality with approximately 40,000 inhabitants located close to one of Sweden's biggest cities. Of them, 12 had at least ten years' experience of work as a school leader, 8 had five to nine years' experience and 6 had at most four years' experience. Data used to analyse the school leaders' work orientation in techno and socio structure terms were collected from semi-structured, hour-long interviews with the 26 school leaders. The interviews started with the following open question and task: "What do you consider as your main tasks as a school leader, and which occupy most of your time? Please write them down on post-it notes." The school leaders were subsequently asked to talk about what they had written on the post-it notes. They were also asked to reflect on the kind of work they engaged in most, the aspects they felt most comfortable with, their development in relation to the work they thought was important for them, and what acting organisationally could mean for them. Field notes were taken during the interviews, all of which were also digitally audio-recorded and subsequently transcribed verbatim. NVivo 10 software was used to organise the transcripts. In this paper, we test the utility of techno- and socio-structure notions for analysing the school leaders' work orientation. For this purpose, we coded statements in the interview transcripts into the four categories shown in the analytical matrix presented in Table I, which were operationalized using Dalin's (1994) work on organisational profiles, and the two main elements of such profiles: techno and socio structure. The formal, object, symbol and person dimensions of each school leader's work orientation were subsequently scored as low, moderate or high following methodology described by Miles et al. (2014). 
Thus, the scores were based on both numbers of words assigned to the respective categories and indications by the school leaders of how strongly the statements reflected their priorities and engagement. The scores were eventually dichotomized as low or moderate to high, to avoid exaggerating the importance of the scores per se. In this section, we first present results concerning each of the considered aspects and dimensions of the school leaders' work orientations, with illustrative quotations. Then we draw conclusions and relate them to both previous findings and our hypotheses: that their orientation may be strongly social and relational, but that they may pay too little attention to managerial (techno structural) aspects of their work. Interestingly, the summed scores in Table II indicate that most of the interviewed school leaders paid moderate to strong attention to formal (24) and person (13) aspects, and fewer paid moderate to strong attention to object and symbol aspects of their organisations (11 and 6, respectively). The results clearly indicate that work orientations of most of the interviewed school leaders had strong formal components, and in many cases strong person components. This is illustrated by the following extract regarding what school leader K says about her or his leadership:

K: [...] and then actually a teacher wrote that I have never had a principal who praises us as much as you do, you continuously give us positive feedback, strengthen us and listen to us [...] and I got [comments like] that from two teachers last week, so that was fine. But I think it is, that I [...] I want the teachers to go along with me, I seldom boss them around, and if I do I think they would complain to me, and probably it would be justified. I think they feel that I'm very confident in my leadership, I do think that. 
This desire to maintain harmonious relations with the teachers, reflected in wanting "the teachers to go along with me," permeates K's comments and manifests a strong person element, as do statements on procedures (reflecting the formal element of her or his work orientation). In the next extract, K talks about how s/he has attempted to improve formal processes:

K: [...] I can have a tendency to expand what I do, so I'm careful to ensure that I come back to the things we've decided, that now we're going to address this in the meetings, so I've become much more precise about structure, concerning how I should structure what to do and what this meeting should cover, and keep to that [...].

Our interpretation is that many school leaders in the sample were occupied with formalising the relationships of the teachers in various kinds of working procedures in their respective organisations. In the following extract, school leader B illustrates the strong prioritisation of organising the working procedures in a formal infrastructure:

B: [...] so I think that this is important in what I do, it is important that we create a good infrastructure.

Interviewer: Could you say something more about what you mean by infrastructure?

B: I think how we communicate with each other, the different kinds of meetings we have, how we talk with each other, how does it look in the different school houses, in the different groupings, who meets where? Where do we make decisions? What kinds of 'carrots' do we have? [...]

Two school leaders (H and C) clearly paid more attention to person aspects than formal aspects, as illustrated by the following quotation from C, manifesting a very clear focus on the teachers as persons, and her or his interest in meeting them to hear their concerns (minor and major):

C: [...] if you're there when people come in the morning [...] 
when people have a break, they can raise those little things, things that I might not see as problems but may be major clouds for those people, and if we let the clouds grow they'll get out of control and the people won't be able to focus on their tasks [...]: the coffee machine is my best friend, people stand there for a minute, then start talking about little things.

Throughout the interview with school leader C, s/he expressed her or his interest in meeting the needs of every individual teacher, which appeared to be the most highly prioritised task in C's leadership. This orientation is also evident in his or her comments on managing the team and the team leaders' development:

C: [...] in this course on team leadership, we're considering [...] last year we considered this [in relation to] different personality types in a team, and this time we've focused on leadership versus "co-workership" and examined Susan Wheelan's theories a little. I've used Targama before, we work with the team-leaders so they won't fall into these pits of discussing big or small balls in the school yard, these non-issues [...].

This attention to personality types, leadership, co-workership and the need to focus on important matters could all be linked to C's interest in and prioritisation of person aspects. In sharp contrast, school leader G's comments reflected strong attention to object aspects of team leadership:

G: [...] but we start by setting the effect goals so we all have a joint understanding of what we want, then the teams' work consists of making a plan for the coming year to show what we need to reach them, and what everyone needs to do and when. When we have delegated the work to the teams, the members of each team must tackle their tasks, which they do during the week. 
Then there is a structure during the whole year that includes operative meetings; there are specific times when the teams have meetings and they talk about the goals, and requirements, to reach them, during the operative meetings and meetings with the team leaders. There is one leader in each team [...].

School leader G's scores for object and formal dimensions of work orientation were moderate to high, indicating strong prioritisation of techno structural aspects. The following quotation from leader G expresses the importance s/he attached to goals and visions, and how they have been processed in G's school:

G: [...] you can't manage a school if some staff members don't like being there and it's about shaping goals, visions and we have visions. We leaders have developed the visions, then together with the staff we've developed clear goals that everyone knows. Then the staff have been involved and asked how they think we should reach them, and established intermediate goals and how long they think it will take, and what they should do to reach to those points. We've streamlined the goals into effect goals and we've been selective, because we don't think we can do everything at the same time, but we've chosen different things to focus on this year, and that's what they've participated in, development of processes that are going to be started [...].

Overall, the school leaders' symbol orientation scores were low (moderate to high for just two of them). In the following extract, school leader H talks about participation and clearly shows that this is important in her or his leadership, as it is crucial for teacher collaboration and organising a leader-group. This became especially clear when the interviewer subsequently asked which of all the aspects mentioned so far H felt most at home with:

H: It is perhaps participation. Because I really believe that we're a team that together can reach results, where we have different responsibilities. 
Typically, statements manifesting a strong symbol component of work orientation in the interviews indicated that certain value-laden notions are raised, and repeated as objectives in themselves or as ideals to strive for, often without specifying the procedures that are intended to foster the desired values. This is illustrated by the following quotation, where school leader M indicates that trust is the leadership aspect that s/he feels most at home or secure with:

M: Well, I think I am most at home here (points to a post-it where "trust and confidence" is written).

Interviewer: Where [...] here? (points to the post-it)

M: Yes!

Interviewer: Trust and confidence.

M: Create relations. I also believe a lot in relations. You get that here (pointing to a post-it indicating relations), trust and confidence, so to speak. But it is that (pointing to a post-it where goal fulfilment is written) [...].

Interviewer: Goal fulfilment?

M: Yes, and I think it's very much my way to reach it. I think it's very much my way, and I know that I'm at work a lot. It's easy [for people] to get in touch with me, I'm accessible. Not if I'm busy with something, but that's not all the time. So the paths [to me] are short.

In our interpretation, M treats trust and confidence as a symbol, guiding the relation-building in order to fulfil goals. Results of the analysis of the school leaders' work orientation are summarised in Table II. The 13 school leaders (50 per cent of the sample) in the top half had moderate to high scores for both techno and socio dimensions of work orientation, while the others had moderate to high scores for either techno or socio dimensions. Thus, the scores do not confirm that a social or relational orientation prevailed among our sample of school leaders. 
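The Table II grouping described above can be sketched in code. Note one assumption of ours, not stated verbatim in the source: we treat a main element (techno or socio) as moderate to high when at least one of its two dimensions is.

```python
# Hedged sketch of classifying a leader's overall work orientation from
# dichotomized dimension scores ("low" vs "moderate-to-high").
# Assumption (ours, for illustration): a main element counts as
# moderate-to-high when at least one of its two dimensions does.

TECHNO_DIMS = ("object", "formal")
SOCIO_DIMS = ("person", "symbol")

def orientation(scores: dict) -> str:
    """Classify a leader's dichotomized profile as in the Table II summary."""
    techno = any(scores[d] != "low" for d in TECHNO_DIMS)
    socio = any(scores[d] != "low" for d in SOCIO_DIMS)
    if techno and socio:
        return "techno and socio"
    return "techno only" if techno else ("socio only" if socio else "neither")

# Hypothetical leaders, not the study's interview data:
print(orientation({"object": "low", "formal": "moderate-to-high",
                   "person": "moderate-to-high", "symbol": "low"}))
# -> techno and socio
print(orientation({"object": "moderate-to-high", "formal": "moderate-to-high",
                   "person": "low", "symbol": "low"}))
# -> techno only
```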
However, 24 of the 26 school leaders (92 per cent) had moderate or high scores for the formal dimension of work orientation, in five cases with low scores for all of the other three dimensions (object, person and symbol), while the others had roughly equal scores for both techno and socio dimensions. Nevertheless, the hypothesis that the school leaders may pay too little attention to object aspects of their work was verified: 15 of the 26 school leaders (58 per cent) had low scores for the object dimension. It should also be noted that only six of them (23 per cent) obtained moderate or high scores for the symbol dimension. As already mentioned, one of the objectives of this study was to test the utility of techno- and socio-structure notions for analysing school leaders' work orientations. The results confirm that the interviewees' work orientations can be described in terms of several permutations of moderate to high or low degrees of attention to object and formal (techno structure), and person and symbol (socio structure) aspects of their work. They articulated attention to object aspects in comments regarding setting effective goals, and involving staff in both establishing intermediate goals and making plans to reach them. Consideration of formal aspects is expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure. Recognition of the importance of person aspects can be discerned in comments about attending to the teachers, listening to their needs at the coffee machine, giving responses that encourage their co-operation with the leaders' (and/or agreed) agendas, and working with their personality types. Finally, engagement with symbol aspects is evident in the highlighting of words like participation, trust and confidence, as well as attempts to ensure that values symbolised by the words permeate the leaders' organisations. 
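The per-dimension tallies reported above (e.g. 24 of 26 leaders with moderate or high scores on the formal dimension) result from dichotomizing the three-level scores and counting. A minimal sketch of that step, using hypothetical leader data rather than the study's:

```python
# Minimal sketch of the dichotomize-and-tally step described in the text.
# Three-level scores ("low", "moderate", "high") are collapsed to low vs
# moderate-to-high, then per-dimension counts are accumulated as in Table II.

DIMENSIONS = ("object", "formal", "person", "symbol")

def dichotomize(score: str) -> str:
    # Collapse the scale, as the authors describe, to avoid
    # exaggerating the importance of the scores per se.
    return "low" if score == "low" else "moderate-to-high"

def tally(leaders: dict) -> dict:
    """Count leaders scoring moderate-to-high on each dimension."""
    counts = {d: 0 for d in DIMENSIONS}
    for scores in leaders.values():
        for d in DIMENSIONS:
            if dichotomize(scores[d]) != "low":
                counts[d] += 1
    return counts

leaders = {  # hypothetical scores, not the study's data
    "leader_1": {"object": "low", "formal": "high",
                 "person": "high", "symbol": "low"},
    "leader_2": {"object": "high", "formal": "moderate",
                 "person": "low", "symbol": "low"},
}
print(tally(leaders))  # {'object': 1, 'formal': 2, 'person': 1, 'symbol': 0}
```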
Unlike previous authors, we found no evidence in our analysis (considering attention to both person and symbol aspects of socio structure) that our school leaders' work orientation was predominantly relational. Rather, we found that attention to formal aspects prevailed, for example, in shaping working routines. We suspect that this may have been masked in previous studies (e.g. Hoog et al., 2005; Northfield, 2014; Tornsen, 2009), which highlighted school leaders' emphasis on the importance of being skilled in building good relations. However, we suggest that it could be fruitful to distinguish between formal or organising aspects and person aspects. A lack of attention to person aspects in terms of identifying and addressing teachers' needs and motives could strongly impair professional development in a school. This is because it is not enough to organise teams for professional development; the professional needs must also be analysed in relation to work objectives. Our results are consistent with earlier findings that school leaders pay insufficient attention to managerial aspects of their work (Holmgren et al., 2013; Moos et al., 2011; Moller, 2009). However, they add nuance by showing that (among our sample) this did not concern formal or organising aspects, but object aspects, i.e. setting effective goals, and involving the staff in establishing intermediate goals and making plans to reach them. Thus, our findings do not verify the first hypothesis: that the school leaders strongly prioritise socio structural aspects of their work, indicating that their orientation is predominantly relational. Instead, they indicate that they prioritise formal aspects, as expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure. 
The second hypothesis (that they have low degrees of techno structure orientation, indicating that they may pay too little attention to the attainment of goals and results in their management activities) is verified by the low scores for the object dimension obtained for 15 of the 26 school leaders (58 per cent). However, scores for the formal dimension (an important techno-structural element of our theoretical framework of management work orientation) were moderate to high for all but two of the school leaders. A surprising finding is that only two school leaders displayed moderate to high attention to symbol aspects, which together with person aspects constitute the socio structural dimensions of their work orientation. We expected more school leaders to show such attention, since seven of the school leaders prioritised person aspects, and previous empirical and theoretical analyses of organisational profiles suggest that social and personal aspects should strongly correlate. These findings raise intriguing questions. According to Dalin (1994), symbols provide the core channels for communicating meaning in the social world, but our investigated school leaders do not seem to use them. Instead they seem to communicate through objects, or through neither objects nor symbols but merely through formalising. A possibility that warrants further attention is that this may be due to increasing accountability pressure to act in a managerial manner. In fact, the dominant formal (and sometimes person) work orientation we have detected may be a result of two pressures: an external accountability pressure from the administration, promoting prioritisation of formal aspects, combined (sometimes) with an internal pressure to attend to person aspects in daily social practice with teachers. 
Another interesting finding is that some school leaders seem to pay moderate to strong attention to both techno and socio structure aspects, while others predominantly address either techno or socio structure aspects (in both cases in various permutations). We presume that school leaders who score highly on one of the two dimensions of both techno and socio structure will have better foundations for addressing their weaknesses than those who have low scores for both dimensions of either techno or socio structure. This raises questions about the optimal organisation of collaboration and in-service training to foster improvement of school leaders' relational and management work in practice, which is highly challenging according to several studies (e.g. Cunningham and Sherman, 2008). Reflecting on the analyses, one could question whether the interviews captured school leaders' work, but the aim was to investigate their work orientation, not their entire leadership practice. Moreover, school leaders are not the only organisational members who are involved in leadership. However, we argue that the school leaders' work orientation must be considered an important part of their practice. Further, we consider that the four dimensions captured the school leaders' work orientations sufficiently for categorisation and fruitful analysis. A further limitation we should mention is that we found it difficult to distinguish robustly between different degrees of engagement; consequently, the scores we obtained may have been biased towards the moderate-to-high ends of the scales. Theoretically, our findings contribute to an understanding of school leaders' relational and management work orientation, by showing that it can be understood in terms of several permutations of moderate to high or low degrees of object and formal (techno structure), and person and symbol (socio structure) components. 
The results have practical implications for the improvement of school leaders' management skills, since they contribute to a more nuanced starting point for development, considering both external neoliberal accountability pressure from the administration and internal relational pressure. Interestingly, we found no evidence of the relational dominance in school leaders' work orientations reportedly detected by previous authors. Rather, we found indications that our interviewees primarily attended to formal aspects, particularly shaping working routines. For practical reasons, we suggest that it could be fruitful to distinguish between formal or organising aspects and person aspects, because it is not enough to organise teams for professional development; the professional needs must also be analysed in relation to the work objectives. An important practical implication is that there is a need to investigate school leaders' attention to symbol aspects of their work and organisations. If symbols are core channels to communicate important values, as claimed by previous authors, it is worrying that the school leaders' scores for the symbol dimension were very low. We assert that it is important to establish and communicate core symbols like democracy, equity and solidarity in compulsory schooling, otherwise management could result in mere instrumentality, especially through external accountability pressures from sources such as PISA, thereby leading to a teaching and learning environment in schools where it is difficult for students to find meaning. However, we welcome further research to assess the general validity of our conclusions, as well as analyses with larger samples and more detailed exploration of the implications of different permutations of weak and strong object, formal, person and symbol components of work orientation.
|
The purpose of this paper is to contribute to the understanding of Swedish school leaders' relational and management work orientation, in terms of both techno and socio structure dimensions. The background is the neoliberal policy regime, underpinned by OECD and PISA, and an increased focus on school leaders' management work.
|
[SECTION: Method] In recent decades, the Scandinavian countries have swung from previously characteristic social democratic regimes to neoliberal policy regimes. The swing has been underpinned by institutions such as the Organisation for Economic Co-operation and Development (OECD). In educational contexts, since the beginning of the 2000s the shift has included increasing focus on school leadership, with demands from the state for school leaders to ensure that student results (which have been declining in international comparisons) improve via scientific goal- and result-oriented management (Blossing et al., 2014; Lundahl, 2005). The OECD's Programme for International Student Assessment (PISA) has strongly promoted these demands, and data from PISA evaluations have been analysed from various perspectives to test hypotheses that may explain variations in student performance (e.g. Baumann and Krskova, 2016). A recent report by the OECD (2015) strongly focused on school leaders' management in explanations of the declining student performances in Sweden, and (inter alia) recommended the establishment of a national institute to raise school leader quality. School leaders in other European countries face similar requirements (Hall et al., 2017). However, several studies have shown that school leaders have difficulties in meeting requirements for goal- and result-oriented management in practice (Holmgren et al., 2013; Moos et al., 2011; Moller, 2009), and instead pay most attention to relational and social aspects of their work (Hult et al., 2016; Tornsen, 2009). Social aspects are essential in working situations involving meetings with numerous teachers, students, parents and officials, which inevitably create a complex web of relations. 
Clearly, this raises questions about whether and how such situations, and school leaders' relational and social orientation, may influence key elements of scientific management of their schools, for example: goal-setting, evaluation of teaching and learning, and establishment of development plans. To address these questions, in this paper we use the notions of techno and socio structure, drawn from work by Dalin (1994) on organisational profiles, in attempts to capture scientific goal and management aspects (in contrast to relational and social aspects) of school leaders' work orientation. Dalin understands the techno-structure of organisations as consisting of the two aspects of object (goal-setting) and formal (working routines), and the socio structure as consisting of person and symbol aspects. We investigate possible reasons for the school leaders' difficulties mentioned above, using Dalin's understanding as a lens to focus on views expressed in interviews by 26 K-9 school leaders in a Swedish municipality. The aim is to contribute to the understanding of school leaders' relational and management work orientation in an organisational perspective, in terms of techno and socio structure dimensions. School leaders' work orientation is captured here by the interviewees' descriptions of its most prominent aspects. Inter alia, we test the following two hypotheses. First, that the school leaders strongly attend to socio structural aspects of their work, indicating that their orientation is predominantly relational. Second, that they have low degrees of techno structure orientation, indicating that they may pay too little attention to the attainment of goals and results in their management activities.

School leaders' work

Managers need various skills depending on the leadership situation (Armstrong, 2012). 
According to organisational leadership and human resource literature, an important aspect of the situation is a manager's position in the authority hierarchy of the organisation. Being in the middle of the hierarchy requires a mix of technical, interpersonal and conceptual skills (Yukl, 2013, p. 162). An apparently common perception is that school leaders can be regarded as middle managers, implying that they are exposed to pressure from both senior managers and subordinates. Teachers seek collegiality, and want school leaders who can protect them from external pressure and thus orientate their efforts towards social aspects of their organisations. In contrast, senior managers feel responsible for budgets, and demand efficiency and control connected to organisations' management aspects (Liljenberg, 2015; Moos et al., 2004). Several researchers have shown that school leaders have difficulties meeting the expectations placed on them in the contemporary results- and goal-oriented school system (Holmgren et al., 2013; Moller, 2009; Spillane et al., 2002). Kelchtermans et al. (2011) conclude that school leaders act at a junction of interests and agendas, where they must not only react to external pressure and ideas but also proactively ensure that ongoing processes initiated within the organisation can develop without being jeopardised by conflicting interference from outside. From the teachers' perspective, the school leader is one of them. However, school leaders are intermediaries who must promote both internal and external agendas, interests and aspirations (Honig and Hatch, 2004; Hult et al., 2016), and can, thus, be regarded as both poachers and gamekeepers. To cope with this situation, managerial knowledge and skills are not enough. Inevitably, school leadership involves relational, moral and emotional agendas. Additionally, school leaders have to handle these agendas in parallel, as many aspects of school leadership are intertwined (Arlestig and Tornsen, 2014). 
Orientation towards social aspects of the organisation is considered essential for success as a school leader (Leo, 2015; Northfield, 2014; Sugrue, 2015; Tornsen, 2009). An excessive focus on social aspects with an "open door policy" makes it difficult for school leaders to attend sufficiently to techno structural aspects, such as long-term planning and systematic quality work. Thus, for instance, Scherp (1998) found that development of teaching and learning practice tends to be relatively weak in schools where principals are relatively strongly service and relation oriented. Similarly, Hoog et al. (2005, 2009) argue that attention to structural and cultural aspects must be balanced by attention to techno and socio aspects for successful school leadership. Nelson et al. (2008) studied novice school leaders' experiences of their new jobs, and concluded that they faced challenges connected to both technical and relational aspects of leadership, linking these to Sergiovanni's (2004) work related to the systemworld and lifeworld of school leadership. Systemworld and lifeworld are interconnected, and a proper balance between the two is essential for a successful school and successful school leadership, according to Nelson et al. (2008). In line with this argument, Tornsen (2011) found that principals monitored in successful schools had a more versatile leadership repertoire than those in less successful schools, including elements (in Dalin's terms) of both techno and socio orientation. Additionally, Demski and Racherbaumer (2015) conclude that successful principal leadership is "data wise," as successful principals use externally and internally generated data to improve practice. Lindberg (2014) concludes that school leaders lack knowledge about how to use goal-setting theory and management by objectives in their strategic work. 
Consequently, as goals for long-term school development are not properly established, it becomes more or less impossible for the school leaders to steer activities in directions that lead to high quality and consistently good student performances. In summary, previous research indicates that school leaders require both relational and managerial knowledge and skills to be effective (e.g. Garza et al., 2014; Moos et al., 2011). It also shows that they face strong pressure to orient towards social and relational aspects in their daily social practice with teachers, but policy trends towards accountability impose pressure to orient towards more managerial or techno structural aspects. However, there is evidence that school leaders' focus is often biased towards the relational aspects, and they have difficulties in meeting managerial demands to pay more attention to goals and results.

Theoretical framework

For the analysis of school leaders' relational and management work we rely on Dalin's (1994) work on organisational profiles, and the two main elements of such profiles: techno and socio structure. Researchers have used various notions to distinguish between relational and management aspects of school leaders' work. Hoog et al. (2005, 2009) use the notions culture and structure. Nelson et al. (2008) use the terms relational and technical, and also refer to the notions lifeworld and systemworld introduced by Sergiovanni (2004). These notions are derived from more common categorisations of aspects of general social life. However, we have chosen Dalin's concepts to frame our analysis because they facilitate consideration of the school leaders' work on prominent aspects of organisational functions that are important for forging improvements. Moreover, object and formal aspects of techno structure can be distinguished, as well as person and symbolic aspects of socio structure. 
All four of these sets of aspects are potentially important concerns, and thus important components or dimensions of leaders' or managers' work orientations, so the concepts allow substantially richer analysis than the dichotomised categories mentioned above. In our use of Dalin's terminology, the object dimension of orientation refers to attention to objective or factual content, and rational allocation of tasks to people with specific positions or roles in the organisation. Formalisation implies a prescribed, and often written, distribution of tasks, responsibilities and working processes. The combination of these aspects is called techno structure, which, according to Dalin, is rooted in classical organisation theory that regards hierarchy, position and function as key building blocks for an effective organisation. A shift towards this type of orientation is clearly evident in the goal- and result-focused management trend that has influenced school work in the neoliberal policy era (Blossing et al., 2014). The socio structural person aspects are related to individuals' motivation, knowledge, skills and learning potentials, which are regarded here and by Dalin as critical foundations for an organisation. How the people in the organisation cooperate, especially how the leaders collaborate with the other staff, is also regarded as critical. The person aspects are strongly associated with symbolic aspects, i.e. sense making, prized values and norms. For successful leadership, these must be communicated to staff, so symbols are more socio-structurally important than formal descriptions of working processes. According to Dalin, this is also aligned with a humanistic organisation perspective, in which organisational culture and sense-making are core features. Furthermore, it is manifested in many schools' efforts to establish a vision and a profile that signal not only what each school is, but also what it means to work there as a teacher. 
The 26 interviewed K-9 school leaders (17 women, 9 men) were sampled ad hoc from a Swedish municipality with approximately 40,000 inhabitants located close to one of Sweden's biggest cities. Of them, 12 had at least ten years' experience as school leaders, 8 had five to nine years' experience and 6 had at most four years' experience. Data used to analyse the school leaders' work orientation in techno and socio structure terms were collected from semi-structured, hour-long interviews with the 26 school leaders. The interviews started with the following open question and task: "What do you consider as your main tasks as a school leader, and which occupy most of your time? Please write them down on post-it notes." The school leaders were subsequently asked to talk about what they had written on the post-it notes. They were also asked to reflect on the kind of work they engaged in most, the aspects they felt most comfortable with, their development in relation to the work they thought was important for them, and what acting organisationally could mean for them. Field notes were taken during the interviews, all of which were also digitally audio-recorded and subsequently transcribed verbatim. NVivo 10 software was used to organise the transcripts. In this paper, we test the utility of techno- and socio-structure notions for analysing the school leaders' work orientation. For this purpose, we coded statements in the interview transcripts into the four categories shown in the analytical matrix presented in Table I, which were operationalised using Dalin's (1994) work on organisational profiles, and the two main elements of such profiles: techno and socio structure. The formal, object, symbol and person dimensions of each school leader's work orientation were subsequently scored as low, moderate or high following the methodology described by Miles et al. (2014). 
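Purely as an illustration of the coding logic just described (the authors worked qualitatively in NVivo following Miles et al.; every identifier below is ours, not theirs), the analytical matrix and the low/moderate/high scoring with its later dichotomisation might be sketched as:

```python
# Illustrative sketch, not the authors' actual procedure: the four coding
# categories of the analytical matrix (Dalin, 1994) and the collapsing of
# low/moderate/high scores into two bands.

# Two main elements, each with two dimensions (the four coding categories).
ANALYTICAL_MATRIX = {
    "techno structure": ("object", "formal"),
    "socio structure": ("person", "symbol"),
}

def dichotomise(score):
    """Collapse a low/moderate/high score into the two bands used later."""
    if score not in ("low", "moderate", "high"):
        raise ValueError("unknown score: " + score)
    return "low" if score == "low" else "moderate to high"

# An invented example: one hypothetical leader's scores on the four dimensions.
leader_scores = {"object": "low", "formal": "high",
                 "person": "moderate", "symbol": "low"}

banded = {dim: dichotomise(s) for dim, s in leader_scores.items()}
```

The point of the dichotomisation, as the text notes, is to avoid over-interpreting fine score differences while still separating low from moderate-to-high attention.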
Thus, the scores were based on both the numbers of words assigned to the respective categories and indications by the school leaders of how strongly the statements reflected their priorities and engagement. The scores were eventually dichotomised as low or moderate to high, to avoid exaggerating the importance of the scores per se. In this section, we first present results concerning each of the considered aspects and dimensions of the school leaders' work orientations, with illustrative quotations. Then we draw conclusions and relate them to both previous findings and our hypotheses: that their orientation may be strongly social and relational, but they may pay too little attention to managerial (techno structural) aspects of their work. Interestingly, the summed scores in Table II indicate that most of the interviewed school leaders paid moderate to strong attention to formal (24) and person (13) aspects, and fewer paid moderate to strong attention to object and symbol aspects of their organisations (11 and 6, respectively). The results clearly indicate that the work orientations of most of the interviewed school leaders had strong formal components, and in many cases strong person components. This is illustrated by the following extract regarding what school leader K says about her or his leadership:

K: [...] and then actually a teacher wrote that I have never had a principal who praises us as much as you do, you continuously give us positive feedback, strengthen us and listen to us [...] and I got [comments like] that from two teachers last week, so that was fine. But I think it is, that I [...] I want the teachers to go along with me, I seldom boss them around, and if I do I think they would complain to me, and probably it would be justified. I think they feel that I'm very confident in my leadership, I do think that. 
This desire to maintain harmonious relations with the teachers, reflected in wanting "the teachers to go along with me," permeates K's comments and manifests a strong person element, as do statements on procedures (reflecting the formal element of her or his work orientation). In the next extract, K talks about how s/he has attempted to improve formal processes:

K: [...] I can have a tendency to expand what I do, so I'm careful to ensure that I come back to the things we've decided, that now we're going to address this in the meetings, so I've become much more precise about structure, concerning how I should structure what to do and what this meeting should cover, and keep to that [...].

Our interpretation is that many school leaders in the sample were occupied with formalising the relationships of the teachers in various kinds of working procedures in their respective organisations. In the following extract, school leader B illustrates the strong prioritisation of organising the working procedures in a formal infrastructure:

B: [...] so I think that this is important in what I do, it is important that we create a good infrastructure.

Interviewer: Could you say something more about what you mean by infrastructure?

B: I think how we communicate with each other, the different kinds of meetings we have, how we talk with each other, how does it look in the different school houses, in the different groupings, who meets where? Where do we make decisions? What kinds of 'carrots' do we have? [...]

Two school leaders (H and C) clearly paid more attention to person aspects than formal aspects, as illustrated by the following quotation from C, manifesting a very clear focus on the teachers as persons, and her or his interest in meeting them to hear their concerns (minor and major):

C: [...] if you're there when people come in the morning [...] 
when people have a break, they can raise those little things, things that I might not see as problems but may be major clouds for those people, and if we let the clouds grow they'll get out of control and the people won't be able to focus on their tasks [...]: the coffee machine is my best friend, people stand there for a minute, then start talking about little things.

Throughout the interview with school leader C, s/he expressed her or his interest in meeting the needs of every individual teacher, which appeared to be the most highly prioritised task in C's leadership. This orientation is also evident in his or her comments on managing the team and the team leaders' development:

C: [...] in this course on team leadership, we're considering [...] last year we considered this [in relation to] different personality types in a team, and this time we've focused on leadership versus "co-workership" and examined Susan Wheelan's theories a little. I've used Targama before, we work with the team-leaders so they won't fall into these pits of discussing big or small balls in the school yard, these non-issues [...].

This attention to personality types, leadership, co-workership and the need to focus on important matters could all be linked to C's interest in and prioritisation of person aspects. In sharp contrast, school leader G's comments reflected strong attention to object aspects of team leadership:

G: [...] but we start by setting the effect goals so we all have a joint understanding of what we want, then the teams' work consist of making a plan for the coming year to show what we need to reach them, and what everyone needs to do and when. When we have delegated the work to the teams, the members of each team must tackle their tasks, which they do during the week. 
Then there is a structure during the whole year that includes operative meetings; there are specific times when the teams have meetings and they talk about the goals, and requirements, to reach them, during the operative meetings and meetings with the team leaders. There is one leader in each team [...].

School leader G's scores for object and formal dimensions of work orientation were moderate to high, indicating strong prioritisation of techno structural aspects. The following quotation from leader G expresses the importance s/he attached to goals and visions, and how they have been processed in G's school:

G: [...] you can't manage a school if some staff members don't like being there and it's about shaping goals, visions and we have visions. We leaders have developed the visions, then together with the staff we've developed clear goals that everyone knows. Then the staff have been involved and asked how they think we should reach them, and established intermediate goals and how long they think it will take, and what they should do to reach to those points. We've streamlined the goals into effect goals and we've been selective, because we don't think we can do everything at the same time, but we've chosen different things to focus on this year, and that's what they've participated in, development of processes that are going to be started [...].

Overall, the school leaders' symbol orientation scores were low (moderate to high for just two of them). In the following extract, school leader H talks about participation and clearly shows that this is important in her or his leadership, as it is crucial for teacher collaboration and organising a leader-group. It is also highlighted when the interviewer subsequently asks which of all the aspects mentioned hitherto H feels most at home with:

H: It is perhaps participation. Because I really believe that we're a team that together can reach results, where we have different responsibilities. 
Typically, statements manifesting a strong symbol component of work orientation in the interviews indicated that certain value-laden notions are raised, and repeated as objectives in themselves or as ideals to strive for, often without specifying the procedures that are intended to foster the desired values. This is illustrated by the following quotation, where school leader M indicates that trust is the leadership aspect that s/he feels most at home or secure with:

M: Well, I think I am most at home here (points to a post-it where "trust and confidence" is written).

Interviewer: Where [...] here? (points to the post-it)

M: Yes!

Interviewer: Trust and confidence.

M: Create relations. I also believe a lot in relations. You get that here (pointing to a post-it indicating relations), trust and confidence, so to speak. But it is that (pointing to a post-it where goal fulfilment is written) [...].

Interviewer: Goal fulfilment?

M: Yes, and I think it's very much my way to reach it. I think it's very much my way, and I know that I'm at work a lot. It's easy [for people] to get in touch with me, I'm accessible. Not if I'm busy with something, but that's not all the time. So the paths [to me] are short.

In our interpretation, M treats trust and confidence as a symbol, guiding the relation-building in order to fulfil goals. Results of the analysis of the school leaders' work orientation are summarised in Table II. The 13 school leaders (50 per cent of the sample) in the top half had moderate to high scores for both techno and socio dimensions of work orientation, while the others had moderate to high scores for either techno or socio dimensions. Thus, the scores do not confirm that a social or relational orientation prevailed among our sample of school leaders. 
However, 24 of the 26 school leaders (92 per cent) had moderate or high scores for the formal dimension of work orientation, in five cases with low scores for all of the other three dimensions (object, person and symbol), while the others had roughly equal scores for both techno and socio dimensions. Nevertheless, the hypothesis that the school leaders may pay too little attention to object aspects of their work was verified: 15 of the 26 school leaders (58 per cent) had low scores for the object dimension. It should also be noted that only six of them (23 per cent) obtained moderate or high scores for the symbol dimension. As already mentioned, one of the objectives of this study was to test the utility of techno- and socio-structure notions for analysing school leaders' work orientations. The results confirm that the interviewees' work orientations can be described in terms of several permutations of moderate to high or low degrees of attention to object and formal (techno structure), and person and symbol (socio structure) aspects of their work. They articulated attention to object aspects in comments regarding setting effective goals, and involving staff in both establishing intermediate goals and making plans to reach them. Consideration of formal aspects is expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure. Recognition of the importance of person aspects can be discerned in comments about attending to the teachers, listening to their needs at the coffee machine, giving responses that encourage their co-operation with the leaders' (and/or agreed) agendas, and working with their personality types. Finally, engagement with symbol aspects is evident in the highlighting of words like participation, trust and confidence, as well as attempts to ensure that values symbolised by the words permeate the leaders' organisations. 
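The counts reported above reduce to simple proportions over the 26 interviewees; as a minimal arithmetic check (the counts are taken from the text, the helper name is our own):

```python
# Re-derive the whole-number percentages reported in the results
# from the raw counts over the 26 interviewed school leaders.
N = 26  # sample size

def pct(count, total=N):
    """Whole-number percentage, as reported in the paper."""
    return round(100 * count / total)

assert pct(13) == 50  # moderate to high on both techno and socio dimensions
assert pct(24) == 92  # moderate to high on the formal dimension
assert pct(15) == 58  # low on the object dimension
assert pct(6) == 23   # moderate to high on the symbol dimension
```

Note also that the 11 leaders reported earlier as paying moderate to strong attention to object aspects and the 15 with low object scores sum to the full sample of 26, so the two figures are consistent.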
Unlike previous authors, we found no evidence in our analysis (considering attention to both person and symbol aspects of socio structure) that our school leaders' work orientation was predominantly relational. Rather, we found that attention to formal aspects prevailed, for example, in shaping working routines. We suspect that this may have been masked in previous studies (e.g. Hoog et al., 2005; Northfield, 2014; Tornsen, 2009), which highlighted school leaders' emphasis on the importance of being skilled in building good relations. However, we suggest that it could be fruitful to distinguish between formal or organising aspects and person aspects. A lack of attention to person aspects in terms of identifying and addressing teachers' needs and motives could strongly impair professional development in a school. This is because it is not enough to organise teams for professional development; the professional needs must also be analysed in relation to work objectives. Our results are consistent with earlier findings that school leaders pay insufficient attention to managerial aspects of their work (Holmgren et al., 2013; Moos et al., 2011; Moller, 2009). However, they add nuance by showing that (among our sample) this did not concern formal or organising aspects, but object aspects, i.e. setting effective goals, and involving the staff in establishing intermediate goals and making plans to reach them. Thus, our findings do not verify the first hypothesis: that the school leaders strongly prioritise socio structural aspects of their work, indicating that their orientation is predominantly relational. Instead, they indicate that they prioritise formal aspects, as expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure. 
The second hypothesis (that they have low degrees of techno structure orientation, indicating that they may pay too little attention to the attainment of goals and results in their management activities) is verified by the low scores for the object dimension obtained for 15 of the 26 school leaders (58 per cent). However, scores for the formal dimension (an important techno-structural element of our theoretical framework of management work orientation) were moderate to high for all but two of the school leaders. A surprising finding is that only two school leaders displayed moderate to high attention to symbol aspects, which together with person aspects constitute the socio structural dimensions of their work orientation. We expected more school leaders to show such attention, since seven of the school leaders prioritised person aspects, and previous empirical and theoretical analyses of organisational profiles suggest that social and personal aspects should strongly correlate. These findings raise intriguing questions. According to Dalin (1994), symbols provide the core channels for communicating meaning in the social world, but our investigated school leaders do not seem to use them. Instead they seem to communicate through objects, or through neither objects nor symbols but merely through formalising. A possibility that warrants further attention is that this may be due to increasing accountability pressure to act in a managerial manner. In fact, the dominant formal (and sometimes person) work orientation we have detected may be a result of two pressures: an external accountability pressure from the administration, promoting prioritisation of formal aspects, combined (sometimes) with an internal pressure to attend to person aspects in daily social practice with teachers. 
Another interesting finding is that some school leaders seem to pay moderate to strong attention to both techno and socio structure aspects, while others predominantly address either techno or socio structure aspects (in both cases in various permutations). We presume that school leaders who score highly for one of the two main elements of both techno and socio structure will have better foundations for addressing their weaknesses than those who have low scores for both main elements of either techno or socio structure. This raises questions about the optimal organisation of collaboration and in-service training to foster improvement of school leaders' relational and management work in practice, which is highly challenging according to several studies (e.g. Cunningham and Sherman, 2008). Reflecting on the analyses, one could question whether the interviews captured school leaders' work, but the aim was to investigate their work orientation, not their entire leadership practice. Moreover, school leaders are not the only organisational members who are involved in leadership. However, we argue that the school leaders' work orientation must be considered an important part of their practice. Further, we consider that the four dimensions captured the school leaders' work orientations sufficiently for categorisation and fruitful analysis. A further limitation we should mention is that we found it difficult to distinguish robustly between different degrees of engagement; consequently, the scores we obtained may have been biased towards the moderate-to-high ends of the scales. Theoretically, our findings contribute to an understanding of school leaders' relational and management work orientation, by showing that it can be understood in terms of several permutations of moderate to high or low degrees of object and formal (techno structure), and person and symbol (socio structure) components. 
The results have practical implications for the improvement of school leaders' management skills, since they contribute to a more nuanced starting point for development, considering both external neoliberal accountability pressure from the administration and internal relational pressure. Interestingly, we found no evidence of the relational dominance in school leaders' work orientations reportedly detected by previous authors. Rather, we found indications that our interviewees primarily attended to formal aspects, particularly shaping working routines. For practical reasons, we suggest that it could be fruitful to distinguish between formal or organising aspects and person aspects, because it is not enough to organise teams for professional development; the professional needs must also be analysed in relation to the work objectives. An important practical implication is that there is a need to investigate school leaders' attention to symbol aspects of their work and organisations. If symbols are core channels to communicate important values, as claimed by previous authors, it is worrying that the school leaders' scores for the symbol dimension were very low. We assert that it is important to establish and communicate core symbols like democracy, equity and solidarity in compulsory schooling, otherwise management could result in mere instrumentality, especially through external accountability pressures from sources such as PISA, thereby leading to a teaching and learning environment in schools where it is difficult for students to find meaning. However, we welcome further research to assess the general validity of our conclusions, as well as analyses with larger samples and more detailed exploration of the implications of different permutations of weak and strong object, formal, person and symbol components of work orientation.
|
In total, 26 school leaders in a Swedish municipality were interviewed, and their responses were analysed to score their expressed orientations in terms of techno structure (object and formal) and socio structure (person and symbolic) dimensions.
|
[SECTION: Findings] In recent decades, the Scandinavian countries have swung from their previously characteristic social democratic regimes to neoliberal policy regimes. The swing has been underpinned by institutions such as the Organisation for Economic Co-operation and Development (OECD). In educational contexts, since the beginning of the 2000s the shift has included increasing focus on school leadership, with demands from the state for school leaders to ensure that student results (which have been declining in international comparisons) improve via scientific goal- and result-oriented management (Blossing et al., 2014; Lundahl, 2005). The OECD's Programme for International Student Assessment (PISA) has strongly promoted these demands, and data from PISA evaluations have been analysed from various perspectives to test hypotheses that may explain variations in student performance (e.g. Baumann and Krskova, 2016). A recent report by the OECD (2015) strongly focused on school leaders' management in explanations of the declining student performances in Sweden, and (inter alia) recommended establishment of a national institute to raise school leader quality. School leaders in other European countries face similar requirements (Hall et al., 2017). However, several studies have shown that school leaders have difficulties in meeting requirements for goal- and result-oriented management in practice (Holmgren et al., 2013; Moos et al., 2011; Moller, 2009), and instead pay most attention to relational and social aspects of their work (Hult et al., 2016; Tornsen, 2009). Social aspects are essential in working situations involving meetings with numerous teachers, students, parents and officials, which inevitably create a complex web of relations. 
Clearly, this raises questions about whether and how such situations, and school leaders' relational and social orientation, may influence key elements of scientific management of their schools, for example: goal-setting, evaluation of teaching and learning, and establishment of development plans. To address these questions, in this paper we use the notions of techno and socio structure, drawn from work by Dalin (1994) on organisational profiles, in attempts to capture scientific goal and management aspects (in contrast to relational and social aspects) of school leaders' work orientation. Dalin understands the techno structure of organisations as consisting of the two aspects of object (goal-setting) and formal (working routines), and the socio structure as consisting of person and symbol aspects. We investigate possible reasons for the school leaders' difficulties mentioned above, using Dalin's understanding as a lens to focus on views expressed in interviews by 26 K-9 school leaders in a Swedish municipality. The aim is to contribute to the understanding of school leaders' relational and management work orientation in an organisational perspective, in terms of techno and socio structure dimensions. School leaders' work orientation is captured here by the interviewees' descriptions of its most prominent aspects. Inter alia, we test the following two hypotheses. First, that the school leaders strongly attend to socio structural aspects of their work, indicating that their orientation is predominantly relational. Second, that they have low degrees of techno structure orientation, indicating that they may pay too little attention to the attainment of goals and results in their management activities.

School leaders' work

Managers need various skills depending on the leadership situation (Armstrong, 2012). 
According to organisational leadership and human resource literature, an important aspect of the situation is a manager's position in the authority hierarchy of the organisation. Being in the middle of the hierarchy requires a mix of technical, interpersonal and conceptual skills (Yukl, 2013, p. 162). An apparently common perception is that school leaders can be regarded as middle managers, implying that they are exposed to pressure from both senior managers and subordinates. Teachers seek collegiality, and want school leaders who can protect them from external pressure and thus orientate their efforts towards social aspects of their organisations. In contrast, senior managers feel responsible for budgets, and demand efficiency and control connected to organisations' management aspects (Liljenberg, 2015; Moos et al., 2004). Several researchers have shown that school leaders have difficulties meeting the expectations placed on them in the contemporary results- and goal-oriented school system (Holmgren et al., 2013; Moller, 2009; Spillane et al., 2002). Kelchtermans et al. (2011) conclude that school leaders act at a junction of interests and agendas, where they must not only react to external pressure and ideas but also proactively ensure that ongoing processes initiated within the organisation can develop without being jeopardised by conflicting interference from outside. From the teachers' perspective, the school leader is one of them. However, school leaders are intermediaries who must promote both internal and external agendas, interests and aspirations (Honig and Hatch, 2004; Hult et al., 2016), and can, thus, be regarded as both poachers and gamekeepers. To cope with this situation, managerial knowledge and skills are not enough. Inevitably, school leadership involves relational, moral and emotional agendas. Additionally, school leaders have to handle these agendas in parallel, as many aspects of school leadership are intertwined (Arlestig and Tornsen, 2014). 
Orientation towards social aspects of the organisation is considered essential for success as a school leader (Leo, 2015; Northfield, 2014; Sugrue, 2015; Tornsen, 2009). An excessive focus on social aspects with an "open door policy" makes it difficult for school leaders to attend sufficiently to techno structural aspects, such as long-term planning and systematic quality work. Thus, for instance, Scherp (1998) found that development of teaching and learning practice tends to be relatively weak in schools where principals are relatively strongly service and relation oriented. Similarly, Hoog et al. (2005, 2009) argue that attention to structural and cultural aspects must be balanced by attention to techno and socio aspects for successful school leadership. Nelson et al. (2008) studied novice school leaders' experiences of their new jobs, and concluded that they faced challenges connected to both technical and relational aspects of leadership, linking these to Sergiovanni's (2004) work related to the systemworld and lifeworld of school leadership. Systemworld and lifeworld are interconnected, and a proper balance between the two is essential for a successful school and successful school leadership, according to Nelson et al. (2008). In line with this argument, Tornsen (2011) found that principals monitored in successful schools had a more versatile leadership repertoire than those in less successful schools, including elements (in Dalin's terms) of both techno and socio orientation. Additionally, Demski and Racherbaumer (2015) conclude that successful principal leadership is "data wise," as successful principals use externally and internally generated data to improve practice. Lindberg (2014) concludes that school leaders lack knowledge about how to use goal-setting theory and management by objectives in their strategic work. 
Consequently, as goals for long-term school development are not properly established, it becomes more or less impossible for school leaders to steer activities in directions that lead to high quality and lastingly good student performances. In summary, previous research indicates that school leaders require both relational and managerial knowledge and skills to be effective (e.g. Garza et al., 2014; Moos et al., 2011). It also shows that they face strong pressure to orient towards social and relational aspects in their daily social practice with teachers, while policy trends towards accountability impose pressure to orient towards more managerial or techno structural aspects. However, there is evidence that school leaders' focus is often biased towards the relational aspects, and that they have difficulties in meeting managerial demands to pay more attention to goals and results.

Theoretical framework

For the analysis of school leaders' relational and management work we rely on Dalin's (1994) work on organisational profiles, and the two main elements of such profiles: techno and socio structure. Researchers have used various notions to distinguish between relational and management aspects of school leaders' work. Hoog et al. (2005, 2009) use the notions culture and structure. Nelson et al. (2008) use the terms relational and technical, and also refer to the notions lifeworld and systemworld introduced by Sergiovanni (2004). These notions are derived from more common categorisations of aspects of general social life. However, we have chosen Dalin's concepts to frame our analysis because they facilitate consideration of the school leaders' work on prominent aspects of organisational functions that are important for forging improvements. Moreover, object and formal aspects of techno structure can be distinguished, as can person and symbolic aspects of socio structure.
All four of these sets of aspects are potentially important concerns, and thus important components or dimensions of leaders' or managers' work orientations, so the concepts allow substantially richer analysis than the dichotomised categories mentioned above. In our use of Dalin's terminology, the object dimension of orientation refers to attention to objective or factual content, and rational allocation of tasks to people with specific positions or roles in the organisation. Formalisation implies a prescribed, and often written, distribution of tasks, responsibilities and working processes. The combination of these aspects is called techno structure, which, according to Dalin, is rooted in classical organisation theory that regards hierarchy, position and function as key building blocks for an effective organisation. A shift towards this type of orientation is clearly evident in the goal- and result-focused management trend that has influenced school work in the neoliberal policy era (Blossing et al., 2014). The socio structural person aspects are related to individuals' motivation, knowledge, skills and learning potentials, which are regarded here and by Dalin as critical foundations for an organisation. How the people in the organisation cooperate, especially how the leaders collaborate with the other staff, is also regarded as critical. The person aspects are strongly associated with symbolic aspects, i.e. sense making, prized values and norms. For successful leadership, these must be communicated to staff, so symbols are more socio-structurally important than formal descriptions of working processes. According to Dalin, this is also aligned with a humanistic organisation perspective, in which organisational culture and sense-making are core features. Furthermore, it is manifested in many schools' efforts to establish a vision and a profile that signal not only what each school is, but also what it means to work there as a teacher.
The 26 interviewed K-9 school leaders (17 women, 9 men) were sampled ad hoc from a Swedish municipality with approximately 40,000 inhabitants located close to one of Sweden's biggest cities. Of them, 12 had at least ten years' experience of work as a school leader, 8 had five to nine years' experience and 6 had at most four years' experience. Data used to analyse the school leaders' work orientation in techno and socio structure terms were collected from semi-structured, hour-long interviews with the 26 school leaders. The interviews started with the following open question and task: "What do you consider as your main tasks as a school leader, and which occupy most of your time? Please write them down on post-it notes." The school leaders were subsequently asked to talk about what they had written on the post-it notes. They were also asked to reflect on the kind of work they engaged in most, the aspects they felt most comfortable with, their development in relation to the work they thought was important for them, and what acting organisationally could mean for them. Field notes were taken during the interviews, all of which were also digitally audio-recorded and subsequently transcribed verbatim. NVivo 10 software was used to organise the transcripts. In this paper, we test the utility of techno- and socio-structure notions for analysing the school leaders' work orientation. For this purpose, we coded statements in the interview transcripts into the four categories shown in the analytical matrix presented in Table I, which were operationalized using Dalin's (1994) work on organisational profiles, and the two main elements of such profiles: techno and socio structure. The formal, object, symbol and person dimensions of each school leader's work orientation were subsequently scored as low, moderate or high following methodology described by Miles et al. (2014). 
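As a rough illustration of the coding-and-scoring procedure described above, the pipeline could be sketched as follows. The keyword lists and word-count thresholds are invented for the example; they are not the instrument the authors actually used.

```python
# Hypothetical sketch: statements are coded into Dalin's (1994) four
# dimensions, each dimension is scored on a three-level scale, and the
# scores are then dichotomised. Keyword lists and thresholds are
# illustrative assumptions only, not the study's actual instrument.

CATEGORIES = {
    "object": ["goal", "result", "plan", "evaluation"],         # techno structure
    "formal": ["meeting", "routine", "schedule", "procedure"],  # techno structure
    "person": ["teacher", "listen", "motivation", "needs"],     # socio structure
    "symbol": ["trust", "participation", "vision", "values"],   # socio structure
}

def code_statement(statement):
    """Assign a statement to every dimension whose (illustrative) keywords it mentions."""
    words = statement.lower().split()
    return [dim for dim, keys in CATEGORIES.items()
            if any(k in words for k in keys)]

def score_dimension(coded_word_count):
    """Map the amount of coded material to a three-level score (thresholds assumed)."""
    if coded_word_count >= 40:
        return "high"
    if coded_word_count >= 15:
        return "moderate"
    return "low"

def dichotomise(score):
    """Collapse the three levels into low vs moderate-to-high, as in the study."""
    return "moderate-to-high" if score in ("moderate", "high") else "low"
```

For instance, `code_statement("we set a goal for every meeting")` would assign the statement to both the object and formal dimensions, reflecting how a single utterance can contribute to more than one cell of the analytical matrix.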
Thus, the scores were based on both the numbers of words assigned to the respective categories and indications by the school leaders of how strongly the statements reflected their priorities and engagement. The scores were eventually dichotomised as low or moderate to high, to avoid exaggerating the importance of the scores per se. In this section, we first present results concerning each of the considered aspects and dimensions of the school leaders' work orientations, with illustrative quotations. Then we draw conclusions and relate them to both previous findings and our hypotheses: that their orientation may be strongly social and relational, and that they may pay too little attention to managerial (techno structural) aspects of their work. Interestingly, the summed scores in Table II indicate that most of the interviewed school leaders paid moderate to strong attention to formal (24) and person (13) aspects, and fewer paid moderate to strong attention to object and symbol aspects of their organisations (11 and 6, respectively). The results clearly indicate that the work orientations of most of the interviewed school leaders had strong formal components, and in many cases strong person components. This is illustrated by the following extract regarding what school leader K says about her or his leadership:

K: [...] and then actually a teacher wrote that I have never had a principal who praises us as much as you do, you continuously give us positive feedback, strengthen us and listen to us [...] and I got [comments like] that from two teachers last week, so that was fine. But I think it is, that I [...] I want the teachers to go along with me, I seldom boss them around, and if I do I think they would complain to me, and probably it would be justified. I think they feel that I'm very confident in my leadership, I do think that.
This desire to maintain harmonious relations with the teachers, reflected in wanting "the teachers to go along with me," permeates K's comments and manifests a strong person element, as do statements on procedures (reflecting the formal element of her or his work orientation). In the next extract, K talks about how s/he has attempted to improve formal processes:

K: [...] I can have a tendency to expand what I do, so I'm careful to ensure that I come back to the things we've decided, that now we're going to address this in the meetings, so I've become much more precise about structure, concerning how I should structure what to do and what this meeting should cover, and keep to that [...].

Our interpretation is that many school leaders in the sample were occupied with formalising the relationships of the teachers in various kinds of working procedures in their respective organisations. In the following extract, school leader B illustrates the strong prioritisation of organising the working procedures in a formal infrastructure:

B: [...] so I think that this is important in what I do, it is important that we create a good infrastructure.
Interviewer: Could you say something more about what you mean by infrastructure?
B: I think how we communicate with each other, the different kinds of meetings we have, how we talk with each other, how does it look in the different school houses, in the different groupings, who meets where? Where do we make decisions? What kinds of 'carrots' do we have? [...]

Two school leaders (H and C) clearly paid more attention to person aspects than formal aspects, as illustrated by the following quotation from C, manifesting a very clear focus on the teachers as persons, and her or his interest in meeting them to hear their concerns (minor and major):

C: [...] if you're there when people come in the morning [...]
when people have a break, they can raise those little things, things that I might not see as problems but may be major clouds for those people, and if we let the clouds grow they'll get out of control and the people won't be able to focus on their tasks [...]: the coffee machine is my best friend, people stand there for a minute, then start talking about little things.

Throughout the interview with school leader C, s/he expressed her or his interest in meeting the needs of every individual teacher, which appeared to be the most highly prioritised task in C's leadership. This orientation is also evident in her or his comments on managing the team and the team leaders' development:

C: [...] in this course on team leadership, we're considering [...] last year we considered this [in relation to] different personality types in a team, and this time we've focused on leadership versus "co-workership" and examined Susan Wheelan's theories a little. I've used Targama before, we work with the team-leaders so they won't fall into these pits of discussing big or small balls in the school yard, these non-issues [...].

This attention to personality types, leadership, co-workership and the need to focus on important matters could all be linked to C's interest in and prioritisation of person aspects. In sharp contrast, school leader G's comments reflected strong attention to object aspects of team leadership:

G: [...] but we start by setting the effect goals so we all have a joint understanding of what we want, then the teams' work consists of making a plan for the coming year to show what we need to reach them, and what everyone needs to do and when. When we have delegated the work to the teams, the members of each team must tackle their tasks, which they do during the week.
Then there is a structure during the whole year that includes operative meetings; there are specific times when the teams have meetings and they talk about the goals, and the requirements to reach them, during the operative meetings and meetings with the team leaders. There is one leader in each team [...].

School leader G's scores for object and formal dimensions of work orientation were moderate to high, indicating strong prioritisation of techno structural aspects. The following quotation from leader G expresses the importance s/he attached to goals and visions, and how they have been processed in G's school:

G: [...] you can't manage a school if some staff members don't like being there and it's about shaping goals, visions and we have visions. We leaders have developed the visions, then together with the staff we've developed clear goals that everyone knows. Then the staff have been involved and asked how they think we should reach them, and established intermediate goals and how long they think it will take, and what they should do to reach those points. We've streamlined the goals into effect goals and we've been selective, because we don't think we can do everything at the same time, but we've chosen different things to focus on this year, and that's what they've participated in, development of processes that are going to be started [...].

Overall, the school leaders' symbol orientation scores were low (moderate to high for just two of them). In the following extract, school leader H talks about participation and clearly shows that this is important in her or his leadership, as it is crucial for teacher collaboration and organising a leader-group. This is also highlighted when the interviewer subsequently asks which of all the aspects mentioned hitherto H feels most at home with:

H: It is perhaps participation. Because I really believe that we're a team that together can reach results, where we have different responsibilities.
Typically, statements manifesting a strong symbol component of work orientation in the interviews indicated that certain value-laden notions were raised, and repeated as objectives in themselves or as ideals to strive for, often without specifying the procedures that are intended to foster the desired values. This is illustrated by the following quotation, where school leader M indicates that trust is the leadership aspect that s/he feels most at home or secure with:

M: Well, I think I am most at home here (points to a post-it where "trust and confidence" is written).
Interviewer: Where [...] here? (points to the post-it)
M: Yes!
Interviewer: Trust and confidence.
M: Create relations. I also believe a lot in relations. You get that here (pointing to a post-it indicating relations), trust and confidence, so to speak. But it is that (pointing to a post-it where goal fulfilment is written) [...].
Interviewer: Goal fulfilment?
M: Yes, and I think it's very much my way to reach it. I think it's very much my way, and I know that I'm at work a lot. It's easy [for people] to get in touch with me, I'm accessible. Not if I'm busy with something, but that's not all the time. So the paths [to me] are short.

In our interpretation, M treats trust and confidence as a symbol, guiding the relation-building in order to fulfil goals. Results of the analysis of the school leaders' work orientation are summarised in Table II. The 13 school leaders (50 per cent of the sample) in the top half had moderate to high scores for both techno and socio dimensions of work orientation, while the others had moderate to high scores for either techno or socio dimensions. Thus, the scores do not confirm that a social or relational orientation prevailed among our sample of school leaders.
However, 24 of the 26 school leaders (92 per cent) had moderate or high scores for the formal dimension of work orientation, in five cases with low scores for all of the other three dimensions (object, person and symbol), while the others had roughly equal scores for both techno and socio dimensions. Nevertheless, the hypothesis that the school leaders may pay too little attention to object aspects of their work was verified: 15 of the 26 school leaders (58 per cent) had low scores for the object dimension. It should also be noted that only six of them (23 per cent) obtained moderate or high scores for the symbol dimension. As already mentioned, one of the objectives of this study was to test the utility of techno- and socio-structure notions for analysing school leaders' work orientations. The results confirm that the interviewees' work orientations can be described in terms of several permutations of moderate to high or low degrees of attention to object and formal (techno structure), and person and symbol (socio structure) aspects of their work. They articulated attention to object aspects in comments regarding setting effective goals, and involving staff in both establishing intermediate goals and making plans to reach them. Consideration of formal aspects is expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure. Recognition of the importance of person aspects can be discerned in comments about attending to the teachers, listening to their needs at the coffee machine, giving responses that encourage their co-operation with the leaders' (and/or agreed) agendas, and working with their personality types. Finally, engagement with symbol aspects is evident in the highlighting of words like participation, trust and confidence, as well as attempts to ensure that values symbolised by the words permeate the leaders' organisations. 
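Assuming that a leader counts as techno- or socio-oriented when either of the corresponding sub-dimensions reaches moderate to high (an assumption about how the Table II classification was built), the dichotomised profiles could be tallied as in the sketch below. The per-leader scores are invented for illustration, loosely echoing leaders K, B, C and G above; they are not the study's data.

```python
# Illustrative tally of dichotomised work-orientation profiles.
# 1 = moderate-to-high, 0 = low; the scores are invented examples that
# loosely echo leaders K, B, C and G above, not the study's actual data.
leaders = {
    "K": {"object": 0, "formal": 1, "person": 1, "symbol": 0},
    "B": {"object": 0, "formal": 1, "person": 0, "symbol": 0},
    "C": {"object": 0, "formal": 0, "person": 1, "symbol": 0},
    "G": {"object": 1, "formal": 1, "person": 0, "symbol": 0},
}

def profile(scores):
    """Classify a leader by whether any techno (object/formal) and/or
    socio (person/symbol) dimension reaches moderate-to-high."""
    techno = scores["object"] or scores["formal"]
    socio = scores["person"] or scores["symbol"]
    if techno and socio:
        return "techno and socio"
    if techno:
        return "techno only"
    if socio:
        return "socio only"
    return "neither"

profiles = {name: profile(s) for name, s in leaders.items()}
```

On these invented scores, K lands in the "techno and socio" half of the table, while B and G are techno only and C socio only, mirroring the permutations described in the text.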
Unlike previous authors, we found no evidence in our analysis (considering attention to both person and symbol aspects of socio structure) that our school leaders' work orientation was predominantly relational. Rather, we found that attention to formal aspects prevailed, for example, in shaping working routines. We suspect that this may have been masked in previous studies (e.g. Hoog et al., 2005; Northfield, 2014; Tornsen, 2009), which highlighted school leaders' emphasis on the importance of being skilled in building good relations. However, we suggest that it could be fruitful to distinguish between formal or organising aspects and person aspects. A lack of attention to person aspects in terms of identifying and addressing teachers' needs and motives could strongly impair professional development in a school. This is because it is not enough to organise teams for professional development; the professional needs must also be analysed in relation to work objectives. Our results are consistent with earlier findings that school leaders pay insufficient attention to managerial aspects of their work (Holmgren et al., 2013; Moos et al., 2011; Moller, 2009). However, they add nuance by showing that (among our sample) this did not concern formal or organising aspects, but object aspects, i.e. setting effective goals, and involving the staff in establishing intermediate goals and making plans to reach them. Thus, our findings do not verify the first hypothesis, that the school leaders strongly prioritise socio structural aspects of their work, indicating that their orientation is predominantly relational. Instead, they indicate that they prioritise formal aspects, as expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure.
The second hypothesis (that they have low degrees of techno structure orientation, indicating that they may pay too little attention to the attainment of goals and results in their management activities) is verified by the low scores for the object dimension obtained for 15 of the 26 school leaders (58 per cent). However, scores for the formal dimension (an important techno-structural element of our theoretical framework of management work orientation) were moderate to high for all but two of the school leaders. A surprising finding is that only two school leaders displayed moderate to high attention to symbol aspects, which together with person aspects constitute the socio structural dimensions of their work orientation. We expected more school leaders to show such attention, since seven of the school leaders prioritised person aspects, and previous empirical and theoretical analyses of organisational profiles suggest that social and personal aspects should strongly correlate. These findings raise intriguing questions. According to Dalin (1994), symbols provide the core channels for communicating meaning in the social world, but our investigated school leaders do not seem to use them. Instead they seem to communicate through objects, or through neither objects nor symbols but merely through formalising. A possibility that warrants further attention is that this may be due to increasing accountability pressure to act in a managerial manner. In fact, the dominant formal (and sometimes person) work orientation we have detected may be a result of two pressures: an external accountability pressure from the administration, promoting prioritisation of formal aspects, combined (sometimes) with an internal pressure to attend to person aspects in daily social practice with teachers.
Another interesting finding is that some school leaders seem to pay moderate to strong attention to both techno and socio structure aspects, while others predominantly address either techno or socio structure aspects (in both cases in various permutations). We presume that school leaders who score highly for one of the two main elements of both techno and socio structure will have better foundations for addressing their weaknesses than those who have low scores for both main elements of either techno or socio structure. This raises questions about the optimal organisation of collaboration and in-service training to foster improvement of school leaders' relational and management work in practice, which is highly challenging according to several studies (e.g. Cunningham and Sherman, 2008). Reflecting on the analyses, one could question whether the interviews captured school leaders' work, but the aim was to investigate their work orientation, not their entire leadership practice. Moreover, school leaders are not the only organisational members who are involved in leadership. However, we argue that the school leaders' work orientation must be considered an important part of their practice. Further, we consider that the four dimensions we considered captured the school leaders' work orientations sufficiently for categorisation and fruitful analysis. A further limitation we should mention is that we found it difficult to distinguish robustly between different degrees of engagement; consequently, the scores we obtained may have been biased towards the moderate-to-high ends of the scales. Theoretically, our findings contribute to an understanding of school leaders' relational and management work orientation, by showing that it can be understood in terms of several permutations of moderate to high or low degrees of object and formal (techno structure), and person and symbol (socio structure) components.
The results have practical implications for the improvement of school leaders' management skills, since they contribute to a more nuanced starting point for development, considering both external neoliberal accountability pressure from the administration and internal relational pressure. Interestingly, we found no evidence of the relational dominance in school leaders' work orientations reportedly detected by previous authors. Rather, we found indications that our interviewees primarily attended to formal aspects, particularly shaping working routines. For practical reasons, we suggest that it could be fruitful to distinguish between formal or organising aspects and person aspects, because it is not enough to organise teams for professional development; the professional needs must also be analysed in relation to the work objectives. An important practical implication is that there is a need to investigate school leaders' attention to symbol aspects of their work and organisations. If symbols are core channels to communicate important values, as claimed by previous authors, it is worrying that the school leaders' scores for the symbol dimension were very low. We assert that it is important to establish and communicate core symbols like democracy, equity and solidarity in compulsory schooling, otherwise management could result in mere instrumentality, especially through external accountability pressures from sources such as PISA, thereby leading to a teaching and learning environment in schools where it is difficult for students to find meaning. However, we welcome further research to assess the general validity of our conclusions, as well as analyses with larger samples and more detailed exploration of the implications of different permutations of weak and strong object, formal, person and symbol components of work orientation.
[SECTION: Findings] The school leaders had predominantly formal work orientations, expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure. Scores for object (goal and result) and symbolic dimensions of their management orientation were low.
[SECTION: Value] In recent decades, the Scandinavian countries have swung from their previously characteristic social democratic regimes to neoliberal policy regimes. The swing has been underpinned by institutions such as the Organisation for Economic Co-operation and Development (OECD). In educational contexts, since the beginning of the 2000s the shift has included increasing focus on school leadership, with demands from the state for school leaders to ensure that student results (which have been declining in international comparisons) improve via scientific goal- and result-oriented management (Blossing et al., 2014; Lundahl, 2005). The OECD's Programme for International Student Assessment (PISA) has strongly promoted these demands, and data from PISA evaluations have been analysed from various perspectives to test hypotheses that may explain variations in student performance (e.g. Baumann and Krskova, 2016). A recent report by the OECD (2015) strongly focused on school leaders' management in explanations of the declining student performances in Sweden, and (inter alia) recommended establishment of a national institute to raise school leader quality. School leaders in other European countries face similar requirements (Hall et al., 2017). However, several studies have shown that school leaders have difficulties in meeting requirements for goal- and result-oriented management in practice (Holmgren et al., 2013; Moos et al., 2011; Moller, 2009), and instead pay most attention to relational and social aspects of their work (Hult et al., 2016; Tornsen, 2009). Social aspects are essential in working situations involving meetings with numerous teachers, students, parents and officials, which inevitably create a complex web of relations.
Clearly, this raises questions about whether and how such situations, and school leaders' relational and social orientation, may influence key elements of scientific management of their schools, for example: goal-setting, evaluation of teaching and learning, and establishment of development plans. To address these questions, in this paper we use the notions of techno and socio structure, drawn from work by Dalin (1994) on organisational profiles, in attempts to capture scientific goal and management aspects (in contrast to relational and social aspects) of school leaders' work orientation. Dalin understands the techno structure of organisations as consisting of the two aspects of object (goal-setting) and formal (working routines), and the socio structure as consisting of person and symbol aspects. We investigate possible reasons for the school leaders' difficulties mentioned above, using Dalin's understanding as a lens to focus on views expressed in interviews by 26 K-9 school leaders in a Swedish municipality. The aim is to contribute to the understanding of school leaders' relational and management work orientation in an organisational perspective, in terms of techno and socio structure dimensions. School leaders' work orientation is captured here by the interviewees' descriptions of its most prominent aspects. Inter alia, we test the following two hypotheses. First, that the school leaders strongly attend to socio structural aspects of their work, indicating that their orientation is predominantly relational. Second, that they have low degrees of techno structure orientation, indicating that they may pay too little attention to the attainment of goals and results in their management activities.

School leaders' work

Managers need various skills depending on the leadership situation (Armstrong, 2012).
According to organisational leadership and human resource literature, an important aspect of the situation is a manager's position in the authority hierarchy of the organisation. Being in the middle of the hierarchy requires a mix of technical, interpersonal and conceptual skills (Yukl, 2013, p. 162). An apparently common perception is that school leaders can be regarded as middle managers, implying that they are exposed to pressure from both senior managers and subordinates. Teachers seek collegiality, and want school leaders who can protect them from external pressure and thus orientate their efforts towards social aspects of their organisations. In contrast, senior managers feel responsible for budgets, and demand efficiency and control connected to organisations' management aspects (Liljenberg, 2015; Moos et al., 2004). Several researchers have shown that school leaders have difficulties meeting the expectations placed on them in the contemporary results- and goal-oriented school system (Holmgren et al., 2013; Moller, 2009; Spillane et al., 2002). Kelchtermans et al. (2011) conclude that school leaders act at a junction of interests and agendas, where they must not only react to external pressure and ideas but also proactively ensure that ongoing processes initiated within the organisation can develop without being jeopardised by conflicting interference from outside. From the teachers' perspective, the school leader is one of them. However, school leaders are intermediaries who must promote both internal and external agendas, interests and aspirations (Honig and Hatch, 2004; Hult et al., 2016), and can, thus, be regarded as both poachers and gamekeepers. To cope with this situation, managerial knowledge and skills are not enough. Inevitably, school leadership involves relational, moral and emotional agendas. Additionally, school leaders have to handle these agendas in parallel, as many aspects of school leadership are intertwined (Arlestig and Tornsen, 2014). 
Orientation towards social aspects of the organisation is considered essential for success as a school leader (Leo, 2015; Northfield, 2014; Sugrue, 2015; Tornsen, 2009). An excessive focus on social aspects with an "open door policy" makes it difficult for school leaders to attend sufficiently to techno structural aspects, such as long-term planning and systematic quality work. Thus, for instance, Scherp (1998) found that development of teaching and learning practice tends to be relatively weak in schools where principals are relatively strongly service and relation oriented. Similarly, Hoog et al. (2005, 2009) argue that attention to structural and cultural aspects must be balanced by attention to techno and socio aspects for successful school leadership. Nelson et al. (2008) studied novice school leaders' experiences of their new jobs, and concluded that they faced challenges connected to both technical and relational aspects of leadership, linking these to Sergiovanni's (2004) work related to the systemworld and lifeworld of school leadership. Systemworld and lifeworld are interconnected, and a proper balance between the two is essential for a successful school and successful school leadership, according to Nelson et al. (2008). In line with this argument, Tornsen (2011) found that principals monitored in successful schools had a more versatile leadership repertoire than those in less successful schools, including elements (in Dalin's terms) of both techno and socio orientation. Additionally, Demski and Racherbaumer (2015) conclude that successful principal leadership is "data wise," as successful principals use externally and internally generated data to improve practice. Lindberg (2014) concludes that school leaders lack knowledge about how to use goal-setting theory and management by objectives in their strategic work. 
Consequently, as goals for long-term school development are not properly established, it becomes more or less impossible for school leaders to steer activities in directions that lead to high quality and lasting good student performance. In summary, previous research indicates that school leaders require both relational and managerial knowledge and skills to be effective (e.g. Garza et al., 2014; Moos et al., 2011). It also shows that they face strong pressure to orient towards social and relational aspects in their daily social practice with teachers, while policy trends towards accountability impose pressure to orient towards more managerial or techno structural aspects. However, there is evidence that school leaders' focus is often biased towards the relational aspects, and that they have difficulties in meeting managerial demands to pay more attention to goals and results. [SECTION: Theoretical framework] For the analysis of school leaders' relational and management work we rely on Dalin's (1994) work on organisational profiles, and the two main elements of such profiles: techno and socio structure. Researchers have used various notions to distinguish between relational and management aspects of school leaders' work. Hoog et al. (2005, 2009) use the notions culture and structure. Nelson et al. (2008) use the terms relational and technical, and also refer to the notions lifeworld and systemworld introduced by Sergiovanni (2004). These notions are derived from more common categorisations of aspects of general social life. However, we have chosen Dalin's concepts to frame our analysis because they facilitate consideration of the school leaders' work on prominent aspects of organisational functions that are important for forging improvements. Moreover, object and formal aspects of techno structure can be distinguished, as well as person and symbol aspects of socio structure. 
All four of these sets of aspects are potentially important concerns, and thus important components or dimensions of leaders' or managers' work orientations, so the concepts allow substantially richer analysis than the dichotomized categories mentioned above. In our use of Dalin's terminology, the object dimension of orientation refers to attention to objective or factual content, and rational allocation of tasks to people with specific positions or roles in the organisation. Formalisation implies a prescribed, and often written, distribution of tasks, responsibilities and working processes. The combination of these aspects is called techno structure, which, according to Dalin, is rooted in classical organisation theory that regards hierarchy, position and function as key building blocks for an effective organisation. A shift towards this type of orientation is clearly evident in the goal- and result-focused management trend that has influenced school work in the neoliberal policy era (Blossing et al., 2014). The socio structural person aspects are related to individuals' motivation, knowledge, skills and learning potential, which are regarded here and by Dalin as critical foundations for an organisation. How the people in the organisation cooperate, especially how the leaders collaborate with the other staff, is also regarded as critical. The person aspects are strongly associated with symbolic aspects, i.e. sense-making, prized values and norms. For successful leadership, these must be communicated to staff, so symbols are more socio-structurally important than formal descriptions of working processes. According to Dalin, this is also aligned with a humanistic organisation perspective, in which organisational culture and sense-making are core features. Furthermore, it is manifested in many schools' efforts to establish a vision and a profile that signal not only what each school is, but also what it means to work there as a teacher. 
The 26 interviewed K-9 school leaders (17 women, 9 men) were sampled ad hoc from a Swedish municipality with approximately 40,000 inhabitants located close to one of Sweden's biggest cities. Of them, 12 had at least ten years' experience of work as a school leader, 8 had five to nine years' experience and 6 had at most four years' experience. Data used to analyse the school leaders' work orientation in techno and socio structure terms were collected from semi-structured, hour-long interviews with the 26 school leaders. The interviews started with the following open question and task: "What do you consider as your main tasks as a school leader, and which occupy most of your time? Please write them down on post-it notes." The school leaders were subsequently asked to talk about what they had written on the post-it notes. They were also asked to reflect on the kind of work they engaged in most, the aspects they felt most comfortable with, their development in relation to the work they thought was important for them, and what acting organisationally could mean for them. Field notes were taken during the interviews, all of which were also digitally audio-recorded and subsequently transcribed verbatim. NVivo 10 software was used to organise the transcripts. In this paper, we test the utility of techno- and socio-structure notions for analysing the school leaders' work orientation. For this purpose, we coded statements in the interview transcripts into the four categories shown in the analytical matrix presented in Table I, which were operationalized using Dalin's (1994) work on organisational profiles, and the two main elements of such profiles: techno and socio structure. The formal, object, symbol and person dimensions of each school leader's work orientation were subsequently scored as low, moderate or high following methodology described by Miles et al. (2014). 
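The coding, three-level scoring and subsequent dichotomization described above can be sketched in code. The following is an illustration only: the thresholds, weights and the sample counts are invented for demonstration and are not the authors' actual coding rules or interview data.

```python
# Illustrative sketch of three-level scoring and dichotomization of coded
# interview statements on Dalin's four dimensions. Thresholds are invented.
DIMENSIONS = ("object", "formal", "person", "symbol")

def score_dimension(word_count, emphasis):
    """Combine the number of coded words with a rated emphasis (1-3)
    into a low/moderate/high score, mimicking three-level scoring."""
    weight = word_count * emphasis
    if weight >= 300:
        return "high"
    if weight >= 100:
        return "moderate"
    return "low"

def dichotomize(scores):
    """Collapse three-level scores to low vs moderate-to-high, as done in
    the analysis to avoid over-interpreting the scores per se."""
    return {dim: ("low" if s == "low" else "moderate to high")
            for dim, s in scores.items()}

# Hypothetical coded data for one interviewee: (word count, emphasis rating).
leader = {"object": (40, 1), "formal": (180, 2),
          "person": (120, 3), "symbol": (10, 1)}
scores = {dim: score_dimension(*leader[dim]) for dim in DIMENSIONS}
print(dichotomize(scores))
```

The point of the sketch is only to make the two-stage reduction explicit: statements are first graded on three levels per dimension, and the grades are then collapsed into a binary profile per school leader.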
Thus, the scores were based on both numbers of words assigned to the respective categories and indications by the school leaders of how strongly the statements reflected their priorities and engagement. The scores were eventually dichotomized as low or moderate to high, to avoid exaggerating the importance of the scores per se. In this section, we first present results concerning each of the considered aspects and dimensions of the school leaders' work orientations, with illustrative quotations. Then we draw conclusions and relate them to both previous findings and our hypotheses: that their orientation may be strongly social and relational, but that they may pay too little attention to managerial (techno structural) aspects of their work. Interestingly, the summed scores in Table II indicate that most of the interviewed school leaders paid moderate to strong attention to formal (24) and person (13) aspects, and fewer paid moderate to strong attention to object and symbol aspects of their organisations (11 and 6, respectively). The results clearly indicate that work orientations of most of the interviewed school leaders had strong formal components, and in many cases strong person components. This is illustrated by the following extract regarding what school leader K says about her or his leadership:
K: [...] and then actually a teacher wrote that I have never had a principal who praises us as much as you do, you continuously give us positive feedback, strengthen us and listen to us [...] and I got [comments like] that from two teachers last week, so that was fine. But I think it is, that I [...] I want the teachers to go along with me, I seldom boss them around, and if I do I think they would complain to me, and probably it would be justified. I think they feel that I'm very confident in my leadership, I do think that. 
This desire to maintain harmonious relations with the teachers, reflected in wanting "the teachers to go along with me," permeates K's comments and manifests a strong person element, as do statements on procedures (reflecting the formal element of her or his work orientation). In the next extract, K talks about how s/he has attempted to improve formal processes:
K: [...] I can have a tendency to expand what I do, so I'm careful to ensure that I come back to the things we've decided, that now we're going to address this in the meetings, so I've become much more precise about structure, concerning how I should structure what to do and what this meeting should cover, and keep to that [...].
Our interpretation is that many school leaders in the sample were occupied with formalising the relationships of the teachers in various kinds of working procedures in their respective organisations. In the following extract, school leader B illustrates the strong prioritisation of organising the working procedures in a formal infrastructure:
B: [...] so I think that this is important in what I do, it is important that we create a good infrastructure.
Interviewer: Could you say something more about what you mean by infrastructure?
B: I think how we communicate with each other, the different kinds of meetings we have, how we talk with each other, how does it look in the different school houses, in the different groupings, who meets where? Where do we make decisions? What kinds of 'carrots' do we have? [...]
Two school leaders (H and C) clearly paid more attention to person aspects than formal aspects, as illustrated by the following quotation from C, manifesting a very clear focus on the teachers as persons, and her or his interest in meeting them to hear their concerns (minor and major):
C: [...] if you're there when people come in the morning [...] 
when people have a break, they can raise those little things, things that I might not see as problems but may be major clouds for those people, and if we let the clouds grow they'll get out of control and the people won't be able to focus on their tasks [...]: the coffee machine is my best friend, people stand there for a minute, then start talking about little things.
Throughout the interview with school leader C, s/he expressed her or his interest in meeting the needs of every individual teacher, which appeared to be the most highly prioritised task in C's leadership. This orientation is also evident in his or her comments on managing the team and the team leaders' development:
C: [...] in this course on team leadership, we're considering [...] last year we considered this [in relation to] different personality types in a team, and this time we've focused on leadership versus "co-workership" and examined Susan Wheelan's theories a little. I've used Targama before, we work with the team-leaders so they won't fall into these pits of discussing big or small balls in the school yard, these non-issues [...].
This attention to personality types, leadership, co-workership and the need to focus on important matters could all be linked to C's interest in and prioritisation of person aspects. In sharp contrast, school leader G's comments reflected strong attention to object aspects of team leadership:
G: [...] but we start by setting the effect goals so we all have a joint understanding of what we want, then the teams' work consists of making a plan for the coming year to show what we need to reach them, and what everyone needs to do and when. When we have delegated the work to the teams, the members of each team must tackle their tasks, which they do during the week. 
Then there is a structure during the whole year that includes operative meetings; there are specific times when the teams have meetings and they talk about the goals, and requirements, to reach them, during the operative meetings and meetings with the team leaders. There is one leader in each team [...].
School leader G's scores for object and formal dimensions of work orientation were moderate to high, indicating strong prioritisation of techno structural aspects. The following quotation from leader G expresses the importance s/he attached to goals and visions, and how they have been processed in G's school:
G: [...] you can't manage a school if some staff members don't like being there and it's about shaping goals, visions and we have visions. We leaders have developed the visions, then together with the staff we've developed clear goals that everyone knows. Then the staff have been involved and asked how they think we should reach them, and established intermediate goals and how long they think it will take, and what they should do to reach those points. We've streamlined the goals into effect goals and we've been selective, because we don't think we can do everything at the same time, but we've chosen different things to focus on this year, and that's what they've participated in, development of processes that are going to be started [...].
Overall, the school leaders' symbol orientation scores were low (moderate to high for just two of them). In the following extract, school leader H talks about participation and clearly shows that this is important in her or his leadership, as it is crucial for teacher collaboration and organising a leader-group. This is also highlighted when the interviewer subsequently asks which of the aspects mentioned so far H feels most at home with:
H: It is perhaps participation. Because I really believe that we're a team that together can reach results, where we have different responsibilities. 
Typically, statements manifesting a strong symbol component of work orientation in the interviews indicated that certain value-laden notions are raised, and repeated as objectives in themselves or as ideals to strive for, often without specifying the procedures that are intended to foster the desired values. This is illustrated by the following exchange, in which school leader M indicates that trust is the leadership aspect that s/he feels most at home or secure with:
M: Well, I think I am most at home here (points to a post-it where "trust and confidence" is written).
Interviewer: Where [...] here? (points to the post-it)
M: Yes!
Interviewer: Trust and confidence.
M: Create relations. I also believe a lot in relations. You get that here (pointing to a post-it indicating relations), trust and confidence, so to speak. But it is that (pointing to a post-it where goal fulfilment is written) [...].
Interviewer: Goal fulfilment?
M: Yes, and I think it's very much my way to reach it. I think it's very much my way, and I know that I'm at work a lot. It's easy [for people] to get in touch with me, I'm accessible. Not if I'm busy with something, but that's not all the time. So the paths [to me] are short.
In our interpretation, M treats trust and confidence as a symbol, guiding the relation-building in order to fulfil goals. Results of the analysis of the school leaders' work orientation are summarised in Table II. The 13 school leaders (50 per cent of the sample) in the top half had moderate to high scores for both techno and socio dimensions of work orientation, while the others had moderate to high scores for either techno or socio dimensions only. Thus, the scores do not confirm that a social or relational orientation prevailed in our sample of school leaders. 
However, 24 of the 26 school leaders (92 per cent) had moderate or high scores for the formal dimension of work orientation, in five cases with low scores for all of the other three dimensions (object, person and symbol), while the others had roughly equal scores for both techno and socio dimensions. Nevertheless, the hypothesis that the school leaders may pay too little attention to object aspects of their work was verified: 15 of the 26 school leaders (58 per cent) had low scores for the object dimension. It should also be noted that only six of them (23 per cent) obtained moderate or high scores for the symbol dimension. As already mentioned, one of the objectives of this study was to test the utility of techno- and socio-structure notions for analysing school leaders' work orientations. The results confirm that the interviewees' work orientations can be described in terms of several permutations of moderate to high or low degrees of attention to object and formal (techno structure), and person and symbol (socio structure) aspects of their work. They articulated attention to object aspects in comments regarding setting effective goals, and involving staff in both establishing intermediate goals and making plans to reach them. Consideration of formal aspects is expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure. Recognition of the importance of person aspects can be discerned in comments about attending to the teachers, listening to their needs at the coffee machine, giving responses that encourage their co-operation with the leaders' (and/or agreed) agendas, and working with their personality types. Finally, engagement with symbol aspects is evident in the highlighting of words like participation, trust and confidence, as well as attempts to ensure that values symbolised by the words permeate the leaders' organisations. 
Unlike previous authors, we found no evidence in our analysis (considering attention to both person and symbol aspects of socio structure) that our school leaders' work orientation was predominantly relational. Rather, we found that attention to formal aspects prevailed, for example, in shaping working routines. We suspect that this may have been masked in previous studies (e.g. Hoog et al., 2005; Northfield, 2014; Tornsen, 2009), which highlighted school leaders' emphasis on the importance of being skilled in building good relations. However, we suggest that it could be fruitful to distinguish between formal or organising aspects and person aspects. A lack of attention to person aspects in terms of identifying and addressing teachers' needs and motives could strongly impair professional development in a school. This is because it is not enough to organise teams for professional development, the professional needs must also be analysed in relation to work objectives. Our results are consistent with earlier findings that school leaders pay insufficient attention to managerial aspects of their work (Holmgren et al., 2013; Moos et al., 2011; Moller, 2009). However, they add nuance by showing that (among our sample) this did not concern formal or organising aspects, but object aspects, i.e. setting effective goals, and involving the staff in establishing intermediate goals and making plans to reach them. Thus, our findings do not verify the first hypothesis; that the school leaders strongly prioritise socio structural aspects of their work, indicating that their orientation is predominantly relational. Instead, they indicate that they prioritise formal aspects, as expressed in comments about organising teams, scheduling teacher meetings, shaping working routines in meetings, making plans and (in some cases) creating an infrastructure. 
The second hypothesis (that they have low degrees of techno structure orientation, indicating that they may pay too little attention to the attainment of goals and results in their management activities) is verified by the low scores for the object dimension obtained for 15 of the 26 school leaders (58 per cent). However, scores for the formal dimension (an important techno-structural element of our theoretical framework of management work orientation) were moderate to high for all but two of the school leaders. A surprising finding is that only two school leaders displayed moderate to high attention to symbol aspects, which together with person aspects constitute the socio structural dimensions of their work orientation. We expected more school leaders to show such attention, since seven of the school leaders prioritised person aspects, and previous empirical and theoretical analyses of organisational profiles suggest that social and personal aspects should strongly correlate. These findings raise intriguing questions. According to Dalin (1994), symbols provide the core channels for communicating meaning in the social world, but our investigated school leaders do not seem to use them. Instead they seem to communicate through objects, or through neither objects nor symbols but merely through formalising. A possibility that warrants further attention is that this may be due to increasing accountability pressure to act in a managerial manner. In fact, the dominant formal (and, in some cases, person) work orientation we have detected may be a result of two pressures: an external accountability pressure from the administration, promoting prioritisation of formal aspects, combined (sometimes) with an internal pressure to attend to person aspects in daily social practice with teachers. 
Another interesting finding is that some school leaders seem to pay moderate to strong attention to both techno and socio structure aspects, while others predominantly address either techno or socio structure aspects (in both cases in various permutations). We presume that school leaders who score highly for one of the two main elements of both techno and socio structure will have better foundations for addressing their weaknesses than those who have low scores for both main elements of either techno or socio structure. This raises questions about the optimal organisation of collaboration and in-service training to foster improvement of school leaders' relational and management work in practice, which is highly challenging according to several studies (e.g. Cunningham and Sherman, 2008). Reflecting on the analyses, one could question whether the interviews captured school leaders' work, but the aim was to investigate their work orientation, not their entire leadership practice. Moreover, school leaders are not the only organisational members who are involved in leadership. However, we argue that the school leaders' work orientation must be considered an important part of their practice. Further, we consider that the four dimensions captured the school leaders' work orientations sufficiently for categorisation and fruitful analysis. A further limitation we should mention is that we found it difficult to distinguish robustly between different degrees of engagement; consequently, the scores we obtained may have been biased towards the moderate-to-high ends of the scales. Theoretically, our findings contribute to an understanding of school leaders' relational and management work orientation, by showing that it can be understood in terms of several permutations of moderate to high or low degrees of object and formal (techno structure), and person and symbol (socio structure) components. 
The results have practical implications for the improvement of school leaders' management skills, since they contribute to a more nuanced starting point for development, considering both external neoliberal accountability pressure from the administration and internal relational pressure. Interestingly, we found no evidence of the relational dominance in school leaders' work orientations reportedly detected by previous authors. Rather, we found indications that our interviewees primarily attended to formal aspects, particularly shaping working routines. For practical reasons, we suggest that it could be fruitful to distinguish between formal or organising aspects and person aspects, because it is not enough to organise teams for professional development; the professional needs must also be analysed in relation to the work objectives. An important practical implication is that there is a need to investigate school leaders' attention to symbol aspects of their work and organisations. If symbols are core channels to communicate important values, as claimed by previous authors, it is worrying that the school leaders' scores for the symbol dimension were very low. We assert that it is important to establish and communicate core symbols like democracy, equity and solidarity in compulsory schooling, otherwise management could result in mere instrumentality, especially through external accountability pressures from sources such as PISA, thereby leading to a teaching and learning environment in schools where it is difficult for students to find meaning. However, we welcome further research to assess the general validity of our conclusions, as well as analyses with larger samples and more detailed exploration of the implications of different permutations of weak and strong object, formal, person and symbol components of work orientation.
|
The results suggest a need to increase Swedish school leaders' attention to object aspects, and both person and symbolic aspects of the formal or organising dimension, of their work. They also indicate the importance of establishing and communicating core symbols in compulsory schooling, like democracy and equity, to avoid external accountability pressures instrumentally shaping schools' management.
|
[SECTION: Purpose] At the start of the twenty-first century, people face considerable challenges, mainly on the social and environmental levels, such as climate change and the deepening of economic inequalities around the world. For this reason, societies and consumers are demanding that companies, as important agents of change in society, participate actively in the solution of the social problems that communities are facing. Many surveys have suggested that a positive relationship exists between a company's corporate social responsibility (CSR) actions and consumers' reactions to that company and its products (Bhattacharya and Sen, 2004; Brown and Dacin, 1997; Creyer and Ross, 1997; Ellen et al., 2006; Smith and Langford, 2009). However, other investigations have demonstrated that the relationship between a company's CSR actions and consumers' reactions is not always direct and evident, showing that numerous factors affect whether a firm's CSR activities lead to consumer purchases (Carrigan and Attalla, 2001; Ellen et al., 2000; Maignan and Ferrell, 2004; Valor, 2008). There seems to be a contradiction between what international polls and surveys have established in terms of people's intentions to buy products with CSR features and their actual purchasing decisions (Devinney et al., 2006). Auger et al. (2003) explained that the differences have occurred because, in former studies, researchers used surveys to rank the importance of a number of CSR issues, without any trade-off between traditional features (named in this study as corporate abilities (CA), which concern functional product features) and CSR product features (concerning non-product ethical features). This would explain why consumers' ethical concerns "do not necessarily become manifest in their actual purchasing behaviour" (Fan, 2005, p. 347). In that context, this study has three objectives. The first is to analyse how CSR and CA influence socially responsible consumption (SRC). 
The second objective is to analyse whether significant differences exist between CSR parameters in the purchasing decisions of consumers from Peru and Spain. The final objective is to contribute to the ongoing debate, using an experimental model that permits not only testing the first two objectives but also measuring people's trade-off between social and traditional features in terms of their willingness to pay (WTP). In addition, the study provides insights into how SRC is seen in different national contexts, as there is little research (Marin and Ruiz, 2007) regarding how consumers in those contexts may perceive the same CSR attributes. A more detailed explanation is offered in the sampling section. In the following section, the main concepts and relationships this research is based on are developed. Next, the objectives are stated and the methodology explained. Finally, the results are presented and discussed. In this section, we present the key issues that enable better understanding of the link between CSR and SRC. Some authors (Bhattacharya and Sen, 2003; Brown, 1998) have pointed out that corporate values, patterns, and general characteristics are perceived by consumers, who then form corporate associations that influence their responses to a company and its products. Other studies (Berens, 2004; Brown and Dacin, 1997) have recognized CA and CSR as types of corporate associations, demonstrating that "what consumers know about a company can influence their evaluations of products introduced by the company" (Brown and Dacin, 1997, p. 68). Gupta (2002) used these associations and their interactions to measure the effectiveness of a company's image. 
Basing this research on these concepts, we aim to measure the trade-off between CA and CSR. According to ISO 26000 (2010), social responsibility is "[the] responsibility of an organization for the impacts of its decisions and activities on society and the environment, through transparent and ethical behaviour" (p. 3). As proposed by the ISO, this socially responsible behaviour must be expressed through a set of six core subjects: human rights, labour practices, the environment, fair operating practices, consumer issues, and community involvement and development. From these, we selected three issues, addressed in sub-clauses: conditions of work (sub-clause 6.4.4), protection of the environment (sub-clause 6.5.6), and wealth and income creation (sub-clause 6.8.7). Auger et al. (2006) have highlighted the role SRC plays in consumer behaviour. SRC is variously defined as "the conscious and deliberate choice to make certain consumption choices based on personal and moral beliefs" (Auger et al., 2006, p. 32) and as "a person basing his or her acquisition, usage, and disposition of products on a desire to minimize or eliminate any harmful effects and maximize the long-run beneficial impact on society" (Mohr et al., 2001, p. 47). Both approaches, then, focus on SRC as the consumers' free selection of goods or services based on concerns regarding the impact of these goods or services on society. Recent investigations, however, have demonstrated that the relationship between CSR and SRC is not always direct and evident. 
Research findings have highlighted trade-offs between traditional criteria, such as price, quality, convenience, lack of information (Pomering and Dolnicar, 2008) or corporate brand dominance (Berens et al., 2005), and CSR-related factors, such as the specific CSR actions developed, product quality, the consumers' personal support for the CSR issues, and their general beliefs about CSR (Sen and Bhattacharya, 2001; Pomering and Dolnicar, 2008). Based on the concepts presented, the following hypotheses were proposed:
H1. There exists a positive relationship between a company's good labour practices (b1) and SRC.
H2. There exists a positive relationship between a company's environmental commitment (b2) and SRC.
H3. There exists a positive relationship between corporate giving to worthy causes (b3) and SRC.
To complete the purchasing behaviour model we present in this study, CA are included in the experiment to force the choice and trade-off between the attributes of CA and CSR that consumers consider when purchasing. Brown and Dacin (1997) defined CA as the corporation's expertise in the production and commercialization of its goods and services. More recently, Gupta (2002, p. 28) broadened the definition of CA to include "manufacturing expertise, product quality, a company's customer orientation, firm innovativeness, research and development, employee expertise, and after-sales service". Most researchers have agreed that CA are the most important consideration when evaluating a product (Berens, 2004; Berens et al., 2005; Brown and Dacin, 1997; Dacin and Brown, 2002; Sen and Bhattacharya, 2001). Taking into account the variety of traditional features and following Gupta's (2002) CA variables, we proposed the next hypotheses:
H4. There exists a positive relationship between a company's leadership in the industry (b4) and SRC.
H5. There exists a positive relationship between the quality of a company's products (b5) and SRC.
H6. 
There exists a positive relationship between a company's technological innovation (b6) and SRC.The proposed conceptual framework is that CSR and CA constructs can influence people's SRC positively, as shown in Figure 1.Additionally, when anticipating the differences between our Peruvian and Spanish samples, we expected to find that CSR parameters in Spain would be higher than in Peru. Because Spain is a developed country, its citizens enjoy a better quality of life. According to Maslow's (1943) hierarchy of needs, often pictured as a pyramid, people who have already covered their basics needs (shown at the bottom of the pyramid), given their high living standards, are more interested in fulfilling needs located on higher pyramid levels, such as ethical values.Consequently, we proposed one last hypothesis:H7. Spain has a higher CSR construct than Peru, taking parameters b1, b2, and b3 together.To finalize this section, we must explain the theoretical background underlying the estimation of the WTP for specific social features. The derivation of the WTP is done to quantify the trade-off that exists in every purchasing decision; in our case, we looked for one monetary measure that reflects the trade-off between CSR and CA features. We used a discrete choice model (DCM) to understand and model consumer decisions according to the probabilistic choice theory named random utility theory developed by McFadden (2001). When the perceived stimuli are interpreted as levels of satisfaction, or utility, this can be understood as a model for economic choice in which "the individual chooses the option yielding the greatest realization of utility" (McFadden, 2001, p. 361). Hence, we took a trade-off perspective to understand the behavioural process that leads to the agent's choice. 
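Random utility theory, as operationalized in McFadden's conditional logit, assigns each alternative a systematic utility and converts utilities into choice probabilities with the logit transform. A minimal illustrative sketch of that mechanism (the utility values below are hypothetical, not the study's estimates):

```python
import math

def logit_choice_probabilities(utilities):
    """Conditional logit: P(i) = exp(V_i) / sum_j exp(V_j).

    Utilities are shifted by their maximum before exponentiation for
    numerical stability; this leaves the probabilities unchanged.
    """
    v_max = max(utilities)
    exp_v = [math.exp(v - v_max) for v in utilities]
    total = sum(exp_v)
    return [e / total for e in exp_v]

# Hypothetical systematic utilities for two shoe profiles and a
# "neither" option (e.g. intercept + coefficients * attribute levels).
probs = logit_choice_probabilities([0.8, 0.3, 0.0])
```

Under this model the alternative with the highest systematic utility receives the highest choice probability, and the probabilities always sum to one, which is why the option "yielding the greatest realization of utility" is the most likely choice.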
3.1 Research design

Traditional survey methods using simple rating scales may overstate the importance of ethical issues in the purchasing behaviour of consumers, even among those who declare themselves supportive of social causes. For this reason, we decided that "an experimental methodology that more closely mimics a real purchase situation" (Auger and Devinney, 2005, p. 26) would be more appropriate for this type of research. Hence, the influence of CSR and CA on the purchasing behaviour of business school students was measured using an experimental design following choice-based conjoint (CBC) modelling, a methodology that allows the researcher to probe whether beliefs and behaviours are connected (Adamowicz et al., 1998; Hensher et al., 2005; Lancsar, 2002; Louviere et al., 2004). CBC modelling requires that consumers make choices in simulated situations derived from realistic variations of actual product offerings. The process used in generating and setting up the discrete choice experiment followed the steps proposed by Verma et al. (2004): identification of determinant attributes, specification of attribute levels, and experimental design.

Identification of determinant attributes. A pre-test using the instrument developed by Auger et al. (2003), with 32 choice alternatives of 14 variables each, was carried out, but response rates were low. We thus decided to reduce the number of CSR and CA attributes and the total number of questionnaires included in the survey. The attributes obtained from the literature review were presented to a panel of local experts for validation[1]. Next, a pilot experiment with 12 participants was conducted. The final list of CSR attributes comprised labour practices, a company's environmental commitment, and corporate giving to worthy causes. The functional attributes included leadership in the industry, quality products, and technological innovation.
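With the six attributes above each varied at two end-point levels, the full factorial spans 2^6 = 64 possible product profiles, of which a fractional factorial design retains only a small balanced, orthogonal subset. A quick enumeration to make the combinatorics concrete (attribute names paraphrased from the text):

```python
from itertools import product

# Two end-point levels per attribute: 0 = absent/low, 1 = present/high.
attributes = [
    "good labour practices",     # CSR
    "environmental commitment",  # CSR
    "giving to worthy causes",   # CSR
    "industry leadership",       # CA
    "quality products",          # CA
    "technological innovation",  # CA
]

# Full factorial of candidate profiles; a fractional design keeps only
# a small efficient subset of these for the actual choice tasks.
profiles = [dict(zip(attributes, levels))
            for levels in product([0, 1], repeat=len(attributes))]
```

Because every attribute appears at each level in exactly half of the 64 profiles, the full factorial is perfectly balanced; the design task is to preserve as much of that balance and orthogonality as possible in a far smaller set of tasks.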
Additionally, a price attribute was included to capture the WTP for each attribute; no interactions among attributes were included.

Specification of attribute levels. The ranges of the attributes are often chosen to represent actual values observed in the marketplace. An end-point design (Louviere et al., 2004) was applied, utilizing the two extremes (upper and lower) of each attribute's level range.

Experimental design. The fractional factorial optimal design was generated with the following characteristics: joint statistical efficiency for all the model parameters, both balanced and orthogonal features, and optimized D-efficiency (Hensher et al., 2005; Kanninen, 2002). The final design consisted of 16 choice tasks, presented in the matrix in Table I. Athletic shoes, classified as fashion products according to the Foote, Cone, and Belding Grid, were selected because they elicit a high degree of involvement on the part of clients, owing to the emotional criteria at play at the time of purchase (Vaughn, 1986). This product also allowed the evaluation of environmental issues, working conditions, and other traditional characteristics. The brand names used in the experiment were fictitious, to avoid any other factors that could have affected the purchase decision. Figure 2 shows an example from the final instrument.

Additionally, three validation tests were applied. First, a consistency test was applied to find out whether the respondents understood the concept of the DCM and the extent to which they acted rationally when expressing their preferences. The results showed that 100 per cent of respondents chose the right response, which indicates that the questionnaires were answered consistently. Then, an internal validity test was conducted, whereby identical measures were used in two groups: if no confounding variables exist, any difference between the groups on the dependent variable must be attributed to the effect of the independent variable.
The results showed no significant differences in the model estimation between the two groups of the pilot study. Finally, a reliability test was applied according to the test-retest method. The correlation coefficient between two sets of responses is often used as a quantitative measure of reliability; the final correlation observed was 0.75 (a correlation between 0.7 and 0.8 is considered satisfactory).

3.2 Research type

The CBC modelling technique was used for this study. SAS 9.1 and STATA 11 were used for the design and the model estimation, respectively. One of the most useful features of this choice-based experimental methodology is its ability to convert the probability of consideration and purchase directly into conditional monetary equivalents. Consequently, researchers can estimate the marginal rate of substitution, or trade-off, respondents are willing to make between two attributes, which yields financial indicators of WTP (Kanninen, 2002). We used the expression proposed by Louviere et al. (2004) to obtain the WTP for the CSR and CA variables (Equation 1), where MRSk is the marginal rate of substitution between attribute k and price, and ΔP represents the difference between the product price levels presented to the respondents. This approach allows the trade-offs consumers make between various aspects of CSR and CA to be evaluated in monetary terms.

3.3 Sampling and data collection method

Several studies have suggested that CSR is an important intangible asset (Ellen et al., 2006; Marin and Ruiz, 2007; Mohr and Webb, 2005; Schroeder and McEachern, 2005), but there is little research regarding how consumers from different countries perceive each of the CSR attributes. Therefore, students from a master's dual degree programme in Peru and Spain were chosen as the population to be surveyed. They answered the survey questions on paper, during classes.
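Equation 1 is not rendered in the text. From the definitions that accompany it (MRSk taken relative to price, ΔP the gap between the two price levels), the standard form given by Louviere et al. (2004) would presumably read as below, with $\beta_k$ and $\beta_{\mathrm{price}}$ denoting the estimated attribute and price coefficients (symbols assumed here, not taken from the paper):

```latex
\mathrm{WTP}_k
  \;=\; \mathrm{MRS}_k \times \Delta P
  \;=\; \frac{\beta_k}{\left|\beta_{\mathrm{price}}\right|} \times \Delta P
```

Dividing an attribute's coefficient by the (absolute) price coefficient expresses its utility contribution in price units, and multiplying by $\Delta P$ rescales that ratio to the monetary gap actually shown to respondents.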
The aim of this experiment was to test "the stability of the model across national samples" (Cadogan, 2010, p. 604) and to provide preliminary insights regarding how CSR attributes may be seen in different contexts. The sample size was 118 students in Peru and 121 in Spain. Peru and Spain were chosen for this comparative study because they share a number of common traits but are quite dissimilar in other ways. Following the application context of Duque and Lado (2010), we note that these two countries share a historical background (Peru was a colony of Spain from the sixteenth to the nineteenth century), a language (Spanish), a religion (Catholicism), and important commercial activities. Furthermore, in 2010, Spain was the country with the highest foreign investment in Peru, according to data from the Agency for the Promotion of Private Investment in Peru (PROINVERSION)[2]. One major difference is that Peru is a developing country while Spain is a developed one. Garza-Carranza et al. (2009) have distinguished between what they call more developed Latin regions (Spain) and less developed Latin regions (Peru). According to Schwartz (2006), who derived seven dimensions of values for comparing cultures, west European cultures such as Spain's score the highest of all regions on values such as egalitarianism, intellectual autonomy, and harmony, and the lowest on hierarchy and embeddedness. In contrast, Latin American cultures, such as Peru's, are "higher in hierarchy and embeddedness, presumably the main components of collectivism, and lower in intellectual autonomy, presumably the main component of individualism" (Schwartz, 2006, p. 161). This gives us sufficient contrast to compare our model in two different national contexts.

Table II shows the description of our sample: about 55 per cent of all the surveyed students were male, but there were more women in the sample group from Spain (55.6 per cent). About 40 per cent of the students were 25-29 years old.
In each country, the students completed the questionnaire voluntarily and anonymously. Table III shows the results for each country. As expected from economic pricing theory, the parameter for the price of athletic shoes is negative and significant in the model, revealing that higher prices decrease the maximum utility that individuals at a given income level can obtain from the shoes. The first six hypotheses are confirmed, and all estimated parameters are significant except the first one for the Peruvian sample, as the p-values in Table III show. Thus, the probability of purchasing rises when the CSR and CA attributes under study are present in the performance of the company that manufactures the chosen product. The only non-significant parameter was "good labour practices" in the Peruvian sample. This result appears coherent with Garavito's (2007) conclusions regarding labour practices. Garavito investigated CSR in the Peruvian labour market context and argued that the reason for the low interest in CSR labour policies was the lack of demand from society for such policies: "[given] a social necessities hierarchy where, due to the poverty level and the weakness of our institutional system, labour rights are considered a luxury good" (Garavito, 2007, p. 2). The intercepts, in turn, measure the inherent preferences of consumers for buying athletic shoes that are not captured by the independent variables of the model; by measuring the impact of all unobserved attributes, they provide an assessment of switching, or choice, inertia (Verma et al., 2004). In Table III, we notice that the intercepts are significant and negative (-1.50 and -1.29 for the Peruvian and Spanish samples, respectively), which means that the consumers of athletic shoes chose the option "neither" more often than either of the two alternatives offered to them.
We can thus conclude that potential customers of athletic shoes must be offered substantial value to persuade them to consider a new alternative. However, a wise combination of price, CA, and CSR attributes is sufficient to overcome the consumer switching barrier. In order to analyse the differences made by the variables, we ran a Chow-type test for discrete choice models between the estimated parameters of the pooled sample and the split samples for each country. The null hypothesis of no differences amongst the samples was rejected; thus, the parameters of the model estimated for the two countries were significantly different (likelihood-ratio test, χ2(8)=22.52; p-value=0.00). Table III shows that the quality products attribute is the one with the most impact in both samples. In the Peruvian case, the attribute with the second-greatest influence on purchasing behaviour is the company's environmental commitment, whereas for the Spanish students it is technological innovation, with environmental commitment placed third. In general, the impact of the CSR attributes varies between the Peruvian and Spanish samples. The results in Table III show that even though the CSR attributes are positive and significant in most cases, the influence of CSR as a whole is higher in the Peruvian sample than in the Spanish sample. Additionally, the results show that, taken together, the CA effects are more important than the CSR effects, and both are greater than the main effect of price. In other words, it would seem that CSR as a whole is a feature more valued by customers than price, but not as much as CA.
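The Chow-type comparison above amounts to a likelihood-ratio test: twice the gap between the sum of the two country-specific log-likelihoods and the pooled log-likelihood, referred to a chi-square distribution with as many degrees of freedom as there are restricted parameters (eight here). A sketch that turns the reported statistic into a p-value, using the closed-form chi-square survival function valid for even degrees of freedom:

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function P(X > x) for a chi-square variable with EVEN df.

    For df = 2m: P(X > x) = exp(-x/2) * sum_{j=0}^{m-1} (x/2)^j / j!.
    """
    assert df % 2 == 0 and df > 0
    half = x / 2.0
    term, total = 1.0, 1.0          # j = 0 term
    for j in range(1, df // 2):     # accumulate j = 1 .. m-1
        term *= half / j
        total += term
    return math.exp(-half) * total

# Reported likelihood-ratio statistic for pooled vs. country-specific
# models: chi2(8) = 22.52 -> p-value well below 0.01, so the null of
# equal parameters across the two countries is rejected.
p_value = chi2_sf_even_df(22.52, 8)
```

This matches the paper's "p-value=0.00" report at two decimal places while showing the exact tail probability is small but nonzero.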
However, when we examine the effects for the CSR construct in order to test H7, we find that the hypothesis is not validated, since the CSR coefficients as a whole are higher for the Peruvian sample than for the Spanish sample. As price is a discrete variable in this experiment, the gap between the price levels (40 monetary units, expressed in the respective currencies of Peru and Spain) was taken as the monetary unit for calculating the WTP. To make the WTP for the attributes comparable across the two countries, the WTP was expressed as a percentage of the minimum monthly income in Peru and Spain for the year 2010. Table IV shows the WTP calculated with the previously estimated coefficients presented in Table III. Table IV shows that the surveyed students from Peru and Spain were willing to pay more for the product quality attribute than for the ethical attributes: the most valued attribute in both samples was the quality of products. However, the company's environmental commitment was the attribute valued second-most by the Peruvian students (about 7 per cent), whereas the Spanish students placed technological innovation second (4 per cent). Finally, when comparing the WTP in both countries, we find that for each of the CSR and CA attributes the Peruvian students show a greater WTP than their Spanish counterparts (a difference of 4 and 3 per cent, respectively, for the constructs as a whole), the exceptions being good labour practices and leadership in the industry; for the latter, the WTP is equal in both samples. Thus, we obtained an empirical validation of the role of CSR in consumers' behaviour in Peru and Spain. CSR can become a baseline that offers a clear possibility of differentiation among competitors, which shows that the profit maximization approach is not necessarily in conflict with a better social return on investment.
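Given estimated coefficients, the conversion from model parameters to the WTP percentages reported in Table IV is mechanical. A sketch with purely hypothetical coefficient and income values (NOT the study's estimates); only the 40-unit price gap comes from the text:

```python
# Hypothetical conditional-logit coefficients (not the study's estimates).
beta_price = -0.5
betas = {
    "environmental commitment": 0.45,
    "quality products": 0.90,
    "technological innovation": 0.40,
}

DELTA_P = 40.0             # gap between the two price levels (from the text)
MIN_MONTHLY_INCOME = 600.0 # hypothetical 2010 minimum monthly income

def wtp(beta_k, beta_price, delta_p):
    """WTP_k = MRS_k * delta_p, with MRS_k = beta_k / |beta_price|."""
    return (beta_k / abs(beta_price)) * delta_p

# Express each attribute's WTP as a percentage of the minimum monthly income.
wtp_pct = {k: 100.0 * wtp(b, beta_price, DELTA_P) / MIN_MONTHLY_INCOME
           for k, b in betas.items()}
```

Normalizing by the minimum monthly income, as the authors do, removes the currency units and is what makes the Peruvian and Spanish figures directly comparable.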
The relevance of the CA attributes does not mean that CSR activities can be overlooked; the key managerial task is to find the ideal bundle of attributes that maximizes consumers' WTP. If corporations are able to find this ideal combination, they do not necessarily have to compete on price, for they would be laying the foundations for a competitive advantage via differentiation. The differences in the importance the students from the two countries gave to the various attributes could be explained by cultural factors, such as egalitarianism. Schwartz (2006) established a number of differences between west European and Latin American cultures: intellectual autonomy, egalitarianism, and harmony are highlighted as core attributes in west Europe, whereas Latin American countries show more embeddedness and affective autonomy. Other explanations may be found in consumer ethnocentrism (Josiassen et al., 2011) or in the notion of psychic distance (Sousa and Lages, 2011). Those studies could provide a starting point for further investigations into the reasons for the differences in the students' responses regarding the importance of the attributes identified in the present study. This research study shows the positive influence of CSR on consumer behaviour, thus confirming previous studies demonstrating that CSR is an important intangible asset offering a competitive advantage through differentiation (Auger et al., 2003; Bhattacharya and Sen, 2004; Carrigan et al., 2004; Ellen et al., 2006; Marin and Ruiz, 2007; Mohr and Webb, 2005; Oksanen and Uusitalo, 2004; Schroeder and McEachern, 2005). Researchers have found that both types of associations, CA and CSR, influence consumer purchasing behaviour. This study reveals that the purchasing probability increases with a good combination of CA and CSR.
Though previous research showed that CA associations have a stronger influence than social responsibility associations (Berens, 2004; Berens et al., 2005), this study shows that both sets of criteria are jointly determinant, although the importance of each attribute may vary according to contextual factors. The results also suggest that CSR can contribute to increased brand value and reputation, as well as to better financial results through customers' higher WTP. These results are in line with those previously obtained by Jones et al. (2005) and Papasolomou-Dukakis et al. (2005), and they enable corporations to evaluate how their CSR investments could have a positive impact on the purchasing behaviour of customers. An orientation towards the maximization of profits is not necessarily in conflict with the search for a better return in terms of social responsibility. Therefore, corporations have a great opportunity to contribute to the creation of a better world by not only generating economic benefits but also providing solutions to social problems. This research study shows that environmental protection is highly valued as a CSR activity in Spain and Peru. The environmental commitment attribute has special relevance, as it places in the top three attributes for both the Spanish and Peruvian students. Sriram and Forman (1993) had already highlighted noteworthy differences between US and Dutch consumers, showing that there are cross-cultural differences in how consumers perceive the importance of a product's environmental attributes. Future research should continue investigating contextual differences that may help account for the different values given to product attributes. Our findings support the contention that corporations should design CSR strategies based on consumers' preferences rather than on their own philanthropic ideas. Limitations of this study include its narrow focus: the investigation concerned one product only, namely athletic shoes.
The investigation was also restricted to testing only the linear and main effects of a narrow set of attributes. Several future research directions arise from these limitations: investigating different kinds of products, conducting further cross-cultural studies, modelling quadratic effects of price, and including interactions among variables would all add to the validity and generalizability of this study's findings. Finally, researchers could widen the sample selection to obtain results that are valid at the national level.

Figure 1 Conceptual framework
Figure 2 Questionnaire example for the Peruvian sample
Table I Matrix of choice tasks for the Peruvian sample
Table II Sample description
Table III Results by country of origin
Table IV Willingness to pay in terms of the minimum monthly payment (percentage)
Equation 1
|
- The research study has three objectives. One is to provide empirical validation of the relationship between corporate social responsibility (CSR) and corporate abilities (CA) as an influential factor in socially responsible consumption. The second is to ascertain whether there are significant differences between CSR parameters estimated in the purchasing decisions of consumers from Peru and Spain. Finally, the authors aim to measure people's trade-off between the social (CSR) and traditional (CA) features of their purchasing decisions in terms of their willingness to pay.
|
[SECTION: Method] At the start of the twenty-first century, people face considerable challenges, mainly on the social and environmental levels, such as climate change and the deepening of economic inequalities around the world. For this reason, societies and consumers are demanding that companies, as important agents of change in society, participate actively in solving the social problems that communities are facing. Many surveys have suggested that a positive relationship exists between a company's corporate social responsibility (CSR) actions and the consumers' reaction to that company and its products (Bhattacharya and Sen, 2004; Brown and Dacin, 1997; Creyer and Ross, 1997; Ellen et al., 2006; Smith and Langford, 2009). However, other investigations have demonstrated that the relationship between a company's CSR actions and the consumers' reaction is not always direct and evident, and have shown that numerous factors affect whether a firm's CSR activities lead to consumer purchases (Carrigan and Attalla, 2001; Ellen et al., 2000; Maignan and Ferrell, 2004; Valor, 2008). There seems to be a contradiction between what international polls and surveys have established in terms of people's intentions to buy products with CSR features and their actual purchasing decisions (Devinney et al., 2006). Auger et al. (2003) explained that these differences occurred because, in earlier studies, researchers used surveys to rank the importance of a number of CSR issues without any trade-off between traditional features (named in this study corporate abilities (CA), which concern functional product features) and CSR product features (concerning non-product ethical features). This would explain why consumers' ethical concerns "do not necessarily become manifest in their actual purchasing behaviour" (Fan, 2005, p. 347). In that context, this study has three different objectives. The first is to analyse how CSR and CA influence socially responsible consumption (SRC).
The second objective is to analyse whether there are significant differences between the CSR parameters in the purchasing decisions of consumers from Peru and Spain. The final objective is to contribute to the ongoing debate, using an experimental model that permits not only testing the first two purposes but also measuring people's trade-off between social and traditional features in terms of their willingness to pay (WTP). In addition, the study provides insights into how SRC is seen in different national contexts, as there is little research (Marin and Ruiz, 2007) regarding how consumers in those contexts may perceive the same CSR attributes. A more detailed explanation is offered in the sampling section. In the following section, the main concepts and relationships this research is based on are developed. Next, the objectives are stated and the methodology explained. Finally, the results are presented and discussed. In this section, we present the key issues that enable a better understanding of the link between CSR and SRC. Some authors (Bhattacharya and Sen, 2003; Brown, 1998) have pointed out that corporate values, patterns, and general characteristics are perceived by consumers, who then form corporate associations that influence their responses to a company and its products. Other studies (Berens, 2004; Brown and Dacin, 1997) have recognized CA and CSR as types of corporate associations, demonstrating that "what consumers know about a company can influence their evaluations of products introduced by the company" (Brown and Dacin, 1997, p. 68). Gupta (2002) used these associations and their interactions to measure the effectiveness of a company's image.
Basing the research study on these concepts, we aim to measure the trade-off between CA and CSR.According to the ISO 26000 (2010), social responsibility is "[the] responsibility of an organization for the impacts of its decisions and activities on society and the environment, through transparent and ethical behaviour" (p. 3). As proposed by the ISO, this social responsibility behaviour must be expressed through a set of six core subjects, which are human rights, labour practices, the environment, fair operating practices, consumer issues, and community involvement and development. From these, we selected three issues, addressed in sub-clauses: conditions of work (sub-clause 6.4.4), protection of the environment (sub-clause 6.5.6), and wealth and income creation (sub-clause 6.8.7).Auger et al. (2006) have highlighted the role SRC plays in consumer behaviour. SRC is variously defined as "the conscious and deliberate choice to make certain consumption choices based on personal and moral beliefs" (Auger et al., 2006, p. 32) and as "a person basing his or her acquisition, usage, and disposition of products on a desire to minimize or eliminate any harmful effects and maximize the long-run beneficial impact on society" (Mohr et al., 2001, p. 47). Both approaches, then, focus on SRC as the consumers' free selection of goods or services that is based on concerns regarding the impact of these goods or services on society.Recent investigations, however, have demonstrated that the relationship between CSR and SRC is not always direct and evident. 
Research findings have highlighted trade-offs between traditional criteria such as price, quality, convenience, and lack of information (Pomering and Dolnicar, 2008) or corporate brand dominance (Berens et al., 2005) and the specific CSR actions developed, product quality, the consumers' personal support for the CSR issues, and their general beliefs about CSR (Sen and Bhattacharya, 2001; Pomering and Dolnicar, 2008).Based on the concepts presented, the following hypotheses were proposed:H1. There exists a positive relationship between a company's good labour practices (b1) and SRC.H2. There exists a positive relationship between a company's environmental commitment (b2) and SRC.H3. There exists a positive relationship between corporate giving to worthy causes (b3) and SRC.To complete the purchasing behaviour model we present in this study, CA are included in the experiment to force the election and trade-off between the attributes of CA and CSR that consumers consider when purchasing. Brown and Dacin (1997) defined CA as the corporation's expertise in the production and commercialization of its goods and services. More recently, Gupta (2002, p. 28) broadened the definition of CA to include "manufacturing expertise, product quality, a company's customer orientation, firm innovativeness, research and development, employee expertise, and after-sales service". Most researchers have agreed that CA are the most important consideration when evaluating a product (Berens, 2004; Berens et al., 2005; Brown and Dacin, 1997; Dacin and Brown, 2002; Sen and Bhattacharya, 2001). Taking into account the variety of traditional features and following Gupta's (2002) CA variables, we proposed the next hypotheses:H4. There exists a positive relationship between a company's leadership in the industry (b4) and SRC.H5. There exists a positive relationship between the quality of a company's products (b5) and SRC.H6. 
There exists a positive relationship between a company's technological innovation (b6) and SRC.The proposed conceptual framework is that CSR and CA constructs can influence people's SRC positively, as shown in Figure 1.Additionally, when anticipating the differences between our Peruvian and Spanish samples, we expected to find that CSR parameters in Spain would be higher than in Peru. Because Spain is a developed country, its citizens enjoy a better quality of life. According to Maslow's (1943) hierarchy of needs, often pictured as a pyramid, people who have already covered their basics needs (shown at the bottom of the pyramid), given their high living standards, are more interested in fulfilling needs located on higher pyramid levels, such as ethical values.Consequently, we proposed one last hypothesis:H7. Spain has a higher CSR construct than Peru, taking parameters b1, b2, and b3 together.To finalize this section, we must explain the theoretical background underlying the estimation of the WTP for specific social features. The derivation of the WTP is done to quantify the trade-off that exists in every purchasing decision; in our case, we looked for one monetary measure that reflects the trade-off between CSR and CA features. We used a discrete choice model (DCM) to understand and model consumer decisions according to the probabilistic choice theory named random utility theory developed by McFadden (2001). When the perceived stimuli are interpreted as levels of satisfaction, or utility, this can be understood as a model for economic choice in which "the individual chooses the option yielding the greatest realization of utility" (McFadden, 2001, p. 361). Hence, we took a trade-off perspective to understand the behavioural process that leads to the agent's choice. 
3.1 Research design Traditional survey methods using simple rating scales may be overstating the importance of ethical issues in the purchasing behaviour of consumers, even in those who reveal themselves as supportive of social causes. For this reason, we decided that "an experimental methodology that more closely mimics a real purchase situation" (Auger and Devinney, 2005, p. 26) may be more appropriate for this type of research. Hence, the influence of CSR and CA on the purchasing behaviour of business school students was measured using an experimental design following choice-based conjoint (CBC) modelling, a methodology that allows the researcher to probe whether beliefs and behaviours are connected (Adamowicz et al., 1998; Hensher et al., 2005; Lancsar, 2002; Louviere et al., 2004).CBC modelling requires that consumers make choices in simulated situations derived from realistic variations of actual product offerings. The process used in generating and setting the discrete choice experiment followed the steps proposed by Verma et al. (2004): identification of determinant attributes; specification of attribute levels; experimental design.Identification of determinant attributes A pre-test using the instrument developed by Auger et al. (2003) with 32 choice alternatives with 14 variables each was carried out, but answer rates were low. We thus decided to reduce CSR and CA attributes and the total number of questionnaires included in the survey. The attributes obtained from the literature review were presented to a panel of local experts to be validated[1]. Next, a pilot experiment with 12 persons was developed. The final list of CSR attributes were labour practices, a company's environmental commitment, and corporate giving to worthy causes. The functional attributes included leadership in the industry, quality products, and technological innovation. 
Additionally, a price attribute was included to capture the WTP for each attribute, and no interactions among attributes were included.Specification of attribute levels The ranges of the attributes are often chosen to represent actual values observed in the marketplace. An end-point design (Louviere et al., 2004) was applied, utilizing the two extremes of the attribute level range (upper and lower).Experimental design The fractional factorial optimal design was generated with the following characteristics: joint statistical efficiency for all the model parameters, both balanced and orthogonal features, and optimized D-efficiency (Hensher et al., 2005; Kanninen, 2002). The final design consisted in 16 choice tasks, which can be found in the matrix presented in Table I. Athletic shoes, classified as fashion products according to the Foote, Cone, and Belding Grid, were the selected products because they elicit a high degree of involvement on the part of the clients due to emotional criteria at the time of purchasing (Vaughn, 1986). This product also allowed the evaluation of environmental issues, working conditions, and other traditional characteristics. The brand names for the experiment were fictitious to avoid any other factors that could have affected the purchase decision.Figure 2 shows an example from the final instrument.Additionally, three validation tests were applied. First, a consistency test was applied to find out whether the respondents understood the concept of the DCM and the extent to which they act rationally when expressing their preferences. The results of the test showed that 100 per cent of respondents chose the right response, which indicates that there was consistency in the questionnaires. Then, an internal validity test was developed, whereby identical measures were used in two groups: if no confounding variables exist, any difference between the groups on the dependent variable must be attributed to the effort of the independent variable. 
The results showed no significant differences in the model estimation between the two groups of the pilot study. Finally, a reliability test was applied using the test-retest method. The correlation coefficient between two sets of responses is often used as a quantitative measure of reliability; the final correlation observed was 0.75 (a correlation between 0.7 and 0.8 is considered satisfactory).

3.2 Research type

The CBC modelling technique was used for this study. SAS 9.1 and STATA 11 were used for the design and the model estimation, respectively. One of the most useful features of this choice-based experimental methodology is its ability to convert the probability of consideration and purchase directly into conditional monetary equivalents. Consequently, researchers can estimate the marginal rate of substitution, or trade-off, respondents are willing to make between two attributes, which serves as a financial indicator of WTP (Kanninen, 2002). We used the expression proposed by Louviere et al. (2004) to obtain the WTP for the CSR and CA variables (Equation 1), where MRSk is the marginal rate of substitution between attribute k and price, and delta P represents the difference between the product price levels presented to the respondents. This approach allows the evaluation, in monetary terms, of the trade-offs consumers make between various aspects of CSR and CA.

3.3 Sampling and data collection method

Several studies have suggested that CSR is an important intangible asset (Ellen et al., 2006; Marin and Ruiz, 2007; Mohr and Webb, 2005; Schroeder and McEachern, 2005), but there is little research regarding how consumers from different countries perceive each of the CSR attributes. Therefore, students from a master's dual degree programme in Peru and Spain were chosen as the population to be surveyed. They answered the survey questions on paper, during classes.
The aim of this experiment was to prove "the stability of the model across national samples" (Cadogan, 2010, p. 604) and to provide preliminary insights regarding how CSR attributes may be seen in different contexts. The sample size was 118 students in Peru and 121 in Spain. Peru and Spain were chosen for this comparative study because they share a number of common traits but are quite dissimilar in other ways. Following the application context of Duque and Lado (2010), we found that these two countries share a historical background (Peru was a colony of Spain between the sixteenth and nineteenth centuries), a language (Spanish), a religion (Catholicism), and important commercial activities. Furthermore, in 2010, Spain was the country with the highest foreign investment in Peru, according to data from the Agency for the Promotion of Private Investment in Peru (PROINVERSION)[2]. One major difference is that Peru is a developing country while Spain is a developed one. Garza-Carranza et al. (2009) have distinguished between what they call more developed Latin regions (Spain) and less developed Latin regions (Peru). According to Schwartz (2006), who derived seven dimensions of values for comparing cultures, west European cultures like Spain are the highest of all regions in values such as egalitarianism, intellectual autonomy, and harmony and the lowest on hierarchy and embeddedness. In contrast, Latin American cultures, like Peru, are "higher in hierarchy and embeddedness, presumably the main components of collectivism, and lower in intellectual autonomy, presumably the main component of individualism" (Schwartz, 2006, p. 161). These differences are sufficient to compare our model in two contrasting national contexts. Table II shows the description of our sample: about 55 per cent of all the surveyed students were male, but there were more women in the sample group from Spain (55.6 per cent). About 40 per cent of the students were 25-29 years old.
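Before turning to the results, the balanced, orthogonal, D-efficient fractional factorial mentioned in the experimental-design step can be illustrated with a generic construction. This is only a sketch under assumed coding (seven two-level attributes — the six CSR/CA attributes plus price — coded -1/+1 as in an end-point design), not the authors' actual SAS-generated design:

```python
import itertools

import numpy as np

# Generic 16-run 2^(7-3) fractional factorial for 7 two-level attributes,
# built from 4 basic columns A, B, C, D and generators E=ABC, F=ABD, G=ACD.
base = np.array(list(itertools.product([-1, 1], repeat=4)))  # 16 x 4
A, B, C, D = base.T
design = np.column_stack([A, B, C, D, A * B * C, A * B * D, A * C * D])

# Balance: each attribute appears at each of its two levels equally often.
assert (design.sum(axis=0) == 0).all()

# Orthogonality: with an intercept column, X'X = N * I for the coded design.
X = np.column_stack([np.ones(16), design])
info = X.T @ X
assert np.allclose(info, 16 * np.eye(8))

# D-efficiency relative to the orthogonal ideal: 100 * det(X'X)^(1/k) / N.
k = X.shape[1]
d_eff = 100 * np.linalg.det(info) ** (1 / k) / 16
print(round(d_eff, 1))  # 100.0 for a perfectly balanced orthogonal design
```

For a ±1-coded design whose columns are balanced and mutually orthogonal, the information matrix is diagonal and the D-efficiency reaches its maximum of 100 per cent, which is the property the design-generation step optimizes.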
In each country, the students completed the questionnaire voluntarily and anonymously. Table III shows the results for each country. As expected from economic pricing theory, the parameter for the price of athletic shoes is negative and significant, revealing that higher prices decrease the maximum utility that individuals at a given income level could obtain from the shoes. The first six hypotheses are confirmed, and all estimated parameters are significant except the first one for the Peruvian sample, as the p-values in Table III show. Thus, the probability of purchasing rises when the CSR and CA attributes under study are present in the performance of the company that manufactures the chosen product. The only non-significant parameter was "Good labour practices" in the Peruvian sample. This result seems coherent with Garavito's (2007) conclusions regarding labour practices. Garavito investigated CSR in the Peruvian labour market context and argued that the reason for the low interest in CSR labour policies was the lack of demand from society for such policies: "[given] a social necessities hierarchy where, due to the poverty level and the weakness of our institutional system, labour rights are considered a luxury good" (Garavito, 2007, p. 2). In turn, the intercepts capture the inherent preferences of consumers for buying athletic shoes that are not captured by the independent variables of the model; they measure the impact of all unobserved attributes and therefore provide an assessment of switching inertia (Verma et al., 2004). The intercepts are thus an appropriate measure of choice inertia. In Table III, we notice that the intercepts are significant and negative (-1.50 and -1.29 for the Peruvian and Spanish samples, respectively), which means that the consumers of athletic shoes chose the option "neither" more often than either of the two alternatives offered to them.
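The role of the negative intercepts can be illustrated with a small conditional-logit sketch. Apart from the reported Peruvian intercept of -1.50, all utilities below are invented for illustration, and `choice_probs` is a hypothetical helper, not part of the study's STATA estimation:

```python
import math

def choice_probs(utilities):
    """Multinomial logit: P_i = exp(V_i) / sum_j exp(V_j)."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

intercept = -1.50  # alternative-specific constant (switching inertia)
v_neither = 0.0    # utility of the "neither" option, normalised to zero

# Two hypothetical choice tasks: a weak bundle barely offsets the
# intercept; a strong bundle (say, quality plus environmental commitment
# at a low price) adds enough utility to overcome it.
weak = [intercept + 0.4, intercept + 0.2, v_neither]
strong = [intercept + 2.2, intercept + 0.2, v_neither]

p_weak = choice_probs(weak)
p_strong = choice_probs(strong)
print(p_weak[2] > p_weak[0])      # True: "neither" beats a weak bundle
print(p_strong[0] > p_strong[2])  # True: a rich bundle overcomes inertia
```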
We can thus conclude that potential customers of athletic shoes need to be offered some substantial value to persuade them to consider a new alternative. However, a wise combination of price, CA, and CSR attributes is sufficient to overcome the consumer switching barrier. In order to analyse the differences made by the variables, we ran a version of the Chow test for discrete-choice models, comparing the estimated parameters of the pooled sample with those of the split samples for each country. The null hypothesis of no differences among the samples was rejected; thus, the parameters of the model estimated for the two countries were significantly different (likelihood-ratio test, χ2(8)=22.52; p-value=0.00). Table III shows that the quality products attribute is the one with the most impact in both samples. In the Peruvian case, the second attribute with the greatest influence on purchasing behaviour is the company's environmental commitment, whereas for the Spanish students it is technological innovation, with environmental commitment placed third. In general, the impact of the CSR attributes varies between the Peruvian and Spanish samples. The results in Table III show that even though the CSR attributes are positive and significant in most cases, the influence of CSR as a whole is higher in the Peruvian sample than in the Spanish sample. Additionally, the results show that, taken together, the CA effects are more important than the CSR effects, and both are higher than the main effect of price. In other words, it would seem that CSR as a whole is a feature more valued by customers than price, but not as much as CA.
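The pooled-versus-split comparison above amounts to a likelihood-ratio computation. The log-likelihood values in this sketch are invented so that the statistic reproduces the reported 22.52; they are not the study's actual log-likelihoods:

```python
# Invented log-likelihoods, chosen only so the statistic matches the
# reported value; the real ones would come from the model estimations.
ll_pooled = -1540.00  # restricted model: one set of parameters
ll_peru = -760.00     # unrestricted: separate parameters per country
ll_spain = -768.74

# LR = 2 * (sum of split-sample log-likelihoods - pooled log-likelihood),
# compared against a chi-squared critical value with 8 degrees of freedom
# (the 8 parameters constrained to be equal across countries).
lr = 2 * (ll_peru + ll_spain - ll_pooled)
crit_05 = 15.507  # chi-squared(8) critical value at alpha = 0.05
print(round(lr, 2), lr > crit_05)  # 22.52 True -> reject equal parameters
```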
However, if we compare the effect results for the CSR construct in order to test H7, we find that it is not validated, since the CSR coefficients as a whole are higher for the Peruvian sample than for the Spanish sample. As price is a discrete variable in this experiment, the delta of the price levels (40 monetary units expressed in the respective currencies of Spain and Peru) has been taken as the monetary unit for purposes of calculating the WTP. To make the WTP for the attributes comparable across the two countries, the WTP was expressed as a percentage of the minimum monthly income in Peru and Spain for the year 2010. Table IV shows the results of the WTP calculated with the previously estimated coefficients presented in Table III. Table IV shows that the surveyed students from Peru and Spain were willing to pay more for the product quality attribute than for the ethical attributes: the most valued attribute in both samples was the quality of products. However, the company's environmental commitment was the second attribute most valued by the Peruvian students (about 7 per cent), whereas the Spanish students chose technological innovation as the second highest attribute (4 per cent). Finally, when comparing the WTP in both countries, we find that for each of the CSR and CA attributes, the Peruvian students show a greater WTP than their Spanish counterparts (a difference of 4 and 3 per cent, respectively, for the constructs as a whole). The exceptions are the attributes of good labour practices and leadership in the industry; for the latter, the WTP is equal in both samples. Thus, we obtained an empirical validation of the role of CSR in consumers' behaviour in Peru and Spain. CSR can become a baseline that offers a clear possibility of differentiation among competitors, which shows that the profit maximization approach is not necessarily in conflict with a better social return on investment.
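The WTP derivation (Equation 1) and the Table IV normalisation can be sketched together. Every number below is an invented placeholder, not one of the study's estimates, and the minimum-income figure is an assumption:

```python
# Invented conditional-logit coefficients for two attributes.
beta = {
    "quality_products": 0.80,
    "environmental_commitment": 0.45,
}
beta_price = -0.90  # price coefficient: negative, per pricing theory
delta_p = 40        # gap between the two price levels shown to respondents

# Equation 1: WTP_k = MRS_k * delta_P, with MRS_k = beta_k / |beta_price|.
wtp = {k: b / abs(beta_price) * delta_p for k, b in beta.items()}

# Table IV normalisation: WTP as a percentage of the 2010 minimum monthly
# income, making the two countries comparable (income figure is assumed).
min_income = 550.0
wtp_pct = {k: round(100 * w / min_income, 1) for k, w in wtp.items()}
print(wtp_pct)
```

The ratio of an attribute coefficient to the price coefficient converts utility units into price-level units, and multiplying by the price gap expresses the trade-off in money.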
The relevance of the CA attributes does not mean that CSR activities can be overlooked; the key managerial task is to find the ideal bundle of attributes that maximizes consumers' WTP. If corporations are able to find this ideal combination, they do not necessarily have to compete on price, for they would be laying the foundations for a competitive advantage via differentiation. The differences in the importance given to the various attributes by the students from the two countries could be explained by cultural factors, such as egalitarianism. Schwartz (2006) established a number of differences between west European and Latin American cultures: intellectual autonomy, egalitarianism, and harmony are highlighted as core attributes in west Europe, whereas Latin American countries show more embeddedness and affective autonomy. Other explanations may be found in consumer ethnocentrism (Josiassen et al., 2011) or in the notion of psychic distance (Sousa and Lages, 2011). Those studies could provide a starting point for further investigations into the reasons for the differences in the students' responses regarding the importance of the attributes identified in the present study. This research shows the positive influence of CSR on consumer behaviour, thus confirming previous studies demonstrating that CSR is an important intangible asset offering a competitive advantage through differentiation (Auger et al., 2003; Bhattacharya and Sen, 2004; Carrigan et al., 2004; Ellen et al., 2006; Schroeder and McEachern, 2005; Marin and Ruiz, 2007; Mohr and Webb, 2005; Oksanen and Uusitalo, 2004). Researchers have found that both types of associations, CA and CSR, influence consumer purchasing behaviour. This study reveals that the purchasing probability increases with a good combination of CA and CSR.
Though previous research showed that CA associations have a stronger influence than social responsibility associations (Berens, 2004; Berens et al., 2005), this study shows that both criteria taken together are determinant, although the importance of each attribute may vary according to contextual factors. The results also suggest that CSR can contribute to increased brand value and reputation as well as higher financial results through customers' higher WTP. These results are in line with those previously obtained by Jones et al. (2005) and Papasolomou-Dukakis et al. (2005), thus enabling corporations to evaluate how their CSR investments could have a positive impact on the purchasing behaviour of customers. An orientation towards the maximization of profits is not necessarily in conflict with the search for a better return in terms of social responsibility. Therefore, corporations have a great opportunity to contribute to the creation of a better world by not only generating economic benefits but also providing solutions to social problems. This research shows that environmental protection is highly valued as a CSR activity in Spain and Peru. The environmental commitment attribute has special relevance, as it is placed in the top three attributes for both Spanish and Peruvian students. Sriram and Forman (1993) had already highlighted noteworthy differences between US and Dutch consumers, showing that there are cross-cultural differences in how consumers perceive the importance of a product's environmental attributes. Future research should continue investigating contextual differences that may help account for the different values given to product attributes. Our findings support the contention that corporations should design CSR strategies based on consumers' preferences rather than on their own philanthropic ideas. Limitations of this study include a narrow focus: the investigation concerned one product only, namely athletic shoes.
The investigation was also restricted to testing only linear and main effects of a narrow set of attributes. Several future research directions arise from these limitations: investigating different kinds of products, conducting further cross-cultural studies, testing quadratic effects of price, and including interactions among variables would all add to the validity and generalizability of this study's findings. Finally, researchers could widen the sample selection to obtain results that are valid at the national level.

Figure 1 Conceptual framework
Figure 2 Questionnaire example for the Peruvian sample
Table I Matrix of choice tasks for the Peruvian sample
Table II Sample description
Table III Results by country of origin
Table IV Willingness to pay in terms of the minimum monthly payment (percentage)
Equation 1
- A discrete choice modelling experiment was used to test the relationship between CSR and CA, quantify consumers' intention to purchase, and establish their willingness to pay for specific social features.
[SECTION: Findings] At the start of the twenty-first century, people face considerable challenges, mainly on the social and environmental levels, such as climate change and the deepening of economic inequalities around the world. For this reason, societies and consumers are demanding that companies, as important agents of change in society, participate actively in the solution of the social problems that communities are facing. Many surveys have suggested that a positive relationship exists between a company's corporate social responsibility (CSR) actions and consumers' reactions to that company and its products (Bhattacharya and Sen, 2004; Brown and Dacin, 1997; Creyer and Ross, 1997; Ellen et al., 2006; Smith and Langford, 2009). However, other investigations have demonstrated that the relationship between a company's CSR actions and consumers' reactions is not always direct and evident, showing that numerous factors affect whether a firm's CSR activities lead to consumer purchase (Carrigan and Attalla, 2001; Ellen et al., 2000; Maignan and Ferrell, 2004; Valor, 2008). There seems to be a contradiction between what international polls and surveys have established in terms of people's intentions to buy products with CSR features and their actual purchasing decisions (Devinney et al., 2006). Auger et al. (2003) explained that the differences occurred because, in former studies, researchers used surveys to rank the importance of a number of CSR issues without any trade-off between traditional features (named in this study corporate abilities (CA), which concern functional product features) and CSR product features (concerning non-product ethical features). This would explain why consumers' ethical concerns "do not necessarily become manifest in their actual purchasing behaviour" (Fan, 2005, p. 347). In that context, this study has three objectives. The first is to analyse how CSR and CA influence socially responsible consumption (SRC).
The second objective is to analyse whether significant differences exist between the CSR parameters in the purchasing decisions of consumers from Peru and Spain. The final objective is to contribute to the ongoing debate, using an experimental model that permits not only testing the first two objectives but also measuring people's trade-off between social and traditional features in terms of their willingness to pay (WTP). In addition, the study provides insights into how SRC is seen in different national contexts, as there is little research (Marin and Ruiz, 2007) regarding how consumers in those contexts may perceive the same CSR attributes. A more detailed explanation is offered in the sampling section. In the following section, the main concepts and relationships this research is based on are developed. Next, the objectives are stated and the methodology explained. Finally, the results are presented and discussed.

In this section, we present the key issues that enable a better understanding of the link between CSR and SRC. Some authors (Bhattacharya and Sen, 2003; Brown, 1998) have pointed out that a corporation's values, patterns, and general characteristics are perceived by consumers, who then form corporate associations that influence consumers' responses to a company and its products. Other studies (Berens, 2004; Brown and Dacin, 1997) have recognized CA and CSR as types of corporate associations, demonstrating that "what consumers know about a company can influence their evaluations of products introduced by the company" (Brown and Dacin, 1997, p. 68). Gupta (2002) used these associations and their interactions to measure the effectiveness of a company's image.
Basing the research study on these concepts, we aim to measure the trade-off between CA and CSR. According to ISO 26000 (2010), social responsibility is "[the] responsibility of an organization for the impacts of its decisions and activities on society and the environment, through transparent and ethical behaviour" (p. 3). As proposed by the ISO, this socially responsible behaviour must be expressed through a set of six core subjects: human rights, labour practices, the environment, fair operating practices, consumer issues, and community involvement and development. From these, we selected three issues, addressed in sub-clauses: conditions of work (sub-clause 6.4.4), protection of the environment (sub-clause 6.5.6), and wealth and income creation (sub-clause 6.8.7). Auger et al. (2006) have highlighted the role SRC plays in consumer behaviour. SRC is variously defined as "the conscious and deliberate choice to make certain consumption choices based on personal and moral beliefs" (Auger et al., 2006, p. 32) and as "a person basing his or her acquisition, usage, and disposition of products on a desire to minimize or eliminate any harmful effects and maximize the long-run beneficial impact on society" (Mohr et al., 2001, p. 47). Both approaches, then, focus on SRC as the consumers' free selection of goods or services based on concerns regarding the impact of these goods or services on society. Recent investigations, however, have demonstrated that the relationship between CSR and SRC is not always direct and evident.
Research findings have highlighted trade-offs between traditional criteria such as price, quality, convenience, and lack of information (Pomering and Dolnicar, 2008) or corporate brand dominance (Berens et al., 2005) and the specific CSR actions developed, product quality, the consumers' personal support for the CSR issues, and their general beliefs about CSR (Sen and Bhattacharya, 2001; Pomering and Dolnicar, 2008). Based on the concepts presented, the following hypotheses were proposed:

H1. There exists a positive relationship between a company's good labour practices (b1) and SRC.

H2. There exists a positive relationship between a company's environmental commitment (b2) and SRC.

H3. There exists a positive relationship between corporate giving to worthy causes (b3) and SRC.

To complete the purchasing behaviour model we present in this study, CA are included in the experiment to force a choice and trade-off between the CA and CSR attributes that consumers consider when purchasing. Brown and Dacin (1997) defined CA as the corporation's expertise in the production and commercialization of its goods and services. More recently, Gupta (2002, p. 28) broadened the definition of CA to include "manufacturing expertise, product quality, a company's customer orientation, firm innovativeness, research and development, employee expertise, and after-sales service". Most researchers have agreed that CA are the most important consideration when evaluating a product (Berens, 2004; Berens et al., 2005; Brown and Dacin, 1997; Dacin and Brown, 2002; Sen and Bhattacharya, 2001). Taking into account the variety of traditional features and following Gupta's (2002) CA variables, we proposed the following hypotheses:

H4. There exists a positive relationship between a company's leadership in the industry (b4) and SRC.

H5. There exists a positive relationship between the quality of a company's products (b5) and SRC.

H6.
There exists a positive relationship between a company's technological innovation (b6) and SRC.

The proposed conceptual framework is that the CSR and CA constructs can positively influence people's SRC, as shown in Figure 1. Additionally, when anticipating the differences between our Peruvian and Spanish samples, we expected to find that the CSR parameters in Spain would be higher than in Peru. Because Spain is a developed country, its citizens enjoy a better quality of life. According to Maslow's (1943) hierarchy of needs, often pictured as a pyramid, people who have already covered their basic needs (shown at the bottom of the pyramid), given their high living standards, are more interested in fulfilling needs located on higher pyramid levels, such as ethical values. Consequently, we proposed one last hypothesis:

H7. Spain has a higher CSR construct than Peru, taking parameters b1, b2, and b3 together.

To finalize this section, we must explain the theoretical background underlying the estimation of the WTP for specific social features. The WTP is derived to quantify the trade-off that exists in every purchasing decision; in our case, we looked for a monetary measure that reflects the trade-off between CSR and CA features. We used a discrete choice model (DCM) to understand and model consumer decisions according to the probabilistic choice theory known as random utility theory, developed by McFadden (2001). When the perceived stimuli are interpreted as levels of satisfaction, or utility, this can be understood as a model for economic choice in which "the individual chooses the option yielding the greatest realization of utility" (McFadden, 2001, p. 361). Hence, we took a trade-off perspective to understand the behavioural process that leads to the agent's choice.
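The random utility framework described above can be illustrated with a minimal simulation: each option's utility is a deterministic part plus Gumbel-distributed noise, and the chooser picks the option with the highest realised utility, which with Gumbel errors yields logit choice frequencies. The utility values below are invented for illustration:

```python
import math
import random

random.seed(42)  # reproducible draws

def gumbel() -> float:
    """Draw from a standard Gumbel (extreme-value type I) distribution."""
    return -math.log(-math.log(random.random()))

# Invented deterministic utilities for two shoe alternatives and "neither".
v = {"shoe_a": 1.0, "shoe_b": 0.2, "neither": 0.0}

def choose() -> str:
    """Pick the option with the highest realised utility V + epsilon."""
    return max(v, key=lambda option: v[option] + gumbel())

draws = [choose() for _ in range(20000)]
share_a = draws.count("shoe_a") / len(draws)

# With Gumbel errors, choice frequencies converge to the logit formula.
logit_a = math.exp(v["shoe_a"]) / sum(math.exp(x) for x in v.values())
print(abs(share_a - logit_a) < 0.02)  # simulated share tracks the logit probability
```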
The relevance of CA attributes does not mean an oversight of the CSR activities, and the key managerial task is to find the ideal bundle of attributes that maximizes the consumers' WTP. If corporations are able to find this ideal combination, they do not necessarily have to compete on price, for they would be laying the foundations for a competitive advantage via differentiation.The differences in the importance given to the various attributes by the students from the two countries could be explained by cultural factors, such as egalitarianism for example. Schwartz (2006) established a number of differences between west Europe and Latin American cultures: intellectual autonomy, egalitarianism, and harmony are highlighted as core attributes in west Europe whereas Latin American countries show more embeddedness and affective autonomy. Other explanations may be found in consumer ethnocentrism (Josiassen et al., 2011) or in the notion of psychic distance (Sousa and Lages, 2011). Those studies could provide a starting point for other investigations regarding the reasons for the differences in the students' responses regarding the importance of the attributes identified in the present study. This research study shows the positive influence of CSR on consumer behaviour, thus confirming previous studies demonstrating that CSR is an important intangible asset offering a competitive advantage through differentiation (Auger et al., 2003; Bhattacharya and Sen, 2004; Carrigan et al., 2004; Ellen et al., 2006; Schroeder and McEachern, 2005; Marin and Ruiz, 2007; Mohr and Webb, 2005; Oksanen and Uusitalo, 2004). Researchers have found that both types of associations, CA and CSR, influence consumer purchasing behaviour. This study reveals that the purchasing probability increases with a good combination of CA and CSR. 
Though previous research showed that CA has a stronger influence in social responsibility associations (Berens, 2004; Berens et al., 2005), this study shows that both criteria as a whole are determinant although the importance of each attribute may vary according to contextual factors.The results also suggest that CSR can contribute to an increased brand value and reputation as well as higher financial results through customers' higher WTP. These results are in line with those previously obtained by Jones et al. (2005) and Papasolomou-Dukakis et al. (2005), thus enabling corporations to evaluate how their CSR investments could have a positive impact on the purchasing behaviour of customers. An orientation towards maximization of profits is not necessarily in conflict with the search for a better return in terms of social responsibility. Therefore, corporations have a great opportunity to contribute to the creation of a better world by not only generating economic benefits but also providing solutions to social problems.This research study shows that environmental protection is highly valued as a CSR activity in Spain and Peru. The environmental commitment attribute has special relevance as it is placed in the top three attributes for both Spanish and Peruvian students. Sriram and Forman (1993) have already highlighted noteworthy differences between US and Dutch consumers, showing that there are cross-cultural differences in how consumers perceive the importance of a product's environmental attributes. Future research studies should continue investigating contextual differences that may help account for different values given to product attributes. Our findings support the contention that corporations should design CSR strategies based on the consumers' preferences rather than on their own philanthropic ideas.Limitations of this study include a narrow focus: the investigation concerned one product only, namely athletic shoes. 
The investigation was restricted to test only linear and main effects of a narrow set of attributes. Several future research directions arise from the limitations inherent in this research. Research investigating different kinds of products, further cross-cultural studies, quadratic effects of price, including interactions among variables, would add to the validity and generalization of this study's findings. Finally, researchers could widen the sample selection to obtain valid results at national levels. Opens in a new window.Figure 1 Conceptual framework Opens in a new window.Figure 2 Questionnaire example for the Peruvian sample Opens in a new window.Table I Matrix of choice tasks for the Peruxian Sample Opens in a new window.Table II Sample description Opens in a new window.Table III Results by country of origin Opens in a new window.Table IV Willingness to pay in terms of the minimum monthly payment (percentage) Opens in a new window.Equation 1
[SECTION: Findings] It was found that there is a positive relationship between CSR and CA regarding consumer behaviour, and that Peruvian consumers seem to be more sensitive to the CSR features of products than Spanish consumers. Moreover, the results show that the willingness to pay for each specific social feature appears to be contextually defined.
[SECTION: Value] At the start of the twenty-first century, people face considerable challenges, mainly at the social and environmental levels, such as climate change and the deepening of economic inequalities around the world. For this reason, societies and consumers are demanding that companies, as important agents of change in society, participate actively in solving the social problems that communities face. Many surveys have suggested that a positive relationship exists between a company's corporate social responsibility (CSR) actions and consumers' reactions to that company and its products (Bhattacharya and Sen, 2004; Brown and Dacin, 1997; Creyer and Ross, 1997; Ellen et al., 2006; Smith and Langford, 2009). However, other investigations have demonstrated that this relationship is not always direct and evident, showing that numerous factors affect whether a firm's CSR activities lead to consumer purchase (Carrigan and Attalla, 2001; Ellen et al., 2000; Maignan and Ferrell, 2004; Valor, 2008). There seems to be a contradiction between what international polls and surveys have established in terms of people's intentions to buy products with CSR features and their actual purchasing decisions (Devinney et al., 2006). Auger et al. (2003) explained that these differences arose because earlier studies used surveys to rank the importance of a number of CSR issues without any trade-off between traditional features (named in this study corporate abilities (CA) and concerning functional product features) and CSR product features (concerning non-product ethical features). This would explain why consumers' ethical concerns "do not necessarily become manifest in their actual purchasing behaviour" (Fan, 2005, p. 347). In that context, this study has three objectives. The first is to analyse how CSR and CA influence socially responsible consumption (SRC).
The second objective is to analyse whether there are significant differences between the CSR parameters in the purchasing decisions of consumers from Peru and Spain. The final objective is to contribute to the ongoing debate with an experimental model that makes it possible not only to test the first two objectives but also to measure people's trade-off between social and traditional features in terms of their willingness to pay (WTP). In addition, the study provides insights into how SRC is seen in different national contexts, as there is little research (Marin and Ruiz, 2007) on how consumers in those contexts may perceive the same CSR attributes. A more detailed explanation is offered in the sampling section. In the following section, the main concepts and relationships this research is based on are developed. Next, the objectives are stated and the methodology explained. Finally, the results are presented and discussed. In this section, we present the key issues that enable a better understanding of the link between CSR and SRC. Some authors (Bhattacharya and Sen, 2003; Brown, 1998) have pointed out that a corporation's values, patterns, and general characteristics are perceived by consumers, who then form corporate associations that influence their responses to the company and its products. Other studies (Berens, 2004; Brown and Dacin, 1997) have recognized CA and CSR as types of corporate associations, demonstrating that "what consumers know about a company can influence their evaluations of products introduced by the company" (Brown and Dacin, 1997, p. 68). Gupta (2002) used these associations and their interactions to measure the effectiveness of a company's image.
Basing our research on these concepts, we aim to measure the trade-off between CA and CSR. According to the ISO 26000 (2010), social responsibility is "[the] responsibility of an organization for the impacts of its decisions and activities on society and the environment, through transparent and ethical behaviour" (p. 3). As proposed by the ISO, this socially responsible behaviour must be expressed through a set of six core subjects: human rights, labour practices, the environment, fair operating practices, consumer issues, and community involvement and development. From these, we selected three issues, addressed in sub-clauses: conditions of work (sub-clause 6.4.4), protection of the environment (sub-clause 6.5.6), and wealth and income creation (sub-clause 6.8.7). Auger et al. (2006) have highlighted the role SRC plays in consumer behaviour. SRC is variously defined as "the conscious and deliberate choice to make certain consumption choices based on personal and moral beliefs" (Auger et al., 2006, p. 32) and as "a person basing his or her acquisition, usage, and disposition of products on a desire to minimize or eliminate any harmful effects and maximize the long-run beneficial impact on society" (Mohr et al., 2001, p. 47). Both approaches, then, focus on SRC as the consumers' free selection of goods or services based on concerns regarding the impact of these goods or services on society. Recent investigations, however, have demonstrated that the relationship between CSR and SRC is not always direct and evident.
Research findings have highlighted trade-offs between traditional criteria such as price, quality, convenience, and lack of information (Pomering and Dolnicar, 2008) or corporate brand dominance (Berens et al., 2005) on the one hand, and the specific CSR actions developed, product quality, the consumers' personal support for the CSR issues, and their general beliefs about CSR on the other (Sen and Bhattacharya, 2001; Pomering and Dolnicar, 2008). Based on the concepts presented, the following hypotheses were proposed:
H1. There exists a positive relationship between a company's good labour practices (b1) and SRC.
H2. There exists a positive relationship between a company's environmental commitment (b2) and SRC.
H3. There exists a positive relationship between corporate giving to worthy causes (b3) and SRC.
To complete the purchasing behaviour model we present in this study, CA are included in the experiment to force the choice and trade-off between the CA and CSR attributes that consumers consider when purchasing. Brown and Dacin (1997) defined CA as the corporation's expertise in the production and commercialization of its goods and services. More recently, Gupta (2002, p. 28) broadened the definition of CA to include "manufacturing expertise, product quality, a company's customer orientation, firm innovativeness, research and development, employee expertise, and after-sales service". Most researchers have agreed that CA are the most important consideration when evaluating a product (Berens, 2004; Berens et al., 2005; Brown and Dacin, 1997; Dacin and Brown, 2002; Sen and Bhattacharya, 2001). Taking into account the variety of traditional features and following Gupta's (2002) CA variables, we proposed the following hypotheses:
H4. There exists a positive relationship between a company's leadership in the industry (b4) and SRC.
H5. There exists a positive relationship between the quality of a company's products (b5) and SRC.
H6. There exists a positive relationship between a company's technological innovation (b6) and SRC.
The proposed conceptual framework is that the CSR and CA constructs can positively influence people's SRC, as shown in Figure 1. Additionally, when anticipating the differences between our Peruvian and Spanish samples, we expected to find that the CSR parameters in Spain would be higher than in Peru. Because Spain is a developed country, its citizens enjoy a better quality of life. According to Maslow's (1943) hierarchy of needs, often pictured as a pyramid, people who have already covered their basic needs (shown at the bottom of the pyramid), given their high living standards, are more interested in fulfilling needs located at higher levels of the pyramid, such as ethical values. Consequently, we proposed one last hypothesis:
H7. Spain has a higher CSR construct than Peru, taking parameters b1, b2, and b3 together.
To conclude this section, we must explain the theoretical background underlying the estimation of the WTP for specific social features. The WTP is derived to quantify the trade-off that exists in every purchasing decision; in our case, we looked for a single monetary measure reflecting the trade-off between CSR and CA features. We used a discrete choice model (DCM) to understand and model consumer decisions according to the probabilistic random utility theory developed by McFadden (2001). When the perceived stimuli are interpreted as levels of satisfaction, or utility, this can be understood as a model of economic choice in which "the individual chooses the option yielding the greatest realization of utility" (McFadden, 2001, p. 361). Hence, we took a trade-off perspective to understand the behavioural process that leads to the agent's choice.
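The random utility framework can be sketched numerically: each alternative's deterministic utility is a linear combination of its attributes, and under McFadden's logit assumptions the choice probabilities take a softmax form. The coefficients and shoe profiles below are hypothetical illustrations, not the study's estimates.

```python
import numpy as np

def logit_choice_probabilities(X, beta):
    """Multinomial logit: P(j) = exp(v_j) / sum_k exp(v_k), with v = X @ beta."""
    v = X @ beta                  # deterministic utility of each alternative
    ev = np.exp(v - v.max())      # shift by the max for numerical stability
    return ev / ev.sum()

# One choice task: two shoe profiles plus the "neither" option.
# Columns: [price (scaled), environmental commitment, quality products]
X = np.array([
    [1.0, 1.0, 1.0],   # option A: pricier, CSR and quality present
    [0.5, 0.0, 1.0],   # option B: cheaper, quality only
    [0.0, 0.0, 0.0],   # "neither" (all attributes absent)
])
beta = np.array([-1.2, 0.6, 0.9])  # hypothetical tastes; price effect is negative

probs = logit_choice_probabilities(X, beta)
print(probs.round(3))  # probabilities over the three options, summing to 1
```

Adding a negative alternative-specific intercept to the two product options (as estimated in Table III) would shift probability mass toward "neither", which is exactly the switching-inertia effect discussed in the results.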
3.1 Research design
Traditional survey methods using simple rating scales may overstate the importance of ethical issues in the purchasing behaviour of consumers, even those who present themselves as supportive of social causes. For this reason, we decided that "an experimental methodology that more closely mimics a real purchase situation" (Auger and Devinney, 2005, p. 26) may be more appropriate for this type of research. Hence, the influence of CSR and CA on the purchasing behaviour of business school students was measured using an experimental design following choice-based conjoint (CBC) modelling, a methodology that allows the researcher to probe whether beliefs and behaviours are connected (Adamowicz et al., 1998; Hensher et al., 2005; Lancsar, 2002; Louviere et al., 2004). CBC modelling requires that consumers make choices in simulated situations derived from realistic variations of actual product offerings. The discrete choice experiment was generated and set up following the steps proposed by Verma et al. (2004): identification of determinant attributes, specification of attribute levels, and experimental design.
Identification of determinant attributes. A pre-test using the instrument developed by Auger et al. (2003), with 32 choice alternatives of 14 variables each, was carried out, but response rates were low. We thus decided to reduce the number of CSR and CA attributes and the total number of questionnaires included in the survey. The attributes obtained from the literature review were presented to a panel of local experts for validation[1]. Next, a pilot experiment with 12 persons was conducted. The final list of CSR attributes comprised labour practices, a company's environmental commitment, and corporate giving to worthy causes. The functional attributes included leadership in the industry, quality products, and technological innovation.
Additionally, a price attribute was included to capture the WTP for each attribute, and no interactions among attributes were included.
Specification of attribute levels. The ranges of the attributes are often chosen to represent actual values observed in the marketplace. An end-point design (Louviere et al., 2004) was applied, utilizing the two extremes (upper and lower) of the attribute level range.
Experimental design. The fractional factorial optimal design was generated with the following characteristics: joint statistical efficiency for all the model parameters, both balanced and orthogonal features, and optimized D-efficiency (Hensher et al., 2005; Kanninen, 2002). The final design consisted of 16 choice tasks, presented in the matrix in Table I. Athletic shoes, classified as fashion products according to the Foote, Cone, and Belding grid, were selected because they elicit a high degree of involvement from customers, driven by emotional criteria at the time of purchase (Vaughn, 1986). This product also allowed the evaluation of environmental issues, working conditions, and other traditional characteristics. The brand names in the experiment were fictitious, to avoid any other factors that could have affected the purchase decision. Figure 2 shows an example from the final instrument. Additionally, three validation tests were applied. First, a consistency test was applied to find out whether the respondents understood the concept of the DCM and the extent to which they acted rationally when expressing their preferences. The results showed that 100 per cent of respondents chose the correct response, indicating that the questionnaires were answered consistently. Then, an internal validity test was conducted, whereby identical measures were used in two groups: if no confounding variables exist, any difference between the groups on the dependent variable must be attributed to the effect of the independent variable.
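The end-point design draws its 16 choice tasks from a binary profile space. A minimal sketch of that space follows, using the study's attribute names with two levels each; the D-efficient selection of the fractional design itself is not reproduced here.

```python
from itertools import product

# Six CSR/CA attributes plus price, each at its two end-point levels
# (absent/present, or low/high for price), as in the study's design.
ATTRIBUTES = [
    "good labour practices",      # CSR
    "environmental commitment",   # CSR
    "giving to worthy causes",    # CSR
    "industry leadership",        # CA
    "quality products",           # CA
    "technological innovation",   # CA
    "price",                      # low / high level
]

# Full factorial candidate set from which the 16-task fractional
# factorial design is drawn by a D-efficiency criterion.
profiles = list(product((0, 1), repeat=len(ATTRIBUTES)))
print(len(profiles))  # 2**7 = 128 candidate profiles
```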
The results showed no significant differences in the model estimation of the two groups of the pilot study. Finally, a reliability test was applied using the test-retest method. The correlation coefficient between two sets of responses is often used as a quantitative measure of reliability. The final correlation observed was 0.75 (a correlation between 0.7 and 0.8 is considered satisfactory).
3.2 Research type
The CBC modelling technique was used for this study. SAS 9.1 and STATA 11 software were used for the design and the model run, respectively. One of the most useful features of this choice-based experimental methodology is its ability to convert the probability of consideration and purchase directly into conditional monetary equivalents. Consequently, researchers can estimate the marginal rate of substitution, or trade-offs, respondents are willing to make between two attributes, which are financial indicators of WTP (Kanninen, 2002). We used the expression proposed by Louviere et al. (2004) to obtain the WTP for the CSR and CA variables (Equation 1), where MRSk is the marginal rate of substitution between attribute k and price, and delta P represents the difference between the product price levels presented to the respondents. This approach allows for the evaluation, in monetary terms, of the trade-offs consumers make between various aspects of CSR and CA.
3.3 Sampling and data collection method
Several studies have suggested that CSR is an important intangible asset (Ellen et al., 2006; Marin and Ruiz, 2007; Mohr and Webb, 2005; Schroeder and McEachern, 2005), but there is little research regarding how consumers from different countries perceive each of the CSR attributes. Therefore, students from a master's dual degree programme in Peru and Spain were chosen as the population to be surveyed. They answered the survey questions on paper, during classes.
The aim of this experiment was to test "the stability of the model across national samples" (Cadogan, 2010, p. 604) and to provide preliminary insights regarding how CSR attributes may be seen in different contexts. The sample size was 118 students in Peru and 121 in Spain. Peru and Spain were chosen for this comparative study because they share a number of common traits but are quite dissimilar in other ways. Following the application context of Duque and Lado (2010), we found that these two countries share a historical background (Peru was a colony of Spain between the sixteenth and nineteenth centuries), a language (Spanish), a religion (Catholicism), and important commercial activities. Furthermore, in 2010, Spain was the country with the highest foreign investment in Peru, according to data from the Agency for the Promotion of Private Investment in Peru (PROINVERSION)[2]. One major difference is that Peru is a developing country while Spain is a developed one. Garza-Carranza et al. (2009) have distinguished between what they call more developed Latin regions (Spain) and less developed Latin regions (Peru). According to Schwartz (2006), who derived seven dimensions of values for comparing cultures, west European cultures like Spain are the highest of all regions in values such as egalitarianism, intellectual autonomy, and harmony, and the lowest on hierarchy and embeddedness. In contrast, Latin American cultures, like Peru, are "higher in hierarchy and embeddedness, presumably the main components of collectivism, and lower in intellectual autonomy, presumably the main component of individualism" (Schwartz, 2006, p. 161). This gives us sufficient contrast to compare our model in two distinct national contexts. Table II shows the description of our sample: about 55 per cent of all the surveyed students were male, but there were more women in the sample group from Spain (55.6 per cent). About 40 per cent of the students were 25-29 years old.
In each country, the students completed the questionnaire voluntarily and anonymously. Table III shows the results for each country. As expected from economic pricing theory, the parameter for the price of athletic shoes is negative and significant in the model, revealing that higher prices decrease the maximum utility that individuals at a given income level could obtain from the shoes. The first six hypotheses are supported, and all estimated parameters are significant except the first one (good labour practices) in the Peruvian sample, as the p-values in Table III show. Thus, the probability of purchasing rises when the CSR and CA attributes under study are present in the performance of the company that manufactures the chosen product. The non-significance of "Good labour practices" in the Peruvian sample seems coherent with Garavito's (2007) conclusions regarding labour practices. Garavito investigated CSR in the Peruvian labour market context and argued that the low interest in CSR labour policies stems from the lack of societal demand for such policies: "[given] a social necessities hierarchy where, due to the poverty level and the weakness of our institutional system, labour rights are considered a luxury good" (Garavito, 2007, p. 2). In turn, the intercepts measure the inherent preferences of consumers for buying athletic shoes that are not captured by the independent variables of the model; by measuring the impact of all unobserved attributes, they provide an assessment of switching inertia (Verma et al., 2004). Intercepts are thus an appropriate measure of choice inertia. In Table III, we notice that the intercepts are significant and negative (-1.50 and -1.29 for the Peruvian and Spanish samples, respectively), which means that the consumers of athletic shoes chose the option "neither" more often than the two alternatives offered to them.
We can thus conclude that potential customers of athletic shoes need to be offered some substantial value to persuade them to consider a new alternative. However, a wise combination of price, CA, and CSR attributes is sufficient to overcome the consumer switching barrier.In order to analyse the differences made by the variables, we ran a statistic Chow test version for discrete models between the estimated parameters of the pooled sample and split samples for each country. The null hypothesis of no differences amongst the samples was rejected; thus, the parameters of the model estimated for the two countries were significantly different (Likelihood-ratio test kh2(8)=22.52; p-value=0.00). Table III shows that the quality products attribute is the one with most impact in both samples. In the Peruvian case, the second attribute with the greater influence on purchasing behaviour is the one regarding the company's environmental commitment, whereas for the Spanish students, it is technological innovation, and environmental commitment is placed third.In general, the impact of the CSR attributes varies in the Peruvian and Spanish samples. Table III results show that even though the CSR attributes are positive and significant in most cases, the influence of CSR as a whole is higher in the Peruvian sample than in the Spanish sample.Additionally, results show that, taken together, the CA effects are more important than the CSR effects, and both are higher than the main effect of price. In other words, it would seem that CSR as a whole is a feature more valued by customers than price, but not as much as CA. 
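The Chow-type comparison reduces to a likelihood-ratio statistic: twice the gap between the summed country-level log-likelihoods and the pooled log-likelihood, referred to a chi-square distribution with 8 degrees of freedom. The log-likelihood values below are hypothetical placeholders, chosen only so that the statistic reproduces the reported value of 22.52.

```python
# Likelihood-ratio (Chow-type) test for pooling two national samples:
# LR = 2 * [(LL_peru + LL_spain) - LL_pooled], compared with the chi-square
# critical value at df = 8 (the number of parameters constrained to be equal).
CHI2_CRIT_DF8_5PCT = 15.507  # chi-square 5% critical value at 8 df

def lr_pooling_test(ll_pooled, ll_peru, ll_spain):
    """Return the likelihood-ratio statistic for the pooling restriction."""
    return 2.0 * ((ll_peru + ll_spain) - ll_pooled)

# Hypothetical log-likelihoods (the paper does not report them), picked so
# that the statistic matches the reported LR of 22.52.
lr = lr_pooling_test(ll_pooled=-1850.0, ll_peru=-920.0, ll_spain=-918.74)
print(round(lr, 2), lr > CHI2_CRIT_DF8_5PCT)  # reject pooling if True
```

A statistic above the critical value rejects the null of equal parameters, matching the paper's conclusion that the Peruvian and Spanish coefficient vectors differ significantly.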
However, when we compare the effects for the CSR construct in order to test H7, we find that the hypothesis is not supported, since the CSR coefficients as a whole are higher for the Peruvian sample than for the Spanish one. As price is a discrete variable in this experiment, the delta of the price levels (40 monetary units expressed in the respective currencies of Spain and Peru) was taken as the monetary unit for calculating the WTP. To make the WTP attributes comparable across the two countries, the WTP was expressed as a percentage of the minimum monthly income in Peru and Spain for the year 2010. Table IV shows the WTP results calculated with the previously estimated coefficients presented in Table III. Table IV shows that the surveyed students from Peru and Spain were willing to pay more for the product quality attribute than for the ethical attributes: the most valued attribute in both samples was the quality of products. However, the company's environmental commitment was the second attribute most valued by the Peruvian students (about 7 per cent), whereas the Spanish students chose technological innovation as the second highest attribute (4 per cent). Finally, when comparing the WTP in the two countries, we find that for each of the CSR and CA attributes the Peruvian students show a greater WTP than their Spanish counterparts (a difference of 4 and 3 per cent, respectively, for the constructs as a whole). The exceptions concern the attributes of good labour practices and leadership in the industry; for the latter, the WTP is equal in both samples. Thus, we obtained an empirical validation of the role of CSR in consumers' behaviour in Peru and Spain. CSR can become a baseline that offers a clear possibility of differentiation among competitors, which shows that the profit maximization approach is not necessarily in conflict with a better social return on investment.
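The WTP calculation of Equation 1 can be sketched as follows: the WTP for attribute k is the ratio of its coefficient to the (absolute) price coefficient, scaled by the 40-unit price gap, and then normalized by the 2010 minimum monthly income for cross-country comparison. All coefficients and the income figure below are hypothetical placeholders, not the study's estimates.

```python
# WTP_k = (beta_k / |beta_price|) * delta_P  (Louviere et al., 2004),
# then expressed as a share of the minimum monthly income, as in Table IV.
DELTA_P = 40.0  # gap between the two price levels, in local currency

def willingness_to_pay(beta_k, beta_price, delta_p=DELTA_P):
    """Monetary WTP for one attribute relative to the price coefficient."""
    return (beta_k / abs(beta_price)) * delta_p

beta_price = -0.50   # hypothetical negative price coefficient
beta_env = 0.35      # hypothetical environmental-commitment coefficient
min_income = 400.0   # hypothetical 2010 minimum monthly income, local currency

wtp = willingness_to_pay(beta_env, beta_price)   # in currency units
wtp_pct = 100.0 * wtp / min_income               # per cent of minimum income
print(round(wtp, 2), round(wtp_pct, 1))          # 28.0 currency units, 7.0 per cent
```

Normalizing by each country's minimum income is what makes the Peruvian and Spanish WTP figures directly comparable despite the different currencies.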
The relevance of the CA attributes does not mean that CSR activities can be overlooked; the key managerial task is to find the ideal bundle of attributes that maximizes consumers' WTP. If corporations are able to find this ideal combination, they do not necessarily have to compete on price, for they would be laying the foundations for a competitive advantage via differentiation. The differences in the importance given to the various attributes by the students from the two countries could be explained by cultural factors, such as egalitarianism. Schwartz (2006) established a number of differences between west European and Latin American cultures: intellectual autonomy, egalitarianism, and harmony are highlighted as core attributes in west Europe, whereas Latin American countries show more embeddedness and affective autonomy. Other explanations may be found in consumer ethnocentrism (Josiassen et al., 2011) or in the notion of psychic distance (Sousa and Lages, 2011). Those studies could provide a starting point for further investigations into the reasons for the differences in the students' responses regarding the importance of the attributes identified in the present study. This research shows the positive influence of CSR on consumer behaviour, thus confirming previous studies demonstrating that CSR is an important intangible asset offering a competitive advantage through differentiation (Auger et al., 2003; Bhattacharya and Sen, 2004; Carrigan et al., 2004; Ellen et al., 2006; Schroeder and McEachern, 2005; Marin and Ruiz, 2007; Mohr and Webb, 2005; Oksanen and Uusitalo, 2004). Researchers have found that both types of associations, CA and CSR, influence consumer purchasing behaviour. This study reveals that the purchasing probability increases with a good combination of CA and CSR.
Though previous research showed that CA has a stronger influence than social responsibility associations (Berens, 2004; Berens et al., 2005), this study shows that both criteria as a whole are determinant, although the importance of each attribute may vary according to contextual factors. The results also suggest that CSR can contribute to increased brand value and reputation, as well as to higher financial results through customers' higher WTP. These results are in line with those previously obtained by Jones et al. (2005) and Papasolomou-Dukakis et al. (2005), thus enabling corporations to evaluate how their CSR investments could have a positive impact on the purchasing behaviour of customers. An orientation towards maximization of profits is not necessarily in conflict with the search for a better return in terms of social responsibility. Therefore, corporations have a great opportunity to contribute to the creation of a better world by not only generating economic benefits but also providing solutions to social problems. This research study shows that environmental protection is highly valued as a CSR activity in Spain and Peru. The environmental commitment attribute has special relevance, as it is placed in the top three attributes for both the Spanish and the Peruvian students. Sriram and Forman (1993) had already highlighted noteworthy differences between US and Dutch consumers, showing that there are cross-cultural differences in how consumers perceive the importance of a product's environmental attributes. Future research studies should continue investigating contextual differences that may help account for the different values given to product attributes. Our findings support the contention that corporations should design CSR strategies based on consumers' preferences rather than on their own philanthropic ideas. Limitations of this study include a narrow focus: the investigation concerned one product only, namely athletic shoes.
The investigation was restricted to testing only the linear and main effects of a narrow set of attributes. Several future research directions arise from these limitations. Research investigating different kinds of products, further cross-cultural studies, quadratic effects of price and interactions among variables would add to the validity and generalizability of this study's findings. Finally, researchers could widen the sample selection to obtain valid results at national levels.
Figure 1 Conceptual framework
Figure 2 Questionnaire example for the Peruvian sample
Table I Matrix of choice tasks for the Peruvian sample
Table II Sample description
Table III Results by country of origin
Table IV Willingness to pay in terms of the minimum monthly payment (percentage)
Equation 1
|
- This paper contributes to the ongoing debate regarding the importance of corporate social responsibility as an influential factor in consumers' socially responsible consumption. It quantifies consumers' willingness to pay for the social features of companies' products.
|
[SECTION: Purpose] Sustainable production and consumption have become more important internationally, which has led to the transformation of market structures and competitive situations in the direction of servitisation (Baines et al., 2011; Bandinelli and Gamberi, 2011). For a manufacturing company, the shift towards being a service provider is characterised by a high level of uncertainty about the future strategic development of the company, caused by, e.g. inadequate knowledge and information (Song et al., 2007). For this research, a service is defined as an activity or a process which is aimed at changing the state of the service issue, such as the repair of a machine or the supply of flying hours for an aircraft (Araujo and Spring, 2006; Gadrey, 2000). In this context, the supply of product-centred services becomes more important. These services tend to be long-lived. For example, Babcock (2012) announced their support contract for the Australian Anzac class surface ship fleet until 2023. Another example is Rolls-Royce's Flotilla Support Programme for their submarines until 2017 (Rolls-Royce, 2011). The shift to being a supplier of these services can cause many uncertainties, especially for companies that have previously focused on the production and manufacturing of products. The delivery of a service is usually embedded in a contract, which is an agreement between the parties about the technical details of the service and is intended to be legally binding (Nellore, 2001; Rowley, 1997). Service contracts are often allocated through the process of competitive bidding, where the competing suppliers communicate their service specifications and price bid to the customer, who then evaluates the bids (Rexfelt and Ornas, 2009; Bubshait and Almohawis, 1994).
This bidding process can include different levels of negotiation with the customer, varying from an auction-type bid (Friedman, 1956; Neugebauer and Pezanis-Christou, 2007) to an elaborate information exchange process (Lehman, 1986; Bajari et al., 2008). These varying levels of negotiation leave the bidding supplier with different levels of uncertainty influencing the pricing decision process. The pricing approach applied most frequently in practice is cost-based pricing, which puts the starting point of the research at the estimation of the costs of the service contract (Hytonen, 2005). Cost estimation is concerned with predicting the future; thus, uncertainty is inherent to the process (Goh et al., 2010; Christoffersen, 1998). This uncertainty can be included in the cost estimate in different ways; one possibility is the range or density forecast, which consists of a range of possible future values (Tay and Wallis, 2000). Included in the range forecast can be the minimum, maximum and average value connected to different assumptions about the future (Giordani and Soderlind, 2003). An exemplary cost estimate is shown in Figure 1. At the bidding stage, the decision maker has to select one point within the given range as a price bid to communicate to the customer; one example is marked in Figure 1. Choosing a price that is too high may result in being underbid by competitors and, thus, the potential loss of the business (Lucas and Kirillova, 2011; Chapman et al., 2000). A price that is too low may influence the customer's perception of the quality of the service and thus lead to rejection (Freedman, 1988), or result in a failure to recover the costs and profit of the service (Swinney and Netessine, 2009; Wang et al., 2007). For the pricing decision at the bidding stage, the decision maker has to: understand the uncertainty in the cost estimate; and understand the other uncertainties that influence the bidding success and the fulfilment of the service contract.
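The selection problem just described, committing to one price point within a range cost estimate, can be sketched as follows. The margin-on-most-likely-cost rule is purely an illustrative assumption; the paper reports no such formula.

```python
# Hedged sketch: a range (min / most-likely / max) cost estimate and one
# simple, assumed rule for choosing a single price point within it.
from dataclasses import dataclass

@dataclass
class RangeEstimate:
    minimum: float      # optimistic cost assumption
    most_likely: float  # average / most-likely cost
    maximum: float      # pessimistic cost assumption

def price_bid(estimate: RangeEstimate, margin: float = 0.10) -> float:
    """Assumed rule: margin on the most-likely cost, clamped to the range.

    Too high a point risks being underbid by competitors; too low a point
    risks rejection or failure to recover costs and profit.
    """
    candidate = estimate.most_likely * (1.0 + margin)
    return min(max(candidate, estimate.minimum), estimate.maximum)

bid = price_bid(RangeEstimate(minimum=90.0, most_likely=100.0, maximum=130.0))
print(round(bid, 2))  # 110.0
```

The clamping step reflects the constraint from the text: the decision maker selects a point within the given range, trading off the risk of losing the bid against the risk of not recovering costs.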
The aim of this paper is to identify the availability and use of information at the competitive bidding stage. For this, an interview study with industrialists from different sectors was conducted. The related literature on contract bidding, including the bidding process, contract conditions and typical payment methods, is described in Section 2. Sections 3 and 4 describe the interview study and its results. Most literature describing theory on bidding decisions focuses on auction-type processes (Cai et al., 2009; Schoenherr and Mabert, 2008; Neugebauer and Pezanis-Christou, 2007). This means that the described approaches concentrate on a constrained bidding environment and on services of low complexity and short duration (Schoenherr and Mabert, 2008), so the models and theories described have limited applicability to the research described in this paper. This research focuses on services of high complexity, which are typically embedded in contracts of long duration. Literature describing the decision-making processes at the competitive bidding stage typically focuses on products (Li and Graves, 2012; Bhaskaran and Ramachandran, 2011; Sosic, 2011; Li and Wang, 2010). In particular, the pricing of products has been highlighted as being influenced by uncertainty (Sosic, 2011). For example, customers can be expected to evaluate the competitive bids according to an individual list of preferences (Chaneton and Vulcano, 2011; Guo et al., 2009). A reason for this more elaborate body of literature on product-focused decision making may be the longer history of the business process in industry and the issues connected to it. However, approaches describing the pricing of services can be found in the literature. One example is described by Guo et al. (2009), who model the strategic decision in a single-supplier context.
While the approach offers valuable insights into the decision-making processes, it has two limitations for the application to servitisation: it does not include the existence of competition or its influence on the bidding strategy; and it describes services of low complexity, such as hotel accommodation or restaurant dining. It can be summarised that the current literature offers limited insights into the strategic decision-making processes at the competitive bidding stage, particularly from an industrial viewpoint. In particular, it does not consider the information that is available and the strategic process of its consideration in industry. Research that fails to consider these aspects will fail to accurately represent the decision-making process at the competitive bidding stage and will not be adopted by industry. This paper aims at closing this gap by introducing an exploratory study which describes the availability of information at the competitive bidding stage and its strategic consideration in practice. The aim of this study was to explore the availability of relevant information in the context of competitive bidding for a service contract on the supplier's side and to describe the subjective processes of the decision maker at the bidding stage. To examine this aim, an interview study was conducted. The following sections describe the applied method of this study in more detail. First, the interview procedure is described; then, the design of the interview with its questions is explained; and, last, the interviewees and the sectors they work in are described. 3.1 Interview procedure A standardised interview was carried out, meaning the wording and sequence of questions were determined in advance; thus, each interviewee was asked the same questions in the same order (Teddlie and Tashakkori, 2009).
This ensured that all topics were covered in each interview, allowing a comparison between the answers of the different interviewees (Patton, 2002). The questions were open-ended, i.e. no predetermined answers were given (or suggested), and the interviewees were encouraged to describe the processes in their own words. This reduced possible bias in the replies. The interviews were not recorded, as most of the interviewees were from organisations in the defence sector or simply not comfortable with recording. The results are based on the notes the researcher took during the interview processes. However, to ensure correctness and limit misinterpretation of the given information, the responses were returned to the interviewees after the interview for confirmation and validation, as explained in Robinson et al. (2007). 3.2 Questionnaire design The questionnaire design was based both on previous empirical work and on the literature in the field. The empirical work comprised two experimental studies undertaken with a total of 72 cost engineers and bidding decision makers from practice. These studies focused on the different influences on the bidding decision-making process, including the approach of displaying the cost estimate (Kreye et al., 2012) and the influence of the existence of competition on the decision outcome and rationale (Kreye, 2011). The participants were given a set of questionnaires which consisted of a pricing scenario and various questions connected to their decision-making process for this hypothetical example. From the answers in the experimental studies it became clear that industry did not have a universal set of definitions for the terminology. Thus, it was decided that, at the beginning of the interview, the participant's specific definitions had to be clarified and established. The literature highlights the influence of contextual issues on the pricing decision.
These are, for example, the contract situation within the company (Monroe, 2002; Chapman et al., 2000), the bidding process (Lehman, 1986) and the payment process (Tseng et al., 2009). Thus, the decision context was the focus of the second area of interview questions. In the experimental studies preceding the interviews, one of the questions focused on further influences on the decision-making process. The answers to this question could be categorised into market uncertainties (which included developments such as inflation, economic changes and technology development), cost estimation uncertainty, product uncertainties (including the performance of the machine and the risk of failures), competition uncertainty (manifesting itself in the risk of losing the contract) and customer uncertainties. These five main influences were used as a basis for the interview questionnaire, in particular to establish the amount of information typically available about these issues. As bidding decision making is highly influenced by strategic considerations (Harrington, 2009; Afuah, 2009), the fourth area of interview questions focused on the bidding strategy. Based on the literature in the field, it was found that different influences are of importance. For example, due to the highly subjective nature of decision making, the choice of the bidding decision maker has been highlighted as an important factor (Tulloch, 1980). Further influences include the decision maker's interpretation of the cost estimate based on his/her experience and assumptions (Kreye et al., 2012) and the calculation of the price bid (Hytonen, 2005; Monroe, 2002; Lehman, 1986). Thus, it can be summarised that the design of the interview questionnaire was based on an iterative process combining the results of preceding empirical studies with industry and the literature in the field. Based on this process, the interview questionnaire was compiled; it is described in the following section.
3.3 Interview questionnaire The questions covered four main areas: uncertainty and risk, bidding context, input information for the pricing decision, and bidding strategy. Questions included in the first main area established the meanings the practitioners attached to the terms risk and uncertainty and how these are considered and identified in the pricing process. These established a common ground for the terminology in comparison to the definitions applied in the presented research and formed the basis for later questions. The second main area, about the bidding context, established background information that can potentially influence the bidding strategy. The issues investigated were the current contract situation of the company (Monroe, 2002; Chapman et al., 2000), the usual bidding process for service contracts (Lehman, 1986) and the typical payment method once the contract was awarded (Tseng et al., 2009). The last two areas form the main focus of the interviews. The area of input information for the pricing decision examined the form and type of information normally used in the decision process and possible assumptions the decision maker may form (Goh et al., 2010; Bolton et al., 2006; Fargier and Sabbadin, 2005; Rubinstein, 1998; Loewenstein and Prelec, 1993; Lehman, 1986). The questions in this area examined: the form of the cost estimate; the uncertainties included in the cost estimate; possible further uncertainties that the decision maker considers in the pricing process; the available information about the competitors and the customer; and the amount of input information that is considered in the decision-making process. The area of bidding strategy established the subjective aspects of decision making in the competitive bidding situation, as these may influence the outcome of the decision process (Kreye et al., 2012; Stecher, 2008; Yager, 1999; Lehman, 1986; Tulloch, 1980).
The questions explored: the selection process of the decision maker; the interpretation of the cost estimate; the calculation of the price bid; the calculation of the minimum price bid; and the possibility of accepting contracts with a high risk of making a loss. The next sub-section describes the participants of this empirical study. 3.4 Interviewees The interviews were carried out over one year (March 2010 to March 2011), during a rebound period after the global economic recession of 2008-2009. Nine interviews were undertaken, where the investigated sectors and numbers of interviewees were: defence (1), aerospace (1) and both defence and aerospace (2); engineering (2); research (1); information technology (1); and construction (1). The interviewed companies ranged from large and globally acting providers (with employee numbers varying between about 40,000 and 1,800 employees) to smaller, nationally acting providers (with fewer than 300 employees). The group of interviewees focused on suppliers of product-centred services with varying levels of complexity. Contract complexity describes the contract's value with only a fuzzy distinction between its attributes; in other words, there is no distinct value or factor that defines the difference between the two complexity grades. Thus, the service contracts included in this interview study were separated as follows: Low complexity. The number of independent tasks necessary to complete the service, and the divergence, i.e. the difference between the natures of these tasks, are low (Skaggs and Youndt, 2004; Shostack, 1987). In other words, the requirements are clear to the involved parties (Bajari et al., 2008). The interviewees of this study named these "small contracts" and characterised them using phrases such as "less than £3 million", "less than 150.000 Euro", or "simple requirements such as the need of three engineers to do some testing". High complexity.
The number of independent tasks necessary to complete the service, and the divergence, i.e. the difference between the natures of these tasks, are high (Skaggs and Youndt, 2004; Shostack, 1987). In other words, at the point of the bid invitation, the service design may be hard to define in detail (Bajari et al., 2008). The interviewees named these "large contracts" and distinguished them with phrases such as "more than £3 million", "complex tasks such as 18 months contract" or "site management". Figure 2 shows the frequency of answers from the interviewees. Four of the nine interviewees said they hold a portfolio of contracts of different complexity, two focused on contracts of low complexity and three concentrated on contracts of high complexity. 3.5 Methodology for result analysis To analyse the responses, a qualitative approach was applied. This means the results and their implications are discussed verbally to highlight their importance in the bidding and pricing process (Saunders et al., 2012). Nevertheless, to demonstrate the relative importance of specific answers, a quantitative presentation of the results was chosen for specific questions. This is included mainly for demonstration purposes, to show where trends may emerge. Due to the limited number of interviewees, a complete statistical analysis of these trends was not possible and is not included in this paper. In general, the basis for the data analysis was the differentiation of the interviewees depending on the size of their service contracts, as described in Section 3.4. However, when the interviews showed a relationship between questions or even interview areas, this relationship is emphasised in the data analysis.
For example, a connection was found between the uncertainty included in the cost estimate, the approach to communicating this cost estimate (both questions concerning the input information) and the decision maker's interpretation of this estimate (a question asked in connection with the bidding strategy). This relationship is analysed in one section. To show where these cross-relationships were found in the interview process, Figure 3 contrasts the data collection methodology with the analysis methodology. This section analyses the results of the interview study and presents them in the four main areas, namely uncertainty and risk, bidding context, input information, and bidding strategy. The term bidding strategy refers to the pattern of activities which has an impact on the achievement of bidding goals, such as winning a profitable contract. 4.1 Uncertainty and risk The aim of the questions in this section was to clarify the terminology used by the industrialists and thus to guide the further discussion of the topic. Differences could be observed between the interviewees in general. Some had corporate-wide definitions for the two terms; others used examples to describe their individual understanding; two interviewees did not use the term uncertainty at all. However, comparing the meanings or interpretations of the definitions, similarities can be found. Out of nine interviewees, seven understood uncertainty as the variation of an aspect of the contract, such as the cost estimate. Discussing the term risk, the interviewees agreed that it is connected to an impact. Furthermore, seven interviewees stated that it was connected to a specific event, such as the risk of a red light during a car journey or the loss of a team member whose knowledge is central to the fulfilment of the service. Two interviewees described it as the impact on the project as a whole.
The interviewees' definitions of the terms risk and uncertainty were utilised throughout the interview process as a basis for clarity. However, for the purpose of this research, the described definition of uncertainty (see list of definitions) is applied in the further analysis of the interview results; the concept of risk is not discussed further. The interviewees' sources of identification and management tools for uncertainty can be classified based on the level of subjectivity. To identify the uncertainty connected to a project, all interviewees named experience as the main source, which was typically connected to the team that put the bid together (stated by six interviewees) or to the project manager (stated by three interviewees). In addition, more objective identification sources were used, such as a formalised risk analysis process in the form of, e.g. a risk management handbook or databases of previous projects. This category was mentioned by four interviewees. For the identification of uncertainties, the practitioners used either a subjective method on its own or in combination with an objective method. To manage uncertainty, subjective approaches were of less importance than for identification; only five interviewees named this approach. Four interviewees mentioned objective management methods, of whom three also mentioned objective identification methods. Table I depicts the connection between the classification of information sources and management tools for uncertainty. The frequencies highlight the number of times each individual aspect was mentioned and thus do not add up to the combinatorial numbers in the rest of Table I. 4.2 Bidding context Describing the bidding process, the interviewees' answers were categorised into four groups: one-bid process, two-bid process without negotiation, two-bid process with negotiation, and negotiation.
In the one-bid process, the competitors have one opportunity to submit their bid, including the bid price and the specifications of the service and the contract. The customer then evaluates these bids and agrees to one of the offers. This includes the assumption that the customer has the ability to understand the technical and commercial details of the bids. In the two-bid process without negotiation, the bidding process is split into two phases. In the first phase, a number of possible suppliers submit their bid, which usually includes their suitability for the service contract (this can be based on an invitation to bid or on open access). This number of competitors is reduced to the most suitable ones, who are then invited to submit their full bid in the second phase. In this second phase, the competitors typically know the identity of each other. Neither of the phases includes negotiation with the customer. In the two-bid process with negotiation, the bidding process is split into two phases similar to those described above. However, the second phase is characterised by a negotiation between the competitors and the customer to clarify important issues and questions. The answers to these questions can be published to all competitors or stay confidential between the two negotiating parties. A bidding process which includes negotiation is characterised by an exchange of large amounts of information concerning the service requirements, the customer's intention, the technical scope or any other issues concerning the contract or bid. The bidding process which the interviewees typically faced in their decision process depended on the size of the contract to be bid for. The definitions as described in Section 3.3 are used to describe the contract size. Figure 4 shows the answer frequency for the usual bidding process connected to the contract size. The values in Figure 4 distinguish between usual and possible bidding processes as indicated by the interviewees.
The numbers do not add up to nine, as multiple answers were given by the interviewees managing a contract portfolio. The results in Figure 4 indicate that contracts of low complexity with clear requirements are typically not negotiated, which can be attributed to negotiation being a time- and cost-consuming process (Bajari et al., 2008). In contrast, contracts of high complexity are typically agreed after negotiation, with varying levels of depth of this process. This suggests that the uncertainty that may arise from unclear requirements can usually be reduced by collecting further information from the customer. The parties were willing to commit additional time and costs to this process to ensure that the service outcome best fits the needs of each of them. The interviewees' answers regarding the usual payment methods for service contracts can be divided into three categories: fixed price, cost-based payment and payment on completion. Seven of the nine interviewees stated that (some of) their company's service contracts are paid with fixed prices, which can be based on milestones (mentioned by four) or on a set period of time (stated by three interviewees), such as a monthly payment. Three of the interviewees stated that the payment is based on the costs actually incurred, which can be assessed through, e.g. timesheets. In the category of payment on completion, the service supplier is paid upon completion of the project; this was mentioned by one interviewee. It is to be noted that this company offered research services which usually only have deliverables at the end of the service period, in the form of, e.g. a research report. Multiple answers were possible. Based on these results, it can be summarised that fixed-price payment seemed to be the standard method for service contracts. The following section describes the input information of a pricing decision.
4.3 Input information The interviewees' answers to the questions in the input information section were analysed in three main sub-sections: cost estimate and uncertainty, customer, and competitors. These are described in this section. 4.3.1 Cost estimate and uncertainty The way the cost estimate is communicated during the bidding process was found to fall into two categories: presented using a table or a graph. The costing information included in a table was found to be presented in two different ways. Four interviewees used a detailed cost breakdown in the form of the necessary work steps, the time and expertise needed for each step and the cost value assigned to the different steps. The other approach mentioned was a three-point estimate, which includes pessimistic, most likely and optimistic assumptions represented in tabular form. The approach used most often to include cost estimating information in a graph was a three-point estimate. Another approach mentioned was an s-curve, which displays the cumulative costs over time and usually adopts the form of the letter S (Cioffi, 2005). The specification of the available costing information in practice was found to be influenced by the way uncertainty was included in the estimate. The levels of uncertainty included in the cost estimate were reported as: none, variation in the input data, and quantification of qualitative uncertainty. Four interviewees stated that they included no uncertainties in their cost estimate. In the second group, the available information that the cost estimate is based on can vary; for example, to fulfil a specific task, a particular engineer may have taken 4 or 5 hours depending on other variables. The third group includes the assessment of the question "what can go wrong" and the connection of a value to this assessment. This occurs subjectively through the experience of the decision maker.
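A three-point estimate like the one mentioned above is often collapsed to a single expected value, and the conventional PERT weighting is one common way to do so. The PERT formula is a standard convention used here for illustration, not a method the interviewees are reported to have applied; the cost figures are assumed.

```python
# Hedged sketch: summarising a three-point (optimistic / most likely /
# pessimistic) cost estimate with the conventional PERT weighted mean.

def pert_mean(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Conventional PERT expected value: (O + 4M + P) / 6."""
    return (optimistic + 4.0 * most_likely + pessimistic) / 6.0

# Assumed cost figures for illustration:
print(pert_mean(optimistic=80.0, most_likely=100.0, pessimistic=150.0))  # 105.0
```

The 4:1 weighting pulls the expected value towards the most likely assumption while still letting a long pessimistic tail raise it, which is why the result above sits above the most likely cost.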
Furthermore, the interpretation of the cost estimate was found to depend on the way uncertainty was included in the cost estimate; thus, it is discussed in this section (the question was asked in connection with the bidding strategy). The answers were grouped as: none, a point estimate, and a range estimate. Participants who stated that they included no uncertainties in their cost estimate also said that the cost estimate they received was not interpreted. This means the cost estimate was taken as it was. However, two of those said that the possibility was kept in mind that the cost estimate might be reduced, given that it was based on conservative values. For example, if the historic data showed that a specific task took between 4 and 5 hours, the cost estimate would be based on the 5-hour value. If the final cost estimate was considered too high, these cost values would be adjusted in a second iteration of the process. In the second category, the costing information with the related uncertainty was stated to be interpreted as a point estimate, based on, e.g. the 50 or 80 per cent line in the graph. One interviewee stated that this was only upheld when the uncertainty connected to the contract was low; otherwise, a cost range was kept. In the third category, the communicated costing information was carried forward in the pricing process as a range estimate, either with its original spread or with a reduced spread. One interviewee stated that the full range was utilised when there was high uncertainty connected to the contract in the form of a high variation in the input data. Table II shows a comparison of the way the cost estimates were presented and interpreted against the uncertainty that was included.
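Reading off the "50 or 80 per cent line" amounts to taking a percentile of the cost distribution. The sketch below assumes a triangular distribution over the estimate range purely for illustration; the interviews do not specify any distribution, and the cost figures are made up.

```python
# Hedged sketch: interpreting an uncertain cost estimate as a point at a
# chosen percentile (e.g. the 50 or 80 per cent line). A triangular
# distribution over the estimate range is an assumption for illustration.
import random

def percentile_cost(low: float, mode: float, high: float,
                    pct: float, n: int = 100_000, seed: int = 42) -> float:
    """Monte Carlo estimate of the pct-th percentile of a triangular cost."""
    rng = random.Random(seed)
    samples = sorted(rng.triangular(low, high, mode) for _ in range(n))
    return samples[int(pct / 100.0 * (n - 1))]

p50 = percentile_cost(90.0, 100.0, 130.0, pct=50)  # median cost, ~105.5
p80 = percentile_cost(90.0, 100.0, 130.0, pct=80)  # more conservative, ~114.5
print(round(p50, 1), round(p80, 1))
```

Reading the 80 per cent line rather than the 50 per cent line produces a higher, more conservative cost figure, which is consistent with keeping a buffer against the "what can go wrong" assessments described above.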
The total values do not add up to nine because two interviewees stated the use of multiple methods to communicate their cost estimate; one used both types of graphical display, the other stated the use of tables to present the cost breakdown and graphs to present the overall costs. However, the total values give an indication of how often each type of presentation was mentioned and which uncertainty is included. As depicted in Table II, the companies that presented the cost estimate as a breakdown in a table did not include any uncertainty; the estimate was rather based on specific assumptions. These assumptions included the choice of a conservative value when the input data varied; e.g. when a task was recorded to take between 4 and 5 hours, the estimate would be based on 5 hours. Furthermore, when uncertainty was included, the cost estimate was more likely to be presented in graphical form. All interviewees who stated that they used a graphical approach to display their costing information included uncertainty in it. The interviews also assessed which further uncertainties can influence the pricing decision. Two out of the three interviewees who stated that their cost estimate did not contain any uncertainties also stated that there were no further uncertainties influencing the pricing decision. Both of them, however, stated that they would reduce the cost estimate if the originally derived price bid were considered too high. The other uncertainties influencing the pricing decision were categorised into: customer-related uncertainties, competitor-related uncertainties, cost estimation uncertainties, economic uncertainties and others. Customer-related uncertainties included the customer's previous choices of bidders for similar projects, examined to recognise observable patterns. For example, the customer may always go for the price bid that is 5 per cent below their stated budget limit.
Other mentioned factors included the assessment of questions such as the possible consequences if the customer found a mistake in the bid, the customer's location (to evaluate possible travel costs), and assumptions about the usage of the serviced product or machine. Another aspect that was mentioned was the level of experience of the customer's personnel involved in the usage of the product or machine that was part of the service contract. Further aspects related to the customer are analysed at a later point in this section. Competitor related uncertainties covered the identification of the competitors for the particular service contract and an evaluation of their most likely bid. Furthermore, the contract might be let to multiple suppliers who would either focus on different aspects of the service or would have to be able to share the project. Further aspects related to the competitors are analysed at a later point in this section. As discussed in this section, the cost estimate was stated to either include different uncertainties in the form of a spread or to be based on assumptions that may not prove true. Further uncertainties included the possibility of cost reductions through, e.g. a reduction of the overhead costs. Economic uncertainties included factors which may influence the commercial activities, such as legal changes, the gains that can be achieved with the contract, and the situation of the overall economy, of the market place and of the specific sector. Other mentioned uncertainties included the bidding company's contract situation and the uncertainty arising from the technical requirements. Most interviewees mentioned more than one of the presented sources of uncertainty with a clear emphasis on one important factor, usually concerning an example from the recent past. For this reason, there is no quantitative analysis of the relative importance of each of the mentioned categories.
4.3.2 Customer The available information concerning the customer covered the areas of their bidding strategy, the past relationship, their future needs and whether these aspects influence the decision maker of the bidding company. For these interviews, the customer's bidding strategy was addressed through the aspects of their budget and their evaluation criteria regarding the bids. The interviewees' answers indicate two different categories: either these strategic aspects are communicated with the service requirements or they can be assessed through a "getting to know the client" process in which usually a commercial team is involved. Of the nine interviewees, four stated that the customer's bidding strategy was communicated, two said it could be assessed, and three that it varies between these two categories depending on the kind of customer (depending on aspects such as whether they had worked with the customer before and what the customer's preferred bidding process was). The past relationship between the bidding company and the customer was described by all interviewees as an important source of information. An ideal bidding situation would involve a long past relationship in which trust had been built up and the parties know each other. When this is not the case, the bidding company may still have previous experience with the customer to build up knowledge about them. In cases where there is no previous experience, the bidding company has to rely on the information communicated by the customer themselves or published in, e.g. the press. The assessment of the customer's possible future needs caused different reactions among the interviewees. Seven of the nine interviewees stated that this was one aspect that they assess during the process of compiling the bid and include if appropriate. These interviewees stated the importance of possible follow-up work, future relations and the length of the service contract to demonstrate their suitability for, e.g.
the next five years. The other two interviewees highlighted that the bid only covered the service requirements and that a consideration of the customer's possible future needs was highly speculative and thus not included in the bid-compiling process. Thus, for a specific competitive bidding situation, the customer's future needs may play an important role in the bidding process and would need to be considered in a conceptual framework of the influencing uncertainties at the bidding stage. Regarding the consideration of the available information about the customer, all interviewees stated that it was of importance for the decision maker and the compiling of the bid. Five said that all the available information is considered, two described the customer and their bidding strategy as the most important influence on the bid, and two stated that there were other more important aspects such as the contract costs. This means that the customer can constitute a central factor in a bidding decision; however, their relative importance depends on the particular service contract. 4.3.3 Competitors The interviewees were asked questions which aimed at determining the following information regarding their competitors, namely: their identity, their cost estimate, their available technology or knowledge, and which of these aspects would be considered in the pricing decision. As indicated in the discussion in Section 4.2, the identity of the competitors may be known depending on the bidding process. If this is not the case, the bidding company may either have a "pretty good" idea regarding their competitors, due to their experience about who is capable of dealing with the requirements, or not be able to identify them at all, particularly when trying to bid in new market segments where their experience is limited. For the purpose of this analysis, the three possibilities are named as follows: the competitors' identity is known, knowable or not known.
The competitors' cost estimates are not usually known to the bidding company, which was confirmed by all interviewees. However, there are different levels of speculation. Based on previous experiences, a "ballpark" or top-level deduction may be known which can be formulated as an absolute value or assessed in relation to the bidding company's costs. Another possibility is the knowledge of cost details such as salaries, based on information obtained from previous employees of the competitor. In other cases, particularly when dealing with new or unknown competitors, the cost estimates may be neither known nor deducible. The third investigated aspect concerned the information about the competitors' available technology or level of knowledge which may give them a competitive advantage. The answers fell into three categories. A common answer (given by six out of nine interviewees) was that it is known as the competitors advertise themselves on, e.g. the internet and their homepages or have other publicity in, e.g. newspapers. Two interviewees stated that this aspect of the competitors is knowable due to the decision maker's experience in the area. In other cases, particularly when the company bids in a new market segment, this aspect was stated to be not known and not knowable by two interviewees. Table III shows the frequency of the interviewees' answers for their knowledge of the competitors' cost estimates and their available technology or knowledge plotted against the competitors' identity. The numbers do not sum to nine because four interviewees stated multiple answers regarding the competitors' identity, which can be dependent on the particular service contract; hence, their answers varied also for the other aspects. The results shown in Table III give an indication of the availability of information about the competitors and thus the level of uncertainty connected to them.
In cases where the competitors' identity is known or determinable, the bidding company also had a reasonable level of knowledge about the other aspects. In other words, the bidding company is not ignorant about their competitors and their possible bidding strategies unless it is bidding in a new market sector. Investigating the interviewees' consideration of these aspects during the decision process, six replied that they used all the information that is available to them and two stated that they considered the available information but that there are other more important factors such as the customer. One interviewee said that the information regarding the competitors is not considered in the pricing-decision process. This confirms the results of the second empirical study, namely that competition is one of the influences on a pricing decision. Furthermore, most of the interviewed companies (seven out of nine) stated that it was one of the most important factors. Similarly, the availability of the original service and contract requirements, as communicated by the customer at the beginning of the bidding process, was assessed in the interviews. They were stated by all interviewees to be available and included in the decision process. The following section describes the interviewees' answers regarding their bidding strategy. 4.4 Bidding strategy The interviewees' answers to the questions concerning the bidding strategy were analysed in three main sections: the choice of the decision maker, the method to obtain the price bid and the acceptance of a contract with a high risk of making a loss. These are described in this section. 4.4.1 Choice of the decision maker As the bidding strategy can be very subjective, the interview assessed how the decision maker was chosen.
Most of the interviewees (seven out of nine) highlighted that the decision was made by a team; two stated that a team was involved in the bid compilation and the final decision was made by the team manager. The team decision was connected to contracts of both low and high complexity; four of the seven interviewees managed contract portfolios, one dealt with contracts of low complexity and two focused on ones of high complexity. Thus, it can be derived that the assignment of a team to the decision process is not correlated with the contract size. This means that team dynamics may influence the decision outcome and that the uncertainty caused by the human behaviour of one individual decision maker is of minor importance in this context. The decision makers were chosen based on different criteria: experience, delegation and completed courses. Multiple replies were possible. In the first group, the decision maker(s) would be chosen based on their experience with bidding in general, bidding for similar contracts or managing (similar) service contracts. In the second group, the decision maker(s) had to have a certain level of authority to make the bidding decision. The third category covered courses that were offered in the companies on, e.g. writing proposals or negotiating. The most important criterion for choosing a decision maker was their experience, which was mentioned by six of nine interviewees. Of similar importance (mentioned by five interviewees), and connected to experience, was delegation within the company. The completion of courses was mentioned by two interviewees; both highlighted that this was only a supportive aspect and that the decision maker(s) would not be chosen based on the courses they had completed.
4.4.2 Obtaining the price bid The calculation of the price bid, in other words the assessment of the monetary values to be included in the bid, can be categorised into two different approaches: "cost+profit margin=price" and a price-focused process. The "equation" of the first group is a simplified depiction of the approach most of the interviewees (seven out of nine) utilised in their bidding process. A profit margin, which can include a contingency, an administration margin and a consideration of inflation, is added to the interpreted cost estimate. Two of the interviewees stated that their process was focused on the price and that the costs were not considered separately from it. This means that the price is considered in different steps within the bidding company regarding either its suitability to the customer's stated budget (one of the interviewees) or a strategic evaluation of the market situation and the customer needs (the other interviewee). Following this question was the assessment of the minimum price bid below which the bidder would not accept the contract. The interviewees agreed that there was no standard process to calculate this price before the tendering or negotiation process. However, the valuation of the minimum price can be categorised as: "cost+minimum profit", available alternatives and the potential of follow-on work. Six of the nine interviewees stated that they were prepared to reduce their profit in the bidding situation (first group). This includes the situation of no profit but excludes a deliberate loss. One of the interviewees of that category stated that the price bid communicated to the customer would be the minimum acceptable price. Two of the interviewees said that the minimum price varied according to the available alternatives in the economic situation at the time of bidding (second group). This comparison could include not achieving an agreement.
In the third group, the minimum price was dependent on strategic aims such as the possibility of receiving future contracts with this customer. Two of the interviewees belonged to this category, one of whom stated it in addition to the best available alternative. 4.4.3 Acceptance of a contract with a high risk of making a loss To assess other strategic aspects that may influence the bidding decision, the interviewees were asked if they had agreed to contracts which deliberately made a loss. Of the nine interviewees, five stated that they would not accept such a contract; four said they had done so. The answers to the question can be categorised as depicted in Table IV. Table IV shows that just one reason was mentioned by the interviewees for refusing a contract with a high probability of making a loss, typically connected to company policy or the usual conduct in the market sector. For the acceptance, however, the answers could be divided into three categories, namely the bidding company's long-term gains, the possibility of eliminating competition and the profile of the customer as a client. The interviewees who stated that they would accept such a contract usually mentioned multiple aims from these categories. The pricing process used by most of the interviewees was cost based, which confirms the assumptions of previous studies (Avlonitis and Indounas, 2005). Furthermore, a connection could be observed between the complexity of the contract and the bidding process, which determines the level of negotiation between customer and possible supplier. It was found that the more complex a service contract, the closer the two parties work together throughout the bidding process. This confirms the research of Bajari et al. (2008). However, a connection between the payment method and the bidding process as described by Bajari et al. (2008) was not confirmed in this study. The cost estimate usually included uncertainty in the form of a cost range.
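The "cost+profit margin=price" approach of Section 4.4.2, together with the "cost+minimum profit" floor, can be sketched as follows. The breakdown of the margin into contingency, administration and inflation components follows the interviewees' descriptions, but the functional form and all numerical values are illustrative assumptions:

```python
def cost_plus_price(cost, profit_margin, contingency=0.0,
                    admin_margin=0.0, annual_inflation=0.0, years=0):
    """Sketch of the 'cost + profit margin = price' approach.

    Margins are taken as fractions of cost; inflation is compounded
    over the contract duration. This decomposition is illustrative:
    the interviewees only named contingency, administration and
    inflation as possible components of the overall margin.
    """
    inflated_cost = cost * (1 + annual_inflation) ** years
    uplift = 1 + profit_margin + contingency + admin_margin
    return inflated_cost * uplift

def minimum_price(cost):
    """'Cost + minimum profit' floor: six of nine interviewees would
    reduce profit to zero but excluded a deliberate loss."""
    return cost  # minimum profit assumed to be zero

bid = cost_plus_price(100_000, profit_margin=0.10,
                      contingency=0.05, admin_margin=0.03,
                      annual_inflation=0.02, years=3)
floor = minimum_price(100_000)
```

With these assumed figures, the bid lies well above the floor, leaving the decision maker room to reduce the margin during negotiation without breaching the "no deliberate loss" constraint most interviewees described.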
If uncertainty was not explicitly included in the cost estimate, it was usually based on specific assumptions which would be reassessed during the following pricing process. The uncertainty in a pricing decision was usually considered in the process in one way or another. Where possible this uncertainty was reduced; for example, if the service requirements were unclear or vague, the bidding company usually had the opportunity to receive further information from the customer through negotiation. Focusing on certain sources of uncertainty such as the competitors and the customer, the bidding company was usually not ignorant about these factors and their possible influence on the decision outcome. The identity of the competitors was usually known to the bidding company or could be assessed during the process of compiling the bid. This means that the competitors' profile and available resources can be taken into account in the process. Similarly, the customer's bidding strategy was either known or assessable. This means that the customer's evaluation of the service price and quality, as well as other criteria, is or can be known at least vaguely. Particularly customers with whom the bidding company had built up trust through a previous relationship (Johnson and Grayson, 2005) formed an important source of information and reduced the level of uncertainty. The presented interview study found that the pricing decision under uncertainty was based on the subjective evaluation of the decision maker(s) regarding the consideration of different uncertainties. As indicated by the literature in uncertainty research (Samson et al., 2009; Thunnissen, 2003), the terms uncertainty and risk are hard to define and distinguish comprehensively. This was confirmed by the interview study; some interviewees used examples to overcome this difficulty.
For the identification of uncertainties that may influence the considered service contracts, subjective methods were prominent, while for their management subjective methods were used but often supported by objective methods such as Monte Carlo modelling. This suggests that there is a need for models to support the decision process in practice. Another approach to overcoming the uncertainty arising from individual assessment was the involvement of a decision team. Limitations of this empirical study include the small set of participants. However, the results are to be understood as indicative as opposed to a comprehensive characterisation of the current bidding situation for service contracts. To this end, they identify common patterns in approaching the decision problem, aspects and opportunities for further improvement, and possibilities for offering support to the decision maker. This paper presented an interview study with industrialists from manufacturing companies facing the change of market structures towards servitisation. The study gave insights into the typically available information. Table V shows a summary of the findings. The findings from the interview study described in this paper show the influences and considerations during the decision-making process at the competitive bidding stage for service contracts. This forms a first step towards a more elaborate understanding of the processes involved in practice and towards the development of support for industry to make more informed decisions and secure the profitability of service contracts. In addition to the aim of the presented interview study, namely the identification of the available information for manufacturing companies at the competitive bidding stage for service contracts, the study delivered further results.
For example, it was found that costing information is typically communicated within the company either in tabular form as a cost breakdown or in graphical form as a three-point estimate. Recent research found that these approaches are suboptimal in raising the decision maker's awareness of the uncertainty connected to the cost forecast (Kreye et al., 2012). Thus, further research is necessary to support industry in adopting optimal approaches for the communication of the uncertainty associated with the decision-making problem. The findings described in this paper can be used in future research to develop an uncertainty model for competitive bidding. This uncertainty model can include the information connected to the customer and competitors to determine the manufacturing company's probability of winning the service contract and its probability of making a profit. This information supports the decision makers at the bidding stage to make a more informed decision, evaluate the level of risk of their pricing decision and, thus, ensure the long-term profitability and sustainability of their business.
Figure 1 Example of a cost estimate and the possible price bid
Figure 2 Interviewees' positioning regarding the size of their service contracts
Figure 3 Comparison of data collection and analysis methodologies
Figure 4 Characterisation of bidding process regarding the type of contract to be bid for
Table I Interviewees' responses regarding sources of information and management tools of uncertainty in the decision process at the bidding stage
Table II Appearance of cost estimate in dependence of included uncertainty
Table III Available information about the competitors at the bidding stage
Table IV Interviewees' reasoning behind refusing or accepting a contract with high probability of making a loss
Table V Summary of research findings of interview study
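A minimal sketch of the proposed uncertainty model, in the spirit of Friedman's (1956) expected-profit formulation cited later in this paper: the probability of winning falls as the price bid rises, and the decision maker trades this off against the profit earned if the contract is won. The uniform assumption on the single competitor's bid and all numerical values are illustrative, not part of the study:

```python
def win_probability(price, competitor_low, competitor_high):
    """Chance of undercutting a single competitor whose bid is assumed
    uniformly distributed on [competitor_low, competitor_high]."""
    if price <= competitor_low:
        return 1.0
    if price >= competitor_high:
        return 0.0
    return (competitor_high - price) / (competitor_high - competitor_low)

def expected_profit(price, cost, competitor_low, competitor_high):
    """Friedman-style objective: profit if won times probability of winning."""
    return (price - cost) * win_probability(price, competitor_low,
                                            competitor_high)

# Scan candidate price bids above the estimated contract cost.
cost = 90_000.0
candidates = [cost + step * 500 for step in range(61)]  # 90,000 .. 120,000
best = max(candidates, key=lambda p: expected_profit(p, cost,
                                                     95_000, 115_000))
```

Under these assumptions the expected-profit-maximising bid sits in the middle of the assumed competitor range rather than at either extreme, which illustrates why the interviewees' concern with both overbidding and underbidding matters.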
[SECTION: Abstract] The purpose of this paper is to explore the information that manufacturing companies have available when competitively bidding for service contracts.
[SECTION: Method] Sustainable production and consumption have become more important internationally, which has led to the transformation of market structures and competitive situations in the direction of servitisation (Baines et al., 2011; Bandinelli and Gamberi, 2011). For a manufacturing company, the shift towards being a service provider is characterised by a high level of uncertainty about the future strategic development of the company caused by, e.g. inadequate knowledge and information (Song et al., 2007). For this research, a service is defined as an activity or a process which is aimed at the change of the state of the service issue, such as the repair of a machine or the supply of flying hours for an aircraft (Araujo and Spring, 2006; Gadrey, 2000). In this context, the supply of product-centred services becomes more important. These services tend to be long-lived. For example, Babcock (2012) announced their support contract for the Australian Anzac class surface ship fleet until 2023. Another example is Rolls-Royce's Flotilla Support Programme for their submarines until 2017 (Rolls-Royce, 2011). The shift to being a supplier of these services can cause many uncertainties, especially for companies that have previously focused on the production and manufacturing of products. The delivery of a service is usually embedded in a contract, which is an agreement between the parties about the technical details of the service and is intended to be legally binding (Nellore, 2001; Rowley, 1997). Service contracts are often allocated through the process of competitive bidding where the competing suppliers communicate their service specifications and price bid to the customer who then evaluates the bids (Rexfelt and Ornas, 2009; Bubshait and Almohawis, 1994).
This bidding process can include different levels of negotiation with the customer which can vary from an auction-type bid (Friedman, 1956; Neugebauer and Pezanis-Christou, 2007) to an elaborate information exchange process (Lehman, 1986; Bajari et al., 2008). These varying levels of negotiation leave the bidding supplier with different levels of uncertainty influencing the pricing decision process. The pricing approach that is applied most frequently in practice is the cost-based pricing process, which puts the starting point of the research at the estimation of the costs of the service contract (Hytonen, 2005). Cost estimation is concerned with predicting the future; thus, uncertainty is inherent to the process (Goh et al., 2010; Christoffersen, 1998). This uncertainty can be included in the cost estimate in different ways; one possibility is the range or density forecast which consists of a range of possible future values (Tay and Wallis, 2000). Included in the range forecast can be the minimum, maximum and average value connected to different assumptions about the future (Giordani and Soderlind, 2003). An exemplary cost estimate is shown in Figure 1. At the bidding stage, the decision maker has to select one point within the given range as a price bid to communicate to the customer; one example is marked in Figure 1. Choosing a price that is too high may result in being underbid by competitors and, thus, potential loss of the business (Lucas and Kirillova, 2011; Chapman et al., 2000). A price that is too low may negatively influence the customer's perception of the quality of the service and thus lead to rejection (Freedman, 1988), or may result in a failure to recover the costs and profit of the service (Swinney and Netessine, 2009; Wang et al., 2007). For the pricing decision at the bidding process the decision maker has to: understand the uncertainty in the cost estimate; and understand other uncertainties that influence the bidding success and the fulfilment of the service contract.
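A range forecast of the kind shown in Figure 1 can be assembled from per-task three-point estimates. The sketch below uses illustrative task data and an assumed labour rate (neither is from the paper); it contrasts the naive summation of extremes with a Monte Carlo estimate, which yields a narrower spread because the extremes of all tasks rarely coincide:

```python
import random

# Illustrative per-task three-point estimates (min, most likely, max) in hours.
tasks = [(4, 4.5, 5), (10, 12, 15), (2, 2.5, 4)]
rate = 80.0  # assumed labour rate in currency units per hour

# Naive range forecast: add the extremes of every task.
low = sum(t[0] for t in tasks) * rate
high = sum(t[2] for t in tasks) * rate

# Monte Carlo alternative: sample each task from a triangular
# distribution and read off percentiles of the simulated totals.
random.seed(1)
totals = sorted(
    sum(random.triangular(a, b, m) for a, m, b in tasks) * rate
    for _ in range(10_000)
)
p5, p95 = totals[500], totals[9500]  # 5th and 95th percentile costs
```

The simulated 5th to 95th percentile interval falls strictly inside the naive [low, high] band, which is one reason range forecasts built by simple summation can overstate the uncertainty the decision maker faces.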
The aim of this paper is to identify the availability and use of information at the competitive bidding stage. For this, an interview study with industrialists from different sectors was conducted. The related literature in contract bidding, including the bidding process, contract conditions and typical payment methods, is described in Section 2. Sections 3 and 4 describe the interview study and its results. Most literature describing theory on bidding decisions focuses on auction-type processes (Cai et al., 2009; Schoenherr and Mabert, 2008; Neugebauer and Pezanis-Christou, 2007). This means that the described approaches assume a constrained bidding environment and services of low complexity and short duration (Schoenherr and Mabert, 2008), so the models and theories described have limited applicability to the research described in this paper. This research focuses on services of high complexity, which are typically embedded in contracts of long duration. Literature describing the decision-making processes at the competitive bidding stage typically focuses on products (Li and Graves, 2012; Bhaskaran and Ramachandran, 2011; Sosic, 2011; Li and Wang, 2010). Particularly the pricing decisions of products have been highlighted to be influenced by uncertainty (Sosic, 2011). For example, customers can be expected to evaluate the competitive bids according to an individual list of preferences (Chaneton and Vulcano, 2011; Guo et al., 2009). Reasons for this more elaborate body of literature in product-focused decision making may be the longer history of the business process in industry and the issues connected to it. However, approaches describing the pricing of services can be found in the literature. One example is described by Guo et al. (2009), who model the strategic decision in a single-supplier context.
While the approach offers valuable insights into the decision-making processes, it has two limitations for the application to servitisation: it does not include the existence of competition and its influence on the bidding strategy; and it describes services of low complexity such as hotel accommodation or restaurant dining. It can be summarised that current literature offers limited insights into the strategic decision-making processes at the competitive bidding stage, particularly from an industrial viewpoint. In particular, it does not consider the information that is available and the strategic process of its consideration in industry. Research that fails to consider these aspects will fail to accurately represent the decision-making process at the competitive bidding stage and will not be adopted by industry. This paper aims at closing this gap by introducing an exploratory study which describes the availability of information at the competitive bidding stage and its strategic consideration in practice. The aim of this study was to explore the availability of relevant information in the context of competitive bidding for a service contract on the supplier's side and to describe the subjective processes of the decision maker at the bidding stage. To examine this aim, an interview study was conducted. The following sections describe the applied method of this study in more detail. First, the interview procedure is described; then the design of the interview with its questions is explained; and last, the number of interviewees and the sectors they work in are described. 3.1 Interview procedure A standardised interview was carried out, meaning the wording and sequence of questions was determined in advance; thus, each interviewee was asked the same questions in the same order (Teddlie and Tashakkori, 2009).
This ensured that all topics were covered in each interview, allowing a comparison between the answers of the different interviewees (Patton, 2002). The questions were open-ended, i.e. no predetermined answers were given (or suggested) and the interviewees were encouraged to describe the processes in their own words. This reduced possible bias in the replies. The interviews were not recorded as most of the interviewees were from organisations of the defence sector or simply not comfortable with recording. The results are based on the notes the researcher took during the interview processes. However, to ensure the correctness and limit the misinterpretation of the given information, the responses were returned to the interviewees after the interview for confirmation and validation, as explained in Robinson et al. (2007). 3.2 Questionnaire design The questionnaire design was based both on previous empirical work and on the literature in the field. The empirical work focused on two experimental studies undertaken with a total of 72 cost engineers and bidding decision makers from practice. These studies focused on the different influences on the bidding decision-making process, including the approach of displaying the cost estimate (Kreye et al., 2012) and the influence of the existence of competition on the decision outcome and rationale (Kreye, 2011). The participants were given a set of questionnaires which consisted of a pricing scenario and various questions connected to their decision-making process for this hypothetical example. From the answers in the experimental studies it became clear that industry did not have a universal set of definitions for the terminology. Thus, it was decided that at the beginning of the interview, the participant's specific definitions had to be clarified and established. The literature highlights the influence of contextual issues on the pricing decision.
These are for example the contract situation within the company (Monroe, 2002; Chapman et al., 2000), the bidding process (Lehman, 1986) and the payment process (Tseng et al., 2009). Thus, the decision context was the focus of the second area of interview questions. In the experimental studies preceding the interview, one of the questions focused on the further influences on the decision-making process. The answers to this question could be categorised into market uncertainties (which included developments such as inflation, economic changes and technology development), cost estimation uncertainty, product uncertainties (including performance of the machine and risk of failures), competition uncertainty (manifesting itself in the risk of losing the contract) and customer uncertainties. These five main influences were used as a basis for the interview questionnaire, in particular to establish the amount of information typically available about these issues. As the bidding decision making is highly influenced by strategic considerations (Harrington, 2009; Afuah, 2009), the fourth area of interview questions focused on the bidding strategy. Based on the literature in the field, it was found that different influences are of importance. For example, due to the highly subjective nature of decision making, the choice of the bidding decision maker has been highlighted as an important factor (Tulloch, 1980). Further influences include the decision maker's interpretation of the cost estimate based on his/her experience and assumptions (Kreye et al., 2012) and the calculation of the price bid (Hytonen, 2005; Monroe, 2002; Lehman, 1986). Thus, it can be summarised that the design of the interview questionnaire was based on an iterative process of combining results of preceding empirical studies with industry and the literature in the field. Based on this process, the interview questionnaire was compiled, which is described in the following section.
3.3 Interview questionnaire The questions covered four main areas: uncertainty and risk, bidding context, input information for the pricing decision, and bidding strategy. Questions included in the first main area established the meanings the practitioners applied to the terms risk and uncertainty and how these are considered and identified in the pricing process. These established a common ground for the terminology in comparison to the definitions applied in the presented research and formed the basis for later questions. The second main area, about the bidding context, established background information that can potentially influence the bidding strategy. The issues investigated were the current contract situation of the company (Monroe, 2002; Chapman et al., 2000), the usual bidding process for service contracts (Lehman, 1986) and the typical payment method once the contract was awarded (Tseng et al., 2009). The last two areas formed the main focus of the interviews. The area of the input information for the pricing decision examined the form and type of information normally used in the decision process and possible assumptions the decision maker may form (Goh et al., 2010; Bolton et al., 2006; Fargier and Sabbadin, 2005; Rubinstein, 1998; Loewenstein and Prelec, 1993; Lehman, 1986). The questions in this area examined: the form of the cost estimate, the uncertainties included in the cost estimate, possible further uncertainties that the decision maker considers in the pricing process, the available information about the competitors and the customer, and the amount of input information that is considered in the decision-making process. The area of bidding strategy established the subjective aspects of decision making in the competitive bidding situation as these may influence the outcome of the decision process (Kreye et al., 2012; Stecher, 2008; Yager, 1999; Lehman, 1986; Tulloch, 1980).
The questions explored: the selection process of the decision maker, the interpretation of the cost estimate, the calculation of the price bid, the calculation of the minimum price bid, and the possibility of accepting contracts with a high risk of making a loss. The next sub-section describes the participants of this empirical study. 3.4 Interviewees The interviews were carried out over one year (March 2010 to March 2011) during a rebound period after the global economic recession of 2008-2009. Nine interviews were undertaken where the investigated sectors and numbers of interviewees were: defence (1), aerospace (1) and both defence and aerospace (2); engineering (2); research (1); information technology (1); and construction (1). The interviewed companies ranged from large and globally acting providers (with employee numbers varying between about 40,000 and 1,800 employees) to smaller, nationally acting providers (with less than 300 employees). The group of interviewees focused on the suppliers of product-centred services with varying levels of complexity. The contract complexity is characterised by a fuzzy distinction of its attributes; in other words, there is no distinct value or factor that defines the difference between the two complexity grades. Thus, the service contracts included in this interview study were separated as follows: Low complexity. The number of independent tasks necessary to complete the service is low, as is the divergence, i.e. the difference between the natures of these tasks (Skaggs and Youndt, 2004; Shostack, 1987). In other words, the requirements are clear to the involved parties (Bajari et al., 2008). The interviewees of this study named these "small contracts" and characterised them using phrases such as "less than £3 million", "less than 150.000 Euro", or "simple requirements such as the need of three engineers to do some testing". High complexity.
The number of independent tasks necessary to complete the service is high, as is the divergence, i.e. the difference between the natures of these tasks (Skaggs and Youndt, 2004; Shostack, 1987). In other words, at the point of the bid invitation, the service design may be hard to define in detail (Bajari et al., 2008). The interviewees named these "large contracts" and distinguished them with phrases such as "more than £3 million", "complex tasks such as 18 months contract" or "site management". Figure 2 shows the frequency of answers from the interviewees. Four of the nine interviewees said they hold a portfolio of contracts of different complexity, two focused on contracts of low complexity and three interviewees concentrated on contracts of high complexity. 3.5 Methodology for result analysis To analyse the responses, a qualitative approach was applied. This means the results and their implications are discussed verbally to highlight their importance in the bidding and pricing process (Saunders et al., 2012). Nevertheless, to demonstrate the relative importance of specific answers, a quantitative presentation of the results was chosen for specific questions. This is included mainly for demonstration purposes to show where trends may emerge. Due to the limited number of interviewees, a complete statistical analysis of these trends was not possible and is not included in this paper. In general, the basis for the data analysis was the differentiation of the interviewees depending on the size of their service contracts as described in Section 3.4. However, when the interviews showed a relationship between questions or even interview areas, this relationship is emphasised in the data analysis.
For example, a connection was found between the uncertainty included in the cost estimate, the approach to communicating this cost estimate (both questions concerning the input information) and the decision maker's interpretation of this estimate (a question asked in connection with the bidding strategy). This relationship is analysed in one section. To demonstrate where these cross-relationships were found in the interview process, Figure 3 shows the data collection methodology in contrast to the analysis methodology. 4. Interview results This section analyses the results of the interview study and presents them in the four main areas, namely uncertainty and risk, bidding context, input information, and bidding strategy. The term bidding strategy refers to the pattern of activities which has an impact on the achievement of bidding goals such as winning a profitable contract. 4.1 Uncertainty and risk The aim of the questions in this section was to clarify the terminology used by the industrialists and thus to guide further discussion of the topic. Differences could be observed between the interviewees in general. Some had corporate-wide definitions for the two terms; others used examples to describe their individual understanding; two interviewees did not use the term uncertainty at all. However, comparing the meaning or interpretation of the definitions, similarities can be found. Out of nine interviewees, seven understood uncertainty as the variation of an aspect of the contract such as the cost estimate. Discussing the term risk, the interviewees agreed that it is connected to an impact. Furthermore, seven interviewees stated that it was connected to a specific event, such as the risk of a red light during a car journey or the loss of a team member whose knowledge is central to the fulfilment of the service. Two interviewees described it as the impact on the project as a whole.
The interviewees' definitions of the terms risk and uncertainty were utilised throughout the interview process as a basis for clarity. However, for the purpose of this research, the described definition of uncertainty (see list of definitions) is applied in the further analysis of the interview results; the concept of risk is not discussed further. The interviewees' sources of identification and management tools for uncertainty can be classified based on the level of subjectivity. To identify the uncertainty connected to a project, all interviewees identified experience as the main source, which was typically connected to the team that put the bid together (stated by six interviewees) or to the project manager (stated by three interviewees). In addition, more objective identification sources were used, such as a formalised risk analysis process in the form of, e.g. a risk management handbook or databases of previous projects. This category was mentioned by four interviewees. For the identification of uncertainties, the practitioners used either a subjective method on its own or in combination with an objective method. To manage uncertainty, subjective approaches were of less importance than for the identification; only five interviewees named this approach. Four interviewees mentioned objective management methods, of which three also mentioned objective identification methods. Table I depicts the connection between the classification of information sources and management tools for uncertainty. The frequencies highlight the number of times each individual aspect was mentioned and thus do not sum to the combinatorial numbers in the rest of Table I. 4.2 Bidding context Describing the bidding process, the interviewees' answers were categorised into four groups: one-bid process, two-bid process without negotiation, two-bid process with negotiation, and negotiation.
In the one-bid process, the competitors have one opportunity to submit their bid including the bid price and the specifications of the service and the contract. The customer then evaluates these bids and agrees to one of the offers. This includes the assumption that the customer has the ability to understand the technical and commercial details of the bids. In the two-bid process without negotiation, the bidding process is split into two phases. In the first phase, a number of possible suppliers submit their bid, which usually includes their suitability for the service contract (this can be based on an invitation to bid or on open access). This number of competitors is reduced to the most suitable ones, who are then invited to submit their full bid in the second phase. In this second phase, the competitors typically know the identity of each other. Neither of the phases includes negotiation with the customer. In the two-bid process with negotiation, the bidding process is split into two phases similar to those described above. However, the second phase is characterised by a negotiation between the competitors and the customer to clarify important issues and questions. The answers to these questions can be published to all competitors or stay confidential between the two negotiating parties. A bidding process which includes negotiation is characterised by an exchange of large amounts of information concerning the service requirements, the customer's intention, technical scope or any other issues concerning the contract or bid. The bidding process which the interviewees typically faced in their decision process depended on the size of the contract to be bid for. The definitions as described in Section 3.4 are used to describe the contract size. Figure 4 shows the answer frequency of the usual bidding process connected to the contract size. The values in Figure 4 distinguish between usual and possible bidding processes as indicated by the interviewees.
The numbers do not add up to nine as multiple answers were given by the interviewees managing a contract portfolio. The results in Figure 4 indicate that contracts of low complexity with clear requirements are typically not negotiated, which can be attributed to the fact that negotiation is a time- and cost-consuming process (Bajari et al., 2008). In contrast, contracts of high complexity are typically agreed after negotiation, with varying levels of depth of this process. This suggests that the uncertainty that may arise from unclear requirements can usually be reduced by collecting further information from the customer. The parties were willing to commit additional time and costs to this process to ensure that the service outcome best fits the needs of each of them. The interviewees' answers regarding the usual payment methods for service contracts can be divided into three categories: fixed price, cost-based payment and payment on completion. Seven of the nine interviewees stated that (some of) their company's service contracts are paid with fixed prices, which can be based on milestones (mentioned by four) or spread over a set period of time (stated by three interviewees), such as a monthly payment. Three of the interviewees stated that the payment is based on the costs actually incurred, which can be assessed through, e.g. timesheets. In the category of payment on completion, the service supplier is paid upon completion of the project; this was mentioned by one interviewee. It is to be noted that this company offered research services which usually only have deliverables at the end of the service period in the form of, e.g. a research report. Multiple answers were possible. Based on these results it can be summarised that fixed price payment seemed to be the standard method for service contracts. The following section describes the input information of a pricing decision.
4.3 Input information The results of the interviewees' answers to the questions of the input information section were analysed in three main sections: cost estimate and uncertainty, customer and competitors. These are described in this section. 4.3.1 Cost estimate and uncertainty The way the cost estimate is communicated during the bidding process was found to fall into two categories: presented using a table or a graph. The costing information included in a table was found to be presented in two different ways. Four interviewees used a detailed cost breakdown in the form of the necessary work steps, the time and expertise needed for each step and the cost value assigned to the different steps. The other approach mentioned was a three-point estimate, which represents pessimistic, most likely and optimistic assumptions in tabular form. The approach used most to present cost estimating information in a graph was also a three-point estimate. Another approach mentioned was an s-curve, which displays the cumulative costs over time and usually adopts the form of the letter S (Cioffi, 2005). The specification of the available costing information in practice was found to be influenced by the way uncertainty was included in the estimate. The levels of uncertainty included in the cost estimate were reported as: none, variation in the input data and quantification of qualitative uncertainty. Four interviewees stated that they included no uncertainties in their cost estimate. In the second group, the available information that the cost estimate is based on can vary; for example, to fulfil a specific task, a particular engineer may have taken 4 or 5 hours depending on other variables. The third group includes the assessment of the question of "what can go wrong" and connecting a value to this assessment. This occurs subjectively through the experience of the decision maker.
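The three-point estimate mentioned above can be sketched in a few lines. This is an illustrative sketch only: the PERT-style weighting, the function name and the hourly figures are assumptions for demonstration and were not reported by the interviewees.

```python
# Illustrative sketch of a three-point cost estimate (pessimistic, most
# likely, optimistic), as mentioned by the interviewees. The PERT-style
# weighting (O + 4M + P) / 6 and the figures below are assumptions for
# demonstration, not data from the interview study.

def three_point_estimate(optimistic, most_likely, pessimistic):
    """Weighted average of the three assumptions (classic PERT weighting)."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical task: 4-5 engineer-hours at a notional rate of 80 per hour
optimistic = 4.0 * 80    # 320
most_likely = 4.5 * 80   # 360
pessimistic = 5.0 * 80   # 400

print(three_point_estimate(optimistic, most_likely, pessimistic))  # 360.0
```

The same three values could equally be plotted as the tabular or graphical displays the interviewees described; the weighting shown here is only one common way of collapsing them into a single figure.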
Furthermore, the interpretation of the cost estimate was found to depend on the way uncertainty was included in the cost estimate; thus it is discussed in this section (this question was asked in connection with the bidding strategy). The answers were grouped as: none, a point estimate and a range estimate. Participants who stated that they included no uncertainties in their cost estimate also said that the cost estimate they received was not interpreted. This means the cost estimate was taken as it was. However, two of those said that the possibility was kept in mind that the cost estimate might be reduced because it was based on conservative values. For example, if the historic data showed that a specific task took between 4 and 5 hours, the cost estimate would be based on the 5-hour estimate. If the final cost estimate were considered too high, these cost values would be adjusted in a second iteration of the process. In the second category, the costing information with the related uncertainty was stated to be interpreted as a point estimate, based on, e.g. the 50 or 80 per cent line in the graph. One interviewee stated that this only held when the uncertainty connected to the contract was low; otherwise a cost range was kept. In the third category, the communicated costing information was carried forward in the pricing process as a range estimate, either with its original spread or with a reduced spread. One interviewee stated that the full range was utilised when there was high uncertainty connected to the contract in the form of a high variation in the input data. Table II shows the comparison of the way the cost estimates were presented and interpreted against the uncertainty that is included.
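Reading a point estimate off "the 50 or 80 per cent line" of an uncertain cost distribution can be illustrated as follows; the sample cost scenarios and the use of Python's statistics module are assumptions for demonstration only, not part of the interview data.

```python
# Sketch of collapsing an uncertain cost estimate into a point estimate by
# reading off a percentile ("the 50 or 80 per cent line in the graph"),
# as some interviewees described. The cost scenarios are hypothetical.
import statistics

sample_costs = [90, 95, 100, 105, 110, 120, 130]  # notional cost scenarios

# The "50 per cent line": the median of the cost scenarios
p50 = statistics.median(sample_costs)
print(p50)  # 105

# The "80 per cent line": the 8th of 9 decile cut points
p80 = statistics.quantiles(sample_costs, n=10)[7]
print(p80)  # 124.0
```

A decision maker keeping a range estimate would instead carry the full spread (here 90-130) forward into the pricing process, with or without a reduction of that spread.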
The total values do not add up to nine because two interviewees stated the use of multiple methods to communicate their cost estimate; one used both types of graphical displays, the other stated the use of tables to present the cost breakdown and graphs to present the overall costs. However, the total values give an indication of how often each type of presentation was mentioned and which uncertainty is included. As depicted in Table II, the companies that presented the cost estimate as a breakdown in a table did not include any uncertainty; it was rather based on specific assumptions. These assumptions included the choice of a conservative value when the input data varied, e.g. when a task was recorded to take between 4 and 5 hours, the estimate would be based on 5 hours. Furthermore, when uncertainty was included, the cost estimate was more likely to be presented in graphical form. All interviewees who stated that they used a graphical approach to display their costing information included uncertainty in it. The interviews also assessed which further uncertainties can influence the pricing decision. Two of the three interviewees who stated that their cost estimate did not contain any uncertainties also stated there were no further uncertainties influencing the pricing decision. Both of them, however, stated that they would reduce the cost estimate if the originally derived price bid were considered too high. The other uncertainties influencing the pricing decision were categorised into: customer related uncertainties, competitor related uncertainties, cost estimation uncertainties, economic uncertainties and others. Customer related uncertainties included the customer's previous choices of bidders for similar projects to recognise observable patterns. For example, the customer may always go for the price bid that is 5 per cent below their stated budget limit.
Other factors mentioned included the assessment of questions such as the possible consequences if the customer found a mistake in the bid, the location of the customer to evaluate the possible travel costs, and assumptions about the usage of the serviced product or machine. Another aspect mentioned was the level of experience of the customer's personnel involved in the usage of the product or machine that was part of the service contract. Further aspects related to the customer are analysed at a later point in this section. Competitor related uncertainties covered the identification of the competitors for the particular service contract and an evaluation of their most likely bid. Furthermore, the contract might be let to multiple suppliers who would either focus on different aspects of the service or would have to be able to share the project. Further aspects related to the competitors are analysed at a later point in this section. As discussed in this section, the cost estimate was stated either to include different uncertainties in the form of a spread or to be based on assumptions that may not prove true. Further uncertainties included the possibility of cost reductions through, e.g. a reduction of the overhead costs. Economic uncertainties include factors which may influence the commercial activities, such as legal changes, gains that can be achieved with the contract, and the situation of the overall economy, of the market place and of the specific sector. Other mentioned uncertainties included the bidding company's contract situation and the uncertainty arising from the technical requirements. Most interviewees mentioned more than one of the presented sources of uncertainty with a clear emphasis on one important factor, usually concerning an example from the recent past. For this reason, there is no quantitative analysis of the relative importance of each of the mentioned categories.
4.3.2 Customer The available information concerning the customer considered the areas of their bidding strategy, the past relationships, their future needs and whether these aspects influence the decision maker of the bidding company. For these interviews, the customer's bidding strategy was addressed through the aspects of their budget and their evaluation criteria regarding the bids. The interviewees' answers indicate two different categories: either these strategic aspects are communicated with the service requirements or they can be assessed through a "getting to know the client" process in which usually a commercial team is involved. Of the nine interviewees, four stated that the customer's bidding strategy was communicated, two said it could be assessed, and three that it varies between these two categories depending on the kind of customer (resulting from aspects such as whether they had worked with them before and what the preferred bidding process of the customer was). The past relationship between the bidding company and the customer was described by all interviewees as an important source of information. An ideal bidding situation would involve a long past relationship where trust had been built up and the parties would know each other. When this is not the case, the bidding company may still have previous experience with the customer to build up knowledge about them. In cases where there is no previous experience, the bidding company has to rely on the information communicated by the customer themselves or published in, e.g. the press. The assessment of the customer's possible future needs caused different reactions among the interviewees. Seven of the nine interviewees stated that this was one aspect that they assess during the process of compiling the bid and include if appropriate. These interviewees stated the importance of possible follow-up work, future relations and the length of the service contract to demonstrate the suitability for, e.g.
the next five years. The other two interviewees highlighted that the bid only covered the service requirements and that a consideration of the customer's possible future needs was highly speculative and thus not included in the bid-compiling process. Thus, for a specific competitive bidding situation, the customer's future needs may play an important role in the bidding process and would need to be considered in a conceptual framework of the influencing uncertainties at the bidding stage. Regarding the consideration of the available information about the customer, all interviewees stated that it was of importance for the decision maker and the compiling of the bid. Five said that all the available information is considered, two described the customer and their bidding strategy as the most important influence on the bid, and two stated that there were other more important aspects such as the contract costs. This means that the customer can constitute a central factor in a bidding decision; however, its relative importance depends on the particular service contract. 4.3.3 Competitors The interviewees were asked questions which aimed at determining the following information regarding their competitors: their identity, their cost estimate, their available technology or knowledge, and which of these aspects would be considered in the pricing decision. As indicated in the discussion in Section 4.2, the identity of the competitors may be known depending on the bidding process. If this is not the case, the bidding company may either have a "pretty good" idea regarding their competitors, due to their experience about who is capable of dealing with the requirements, or not be able to identify them at all, particularly when trying to bid in new market segments where their experience is limited. For the purpose of this analysis, the three possibilities are termed: the competitors' identity is known, knowable or not known.
The competitors' cost estimates are not usually known to the bidding company, which was confirmed by all interviewees. However, there are different levels of speculation. Based on previous experiences, a "ballpark" or top-level deduction may be known, which can be formulated as an absolute value or assessed in relation to the bidding company's costs. Another possibility is the knowledge of cost details such as salaries based on information obtained from previous employees of the competitor. In other cases, particularly when dealing with new or unknown competitors, the cost estimates may be neither known nor deducible. The third investigated aspect concerned the information about the competitors' available technology or level of knowledge, which may give them a competitive advantage. The answers fell into three categories. A common answer (by six out of nine interviewees) was that it is known, as the competitors advertise themselves on, e.g. the internet and their homepages or have other publicity in, e.g. newspapers. Two interviewees stated that this aspect of the competitors is knowable due to the decision maker's experience in the area. In other cases, particularly when the company bids in a new market segment, this aspect was stated to be not known and not knowable by two interviewees. Table III shows the frequency of the interviewees' answers for their knowledge of the competitors' cost estimates and their available technology or knowledge plotted against the competitors' identity. The numbers do not sum to nine due to the fact that four interviewees stated multiple answers regarding the competitors' identity, which can be dependent on the particular service contract. Hence, their answers varied also for the other aspects. The results shown in Table III give an indication of the availability of information about the competitors and thus the level of uncertainty connected to them.
In cases where the competitors' identity is known or determinable, the bidding company also had a reasonable level of knowledge about the other aspects. In other words, the bidding company is not ignorant about their competitors and their possible bidding strategies unless it is bidding in a new market sector. Investigating the interviewees' consideration of these aspects during the decision process, six replied that they used all the information available to them and two stated that they considered the available information but that there are other more important factors such as the customer. One interviewee said that the information regarding the competitors is not considered in the pricing-decision process. This confirms the results of the second empirical study, namely that competition is one of the influences on a pricing decision. Furthermore, most of the interviewed companies (seven out of nine) stated that it was one of the most important factors. Similarly, the availability of the original service and contract requirements was assessed in the interviews, as they would have been communicated by the customer at the beginning of the bidding process. They were stated by all interviewees to be available and included in the decision process. The following section describes the interviewees' answers regarding their bidding strategy. 4.4 Bidding strategy The interviewees' answers to the questions concerning the bidding strategy were analysed in three main sections: the choice of the decision maker, the method to obtain the price bid and the acceptance of a contract with a high risk of making a loss. These are described in this section. 4.4.1 Choice of the decision maker As the bidding strategy can be very subjective, the interview assessed how the decision maker was chosen.
Most of the interviewees (seven out of nine) highlighted that the decision was made by a team; two stated that a team was involved in the bid compilation and the final decision was made by the team manager. The team decision was connected to contracts of both low and high complexity; four of the seven interviewees managed contract portfolios, one dealt with contracts of low complexity and two focused on contracts of high complexity. Thus, it can be derived that the assignment of a team to the decision process is not correlated with the contract size. This means that team dynamics may influence the decision outcome and that the uncertainty caused by human behaviour connected to one individual decision maker is of minor importance in this context. The decision makers were chosen based on different criteria: experience, delegation and completed courses. Multiple replies were possible. In the first group, the decision maker(s) would be chosen based on their experience with bidding in general, bidding for similar contracts or managing (similar) service contracts. In the second group, the decision maker(s) had to have a certain level of authority to make the bidding decision. The third category referred to courses that were offered in the companies on, e.g. writing proposals or negotiating. The most important criterion for choosing a decision maker was their experience, which was mentioned by six of nine interviewees. Of similar importance (mentioned by five interviewees), and connected to experience, was the category of delegation within the company. The completion of courses was mentioned by two interviewees; both highlighted that this was only a supportive aspect and the decision maker(s) would not be chosen based on the courses they had completed.
4.4.2 Obtaining the price bid The calculation of the price bid, in other words the assessment of the monetary values to be included in the bid, can be categorised into two different approaches: "cost + profit margin = price" and a price-focused process. The "equation" of the first group is a simplified depiction of the approach most of the interviewees (seven out of nine) utilised in their bidding process. To the interpretation of the cost estimate a profit margin is added, which can include a contingency, an administration margin and the consideration of inflation. Two of the interviewees stated that their process was focused on the price and the costs were not considered separately from that. This means that the price is considered in different steps within the bidding company regarding either its suitability to the customer's stated budget (one of the interviewees) or a strategic evaluation of the market situation and the customer needs (the other interviewee). Following this question was the assessment of the minimum price bid below which the bidder would not accept the contract. The interviewees agreed that there was no usual process to calculate this price before the tendering or negotiation process. However, the valuation of the minimum price can be categorised as: "cost + minimum profit", available alternatives and the potential of follow-on work. Six of the nine interviewees stated that they were prepared to reduce their profit in the bidding situation (first group). This includes the situation of no profit but excludes a deliberate loss. One of the interviewees of that category stated that the price bid communicated to the customer would be the minimum acceptable price. Two of the interviewees said that the minimum price varied according to the available alternatives in the economic situation at the time of bidding (second group). This comparison could include not achieving an agreement.
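The "cost + profit margin = price" approach and the "cost + minimum profit" floor described above can be sketched in a few lines. The margin components, function names and percentage values below are hypothetical illustrations, not figures reported by the interviewees.

```python
# Minimal sketch of the "cost + profit margin = price" approach used by most
# interviewees, and of the "cost + minimum profit" floor. All margin names
# and values are hypothetical; margins are given in per cent.

def price_bid(cost, profit=10, contingency=5, admin=3, inflation=2):
    """Add profit, contingency, administration and inflation margins to cost."""
    markup = (profit + contingency + admin + inflation) / 100
    return cost * (1 + markup)

def minimum_price(cost, minimum_profit=0):
    """Floor below which the bid is not made: no profit is acceptable,
    a deliberate loss is not."""
    return cost * (1 + minimum_profit / 100)

cost_estimate = 100_000
print(round(price_bid(cost_estimate), 2))      # 120000.0
print(round(minimum_price(cost_estimate), 2))  # 100000.0
```

A price-focused process, as described by the other two interviewees, would instead start from the customer's budget or the market situation and work backwards, so the costs never appear as a separate term.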
In the third group, the minimum price was dependent on strategic aims such as the possibility of receiving future contracts with this customer. Two of the interviewees belonged to this category, one of whom stated it in addition to the best available alternative. 4.4.3 Acceptance of a contract with a high risk of making a loss To assess other strategic aspects that may influence the bidding decision, the interviewees were asked if they had ever agreed to contracts which deliberately made a loss. Of the nine interviewees, five stated that they would not accept such a contract; four said they had done so. The answers to the question can be categorised as depicted in Table IV. Table IV shows that there was just one reason mentioned by the interviewees for refusing a contract with a high probability of making a loss, which was typically connected to company policy or the usual conduct in the market sector. However, for the acceptance, the answers could be divided into three categories, namely the bidding company's long-term gains, the possibility of eliminating competition and the profile of the customer as a client. The interviewees who stated that they would accept such a contract usually mentioned multiple aims across these categories. The pricing process used by most of the interviewees was cost based, which confirms the assumptions of previous studies (Avlonitis and Indounas, 2005). Furthermore, a connection could be observed between the complexity of the contract and the bidding process, which determines the level of negotiation between customer and possible supplier. It was found that the more complex a service contract, the closer the two parties work together throughout the bidding process. This confirms the research of Bajari et al. (2008). However, a connection between the payment method and the bidding process as described by Bajari et al. (2008) was not confirmed in this study. The cost estimate usually included uncertainty in the form of a cost range.
If uncertainty was not explicitly included in the cost estimate, it was usually based on specific assumptions which would be reassessed during the subsequent pricing process. The uncertainty in a pricing decision was usually considered in the process (in one way or another). Where possible, this uncertainty was reduced; for example, if the service requirements were unclear or vague, the bidding company usually had the opportunity to receive further information from the customer through negotiation. Focusing on certain sources of uncertainty such as the competitors and the customer, the bidding company was usually not ignorant about these factors and their possible influence on the decision outcome. The identity of the competitors was usually known to the bidding company or could be assessed during the process of compiling the bid. This means that the competitors' profiles and available resources can be taken into account in the process. Similarly, the customer's bidding strategy was either known or assessable. This means that the customer's evaluation of the service price and quality, as well as other criteria, is or can be known at least vaguely. In particular, customers with whom the bidding company had a previous connection and had built up trust (Johnson and Grayson, 2005) form an important source of information and reduce the level of uncertainty. The presented interview study found that the pricing decision under uncertainty was based on the subjective evaluation of the decision maker(s) regarding the consideration of different uncertainties. As indicated by the literature in uncertainty research (Samson et al., 2009; Thunnissen, 2003), the terms uncertainty and risk are hard to define and distinguish comprehensively. This was confirmed by the interview study; some interviewees used examples to overcome this difficulty.
For the identification of uncertainties that may influence the considered service contracts, subjective methods were prominent, while for their management subjective methods were used but often supported by objective methods such as Monte Carlo modelling. This suggests that there is a need for models to support the decision process in practice. Another aspect used to overcome the uncertainty arising from individual assessment was the involvement of a decision team. Limitations of this empirical study include the small set of participants. Thus, the results are to be understood as indicative rather than as a comprehensive characterisation of the current bidding situation for service contracts. With this purpose, they identify common patterns of approaching the decision problem, aspects and opportunities for further improvement and possibilities for offering support to the decision maker. This paper presented an interview study with industrialists from manufacturing companies facing the change of market structures towards servitisation. The study gave insights into the typically available information. Table V shows a summary of the findings. The findings from the interview study described in this paper show the influences and considerations during the decision-making process at the competitive bidding stage for service contracts. This forms a first step towards a more elaborate understanding of the processes involved in practice and towards the development of support for industry to make more informed decisions and secure the profitability of their service contracts. In addition to the aim of the presented interview study, namely the identification of the information available to manufacturing companies at the competitive bidding stage for service contracts, the study delivered further results.
For example, it was found that costing information is typically communicated within the company either in tabular form as a cost breakdown or in graphical form as a three-point estimate. Recent research found that these approaches are suboptimal in raising the decision maker's awareness of the uncertainty connected to the cost forecast (Kreye et al., 2012). Thus, further research is necessary to support industry in adopting optimal approaches for communicating the uncertainty associated with the decision-making problem. The findings described in this paper can be used in future research to develop an uncertainty model for competitive bidding. This uncertainty model can include the information connected to the customer and competitors to determine the manufacturing company's probability of winning the service contracts and its probability of making a profit. This information supports the decision makers at the bidding stage to make a more informed decision, evaluate the level of risk in their pricing decision and, thus, ensure the long-term profitability and sustainability of their business.
Figure 1 Example of a cost estimate and the possible price bid
Figure 2 Interviewees' positioning regarding the size of their service contracts
Figure 3 Comparison of data collection and analysis methodologies
Figure 4 Characterisation of bidding process regarding the type of contract to be bid for
Table I Interviewees' responses regarding sources of information and management tools of uncertainty in the decision process at the bidding stage
Table II Appearance of cost estimate in dependence of included uncertainty
Table III Available information about the competitors at the bidding stage
Table IV Interviewees' reasoning behind refusing or accepting a contract with high probability of making a loss
Table V Summary of research findings of interview study
A semi-structured interview study was undertaken with industrialists in various sectors, which are currently facing the issue of servitisation.
[SECTION: Findings] Sustainable production and consumption have become more important internationally, which has led to the transformation of market structures and competitive situations towards servitisation (Baines et al., 2011; Bandinelli and Gamberi, 2011). For a manufacturing company, the shift towards being a service provider is characterised by a high level of uncertainty about the future strategic development of the company caused by, e.g. inadequate knowledge and information (Song et al., 2007). For this research, a service is defined as an activity or a process aimed at changing the state of the serviced object, such as the repair of a machine or the supply of flying hours for an aircraft (Araujo and Spring, 2006; Gadrey, 2000). In this context, the supply of product-centred services becomes more important. These services tend to be long-lived. For example, Babcock (2012) announced their support contract for the Australian Anzac class surface ship fleet until 2023. Another example is Rolls-Royce's Flotilla Support Programme for their submarines until 2017 (Rolls-Royce, 2011). The shift to being a supplier of these services can cause many uncertainties, especially for companies that have previously focused on the production and manufacturing of products. The delivery of a service is usually embedded in a contract, which is an agreement between the parties about the technical details of the service and is intended to be legally binding (Nellore, 2001; Rowley, 1997). Service contracts are often allocated through the process of competitive bidding, where the competing suppliers communicate their service specifications and price bid to the customer, who then evaluates the bids (Rexfelt and Ornas, 2009; Bubshait and Almohawis, 1994).
This bidding process can include different levels of negotiation with the customer, which can vary from an auction-type bid (Friedman, 1956; Neugebauer and Pezanis-Christou, 2007) to an elaborate information exchange process (Lehman, 1986; Bajari et al., 2008). These varying levels of negotiation leave the bidding supplier with different levels of uncertainty influencing the pricing decision process. The pricing approach that is applied most frequently in practice is the cost-based pricing process, which puts the starting point of the research at the estimation of the costs of the service contract (Hytonen, 2005). Cost estimation is concerned with predicting the future; thus, uncertainty is inherent to the process (Goh et al., 2010; Christoffersen, 1998). This uncertainty can be included in the cost estimate in different ways; one possibility is the range or density forecast, which consists of a range of possible future values (Tay and Wallis, 2000). Included in the range forecast can be the minimum, maximum and average values connected to different assumptions about the future (Giordani and Soderlind, 2003). An exemplary cost estimate is shown in Figure 1. At the bidding stage, the decision maker has to select one point within the given range as a price bid to communicate to the customer; one example is marked in Figure 1. Choosing a price that is too high may result in being underbid by competitors and, thus, the potential loss of the business (Lucas and Kirillova, 2011; Chapman et al., 2000). A price that is too low may negatively influence the customer's perception of the quality of the service and, thus, lead to rejection (Freedman, 1988), or result in a failure to recover the costs and profit of the service (Swinney and Netessine, 2009; Wang et al., 2007). For the pricing decision in the bidding process, the decision maker has to: understand the uncertainty in the cost estimate; and understand other uncertainties that influence the bidding success and the fulfilment of the service contract.
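The trade-off just described (a higher price raises the profit if the bid is won but lowers the chance of winning) can be illustrated with a simple expected-profit calculation in the spirit of the auction literature cited above (e.g. Friedman, 1956). The linear win-probability curve and the budget figure below are invented placeholders for illustration, not a model proposed in this paper.

```python
# Hedged sketch of the price-selection trade-off: expected profit
# = (price - cost) * probability of winning at that price.
# The win-probability function below is a hypothetical assumption.

def p_win(price, budget=1_500_000):
    """Assumed: win probability falls linearly to zero at the
    customer's (hypothetical) budget."""
    return max(0.0, 1.0 - price / budget)

def expected_profit(price, cost):
    return (price - cost) * p_win(price)

cost = 1_000_000
# Scan candidate prices across the cost-estimate range (the idea of
# selecting one point within the range shown in Figure 1).
candidates = [cost * (1 + m / 100) for m in range(0, 51)]
best_price = max(candidates, key=lambda p: expected_profit(p, cost))
```

Under these assumed numbers the scan lands midway between cost and budget, showing why neither the highest nor the lowest feasible price maximises expected profit.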
The aim of this paper is to identify the availability and use of information at the competitive bidding stage. For this, an interview study with industrialists from different sectors was conducted. The related literature on contract bidding, including the bidding process, contract conditions and typical payment methods, is described in Section 2. Sections 3 and 4 describe the interview study and its results. Most literature describing theory on bidding decisions focuses on auction-type processes (Cai et al., 2009; Schoenherr and Mabert, 2008; Neugebauer and Pezanis-Christou, 2007). This means that the described approaches focus on a constrained bidding environment and that the services discussed in this context are of low complexity and short duration (Schoenherr and Mabert, 2008). Consequently, the models and theories described have limited applicability to the research described in this paper. This research focuses on services of high complexity, which are typically embedded in contracts of long duration. Literature describing the decision-making processes at the competitive bidding stage typically focuses on products (Li and Graves, 2012; Bhaskaran and Ramachandran, 2011; Sosic, 2011; Li and Wang, 2010). In particular, the pricing of products has been highlighted as being influenced by uncertainty (Sosic, 2011). For example, customers can be expected to evaluate the competitive bids according to an individual list of preferences (Chaneton and Vulcano, 2011; Guo et al., 2009). Reasons for this more elaborate body of literature on product-focused decision making may be the longer history of the business process in industry and the issues connected to it. However, approaches describing the pricing of services can be found in the literature. One example is described by Guo et al. (2009), who model the strategic decision in a single-supplier context.
While the approach offers valuable insights into the decision-making processes, it has two limitations for application to servitisation: it does not include the existence of competition and its influence on the bidding strategy; and it describes services of low complexity such as hotel accommodation or restaurant dining. It can be summarised that current literature offers limited insights into the strategic decision-making processes at the competitive bidding stage, particularly from an industrial viewpoint. In particular, it does not consider the information that is available and the strategic process of its consideration in industry. Research that fails to consider these aspects will fail to accurately represent the decision-making process at the competitive bidding stage and will not be adopted by industry. This paper aims at closing this gap by introducing an exploratory study which describes the availability of information at the competitive bidding stage and its strategic consideration in practice. The aim of this study was to explore the availability of relevant information in the context of competitive bidding for a service contract on the supplier's side and to describe the subjective processes of the decision maker at the bidding stage. To examine this aim, an interview study was conducted. The following sections describe the applied method of this study in more detail. First, the interview procedure is described; then the design of the interview and its questions is explained; and last, the interviewees and, for example, the sectors they work in are described. 3.1 Interview procedure A standardised interview was carried out, meaning that the wording and sequence of questions were determined in advance; thus, each interviewee was asked the same questions in the same order (Teddlie and Tashakkori, 2009).
This ensured that all topics were covered in each interview, allowing a comparison between the answers of the different interviewees (Patton, 2002). The questions were open-ended, i.e. no predetermined answers were given (or suggested) and the interviewees were encouraged to describe the processes in their own words. This reduced possible bias in the replies. The interviews were not recorded as most of the interviewees were from organisations in the defence sector or simply not comfortable with recording. The results are based on the notes the researcher took during the interview processes. However, to ensure correctness and limit misinterpretation of the given information, the responses were returned to the interviewees after the interview for confirmation and validation as explained in Robinson et al. (2007). 3.2 Questionnaire design The questionnaire design was based both on previous empirical work and on the literature in the field. The empirical work comprised two experimental studies undertaken with a total of 72 cost engineers and bidding decision makers from practice. These studies focused on the different influences on the bidding decision-making process, including the approach of displaying the cost estimate (Kreye et al., 2012) and the influence of the existence of competition on the decision outcome and rationale (Kreye, 2011). The participants were given a set of questionnaires which consisted of a pricing scenario and various questions connected to their decision-making process for this hypothetical example. From the answers in the experimental studies it became clear that industry did not have a universal set of definitions for the terminology. Thus, it was decided that at the beginning of the interview, the participant's specific definitions had to be clarified and established. The literature highlights the influence of contextual issues on the pricing decision.
These are, for example, the contract situation within the company (Monroe, 2002; Chapman et al., 2000), the bidding process (Lehman, 1986) and the payment process (Tseng et al., 2009). Thus, the decision context was the focus of the second area of interview questions. In the experimental studies preceding the interviews, one of the questions focused on the further influences on the decision-making process. The answers to this question could be categorised into market uncertainties (which included developments such as inflation, economic changes and technology development), cost estimation uncertainty, product uncertainties (including performance of the machine and risk of failures), competition uncertainty (manifesting itself in the risk of losing the contract) and customer uncertainties. These five main influences were used as a basis for the interview questionnaire, in particular to establish the amount of information typically available about these issues. As bidding decision making is highly influenced by strategic considerations (Harrington, 2009; Afuah, 2009), the fourth area of interview questions focused on the bidding strategy. Based on the literature in the field, it was found that different influences are of importance. For example, due to the highly subjective nature of decision making, the choice of the bidding decision maker has been highlighted as an important factor (Tulloch, 1980). Further influences include the decision maker's interpretation of the cost estimate based on his/her experience and assumptions (Kreye et al., 2012) and the calculation of the price bid (Hytonen, 2005; Monroe, 2002; Lehman, 1986). Thus, it can be summarised that the design of the interview questionnaire was based on an iterative process of combining the results of preceding empirical studies with industry and the literature in the field. Based on this process, the interview questionnaire was compiled, which is described in the following section.
3.3 Interview questionnaire The questions covered four main areas: uncertainty and risk, bidding context, input information for the pricing decision, and bidding strategy. Questions included in the first main area established the meanings the practitioners applied to the terms risk and uncertainty and how these are considered and identified in the pricing process. These established a common ground for the terminology in comparison to the definitions applied in the presented research and formed the basis for later questions. The second main area, about the bidding context, established background information that can potentially influence the bidding strategy. The issues investigated were the current contract situation of the company (Monroe, 2002; Chapman et al., 2000), the usual bidding process for service contracts (Lehman, 1986) and the typical payment method once the contract was awarded (Tseng et al., 2009). The last two areas form the main focus of the interviews. The area of input information for the pricing decision examined the form and type of information normally used in the decision process and possible assumptions the decision maker may form (Goh et al., 2010; Bolton et al., 2006; Fargier and Sabbadin, 2005; Rubinstein, 1998; Loewenstein and Prelec, 1993; Lehman, 1986). The questions in this area examined: the form of the cost estimate, the uncertainties included in the cost estimate, possible further uncertainties that the decision maker considers in the pricing process, the available information about the competitors and the customer, and the amount of input information that is considered in the decision-making process. The area of bidding strategy established the subjective aspects of decision making in the competitive bidding situation, as these may influence the outcome of the decision process (Kreye et al., 2012; Stecher, 2008; Yager, 1999; Lehman, 1986; Tulloch, 1980).
The questions explored: the selection process of the decision maker, the interpretation of the cost estimate, the calculation of the price bid, the calculation of the minimum price bid, and the possibility of accepting contracts with a high risk of making a loss. The next sub-section describes the participants of this empirical study. 3.4 Interviewees The interviews were carried out over one year (March 2010 to March 2011) during a rebound period after the global economic recession of 2008-2009. Nine interviews were undertaken where the investigated sectors and numbers of interviewees were: defence (1), aerospace (1) and both defence and aerospace (2); engineering (2); research (1); information technology (1); and construction (1). The interviewed companies ranged from large, globally acting providers (with employee numbers varying between about 40,000 and 1,800 employees) to smaller, nationally acting providers (with less than 300 employees). The group of interviewees focused on suppliers of product-centred services with varying levels of complexity. The contract complexity describes its value with a fuzzy distinction of attributes; in other words, there is no distinct value or factor that defines the difference between the two complexity grades. Thus, the service contracts included in this interview study were separated as follows: Low complexity. The number of independent tasks necessary to complete the service (complexity) and the difference between the natures of these tasks (divergence) are low (Skaggs and Youndt, 2004; Shostack, 1987). In other words, the requirements are clear to the involved parties (Bajari et al., 2008). The interviewees of this study named these "small contracts" and characterised them using phrases such as "less than PS3 million", "less than 150.000 Euro", or "simple requirements such as the need of three engineers to do some testing". High complexity.
The number of independent tasks necessary to complete the service (complexity) and the difference between the natures of these tasks (divergence) are high (Skaggs and Youndt, 2004; Shostack, 1987). In other words, at the point of the bid invitation, the service design may be hard to define in detail (Bajari et al., 2008). The interviewees named these "large contracts" and distinguished them with phrases such as "more than PS3 million", "complex tasks such as 18 months contract" or "site management". Figure 2 shows the frequency of answers from the interviewees. Four of the nine interviewees said they held a portfolio of contracts of different complexity, two focused on contracts of low complexity and three interviewees concentrated on contracts of high complexity. 3.5 Methodology for result analysis To analyse the responses, a qualitative approach was applied. This means the results and their implications are discussed verbally to highlight their importance in the bidding and pricing process (Saunders et al., 2012). Nevertheless, to demonstrate the relative importance of specific answers, a quantitative presentation of the results was chosen for specific questions. This is included mainly for demonstration purposes to show where trends may emerge. Due to the limited number of interviewees, a complete statistical analysis of these trends was not possible and is not included in this paper. In general, the basis for the data analysis was the differentiation of the interviewees depending on the size of their service contracts as described in Section 3.4. However, when the interviews showed a relationship between questions or even interview areas, this relationship is emphasised in the data analysis.
For example, a connection was found between the uncertainty included in the cost estimate, the approach to communicating this cost estimate (both questions concerning the input information) and the decision maker's interpretation of this estimate (a question asked in connection with the bidding strategy). This relationship is analysed in one section. To demonstrate where these cross-relationships were found in the interview process, Figure 3 shows the data collection methodology in contrast to the analysis methodology. This section analyses the results of the interview study and presents them in the four main areas, namely uncertainty and risk, bidding context, input information, and bidding strategy. The term bidding strategy refers to the pattern of activities which has an impact on the achievement of bidding goals such as winning a profitable contract. 4.1 Uncertainty and risk The aim of the questions in this section was to clarify the terminology used by the industrialists and thus to guide further discussion of the topic. Differences could be observed between the interviewees in general. Some had corporate-wide definitions for the two terms; others used examples to describe their individual understanding; two interviewees did not use the term uncertainty at all. However, comparing the meaning or interpretation of the definitions, similarities can be found. Out of nine interviewees, seven understood uncertainty as the variation of an aspect of the contract such as the cost estimate. Discussing the term risk, the interviewees agreed that it is connected to an impact. Furthermore, seven interviewees stated that it was connected to a specific event, such as the risk of a red light during a car journey or the loss of a team member whose knowledge is central to the fulfilment of the service. Two interviewees described it as the impact on the project as a whole.
The interviewees' definitions of the terms risk and uncertainty were utilised throughout the interviewing process as a basis for clarity. However, for the purpose of this research, the described definition of uncertainty (see list of definitions) is applied in the further analysis of the interview results; the concept of risk is not discussed further. The interviewees' sources for the identification of uncertainty and their management tools can be classified based on the level of subjectivity. To identify the uncertainty connected to a project, all interviewees identified experience as the main source, which was typically connected to the team that put the bid together (stated by six interviewees) or to the project manager (stated by three interviewees). In addition, more objective identification sources were used, such as a formalised risk analysis process in the form of, e.g. a risk management handbook or databases of previous projects. This category was mentioned by four interviewees. For the identification of uncertainties, the practitioners used either a subjective method on its own or in combination with an objective method. To manage uncertainty, subjective approaches were of less importance than for identification; only five interviewees named this approach. Four interviewees mentioned objective management methods, of whom three also mentioned objective identification methods. Table I depicts the connection between the classification of information sources and management tools for uncertainty. The frequencies highlight the number of times each individual aspect was mentioned and thus do not add up with the combinatorial numbers in the rest of Table I. 4.2 Bidding context Describing the bidding process, the interviewees' answers were categorised into four groups: one-bid process, two-bid process without negotiation, two-bid process with negotiation, and negotiation.
In the one-bid process, the competitors have one opportunity to submit their bid including the bid price and the specifications of the service and the contract. The customer then evaluates these bids and agrees to one of the offers. This includes the assumption that the customer has the ability to understand the technical and commercial details of the bids. In the two-bid process without negotiation, the bidding process is split into two phases. In the first phase, a number of possible suppliers submit their bid, which usually includes their suitability for the service contract (this can be based on an invitation to bid or on open access). This number of competitors is reduced to the most suitable ones, who are then invited to submit their full bid in the second phase. In this second phase, the competitors typically know the identity of each other. Neither of the phases includes negotiation with the customer. In the two-bid process with negotiation, the bidding process is split into two phases similar to those described above. However, the second phase is characterised by a negotiation between the competitors and the customer to clarify important issues and questions. The answers to these questions can be published to all competitors or stay confidential between the two negotiating parties. A bidding process which includes negotiation is characterised by an exchange of large amounts of information concerning the service requirements, the customer's intention, the technical scope or any other issues concerning the contract or bid. The bidding process which the interviewees typically faced in their decision process depended on the size of the contract to be bid for. The definitions as described in Section 3.3 are used to describe the contract size. Figure 4 shows the answer frequency of the usual bidding process connected to the contract size. The values in Figure 4 distinguish between usual and possible bidding processes as indicated by the interviewees.
The numbers do not add up to nine as multiple answers were given by the interviewees managing a contract portfolio. The results in Figure 4 indicate that low-complexity contracts with clear requirements are typically not negotiated, which can be attributed to negotiation being a time- and cost-consuming process (Bajari et al., 2008). In contrast, contracts of high complexity are typically agreed after negotiation, with varying levels of depth in this process. This suggests that the uncertainty that may arise from unclear requirements can usually be reduced by collecting further information from the customer. The parties were willing to commit additional time and costs to this process to ensure that the service outcome best fits the needs of each of them. The interviewees' answers regarding the usual payment methods for service contracts can be divided into three categories: fixed price, cost-based payment and payment on completion. Seven of the nine interviewees stated that (some of) their company's service contracts are paid with fixed prices, which can be based on milestones (mentioned by four) or on a set period of time (stated by three interviewees) such as a monthly payment. Three of the interviewees stated that the payment is based on the costs actually incurred, which can be assessed through, e.g. timesheets. In the category of payment on completion, the service supplier is paid upon completion of the project, which was mentioned by one interviewee. It is to be noted that this company offered research services which usually only have deliverables at the end of the service period in the form of, e.g. a research report. Multiple answers were possible. Based on these results it can be summarised that fixed-price payment seemed to be the standard method for service contracts. The following section describes the input information of a pricing decision.
4.3 Input information The results of the interviewees' answers to the questions of the input information section were analysed in three main sections: cost estimate and uncertainty, customer and competitors. These are described in this section. 4.3.1 Cost estimate and uncertainty The way the cost estimate is communicated during the bidding process was found to be distinguishable into two categories: presented using a table or a graph. The costing information included in a table was found to be presented in two different ways. Four interviewees used a detailed cost breakdown in the form of the necessary work steps, the time and expertise needed for each step and the cost value assigned to the different steps. The other approach mentioned was a three-point estimate, which includes pessimistic, most likely and optimistic assumptions represented in tabular form. The approach used most to present cost estimating information in a graph was also a three-point estimate. Another approach mentioned was an s-curve which displays the cumulative costs over time and usually adopts the form of the letter S (Cioffi, 2005). The specification of the available costing information in practice was found to be influenced by the way uncertainty was included in the estimate. The levels of uncertainty included in the cost estimate were reported as: none, variation in the input data and quantification of qualitative uncertainty. Four interviewees stated that they included no uncertainties in their cost estimate. In the second group, the available information that the cost estimate is based on can vary; for example, to fulfil a specific task, a particular engineer may have taken 4 or 5 hours depending on other variables. The third group includes the assessment of the question of "what can go wrong" and the assignment of a value to this assessment. This occurs subjectively through the experience of the decision maker.
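To make the tabular three-point estimate concrete, the following minimal sketch rolls one up from a cost breakdown. All task names, hours and the flat hourly rate are purely illustrative assumptions; none of these figures come from the study.

```python
# Illustrative sketch (assumed figures): a three-point cost estimate
# presented in tabular form, as described by the interviewees.
TASKS = {
    # task: (pessimistic, most likely, optimistic) hours -- assumed values
    "design review": (10.0, 8.0, 6.0),
    "on-site repair": (5.0, 4.5, 4.0),
    "final report": (3.0, 2.0, 1.5),
}
HOURLY_RATE = 80.0  # assumed flat rate

def three_point_costs(tasks, rate):
    """Return (pessimistic, most likely, optimistic) total costs."""
    totals = [0.0, 0.0, 0.0]
    for hours in tasks.values():
        for i, h in enumerate(hours):
            totals[i] += h * rate
    return tuple(totals)

pess, likely, opt = three_point_costs(TASKS, HOURLY_RATE)
print(f"pessimistic {pess:.0f}, most likely {likely:.0f}, optimistic {opt:.0f}")
```

The same three totals can equally be plotted as the graphical three-point estimate the interviewees mentioned; the tabular and graphical forms carry the same information.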
Furthermore, the interpretation of the cost estimate was found to depend on the way uncertainty was included in the cost estimate, thus it is discussed in this section (this question was asked in connection with the bidding strategy). The answers were grouped as: none, a point estimate and a range estimate. Participants who stated that they included no uncertainties in their cost estimate also said that the cost estimate they received was not interpreted. This means the cost estimate was taken as it was. However, two of those said they kept in mind the possibility that the cost estimate might be reduced because it was based on conservative values. For example, if the historic data showed that a specific task took between 4 and 5 hours, the cost estimate would be based on the 5-hour value. If the final cost estimate was considered too high, these cost values were adjusted in a second iteration of the process. In the second category, the costing information with the related uncertainty was stated to be interpreted as a point estimate, based on, e.g. the 50 or 80 per cent line in the graph. One interviewee stated that this only held when the uncertainty connected to the contract was low; otherwise a cost range was kept. In the third category, the communicated costing information was carried forward in the pricing process as a range estimate, either with its original spread or with a reduced spread. One interviewee stated that the full range was utilised when there was high uncertainty connected to the contract in the form of a high variation in the input data. Table II shows the comparison of the way the cost estimates were presented and interpreted against the uncertainty that is included.
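The "50 or 80 per cent line" interpretation mentioned above can be sketched as follows, assuming (purely for illustration) that the cost range is modelled as a triangular distribution between an optimistic and a pessimistic value with a most likely mode. The function returns the cost value below which a fraction p of outcomes would be expected to fall:

```python
import math

def triangular_quantile(p, low, mode, high):
    """Inverse CDF of a triangular distribution: the cost value below
    which a fraction p of outcomes is expected to fall."""
    fc = (mode - low) / (high - low)  # CDF value at the mode
    if p < fc:
        return low + math.sqrt(p * (high - low) * (mode - low))
    return high - math.sqrt((1 - p) * (high - low) * (high - mode))

# Assumed cost range (illustrative): optimistic 920, most likely 1160,
# pessimistic 1440 -- not figures from the study.
p50 = triangular_quantile(0.50, 920.0, 1160.0, 1440.0)
p80 = triangular_quantile(0.80, 920.0, 1160.0, 1440.0)
print(f"50% line: {p50:.0f}, 80% line: {p80:.0f}")
```

Reading a point estimate off the 80 per cent line rather than the 50 per cent line builds a more conservative cushion into the carried-forward cost, which matches the cautious behaviour several interviewees described.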
The total values do not add up to nine because two interviewees stated the use of multiple methods to communicate their cost estimate; one used both types of graphical display, the other used tables to present the cost breakdown and graphs to present the overall costs. However, the total values give an indication of how often each type of presentation was mentioned and which uncertainty was included. As depicted in Table II, the companies that presented the cost estimate as a breakdown in a table did not include any uncertainty; it was rather based on specific assumptions. These assumptions included the choice of a conservative value when the input data varied, e.g. when a task was recorded to take between 4 and 5 hours, the estimate would be based on 5 hours. Furthermore, when uncertainty was included, the cost estimate was more likely to be presented in graphical form. All interviewees who stated that they used a graphical approach to display their costing information included uncertainty in it. The interviews also assessed which further uncertainties can influence the pricing decision. Two of the three interviewees who stated that their cost estimate did not contain any uncertainties also stated there were no further uncertainties influencing the pricing decision. Both of them, however, stated that they would reduce the cost estimate if the originally derived price bid was considered too high. The other uncertainties influencing the pricing decision were categorised into: customer related uncertainties, competitor related uncertainties, cost estimation uncertainties, economic uncertainties and others. Customer related uncertainties included the customer's previous choices of bidders for similar projects to recognise observable patterns. For example, the customer may always go for the price bid that is 5 per cent below their stated budget limit.
Other factors mentioned included the assessment of questions such as the possible consequences if the customer found a mistake in the bid, the location of the customer to evaluate the possible travel costs, and assumptions about the usage of the serviced product or machine. Another aspect that was mentioned was the level of experience of the customer's personnel involved in the usage of the product or machine that was part of the service contract. Further aspects related to the customer are analysed later in this section. Competitor related uncertainties concerned the identification of the competitors for the particular service contract and an evaluation of their most likely bid. Furthermore, the contract might be let to multiple suppliers who would either focus on different aspects of the service or would have to be able to share the project. Further aspects related to the competitors are analysed later in this section. As discussed in this section, the cost estimate was stated either to include different uncertainties in the form of a spread or to be based on assumptions that may not prove true. Further uncertainties included the possibility of cost reductions through, e.g. a reduction of the overhead costs. Economic uncertainties included factors which may influence the commercial activities such as legal changes, gains that can be achieved with the contract, and the situation of the overall economy, the market place and the specific sector. Other mentioned uncertainties included the bidding company's contract situation and the uncertainty arising from the technical requirements. Most interviewees mentioned more than one of the presented sources of uncertainty with a clear emphasis on one important factor, usually illustrated with an example from the recent past. For this reason, there is no quantitative analysis of the relative importance of each of the mentioned categories.
4.3.2 Customer The available information concerning the customer considered the areas of their bidding strategy, the past relationship, their future needs and whether these aspects influence the decision maker of the bidding company. For these interviews, the customer's bidding strategy was addressed through the aspects of their budget and their evaluation criteria regarding the bids. The interviewees' answers indicate two different categories: either these strategic aspects are communicated with the service requirements or they can be assessed through a "getting to know the client" process in which usually a commercial team is involved. Of the nine interviewees, four stated that the customer's bidding strategy was communicated, two said it could be assessed, and three that it varied between these two categories depending on the kind of customer (influenced by aspects such as whether they had worked with them before and what the customer's preferred bidding process was). The past relationship between the bidding company and the customer was described by all interviewees as an important source of information. An ideal bidding situation would involve a long past relationship where trust had been built up and the parties knew each other. When this is not the case, the bidding company may still have previous experience with the customer to build up knowledge about them. In cases where there is no previous experience, the bidding company has to rely on the information communicated by the customer themselves or published in, e.g. the press. The assessment of the customer's possible future needs caused different reactions among the interviewees. Seven of the nine interviewees stated that this was one aspect that they assessed during the process of compiling the bid and included it if appropriate. These interviewees stated the importance of possible follow-up work, future relations and the length of the service contract to demonstrate their suitability for, e.g.
the next five years. The other two interviewees highlighted that the bid only covered the service requirements and that a consideration of the customer's possible future needs was highly speculative and thus not included in the bid-compiling process. Thus, for a specific competitive bidding situation, the customer's future needs may play an important role in the bidding process and would need to be considered in a conceptual framework of the influencing uncertainties at the bidding stage. Regarding the consideration of the available information about the customer, all interviewees stated that it was of importance for the decision maker and the compiling of the bid. Five said that all the available information is considered, two described the customer and their bidding strategy as the most important influence on the bid, and two stated that there were other more important aspects such as the contract costs. This means that the customer can constitute a central factor in a bidding decision; however, its relative importance depends on the particular service contract. 4.3.3 Competitors The interviewees were asked questions which aimed at determining the following information regarding their competitors, namely: their identity, their cost estimate, their available technology or knowledge, and which of these aspects would be considered in the pricing decision. As indicated in the discussion in Section 4.2, the identity of the competitors may be known depending on the bidding process. If this is not the case, the bidding company may either have a "pretty good" idea of their competitors, based on their experience of who is capable of dealing with the requirements, or may not be able to identify them at all, particularly when trying to bid in new market segments where their experience is limited. For the purpose of this analysis, the three possibilities are referred to as the competitors' identity being known, knowable or not known.
The competitors' cost estimates are not usually known to the bidding company, which was confirmed by all interviewees. However, there are different levels of speculation. Based on previous experiences, a "ballpark" or top-level figure may be deduced which can be formulated as an absolute value or assessed in relation to the bidding company's costs. Another possibility is the knowledge of cost details such as salaries based on information obtained from previous employees of the competitor. In other cases, particularly when dealing with new or unknown competitors, the cost estimates may be neither known nor deducible. The third investigated aspect concerned the information about the competitors' available technology or level of knowledge which may give them a competitive advantage. The answers fell into three categories. A common answer (given by six out of nine interviewees) was that it is known as the competitors advertise themselves on, e.g. the internet and their homepages or have other publicity in, e.g. newspapers. Two interviewees stated that this aspect of the competitors is knowable due to the decision maker's experience in the area. In other cases, particularly when the company bids in a new market segment, this aspect was stated by two interviewees to be not known and not knowable. Table III shows the frequency of the interviewees' answers for their knowledge of the competitors' cost estimates and their available technology or knowledge plotted against the competitors' identity. The numbers do not sum to nine because four interviewees stated multiple answers regarding the competitors' identity, which can depend on the particular service contract. Hence, their answers also varied for the other aspects. The results shown in Table III give an indication of the availability of information about the competitors and thus the level of uncertainty connected to them.
In cases where the competitors' identity is known or knowable, the bidding company also had a reasonable level of knowledge about the other aspects. In other words, the bidding company is not ignorant about its competitors and their possible bidding strategies unless it is bidding in a new market sector. Investigating the interviewees' consideration of these aspects during the decision process, six replied that they used all the information that was available to them and two stated that they considered the available information but that there were other more important factors such as the customer. One interviewee said that the information regarding the competitors is not considered in the pricing-decision process. This confirms the results of the second empirical study, namely that competition is one of the influences on a pricing decision. Furthermore, most of the interviewed companies (seven out of nine) stated that it was one of the most important factors. Similarly, the availability of the original service and contract requirements was assessed in the interviews as they would have been communicated by the customer at the beginning of the bidding process. They were stated by all interviewees to be available and included in the decision process. The following section describes the interviewees' answers regarding their bidding strategy. 4.4 Bidding strategy The interviewees' answers to the questions concerning the bidding strategy were analysed in three main sections: the choice of the decision maker, the method to obtain the price bid and the acceptance of a contract with a high risk of making a loss. These are described in this section. 4.4.1 Choice of the decision maker As the bidding strategy can be very subjective, the interview assessed how the decision maker was chosen.
Most of the interviewees (seven out of nine) highlighted that the decision was made by a team; two stated that a team was involved in the bid compilation and the final decision was made by the team manager. The team decision was connected to contracts of both low and high complexity; four of the seven interviewees managed contract portfolios, one dealt with contracts of low complexity and two focused on ones of high complexity. Thus, it can be derived that the assignment of a team to the decision process is not correlated with the contract size. This means that team dynamics may influence the decision outcome and that the uncertainty caused by human behaviour connected to one individual decision maker is of minor importance in this context. The decision makers were chosen based on different criteria: experience, delegation and completed courses. Multiple replies were possible. In the first group, the decision maker(s) would be chosen based on their experience with bidding in general, bidding for similar contracts or managing (similar) service contracts. In the second group, the decision maker(s) had to have a certain level of authority to make the bidding decision. The third category referred to courses offered in the companies on, e.g. writing proposals or negotiating. The most important criterion for choosing a decision maker was experience, which was mentioned by six of nine interviewees. Of similar importance (mentioned by five interviewees), and connected to experience, was delegation within the company. The completion of courses was mentioned by two interviewees; both highlighted that this was only a supportive aspect: the decision maker(s) would not be chosen based on the courses they had completed.
4.4.2 Obtaining the price bid The calculation of the price bid, in other words the assessment of the monetary values to be included in the bid, can be categorised into two different approaches: "cost + profit margin = price" and a price-focused process. The "equation" of the first group is a simplified depiction of the approach most of the interviewees (seven out of nine) utilised in their bidding process. A profit margin, which can include a contingency, an administration margin and an allowance for inflation, is added to the interpretation of the cost estimate. Two of the interviewees stated that their process was focused on the price and the costs were not considered separately from that. This means that the price is considered in different steps within the bidding company regarding either its suitability to the customer's stated budget (one of the interviewees) or a strategic evaluation of the market situation and the customer needs (the other interviewee). Following this question was the assessment of the minimum price bid below which the bidder would not accept the contract. The interviewees agreed that there was no usual process to calculate this price before the tendering or negotiation process. However, the valuation of the minimum price can be categorised as: "cost + minimum profit", available alternatives and the potential of follow-on work. Six of the nine interviewees stated that they were prepared to reduce their profit in the bidding situation (first group). This includes the situation of no profit but excludes a deliberate loss. One of the interviewees of that category stated that the price bid communicated to the customer would be the minimum acceptable price. Two of the interviewees said that the minimum price varied according to the available alternatives in the economic situation at the time of bidding (second group). This comparison could include not achieving an agreement.
In the third group, the minimum price was dependent on strategic aims such as the possibility of receiving future contracts with this customer. Two of the interviewees belonged to this category, one of whom stated it in addition to the best available alternative. 4.4.3 Acceptance of a contract with a high risk of making a loss To assess other strategic aspects that may influence the bidding decision, the interviewees were asked if they had agreed to contracts which deliberately made a loss. Of the nine interviewees, five stated that they would not accept such a contract and four said they had done so. The answers to the question can be categorised as depicted in Table IV. Table IV shows that there was just one reason mentioned by the interviewees for refusing a contract with a high probability of making a loss, which was typically connected to company policy or the usual conduct in the market sector. However, for acceptance, the answers could be divided into three categories, namely the bidding company's long-term gains, the possibility of eliminating competition and the profile of the customer as a client. The interviewees who stated that they would accept such a contract usually mentioned multiple aims across these categories. The pricing process used by most of the interviewees was cost based, which confirms the assumptions of previous studies (Avlonitis and Indounas, 2005). Furthermore, a connection could be observed between the complexity of the contract and the bidding process which determines the level of negotiation between customer and possible supplier. It was found that the more complex a service contract, the closer the two parties work together throughout the bidding process. This confirms the research of Bajari et al. (2008). However, a connection between the payment method and the bidding process as described by Bajari et al. (2008) was not confirmed in this study. The cost estimate usually included uncertainty in the form of a cost range.
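The "cost + profit margin = price" approach described in Section 4.4.2, applied to such a cost range, can be sketched as follows. All figures and margin percentages are assumptions for illustration only; the interviewees did not report specific values.

```python
# Sketch (assumed figures) of the "cost + profit margin = price"
# approach applied to a cost range, as described in Section 4.4.2.
def price_from_cost(cost, profit_margin=0.10, contingency=0.05,
                    admin=0.03, inflation=0.02):
    """Add the margin components mentioned by the interviewees
    (profit, contingency, administration, inflation) to a cost value.
    All default percentages are hypothetical."""
    return cost * (1 + profit_margin + contingency + admin + inflation)

cost_range = (920.0, 1440.0)  # assumed optimistic..pessimistic estimate
bid_range = tuple(price_from_cost(c) for c in cost_range)

# "Cost + minimum profit": no profit but no deliberate loss, as six of
# the nine interviewees described their minimum acceptable price.
minimum_price = price_from_cost(cost_range[0], profit_margin=0.0)
print(bid_range, minimum_price)
```

The decision maker still has to pick one point from the resulting bid range, which is where the subjective consideration of customer and competitor uncertainty enters.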
If uncertainty was not explicitly included in the cost estimate, it was usually based on specific assumptions which would be reassessed during the following pricing process. The uncertainty in a pricing decision was usually considered in the process (in one way or another). Where possible this uncertainty was reduced; for example, if the service requirements were unclear or vague, the bidding company usually had the opportunity to receive further information from the customer through negotiation. Focusing on certain sources of uncertainty such as the competitors and the customer, the bidding company was usually not ignorant about these factors and their possible influence on the decision outcome. The identity of the competitors was usually known to the bidding company or could be assessed during the process of compiling the bid. This means that the competitors' profile and available resources can be taken into account in the process. Similarly, the customer's bidding strategy was either known or assessable. This means that the customer's evaluation of the service price and quality as well as other criteria is, or can be, known at least vaguely. Particularly customers with whom the bidding company had a previous connection and had built up trust (Johnson and Grayson, 2005) formed an important source of information and reduced the level of uncertainty. The presented interview study found that the pricing decision under uncertainty was based on the subjective evaluation of the decision maker(s) regarding the consideration of different uncertainties. As indicated by the literature in uncertainty research (Samson et al., 2009; Thunnissen, 2003), the terms uncertainty and risk are hard to define and distinguish comprehensively. This was confirmed by the interview study; some interviewees used examples to overcome this difficulty.
For the identification of uncertainties that may influence the considered service contracts, subjective methods were prominent, while for their management subjective methods were used but often supported by objective methods such as Monte Carlo modelling. This suggests that there is a need for models to support the decision process in practice. Another aspect to overcome the uncertainty arising from individual assessment was the involvement of a decision team. Limitations of this empirical study include the small set of participants. The results are therefore to be understood as indicative rather than as a comprehensive characterisation of the current bidding situation for service contracts. With this purpose, they identify common patterns of approaching the decision problem, aspects and opportunities for further improvement and possibilities for offering support to the decision maker. This paper presented an interview study with industrialists from manufacturing companies facing the change of market structures towards servitisation. The study gave insights into the typically available information. Table V shows a summary of the findings. The findings from the interview study described in this paper show the influences and considerations during the decision-making process at the competitive bidding stage for service contracts. This forms a first step towards a more elaborate understanding of the processes involved in practice and towards the development of support for industry to make more informed decisions and secure the profitability of their service contracts. In addition to the aim of the presented interview study, namely the identification of the available information for manufacturing companies at the competitive bidding stage for service contracts, the study delivered further results.
For example, it was found that costing information is typically communicated within the company either in tabular form as a cost breakdown or in graphical form as a three-point estimate. Recent research found that these approaches are suboptimal in raising the decision maker's awareness of the uncertainty connected to the cost forecast (Kreye et al., 2012). Thus, further research is necessary to support industry in adopting optimal approaches for the communication of the uncertainty associated with the decision-making problem. The findings described in this paper can be used in future research to develop an uncertainty model for competitive bidding. This uncertainty model can include the information connected to the customer and competitors to determine the manufacturing company's probability of winning the service contract and its probability of making a profit. This information supports the decision makers at the bidding stage to make a more informed decision, evaluate the level of risk of their pricing decision and, thus, ensure the long-term profitability and sustainability of their business.
Figure 1 Example of a cost estimate and the possible price bid
Figure 2 Interviewees' positioning regarding the size of their service contracts
Figure 3 Comparison of data collection and analysis methodologies
Figure 4 Characterisation of bidding process regarding the type of contract to be bid for
Table I Interviewees' responses regarding sources of information and management tools of uncertainty in the decision process at the bidding stage
Table II Appearance of cost estimate in dependence of included uncertainty
Table III Available information about the competitors at the bidding stage
Table IV Interviewees' reasoning behind refusing or accepting a contract with high probability of making loss
Table V Summary of research findings of interview study
One of the main findings was that, despite the novelty of the process, the decision makers at the competitive bidding stage have an understanding of the involved uncertainties. In particular, the uncertainty arising from the customer as the user of the product and evaluator of the competitive bids in addition to the uncertainty connected to the competitors were identified as the main influences on the pricing decision.
[SECTION: Value] Sustainable production and consumption have become more important internationally, which has led to the transformation of market structures and competitive situations in the direction of servitisation (Baines et al., 2011; Bandinelli and Gamberi, 2011). For a manufacturing company the shift towards being a service provider is characterised by a high level of uncertainty about the future strategic development of the company caused by, e.g. inadequate knowledge and information (Song et al., 2007). For this research, a service is defined as an activity or a process which is aimed at the change of the state of the service issue, such as the repair of a machine or the supply of flying hours for an aircraft (Araujo and Spring, 2006; Gadrey, 2000). In this context, the supply of product-centred services becomes more important. These services tend to be long-lived. For example, Babcock (2012) announced their support contract for the Australian Anzac class surface ship fleet until 2023. Another example is Rolls-Royce's Flotilla Support Programme for their submarines until 2017 (Rolls-Royce, 2011). The shift to being a supplier of these services can cause many uncertainties, especially for companies that have previously focused on the production and manufacturing of products. The delivery of a service is usually embedded in a contract which is an agreement between the parties about the technical details of the service and is intended to be legally binding (Nellore, 2001; Rowley, 1997). Service contracts are often allocated through the process of competitive bidding where the competing suppliers communicate their service specifications and price bid to the customer who then evaluates the bids (Rexfelt and Ornas, 2009; Bubshait and Almohawis, 1994).
This bidding process can include different levels of negotiation with the customer, which can vary from an auction-type bid (Friedman, 1956; Neugebauer and Pezanis-Christou, 2007) to an elaborate information exchange process (Lehman, 1986; Bajari et al., 2008). These varying levels of negotiation leave the bidding supplier with different levels of uncertainty influencing the pricing decision process. The pricing approach that is applied most frequently in practice is the cost-based pricing process, which puts the starting point of the research at the estimation of the costs of the service contract (Hytonen, 2005). Cost estimation is concerned with predicting the future; thus, uncertainty is inherent to the process (Goh et al., 2010; Christoffersen, 1998). This uncertainty can be included in the cost estimate in different ways; one possibility is the range or density forecast which consists of a range of possible future values (Tay and Wallis, 2000). Included in the range forecast can be the minimum, maximum and average value connected to different assumptions about the future (Giordani and Soderlind, 2003). An exemplary cost estimate is shown in Figure 1. At the bidding stage, the decision maker has to select one point within the given range as a price bid to communicate to the customer; one example is marked in Figure 1. Choosing a price that is too high may result in being underbid by competitors and, thus, the potential loss of the business (Lucas and Kirillova, 2011; Chapman et al., 2000). A price that is too low may influence the customer's perception of the quality of the service and thus lead to rejection of the bid (Freedman, 1988), or may result in a failure to recover the costs and profit of the service (Swinney and Netessine, 2009; Wang et al., 2007). For the pricing decision at the bidding stage the decision maker has to: understand the uncertainty in the cost estimate; and understand other uncertainties that influence the bidding success and the fulfilment of the service contract.
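The trade-off described above, where a higher price raises the margin but lowers the chance of winning, can be sketched in the spirit of Friedman-style expected-profit bidding models. All numbers and the linear win-probability function below are assumptions for illustration, not data or a model from this paper:

```python
# Hypothetical sketch: expected profit = P(win) * (price - expected cost),
# with an assumed win probability that falls linearly from 1 at a zero
# price to 0 at the customer's (assumed) budget limit.
def expected_profit(price, expected_cost, budget):
    p_win = max(0.0, min(1.0, 1.0 - price / budget))
    return p_win * (price - expected_cost)

EXPECTED_COST = 1160.0  # assumed centre of the cost range forecast
BUDGET = 2000.0         # assumed customer budget limit

# Search a grid of candidate price bids above the expected cost.
best_price = max(range(1160, 2001, 10),
                 key=lambda p: expected_profit(p, EXPECTED_COST, BUDGET))
print(best_price)
```

This is only a caricature of the decision: in practice the win probability is unknown and, as the interview results show, is assessed subjectively from information about the customer and the competitors.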
The aim of this paper is to identify the availability and use of information at the competitive bidding stage. For this, an interview study with industrialists from different sectors was conducted. The related literature on contract bidding, including the bidding process, contract conditions and typical payment methods, is described in Section 2. Sections 3 and 4 describe the interview study and its results. Most literature describing theory on bidding decisions focuses on auction-type processes (Cai et al., 2009; Schoenherr and Mabert, 2008; Neugebauer and Pezanis-Christou, 2007). This means that the described approaches assume a constrained bidding environment and services of low complexity and short duration (Schoenherr and Mabert, 2008). The models and theories described therefore have limited applicability to the research described in this paper, which focuses on services of high complexity that are typically embedded in contracts of long duration. Literature describing the decision-making processes at the competitive bidding stage typically focuses on products (Li and Graves, 2012; Bhaskaran and Ramachandran, 2011; Sosic, 2011; Li and Wang, 2010). In particular, the pricing of products has been highlighted as being influenced by uncertainty (Sosic, 2011). For example, customers can be expected to evaluate the competitive bids according to an individual list of preferences (Chaneton and Vulcano, 2011; Guo et al., 2009). Reasons for this more elaborate body of literature in product-focused decision making may be the longer history of the business process in industry and the issues connected to it. However, approaches describing the pricing of services can be found in the literature. One example is described by Guo et al. (2009), who model the strategic decision in a single-supplier context.
While the approach offers valuable insights into the decision-making processes, it has two limitations for application to servitisation: it does not include the existence of competition and its influence on the bidding strategy; and it describes services of low complexity such as hotel accommodation or restaurant dining. It can be summarised that the current literature offers limited insights into the strategic decision-making processes at the competitive bidding stage, particularly from an industrial viewpoint. In particular, it does not consider the information that is available and the strategic process of its consideration in industry. Research that fails to consider these aspects will fail to accurately represent the decision-making process at the competitive bidding stage and will not be adopted by industry. This paper aims at closing this gap by introducing an exploratory study which describes the availability of information at the competitive bidding stage and its strategic consideration in practice. The aim of this study was to explore the availability of relevant information in the context of competitive bidding for a service contract on the supplier's side and to describe the subjective processes of the decision maker at the bidding stage. To examine this aim, an interview study was conducted. The following sections describe the applied method of this study in more detail. First, the interview procedure is described; then the design of the interview with its questions is explained; and last, the interviewees and the sectors they work in are described. 3.1 Interview procedure A standardised interview was carried out, meaning the wording and sequence of questions was determined in advance; thus, each interviewee was asked the same questions in the same order (Teddlie and Tashakkori, 2009).
This ensured that all topics were covered in each interview, allowing a comparison between the answers of the different interviewees (Patton, 2002). The questions were open-ended, i.e. no predetermined answers were given (or suggested) and the interviewees were encouraged to describe the processes in their own words. This reduced possible bias in the replies. The interviews were not recorded as most of the interviewees were from organisations in the defence sector or were simply not comfortable with being recorded. The results are based on the notes the researcher took during the interview process. However, to ensure the correctness and limit the misinterpretation of the given information, the responses were returned to the interviewees after the interview for confirmation and validation, as explained in Robinson et al. (2007). 3.2 Questionnaire design The questionnaire design was based both on previous empirical work and on the literature in the field. The empirical work comprised two experimental studies undertaken with a total of 72 cost engineers and bidding decision makers from practice. These studies focused on the different influences on the bidding decision-making process, including the approach of displaying the cost estimate (Kreye et al., 2012) and the influence of the existence of competition on the decision outcome and rationale (Kreye, 2011). The participants were given a set of questionnaires which consisted of a pricing scenario and various questions connected to their decision-making process for this hypothetical example. From the answers in the experimental studies it became clear that industry did not have a universal set of definitions for the terminology. Thus, it was decided that at the beginning of the interview, the participant's specific definitions had to be clarified and established. The literature highlights the influence of contextual issues on the pricing decision.
These are, for example, the contract situation within the company (Monroe, 2002; Chapman et al., 2000), the bidding process (Lehman, 1986) and the payment process (Tseng et al., 2009). Thus, the decision context was the focus of the second area of interview questions. In the experimental studies preceding the interview, one of the questions focused on the further influences on the decision-making process. The answers to this question could be categorised into market uncertainties (which included developments such as inflation, economic changes and technology development), cost estimation uncertainty, product uncertainties (including performance of the machine and risk of failures), competition uncertainty (manifesting itself in the risk of losing the contract) and customer uncertainties. These five main influences were used as a basis for the interview questionnaire, in particular to establish the amount of information typically available about these issues. As bidding decision making is highly influenced by strategic considerations (Harrington, 2009; Afuah, 2009), the fourth area of interview questions focused on the bidding strategy. Based on the literature in the field, it was found that different influences are of importance. For example, due to the highly subjective nature of decision making, the choice of the bidding decision maker has been highlighted as an important factor (Tulloch, 1980). Further influences include the decision maker's interpretation of the cost estimate based on his/her experience and assumptions (Kreye et al., 2012) and the calculation of the price bid (Hytonen, 2005; Monroe, 2002; Lehman, 1986). Thus, it can be summarised that the design of the interview questionnaire was based on an iterative process of combining results of preceding empirical studies with industry and the literature in the field. Based on this process, the interview questionnaire was compiled; it is described in the following section.
3.3 Interview questionnaire The questions covered four main areas: uncertainty and risk, bidding context, input information for the pricing decision, and bidding strategy. Questions in the first main area established the meanings the practitioners applied to the terms risk and uncertainty and how these are considered and identified in the pricing process. These established a common ground for the terminology in comparison to the definitions applied in the presented research and formed the basis for later questions. The second main area, the bidding context, established background information that can potentially influence the bidding strategy. The issues investigated were the current contract situation of the company (Monroe, 2002; Chapman et al., 2000), the usual bidding process for service contracts (Lehman, 1986) and the typical payment method once the contract was awarded (Tseng et al., 2009). The last two areas formed the main focus of the interviews. The area of input information for the pricing decision examined the form and type of information normally used in the decision process and possible assumptions the decision maker may form (Goh et al., 2010; Bolton et al., 2006; Fargier and Sabbadin, 2005; Rubinstein, 1998; Loewenstein and Prelec, 1993; Lehman, 1986). The questions in this area examined: the form of the cost estimate, the uncertainties included in the cost estimate, possible further uncertainties that the decision maker considers in the pricing process, the available information about the competitors and the customer, and the amount of input information that is considered in the decision-making process. The area of bidding strategy established the subjective aspects of decision making in the competitive bidding situation, as these may influence the outcome of the decision process (Kreye et al., 2012; Stecher, 2008; Yager, 1999; Lehman, 1986; Tulloch, 1980).
The questions explored: the selection process of the decision maker, the interpretation of the cost estimate, the calculation of the price bid, the calculation of the minimum price bid, and the possibility of accepting contracts with a high risk of making a loss. The next sub-section describes the participants of this empirical study. 3.4 Interviewees The interviews were carried out over one year (March 2010 to March 2011), during a rebound period after the global economic recession of 2008-2009. Nine interviews were undertaken; the investigated sectors and numbers of interviewees were: defence (1), aerospace (1) and both defence and aerospace (2); engineering (2); research (1); information technology (1); and construction (1). The interviewed companies ranged from large, globally acting providers (with between about 1,800 and 40,000 employees) to smaller, nationally acting providers (with fewer than 300 employees). The group of interviewees focused on the suppliers of product-centred services with varying levels of complexity. Contract complexity is described here through a fuzzy distinction of attributes; in other words, there is no single value or factor that defines the difference between the two complexity grades. Thus, the service contracts included in this interview study were separated as follows: Low complexity. The number of independent tasks necessary to complete the service, and the divergence (the difference between the natures of these tasks), are low (Skaggs and Youndt, 2004; Shostack, 1987). In other words, the requirements are clear to the involved parties (Bajari et al., 2008). The interviewees of this study named these "small contracts" and characterised them using phrases such as "less than £3 million", "less than €150,000", or "simple requirements such as the need of three engineers to do some testing". High complexity.
The number of independent tasks necessary to complete the service, and the divergence (the difference between the natures of these tasks), are high (Skaggs and Youndt, 2004; Shostack, 1987). In other words, at the point of the bid invitation, the service design may be hard to define in detail (Bajari et al., 2008). The interviewees named these "large contracts" and distinguished them with phrases such as "more than £3 million", "complex tasks such as an 18-month contract" or "site management". Figure 2 shows the frequency of answers from the interviewees. Four of the nine interviewees said they held a portfolio of contracts of different complexity, two focused on contracts of low complexity and three concentrated on contracts of high complexity. 3.5 Methodology for result analysis To analyse the responses, a qualitative approach was applied. This means the results and their implications are discussed verbally to highlight their importance in the bidding and pricing process (Saunders et al., 2012). Nevertheless, to demonstrate the relative importance of specific answers, a quantitative presentation was chosen for specific questions. This is included mainly for demonstration purposes, to show where trends may emerge. Due to the limited number of interviewees, a complete statistical analysis of these trends was not possible and is not included in this paper. In general, the basis for the data analysis was the differentiation of the interviewees depending on the size of their service contracts, as described in Section 3.4. However, when the interviews showed a relationship between questions or even interview areas, this relationship is emphasised in the data analysis.
For example, a connection was found between the uncertainty included in the cost estimate, the approach to communicating this cost estimate (both questions concerning the input information) and the decision maker's interpretation of this estimate (a question asked in connection with the bidding strategy). This relationship is analysed in one section. To demonstrate where these cross-relationships were found in the interview process, Figure 3 shows the data collection methodology in contrast to the analysis methodology. This section analyses the results of the interview study and presents them in the four main areas, namely uncertainty and risk, bidding context, input information, and bidding strategy. The term bidding strategy refers to the pattern of activities which has an impact on the achievement of bidding goals such as winning a profitable contract. 4.1 Uncertainty and risk The aim of the questions in this section was to clarify the terminology used by the industrialists and thus to guide further discussion of the topic. In general, differences could be observed between the interviewees. Some had corporate-wide definitions for the two terms; others used examples to describe their individual understanding; two interviewees did not use the term uncertainty at all. However, comparing the meaning or interpretation of the definitions, similarities can be found. Of the nine interviewees, seven understood uncertainty as the variation of an aspect of the contract, such as the cost estimate. Discussing the term risk, the interviewees agreed that it is connected to an impact. Furthermore, seven interviewees stated that it was connected to a specific event, such as the risk of a red light during a car journey or the loss of a team member whose knowledge is central to the fulfilment of the service. Two interviewees described it as the impact on the project as a whole.
The interviewees' definitions of the terms risk and uncertainty were used throughout the interview process as a basis for clarity. However, for the purpose of this research, the described definition of uncertainty (see list of definitions) is applied in the further analysis of the interview results; the concept of risk is not discussed further. The interviewees' sources of identification and management tools for uncertainty can be classified based on the level of subjectivity. To identify the uncertainty connected to a project, all interviewees named experience as the main source, typically connected to the team that put the bid together (stated by six interviewees) or to the project manager (stated by three interviewees). In addition, more objective identification sources were used, such as a formalised risk analysis process in the form of, e.g. a risk management handbook or databases of previous projects. This category was mentioned by four interviewees. For the identification of uncertainties, the practitioners used either a subjective method on its own or in combination with an objective method. To manage uncertainty, subjective approaches were of less importance than for identification; only five interviewees named this approach. Four interviewees mentioned objective management methods, of whom three also mentioned objective identification methods. Table I depicts the connection between the classification of information sources and management tools for uncertainty. The frequencies highlight the number of times each individual aspect was mentioned and thus do not tally with the combinatorial numbers in the rest of Table I. 4.2 Bidding context Describing the bidding process, the interviewees' answers were categorised into four groups: one-bid process, two-bid process without negotiation, two-bid process with negotiation, and negotiation.
In the one-bid process, the competitors have one opportunity to submit their bid, including the bid price and the specifications of the service and the contract. The customer then evaluates these bids and agrees to one of the offers. This includes the assumption that the customer has the ability to understand the technical and commercial details of the bids. In the two-bid process without negotiation, the bidding process is split into two phases. In the first phase, a number of possible suppliers submit their bid, which usually includes their suitability for the service contract (this can be based on an invitation to bid or on open access). This number of competitors is reduced to the most suitable ones, who are then invited to submit their full bid in the second phase. In this second phase, the competitors typically know the identity of each other. Neither phase includes negotiation with the customer. In the two-bid process with negotiation, the bidding process is split into two phases similar to those described above. However, the second phase is characterised by a negotiation between the competitors and the customer to clarify important issues and questions. The answers to these questions can be published to all competitors or stay confidential between the two negotiating parties. A bidding process which includes negotiation is characterised by an exchange of large amounts of information concerning the service requirements, the customer's intention, the technical scope or any other issues concerning the contract or bid. The bidding process which the interviewees typically faced in their decision process depended on the size of the contract to be bid for. The definitions as described in Section 3.4 are used to describe the contract size. Figure 4 shows the answer frequency of the usual bidding process connected to the contract size. The values in Figure 4 distinguish between usual and possible bidding processes as indicated by the interviewees.
The numbers do not add up to nine as multiple answers were given by the interviewees managing a contract portfolio. The results of Figure 4 indicate that low-complexity contracts with clear requirements are typically not negotiated, which can be attributed to negotiation being a time- and cost-consuming process (Bajari et al., 2008). In contrast, contracts of high complexity are typically agreed after negotiation, with varying levels of depth of this process. This suggests that the uncertainty that may arise from unclear requirements can usually be reduced by collecting further information from the customer. The parties were willing to commit additional time and costs to this process to ensure that the service outcome best fits the needs of each of them. The interviewees' answers regarding the usual payment methods for service contracts can be divided into three categories: fixed price, cost-based payment and payment on completion. Seven of the nine interviewees stated that (some of) their company's service contracts are paid with fixed prices, which can be based on milestones (mentioned by four) or on a set period of time (stated by three interviewees), such as a monthly payment. Three of the interviewees stated that the payment is based on the costs actually incurred, which can be assessed through, e.g. timesheets. In the category of payment on completion, the service supplier is paid upon completion of the project; this was mentioned by one interviewee. It is to be noted that this company offered research services, which usually only have deliverables at the end of the service period in the form of, e.g. a research report. Multiple answers were possible. Based on these results it can be summarised that fixed price payment seemed to be the standard method for service contracts. The following section describes the input information of a pricing decision.
4.3 Input information The interviewees' answers to the questions of the input information section were analysed in three main sections: cost estimate and uncertainty, customer, and competitors. These are described in this section. 4.3.1 Cost estimate and uncertainty The way the cost estimate is communicated during the bidding process was found to fall into two categories: presented using a table or a graph. The costing information included in a table was found to take two different forms. Four interviewees used a detailed cost breakdown in the form of the necessary work steps, the time and expertise needed for each step and the cost value assigned to the different steps. The other approach was a three-point estimate, which includes pessimistic, most likely and optimistic assumptions represented in tabular form. The most frequently used approach to include cost estimating information in a graph was also a three-point estimate. Another approach mentioned was an s-curve, which displays the cumulative costs over time and usually adopts the form of the letter S (Cioffi, 2005). The specification of the available costing information in practice was found to be influenced by the way uncertainty was included in the estimate. The levels of uncertainty included in the cost estimate were reported as: none, variation in the input data, and quantification of qualitative uncertainty. Four interviewees stated that they included no uncertainties in their cost estimate. In the second group, the input data that the cost estimate is based on can vary; for example, to fulfil a specific task, a particular engineer may have taken 4 or 5 hours depending on other variables. The third group includes the assessment of the question "what can go wrong" and connecting a value to this assessment. This occurs subjectively through the experience of the decision maker.
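The two graphical forms mentioned above, the three-point estimate and the s-curve of cumulative costs, can be illustrated with a short sketch; all spend figures and adjustment factors are invented for illustration and do not come from the interview data:

```python
from itertools import accumulate

# Invented monthly spend profile for a 12-month service contract (in k GBP):
# low at mobilisation, peaking mid-contract, tailing off towards handover.
monthly_costs = [20, 40, 80, 120, 150, 160, 150, 120, 80, 50, 20, 10]

# The s-curve plots cumulative cost over time and typically takes the
# shape of the letter S (Cioffi, 2005):
s_curve = list(accumulate(monthly_costs))

# A three-point estimate attaches optimistic, most likely and pessimistic
# totals to the same contract; the 0.9 and 1.25 factors are invented:
total = s_curve[-1]
three_point = {
    "optimistic": 0.9 * total,
    "most_likely": float(total),
    "pessimistic": 1.25 * total,
}
```

In a tabular presentation the three totals would be listed directly; in a graphical presentation the same three values would be plotted as the spread around the most likely cost.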
Furthermore, the interpretation of the cost estimate was found to be dependent on the way uncertainty was included in it; thus it is discussed in this section (this question was asked in connection with the bidding strategy). The answers were grouped as: none, a point estimate and a range estimate. Participants who stated that they included no uncertainties in their cost estimate also said that the cost estimate they received was not interpreted. This means the cost estimate was taken as it was. However, two of those said that the possibility was kept in mind that the cost estimate might be reduced, as it was based on conservative values. For example, if the historic data showed that a specific task took between 4 and 5 hours, the cost estimate would be based on the 5-hour value. If the final cost estimate was considered too high, these cost values would be adjusted in a second iteration of the process. In the second category, the costing information with the related uncertainty was stated to be interpreted as a point estimate, based on, e.g. the 50 or 80 per cent line in the graph. One interviewee stated that this only applied when the uncertainty connected to the contract was low; otherwise a cost range was kept. In the third category, the communicated costing information was carried forward in the pricing process as a range estimate, either with its original spread or with a reduced spread. One interviewee stated that the full range was utilised when there was high uncertainty connected to the contract in the form of a high variation in the input data. Table II shows the comparison of the way the cost estimates were presented and interpreted against the uncertainty that is included.
The total values do not add up to nine because two interviewees stated the use of multiple methods to communicate their cost estimate; one used both types of graphical display, the other used tables to present the cost breakdown and graphs to present the overall costs. However, the total values give an indication of how often each type of presentation was mentioned and which uncertainty is included. As depicted in Table II, the companies that presented the cost estimate as a breakdown in a table did not include any uncertainty; it was rather based on specific assumptions. These assumptions included the choice of a conservative value when the input data varied, e.g. when a task was recorded to take between 4 and 5 hours, the estimate would be based on 5 hours. Furthermore, when uncertainty was included, the cost estimate was more likely to be presented in graphical form. All interviewees who stated that they used a graphical approach to display their costing information included uncertainty in it. The interviews also assessed which further uncertainties can influence the pricing decision. Two of the three interviewees who stated that their cost estimate did not contain any uncertainties also stated there were no further uncertainties influencing the pricing decision. Both of them, however, stated that they would reduce the cost estimate if the originally derived price bid was considered too high. The other uncertainties influencing the pricing decision were categorised into: customer-related uncertainties, competitor-related uncertainties, cost estimation uncertainties, economic uncertainties and others. Customer-related uncertainties included the customer's previous choices of bidders for similar projects, used to recognise observable patterns. For example, the customer may always go for the price bid that is 5 per cent below their stated budget limit.
Other factors mentioned included the assessment of questions such as the possible consequences if the customer found a mistake in the bid, the location of the customer (to evaluate the possible travel costs), and assumptions about the usage of the serviced product or machine. Another aspect mentioned was the level of experience of the customer's personnel involved in the usage of the product or machine that was part of the service contract. Further aspects related to the customer are analysed at a later point in this section. Competitor-related uncertainties concerned the identification of the competitors for the particular service contract and an evaluation of their most likely bid. Furthermore, the contract might be let to multiple suppliers, who would either focus on different aspects of the service or would have to be able to share the project. Further aspects related to the competitors are analysed at a later point in this section. As discussed in this section, the cost estimate was stated either to include different uncertainties in the form of a spread or to be based on assumptions that may not prove true. Further uncertainties included the possibility of cost reductions through, e.g. a reduction of the overhead costs. Economic uncertainties include factors which may influence the commercial activities, such as legal changes, the gains that can be achieved with the contract, and the situation of the overall economy, of the market place and of the specific sector. Other mentioned uncertainties included the bidding company's contract situation and the uncertainty arising from the technical requirements. Most interviewees mentioned more than one of the presented sources of uncertainty, with a clear emphasis on one important factor, usually concerning an example from the recent past. For this reason, there is no quantitative analysis of the relative importance of each of the mentioned categories.
4.3.2 Customer The available information concerning the customer considered the areas of their bidding strategy, past relationships and future needs, and whether these aspects influence the decision maker of the bidding company. For these interviews, the customer's bidding strategy was addressed through the aspects of their budget and their evaluation criteria for the bids. The interviewees' answers indicate two different categories: either these strategic aspects are communicated with the service requirements or they can be assessed through a "getting to know the client" process in which usually a commercial team is involved. Of the nine interviewees, four stated that the customer's bidding strategy was communicated, two said it could be assessed, and three that it varies between these two categories depending on the kind of customer (depending on aspects such as whether they had worked with the customer before and what the customer's preferred bidding process was). The past relationship between the bidding company and the customer was described by all interviewees as an important source of information. An ideal bidding situation would involve a long past relationship in which trust had been built up and the parties knew each other. When this is not the case, the bidding company may still have previous experience with the customer to build up knowledge about them. In cases where there is no previous experience, the bidding company has to rely on the information communicated by the customer themselves or published in, e.g. the press. The assessment of the customer's possible future needs caused different reactions among the interviewees. Seven of the nine interviewees stated that this was one aspect that they assess during the process of compiling the bid and included it if appropriate. These interviewees stated the importance of possible follow-up work, future relations and the length of the service contract to demonstrate the suitability for, e.g.
the next five years. The other two interviewees highlighted that the bid only covered the service requirements and that a consideration of the customer's possible future needs was highly speculative and thus not included in the bid-compiling process. Thus, for a specific competitive bidding situation, the customer's future needs may play an important role in the bidding process and would need to be considered in a conceptual framework of the influencing uncertainties at the bidding stage. Regarding the consideration of the available information about the customer, all interviewees stated that it was of importance for the decision maker and the compiling of the bid. Five said that all the available information is considered, two described the customer and their bidding strategy as the most important influence on the bid, and two stated that there were other more important aspects such as the contract costs. This means that the customer can constitute a central factor in a bidding decision; however, the customer's relative importance depends on the particular service contract. 4.3.3 Competitors The interviewees were asked questions which aimed at determining the following information regarding their competitors: their identity, their cost estimate, their available technology or knowledge, and which of these aspects would be considered in the pricing decision. As indicated in the discussion in Section 4.2, the identity of the competitors may be known depending on the bidding process. If this is not the case, the bidding company may either have a "pretty good" idea regarding their competitors, due to their experience about who is capable of dealing with the requirements, or not be able to identify them at all, particularly when trying to bid in new market segments where their experience is limited. For the purpose of this analysis, the three possibilities are termed: the competitors' identity is known, knowable or not known.
The competitors' cost estimates are not usually known to the bidding company, which was confirmed by all interviewees. However, there are different levels of speculation. Based on previous experiences, a "ballpark" or top-level deduction may be known, which can be formulated as an absolute value or assessed in relation to the bidding company's costs. Another possibility is knowledge of cost details such as salaries, based on information obtained from previous employees of the competitor. In other cases, particularly when dealing with new or unknown competitors, the cost estimates may be neither known nor deducible. The third investigated aspect concerned the information about the competitors' available technology or level of knowledge, which may give them a competitive advantage. The answers fell into three categories. A common answer (given by six of the nine interviewees) was that it is known, as the competitors advertise themselves on, e.g. the internet and their homepages or have other publicity in, e.g. newspapers. Two interviewees stated that this aspect of the competitors is knowable due to the decision maker's experience in the area. In other cases, particularly when the company bids in a new market segment, this aspect was stated by two interviewees to be not known and not knowable. Table III shows the frequency of the interviewees' answers for their knowledge of the competitors' cost estimates and their available technology or knowledge, plotted against the competitors' identity. The numbers do not sum to nine because four interviewees stated multiple answers regarding the competitors' identity, which can depend on the particular service contract. Hence, their answers varied also for the other aspects. The results shown in Table III give an indication of the availability of information about the competitors and thus the level of uncertainty connected to them.
In cases where the competitors' identity is known or determinable, the bidding company also had a reasonable level of knowledge about other aspects. In other words, the bidding company is not ignorant about its competitors and their possible bidding strategies unless it is bidding in a new market sector. Investigating the interviewees' consideration of these aspects during the decision process, six replied that they used all the information available to them and two stated that they considered the available information but that there were other more important factors such as the customer. One interviewee said that the information regarding the competitors is not considered in the pricing-decision process. This confirms the results of the second empirical study, namely that competition is one of the influences on a pricing decision. Furthermore, most of the interviewed companies (seven out of nine) stated that it was one of the most important factors. Similarly, the availability of the original service and contract requirements was assessed in the interview, as these would have been communicated by the customer at the beginning of the bidding process. All interviewees stated that they were available and included in the decision process. The following section describes the interviewees' answers regarding their bidding strategy. 4.4 Bidding strategy The interviewees' answers to the questions concerning the bidding strategy were analysed in three main sections: the choice of the decision maker, the method to obtain the price bid and the acceptance of a contract with a high risk of making a loss. These are described in this section. 4.4.1 Choice of the decision maker As the bidding strategy can be very subjective, the interview assessed how the decision maker was chosen.
Most of the interviewees (seven out of nine) highlighted that the decision was made by a team; two stated that a team was involved in the bid compilation and the final decision was made by the team manager. The team decision was connected to contracts of both low and high complexity; four of the seven interviewees managed contract portfolios, one dealt with contracts of low complexity and two focused on ones of high complexity. Thus, it can be derived that the assignment of a team to the decision process is not correlated with the contract size. This means that team dynamics may influence the decision outcome and that the uncertainty caused by human behaviour which is connected to one individual decision maker is of minor importance in this context. The decision makers were chosen based on different criteria: experience, delegation and completed courses. Multiple replies were possible. In the first group, the decision maker(s) would be chosen based on their experience with bidding in general, bidding for similar contracts or in managing (similar) service contracts. In the second group, the decision maker(s) had to have a certain level of authority to make the bidding decision. The third category comprised courses offered in the companies on, for example, writing proposals or negotiating. The most important criterion for choosing a decision maker was experience, which was mentioned by six of the nine interviewees. Of similar importance (mentioned by five interviewees) and connected to experience, the category of delegation in the company was a further important criterion for the choice of the bidding decision maker. The completion of courses was mentioned by two interviewees; both highlighted that this was only a supportive aspect and that the decision maker(s) would not be chosen based on the courses they had completed.
4.4.2 Obtaining the price bid The calculation of the price bid, in other words the assessment of the monetary values to be included in the bid, can be categorised into two different approaches: "cost+profit margin=price" and a price-focused process. The "equation" of the first group is a simplified depiction of the approach most of the interviewees (seven out of nine) used in their bidding process. A profit margin, which can include a contingency, an administration margin and an allowance for inflation, is added to the interpretation of the cost estimate. Two of the interviewees stated that their process was focused on the price and that the costs were not considered separately from it. This means that the price is considered in different steps within the bidding company with regard either to its suitability to the customer's stated budget (one interviewee) or to a strategic evaluation of the market situation and the customer needs (the other interviewee). Following this question was the assessment of the minimum price bid below which the bidder would not accept the contract. The interviewees agreed that there was no standard process to calculate this price before the tendering or negotiation process. However, the valuation of the minimum price can be categorised as: "cost+minimum profit", available alternatives and the potential of follow-on work. Six of the nine interviewees stated that they were prepared to reduce their profit in the bidding situation (first group). This includes the situation of no profit but excludes a deliberate loss. One of the interviewees in that category stated that the price bid communicated to the customer would be the minimum acceptable price. Two of the interviewees said that the minimum price varied according to the available alternatives in the economic situation at the time of bidding (second group). This comparison could include not achieving an agreement.
In the third group, the minimum price was dependent on strategic aims such as the possibility of receiving future contracts with this customer. Two of the interviewees belonged to this category, one of whom stated it in addition to the best available alternative. 4.4.3 Acceptance of a contract with a high risk of making a loss To assess other strategic aspects that may influence the bidding decision, the interviewees were asked if they had agreed to contracts which deliberately made a loss. Of the nine interviewees, five stated that they would not accept such a contract; four said they had. The answers to the question can be categorised as depicted in Table IV. Table IV shows that there was just one reason mentioned by the interviewees for refusing a contract with a high probability of making a loss, which was typically connected to company policy or the usual conduct in the market sector. However, for the acceptance, the answers could be divided into three categories, namely the bidding company's long-term gains, the possibility of eliminating competition and the profile of the customer as a client. The interviewees that stated that they would accept such a contract usually mentioned multiple aims across these categories. The pricing process used by most of the interviewees was cost based, which confirms the assumptions of previous studies (Avlonitis and Indounas, 2005). Furthermore, a connection could be observed between the complexity of the contract and the bidding process, which determines the level of negotiation between customer and possible supplier. It was found that the more complex a service contract, the closer the two parties work together throughout the bidding process. This confirms the research of Bajari et al. (2008). However, a connection between the payment method and the bidding process as described by Bajari et al. (2008) was not confirmed in this study. The cost estimate usually included uncertainty in the form of a cost range.
If uncertainty was not explicitly included in the cost estimate, it was usually based on specific assumptions which would be reassessed during the subsequent pricing process. The uncertainty in a pricing decision was usually considered in the process (in one way or another). Where possible this uncertainty was reduced; for example, if the service requirements were unclear or vague, the bidding company usually had the opportunity to receive further information from the customer through negotiation. Focusing on certain sources of uncertainty such as the competitors and the customer, the bidding company was usually not ignorant about these factors and their possible influence on the decision outcome. The identity of the competitors was usually known to the bidding company or could be assessed during the process of compiling the bid. This means that the competitors' profile and available resources can be taken into account in the process. Similarly, the customer's bidding strategy was either known or assessable. This means that the customer's evaluation of the service price and quality, as well as other criteria, is or can be known at least vaguely. In particular, customers with whom the bidding company has had a previous connection and built up trust (Johnson and Grayson, 2005) form an important source of information and reduce the level of uncertainty. The presented interview study found that the pricing decision under uncertainty was based on the subjective evaluation of the decision maker(s) regarding the consideration of different uncertainties. As indicated by the literature in uncertainty research (Samson et al., 2009; Thunnissen, 2003), the terms uncertainty and risk are hard to define and distinguish comprehensively. This was confirmed by the interview study; some interviewees used examples to overcome this difficulty.
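The "cost + profit margin = price" approach reported by most interviewees, combined with a cost estimate expressed as a range, can be illustrated numerically. The following sketch is purely illustrative and is not any interviewee's actual process: the triangular cost distribution, all monetary figures and the function name are assumptions made for the example.

```python
import random

def simulate_bid(cost_low, cost_mode, cost_high, margin, n=100_000, seed=42):
    """Monte Carlo sketch: propagate a cost range (here a hypothetical
    triangular distribution) through a 'cost + profit margin = price' rule."""
    rng = random.Random(seed)
    costs = [rng.triangular(cost_low, cost_high, cost_mode) for _ in range(n)]
    expected_cost = sum(costs) / n
    price = expected_cost * (1 + margin)  # price bid from the expected cost
    # chance that the realised cost exceeds the bid, i.e. the contract loses money
    p_loss = sum(c > price for c in costs) / n
    return price, p_loss

# all monetary figures are invented for illustration
price, p_loss = simulate_bid(80_000, 100_000, 140_000, margin=0.15)
print(f"price bid ~{price:,.0f}, P(loss) ~{p_loss:.2%}")
```

Such a sketch makes explicit what a fixed margin leaves implicit: even a 15 per cent margin on the expected cost can carry a non-trivial probability of loss when the cost range is wide.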
For the identification of uncertainties that may influence the considered service contracts, subjective methods were prominent, while for their management subjective methods were used but often supported by objective methods such as Monte Carlo modelling. This suggests that there is a need for models to support the decision process in practice. Another approach to overcoming the uncertainty arising from individual assessment was the involvement of a decision team. Limitations of this empirical study include the small set of participants. However, the results are to be understood as indicative rather than as a comprehensive characterisation of the current bidding situation for service contracts. With this purpose, they identify common patterns of approaching the decision problem, aspects and opportunities for further improvement and possibilities for offering support to the decision maker. This paper presented an interview study with industrialists from manufacturing companies facing the change of market structures towards servitisation. The study gave insights into the typically available information. Table V shows a summary of the findings. The findings from the interview study described in this paper show the influences and considerations during the decision-making process at the competitive bidding stage for service contracts. This forms a first step towards a more elaborate understanding of the processes involved in practice and towards the development of support for industry to make more informed decisions and secure the profitability of their service contracts. In addition to the aim of the presented interview study, namely the identification of the available information for manufacturing companies at the competitive bidding stage for service contracts, the study delivered further results.
For example, it was found that costing information is typically communicated within the company either in tabular form as a cost breakdown or in graphical form as a three-point estimate. Recent research found that these approaches are suboptimal in raising the decision maker's awareness of the uncertainty connected to the cost forecast (Kreye et al., 2012). Thus, further research is necessary to support industry in adopting optimal approaches for the communication of the uncertainty associated with the decision-making problem. The findings described in this paper can be used in future research to develop an uncertainty model for competitive bidding. This uncertainty model can include the information connected to the customer and competitors to determine the manufacturing company's probability of winning the service contract and its probability of making a profit. This information supports the decision makers at the bidding stage to make a more informed decision, evaluate the level of risk of their pricing decision and, thus, ensure the long-term profitability and sustainability of their business.
Figure 1 Example of a cost estimate and the possible price bid
Figure 2 Interviewees' positioning regarding the size of their service contracts
Figure 3 Comparison of data collection and analysis methodologies
Figure 4 Characterisation of bidding process regarding the type of contract to be bid for
Table I Interviewees' responses regarding sources of information and management tools of uncertainty in the decision process at the bidding stage
Table II Appearance of cost estimate in dependence of included uncertainty
Table III Available information about the competitors at the bidding stage
Table IV Interviewees' reasoning behind refusing or accepting a contract with a high probability of making a loss
Table V Summary of research findings of interview study
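One conceivable starting point for such an uncertainty model is the classic expected-profit view of competitive bidding, in which the competitors' unknown bids are treated as random variables. The sketch below is an assumption-laden illustration, not the model proposed in the paper: the triangular competitor-bid distributions, all figures and the function names are hypothetical.

```python
import random

def win_probability(our_bid, competitor_models, n=50_000, seed=1):
    """Estimate P(win) as the chance that our bid undercuts every competitor.
    Each competitor is modelled by a hypothetical (low, high, mode) triangular
    distribution, standing in for the 'ballpark' deductions interviewees described."""
    rng = random.Random(seed)
    wins = sum(
        all(our_bid < rng.triangular(lo, hi, mode)
            for lo, hi, mode in competitor_models)
        for _ in range(n)
    )
    return wins / n

cost = 100_000  # our own cost estimate (invented figure)
competitors = [(95_000, 150_000, 115_000), (105_000, 160_000, 125_000)]
# scan candidate bids and keep the one maximising expected profit
best_bid, best_profit = max(
    ((bid, (bid - cost) * win_probability(bid, competitors))
     for bid in range(105_000, 140_001, 5_000)),
    key=lambda pair: pair[1],
)
print(f"best bid {best_bid:,}, expected profit ~{best_profit:,.0f}")
```

In practice the interviewees favoured subjective judgement, so a model like this would serve as decision support rather than a replacement: the distributions would encode whatever is known or knowable about each competitor.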
|
The research implications show the influences and considerations during the decision-making process at the competitive bidding stage for service contracts. These include the customer and the competitors.
|
[SECTION: Purpose] Some 25 years ago, one of the authors wrote an article (Stevens, 1989) that sought to explicate the state-of-the-art in supply chain management (SCM). This was at a time when SCM was still in its infancy and only starting to gain currency as an area of interest for practitioners and academics (Oliver and Webber, 1982). At the time, the organizational functions involved in managing the availability of products and satisfying customer orders operated with relative independence, often with conflicting agendas. The purpose of the original article was to facilitate understanding and encourage organizations to exploit the potential for managing their supply chains as part of a joined up (integrated) whole. The original article addressed the need to manage the supply chain at the strategic, tactical, and operational levels as well as recognizing that the scope of an organization's supply chain extended to the furthest reaches of its network of customer and supplier relationships. Stevens (1989) posited that achieving a state of "integration" required a firm to progress through a number of defined stages of development. The stages identified at the time and illustrated in the original article are shown in Figure 1. As Figure 1 shows, the original article argued that SCM developed from a baseline of functional (independent) silos and the first level of integration was across functions (akin to process integration). This then moved to full internal integration involving a seamless flow through the internal supply chain, and finally to external integration embracing suppliers and customers. The primary benefits were identified as improved customer service and reduced inventory and operating costs. Since the original article, much has changed. The world today is more complex and turbulent (Christopher and Holweg, 2011). The reach of many supply chains has increased in pursuit of growth and low-cost sourcing (Fredriksson and Jonsson, 2009). 
Technological advances have fueled the development of new business models and ways of working (Johnson and Mena, 2008). The advent of new and maturing supply chain strategies (Christopher and Towill, 2002), tools and techniques, together with increased environmental and ethical concerns (Pagell and Wu, 2009), has increased the recognition of SCM as a driver and enabler of business performance (Johnson and Templar, 2011). This has led to the adoption of new supply chain practices that have elevated the role of SCM within many organizations. While much has changed, the fundamental need for "joined up" thinking and working and the need to integrate the supply chain has not. The Gartner Supply Chain Group (O'Marah and Hofman, 2010) places integration among the elements of creating a demand-driven supply chain strategy that leads to improved firm performance (Ellinger et al., 2011, 2012). Thus, the need for supply chain integration (SCI) is still the same, if not greater than before. What has changed since the original article is the context within which supply chains operate, and the enablers of change and performance improvement. As a result, the relevance of narrow, linear-based supply chain models has been challenged as firms have looked more and more toward networked and collaborative supply chain strategies to deliver superior performance. The original article reported on the state-of-the-art in SCM. We retain that objective with this invited work. The aim is therefore not to re-visit supply chain integration per se - as advanced in 1989 - but to explore what the future may hold and how that relates to SCI. Therefore, on the basis that 25 years on is a good time to reflect on the changes that have taken place, the aim of this invited work is to explicate developments in SCM and SCI, and ask the questions: has SCM delivered on its promise? And, what does the future hold?
Early on in the development of SCM, firms realized the limitations of isolated improvement initiatives and misaligned functional performance agendas and began managing internal processes and flows on a much more integrated basis (Stevens, 1989). This extended the scope of integration to include upstream suppliers and downstream customers. Since the original article, there has been a growing consensus concerning the importance of integrating internal processes and flows, suppliers, and customers (e.g. Tan et al., 1998; Frohlich and Westbrook, 2001). Despite research confirming the positive benefits of supply chain integration (Prajogo and Olhager, 2012), and its importance to a firm's success (Flynn et al., 2010), ambiguity remains as to what constitutes supply chain integration (Fabbe-Costes and Jahre, 2008; Autry et al., 2014). We posit that supply chain integration is the alignment, linkage and coordination of people, processes, information, knowledge, and strategies across the supply chain between all points of contact and influence to facilitate the efficient and effective flows of material, money, information, and knowledge in response to customer needs. SCI is the foundation of SCM (Pagell, 2004). SCI is characterized by "joined up thinking, working, and decision making," underpinned by principles of flow, simplicity, and the minimization of waste. SCI may be enabled by systems and technology such as e-commerce (Gunasekaran and Ngai, 2004), Manufacturing Resource Planning (MRPII), Enterprise Resource Planning (ERP) (Bagchi et al., 2005), and RFID (McFarlane and Sheffi, 2003), but SCI is not just about technology. Integrating the supply chain refers as much to the need for strategic and operational integration within and across the business (Swink et al., 2007) as it does to relational integration with customers and suppliers (Benton and Maloni, 2005). 
The scope of SCI therefore includes governance, organization structure, systems, relationship management, business strategy, process design, and performance management. SCM as a discipline has evolved rapidly. SCM's early focus emerged as organizations began to improve their inventory management and production planning and control. The aim of these practices was to improve production efficiencies and ensure that the capacity of capital assets and machinery was utilized efficiently. This extended upstream to include the management of transport of raw materials at a time when firms were relatively vertically integrated. The next phase in the evolution of SCM was the systematization of materials, production, and transport management. This began with material requirements planning (MRP) focussing on inventory control (Orlicky, 1975). MRP expanded to become MRPII by incorporating the planning and scheduling of resources involved in manufacturing. Both MRP and MRPII were conceived in the 1960s but did not gain prominence until the 1980s (Wight, 1981). MRP and MRPII evolved to become ERP, in an attempt to gain greater visibility over the entire enterprise. The mid to late 1980s brought intense introspection from western firms concerning the threat of Japanese firms that were perceived to be more competitive due to higher productivity (Hayes and Wheelwright, 1984). This period led to the implementation of "Japanese" practices such as total quality management (TQM) and lean (Womack et al., 1990) by firms. These practices focussed on reducing inventory through improving quality and flow and involving suppliers in product and process design. The next phase in the evolution of SCM included the introduction of other process improvement practices (e.g. six sigma) that sought to provide a more concrete improvement method compared to TQM or lean (Montgomery and Woodall, 2008).
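The gross-to-net logic at the heart of MRP, mentioned above, can be sketched in a few lines. This is a deliberately simplified single-item illustration (no lead-time offsetting, no bill-of-materials explosion); the function name and all quantities are hypothetical.

```python
def mrp_net(gross_requirements, on_hand, scheduled_receipts, lot_size=1):
    """Single-item MRP netting sketch: net each period's gross requirement
    against projected stock and release a lot-sized planned order."""
    planned_orders = []
    stock = on_hand
    for period, gross in enumerate(gross_requirements):
        stock += scheduled_receipts.get(period, 0)
        net = max(0, gross - stock)
        # round the net requirement up to a whole number of lots
        order = -(-net // lot_size) * lot_size if net else 0
        planned_orders.append(order)
        stock = stock + order - gross
    return planned_orders

# hypothetical demand over four weekly periods
print(mrp_net([40, 10, 60, 20], on_hand=30,
              scheduled_receipts={1: 20}, lot_size=25))  # -> [25, 0, 50, 25]
```

MRPII and ERP extend essentially this netting loop across manufacturing resources and, later, the whole enterprise.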
As process improvement, and the standardization of products and processes that facilitated it, took place, there was increasing awareness that end customers were requiring ever increasing levels of choice and differentiation (Christopher, 2000). This led firms to consider that they had become too lean and rigid and should be focussing on creating agile supply chains to adapt to changing demand (Aitken et al., 2002). The agile approach was blended with lean (Naylor et al., 1999) as demand could be decoupled into push and pull to create greater choice for the customer while still retaining some control (van Hoek, 2001). The 1990s also saw a focus upon core competences within firms (Hamel and Prahalad, 1990). This led to a rise in the outsourcing of non-core activities to lower cost economies. Political factors such as unilateral liberalization measures and the removal of formal free trade barriers have contributed to the growth of developing countries exporting to high wage economies (Gereffi, 1999), encouraging firms to source from lower cost economies. This, in turn, fuelled both demand for products from developed economies and the competition to supply. This changed the topology of the supply chain as well as the magnitude, profile and direction of material and information flows. Significant changes have also taken place around the understanding of how a firm secures a competitive position. Traditionally, superior competitive advantage was seen to be a function of how a firm organized its resources to differentiate itself from the competition (Barney, 1991) and its ability to operate at a lower cost (Porter, 2008). The prevailing tendency was for a firm to control as much of its upstream and downstream activities as possible, often leading to high levels of vertical integration (i.e. within a firm rather than with suppliers). At the time of the original article, firms focussed more on managing, in-house, core competences, i.e.
those competencies or capabilities that deliver value (as perceived by the customer) and outsourcing non-core activities to specialist - often lower cost - third parties. This resulted in the advent of 3PL providers and supply chain integrators. This all points toward an explosion in SCM thinking over the last 25 years. Figure 2 presents a timeline of SCM strategies, tools, and techniques. The dates in the figure are based upon when, in our experience, these practices were popularized, not introduced. Supply chains are inherently unstable. A key role of SCM is to minimize the risks and uncertainty associated with the naturally occurring unstable state of the supply chain (Lee, 2002). Forrester's (1958) early work on supply chain dynamics highlighted the problem of the reliable "transmissivity" of information through the supply chain. Thus, Lee et al.'s (1997) characterization of the "bullwhip" effect demonstrates how demand and upstream load are both delayed and distorted as information progresses upstream, such that variation is amplified along the supply chain. This instability, coupled with the inevitable challenges of forecasting and data integrity, renders the supply chain unstable. Technology has been used to good effect to improve information flows (Lee et al., 2000). However, the increased remoteness of a global market and supply base, together with the need to manage an increasingly complex network, has exacerbated the challenge. In addition to the issues caused by information distortion and a global supply base, the twenty-first century is a time when organizations are facing pressure - from consumers and other stakeholders - to have green and ethical supply chains (Srivastava, 2007). This requires organizations to become more transparent in terms of disclosing their sources of supply, which increases costs and may place pressure on moving away from the lowest cost economies where labor rights can be poor.
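The bullwhip amplification characterized by Lee et al. (1997) can be reproduced with a toy model in which each upstream tier passes on the demand it sees plus an over-reaction to the period-on-period change in that demand. The simulation below is illustrative only; the reaction parameter, demand distribution and tier count are arbitrary assumptions, not an empirical model.

```python
import random
import statistics

def amplify(orders_in, k=0.5):
    """One tier's ordering rule: current demand plus an over-reaction (k)
    to the change in demand -- a crude bullwhip mechanism."""
    out = [orders_in[0]]
    for prev, cur in zip(orders_in, orders_in[1:]):
        out.append(max(0.0, cur + k * (cur - prev)))
    return out

def simulate_bullwhip(tiers=3, periods=500, seed=7):
    """Return the standard deviation of orders at each echelon, from end
    customer demand up through `tiers` upstream tiers."""
    rng = random.Random(seed)
    demand = [100 + rng.gauss(0, 5) for _ in range(periods)]
    series = [demand]
    for _ in range(tiers):  # e.g. retailer -> wholesaler -> factory
        series.append(amplify(series[-1]))
    return [statistics.stdev(s) for s in series]

stdevs = simulate_bullwhip()
print([round(s, 1) for s in stdevs])  # variability grows moving upstream
```

Even with independent, well-behaved end demand, each tier's over-reaction compounds, which is one reason remote, multi-tier global networks exacerbate the effect.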
There are two major strategies for winning business: differentiation and cost advantage (Porter, 2008). Historically, the focus for securing differentiation has been product differentiation. With life cycles now measured in months, sometimes weeks, rather than years, the opportunities to secure sustained benefit through product differentiation are diminishing. Even when a product-based strategy prevails, the window of opportunity for maximizing profit is becoming shorter and more difficult to hit, such that a minor disruption to product availability has a major impact on financial return. The supply chain has, therefore, become either the driver or critical enabler for differentiation. The role of the supply chain as a major driver of cost has long been recognized. Up to 75 percent of a product's cost is external to the focal firm (Trent, 2004). The supply chain, therefore, also offers considerable opportunity for delivering cost advantage. In addition to securing differentiation and cost advantage, the supply chain has taken on two further strategic imperatives arising from the need to ensure resilience, responsiveness, agility, and flexibility in an increasingly turbulent and uncertain world. Typically, the supply chain accounts for 50 percent of a company's assets. These comprise both fixed assets such as buildings and machinery as well as current assets such as inventory. Assets, by their very nature, prescribe a limited range of working patterns and methods, thereby exposing an organization to significant changes in market structure. The nature and configuration of the asset base, the balance of fixed assets to current assets, and the profile of inventory and cash all influence the resilience of the supply chain and a firm's ability to mitigate risk. At an operational level, customers are becoming increasingly demanding in terms of both responsiveness and flexibility.
Accordingly, the agility of the supply chain, in terms of structure, management, systems, and processes, directly impacts the ability of an organization to respond to customer needs. The role of the supply chain and the focus for SCM can, therefore, be summarized as supporting an organization in winning business competitively by addressing the strategic imperatives of differentiation, cost advantage, resilience, and dynamism (agility, flexibility, responsiveness). In the following section we discuss how these strategic imperatives, together with their drivers and enablers, influence the way in which the supply chain is configured and managed. A firm's supply chain operating model (SCOM) is a translation of the firm's supply chain strategy, and its need to deliver the strategic imperatives, into operational terms. The design of the model needs to consider the external economic and competitive drivers, leverage current and likely future enablers, and deliver the required level of performance. Figure 3 provides an overview of an SCOM and its related dimensions. The operating model comprises a series of dimensions, each representing a distinct aspect of a firm's supply chain. The decisions a firm makes on the design of each dimension, the overall configuration, and how the dimensions interact to form an integrated supply chain determine the performance of a firm's supply chain. Firms operating in the same sector may have similar operating frameworks - due to market, technological, and mimetic (i.e. the promulgation of "best" practice) influences. The detailed design and configuration will be unique to each firm, reflecting localized decisions on how best to secure a competitive advantage from its supply chain. A firm's SCOM is not fixed. It needs to develop in response to internal and external changes if a firm is to exploit the potential of new opportunities and maintain competitive performance.
Given the pressure to improve and the need for firms to continually challenge the performance and capabilities of their SCOM, the question is: how do firms develop their supply chains to secure and maintain value and competitive advantage? How do they adapt to changing economic drivers, take advantage of new technologies and enablers, and respond to the increasing need to deliver a differentiated offering and secure cost advantage, while ensuring a resilient and dynamic supply chain able to combat the risk of disruption and major disturbance? What change model operates? A review of supply chain development over the last 25 years suggests a model comprising periods of fundamental change followed by an ongoing focus on continuous improvement based on a combination of process and capability improvement, together with localized structural adjustments to the scope and/or topology of the supply base. Figure 4 illustrates the SCOM "dynamics of change." What is it that drives the need for fundamental change? Since the early 1990s, business process re-engineering (Hammer, 1990), lean (Womack et al., 1990), and many other improvement tools and techniques have provided valuable contributions to improving supply chain performance. Local, incremental process improvement can deliver benefits. However, the very nature of the benefit emanating from ongoing reliance upon small incremental process changes means it is unlikely to have a corresponding impact on the performance of the supply chain as a whole. Inevitably, continuous process improvement will be confronted by the "law of diminishing returns" as significant opportunities become fewer and competitors copy early adopters. Process improvement is underpinned by process analysis, that is, breaking the process down into its constituent parts by mapping product and information flows, in an attempt to improve understanding and expose opportunities to improve (Hines and Rich, 1997).
Such improvement is predicated on the supply chain as a repeatable process, but supply chains are inherently more complex. Supply chain performance is based on the interaction of processes from the perspective of a "system"; thus, performance is achieved through synthesis. Developing a supply chain's performance requires a focus on the interaction of processes, not the optimization of isolated processes. Significant change to supply chain performance cannot be delivered by focussing exclusively on improving isolated processes; improvement will only come through improved interaction of processes. Globalization of supply chains has encouraged firms to pursue low-cost sourcing by increasing the reach of the supply base, "flipping" suppliers as cheaper alternatives emerge, chasing increased control by seeking to manage multiple tiers of supply, and splitting purchasing spend across multiple sources in an attempt to stimulate competition. Delivering short-term, localized reductions in purchase cost has significant consequences and implications, leading to increased complexity, uncertainty, and instability. As shown in Figure 5, the compound effect of the relentless pursuit of low-cost sourcing is an exponential increase in risk. Supply chain leaders relying on a strategy of continuous process and capability improvement, together with frequent structural adjustment to the supply base, to sustain their leadership position inevitably find that diminishing returns, coupled with increased risk, erode their leadership position as the performance gap over the competition reduces and the "followers" catch up. At this point a firm can be said to have hit the "Performance Frontier" (cf. Schmenner and Swink, 1998), whereby the cost and risk of further incremental change is more likely to have a destabilizing effect and a negative impact on relative or absolute performance. Securing advantage at this point requires fundamental change to the operating model, i.e.
a paradigm shift: a fundamental change to SC design. Firms should seek to maintain a state of equilibrium until the diminishing returns from striving to continually improve, combined with the ongoing pursuit of leveraging more out of the supply base, destabilize the supply chain and render the SCOM unstable. Thereafter, the way forward is to seek a step change in structure through fundamental change in order to secure a stable basis for continued growth. During the period of fundamental change there is likely to be a drop in performance while the new structures and ways of working are embedded and optimized. Supply chain development can be said to follow a path of "Punctuated Equilibrium" (cf. Gersick, 1991). This comprises long periods of relative structural stability punctuated by brief periods of upheaval as a firm seeks competitive advantage through a process of fundamental structural change (a "paradigm shift"). During periods of stability the conceptual framework, basic organization, and operational principles of the operating model are stable and can be said to be in a state of equilibrium. The underlying activities are subject to incremental adjustments through a process of continuous improvement able to respond to changes in the external environment, competitive pressures, and operational capabilities. The state of equilibrium continues as long as the underlying changes deliver a positive contribution. Once the performance frontier has been reached, a firm needs to seek fundamental structural change to secure a competitive advantage and establish a platform for further continuous improvement. The principle of integrating the supply chain as a cornerstone of SCM was introduced in the early 1980s. Since then the business context has changed and the structure of SCOMs has developed accordingly. The limitations of supply chain models based on "linear" physical flows have been exposed (e.g.
Choi and Wu, 2009c; Bastl et al., 2012) and new phases of networked supply chains have developed. Figure 6 suggests the need to add two further stages to the development model proposed by Stevens in 1989. The additional stages are predicated on the need for integration but reflect the changes in context and capabilities. The transition between phases represents the point at which the extant phase begins to show diminishing returns for the focal firm. Internal supply chain integration transitioned to external supply chain integration because there was a limited amount of performance improvement that could be achieved without involving suppliers and customers. External supply chain integration transitioned to goal directed networked supply chains as firms understood that supply chains were non-linear networks and that there would be benefit for non-strategic (or non-integrated) suppliers in having visibility of demand. We suggest that - at the time of writing - we are undergoing a transition to devolved, collaborative supply chain clusters. We suggest that this transition is occurring due to the increased complexity, risk, and costs being borne by focal firms attempting to manage large networks. By effectively outsourcing elements of this management to lead suppliers, collaboration devolves into clusters. Clusters are smaller networks that are more easily managed. For example, Zara has popularized the localized, collaborative cluster model (cf. Ghemawat, 2005), although this model currently tends to be implemented in industries with relatively simple products or services, or around a single industry (e.g. Silicon Valley). The automotive industry also uses lead suppliers to coordinate clusters. The early phases of development, internal and external integration, were addressed in the original article and are briefly revisited below.
Internal integration

Internal integration represents the evolution of a firm's SCOM from the functional separation of the 1970s to a model based on the "closed loop" business and resource planning of the late 1980s. Functional separation was characterized by individual functions having their own agendas with limited interaction, resulting in high unit costs, high levels of inventory, and poor customer service. The objective for most supply chains was inventory management based on aggregate inventory, with stock replenishment using re-order point and economic order quantity techniques, with limited recognition of the needs of production plans or customer demand. At this time the focus for SCM was to balance supply and demand within the constraints of the business plan. The scope of the supply chain model included commercial, production, technical, purchasing, finance, and materials management and was underpinned by joined up thinking, working, and decision making.

External integration

External integration involves extending the scope of the integrated supply chain to include supplier integration, distribution integration, and customer integration. Supplier integration focusses on improving the performance of the supply chain between a firm and its supply base. It involves sharing information between both parties, enabling a firm to influence costs, quantities, and the timing of deliveries and production in order to streamline the product flow and to move to a collaborative relationship. Supplier integration often involves a partnership model, with deeper, more long-term relationships with fewer vendors that, in turn, tend to have relationships with fewer customers. This helps build communication channels and trust, which facilitates more extensive knowledge sharing. Supplier integration involves suppliers taking increased responsibility for aspects of availability and product development.
It involves increased interactions between businesses and functions to increase productivity and availability and reduce the risk of non-compliance. Distribution integration focusses on detailed resource and flow management through the outbound logistics network in order to reduce logistics and distribution costs and provide increased demand visibility. The focus moves away from the efficient management of transport to planning and controlling the efficient forward and reverse flows and storage of goods and related information as part of an integrated supply chain. Customer integration involves leveraging the supply chain's capabilities as part of the customer proposition and a firm collaborating with customers to add value to both parties. The cornerstones of supply chain customer collaboration are cultural and process integration, whereby both parties contribute their unique insights and capabilities to develop a mutually agreed forecast of demand that meets the needs of the customer, within the constraints of the firm. Customer integration is well operationalized by Collaborative Planning, Forecasting, and Replenishment (CPFR). The benefits of CPFR are well documented, typically in the order of a 10-40 percent inventory reduction in supply chains (Lapide, 2010). Despite the benefits of internal and external integration, the wider business landscape has changed, resulting in the need to conceptualize the new SCOMs of goal directed networked supply chains and devolved, collaborative supply chain clusters. We turn to these next.

Goal directed networked supply chain

Early SCOMs focussed on the linear relationships and flows between customers and suppliers. While the linear perspective may have reflected simplified material flows and aided firms to develop techniques for planning and controlling a physical supply chain, the approach quickly diverged from evolving reality.
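The CPFR inventory benefits cited above follow directly from standard safety-stock arithmetic: for a given service level, safety stock scales linearly with forecast-error variability, so a collaborative forecast that reduces forecast error reduces safety stock in the same proportion. A minimal sketch using the textbook formula, with purely hypothetical numbers:

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, forecast_error_sd, lead_time_periods):
    """Textbook safety stock: service factor z * forecast-error sd * sqrt(lead time)."""
    z = NormalDist().inv_cdf(service_level)  # standard-normal service factor
    return z * forecast_error_sd * sqrt(lead_time_periods)

# Hypothetical item: CPFR cuts the forecast-error standard deviation by 30%.
before = safety_stock(0.95, forecast_error_sd=100, lead_time_periods=4)
after = safety_stock(0.95, forecast_error_sd=70, lead_time_periods=4)
print(f"safety stock reduction: {1 - after / before:.0%}")  # roughly 30%
```

Because the service factor and lead-time terms cancel in the ratio, a 30 percent cut in forecast error yields a 30 percent cut in safety stock, illustrating how reductions within the 10-40 percent range reported by Lapide (2010) can arise.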
The dramatic increase in access to information in the late 1990s, the advent of internet communication, and the pursuit of global trading and low-cost sourcing caused leading firms to revise their perception and management of supply chains from physical flows to information flows. Recognizing the supply chain as a network of relationships (e.g. Harland, 1996), not a sequence (or chain) of transactions, enabled leading firms to gain improved performance, operational efficiencies, and ultimately sustainable competitiveness (e.g. Choi and Hong, 2002). Figure 7 presents an illustration of a networked supply chain. This model is based on recognizing that the supply chain is a non-linear network with connections between firms. It acknowledges that there can be relationships between suppliers and customers and that having visibility of the network can uncover potential risks (cf. Choi and Hong, 2002). The culture and organization of most early adopters of the network perspective was invariably based on a "traditional" command and control style of management, underpinned by a centrally based structure. This manifested itself in a desire to control the sourcing of the bill of material by engaging in directed sourcing, whereby the firm establishes relationships with second- and third-tier suppliers and directs the top-tier supplier to source material from them. This SCOM is referred to as a goal directed networked supply chain as supplier relationships and sourcing strategies are aligned with the firm's overall cost, quality, and service goals. One of the key challenges of managing within networks is the presence of indirect relationships (cf. Choi and Wu, 2009a). From Figure 7, an example of an indirect relationship is the one between the supplier and the customer, represented as a dashed line. For example, Amazon often uses a 3PL to fulfill customer orders. This creates a direct relationship between the 3PL and the customer.
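The triad structure just described can be made concrete by treating the supply network as a small directed graph and asking which relationships the focal firm does not transact in directly but still depends on. This is an illustrative sketch only; the node names and the `indirect_relationships` helper are hypothetical, not a published formalism:

```python
# A toy directed graph of the Amazon / 3PL / customer triad described above.
# An edge a -> b means "a contracts with or serves b".
edges = {
    "Amazon": ["3PL", "Customer"],  # Amazon contracts the 3PL and sells to the customer
    "3PL": ["Customer"],            # the 3PL delivers directly to the customer
    "Customer": [],
}

def indirect_relationships(graph, focal):
    """Pairs (a, b): the focal firm depends on a's relationship with b
    without being a party to it (the 'dashed line' in Figure 7)."""
    indirect = []
    for partner in graph[focal]:
        for downstream in graph[partner]:
            if downstream != focal:
                indirect.append((partner, downstream))
    return indirect

print(indirect_relationships(edges, "Amazon"))  # [('3PL', 'Customer')]
```

Even in this three-node case the focal firm must monitor a relationship it cannot directly control; in a real network the number of such dashed lines grows combinatorially with the number of partners.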
The customer's satisfaction with Amazon thus becomes reliant upon the performance of the 3PL (cf. Choi and Wu, 2009b). This type of structural arrangement is referred to as a triad, with all firms within the triad being interdependent. However, the critical issue within the network is the management by the focal firm (e.g. Amazon) of the indirect relationship (e.g. between the 3PL and the customer). With a networked supply chain there is a significant burden in coordinating all of the direct and indirect relationships in order to meet the goal of the focal firm. This has led firms to create SCOMs that devolve coordination responsibilities to lead suppliers (occasionally known as "Tier 0.5") who then coordinate collaborative clusters.

Devolved, collaborative, supply chain clusters

The next step in the evolution of SCOMs is the transition to devolved, collaborative supply chain clusters. Choi and Hong (2002) examined the traits of supply networks in terms of formalization, centralization, and complexity. Formalization is closely associated with standardization through rules and procedures as well as norms and values. Centralization addresses the degree to which authority or power of decision making is concentrated or dispersed across the network. Complexity refers to the structural differentiation or variety that exists in the network. The three dimensions form a useful basis for highlighting the limitations of Goal Directed Networked Supply Chains and the emergence of devolved, collaborative, supply chain clusters. The centralized organization structure and the underlying need for formality to support the central control of a Goal Directed Networked Supply Chain gave rise to a rigid, inflexible structure unable to cope with the turbulent environment of the last ten years.
Similarly, the increase in reach, coupled with attempts to control the bill of materials, significantly increased the number of nodes and connections in the network, in addition to heavily impacting transaction costs within a firm. The work on the empirical relationship between system size, connectance, and stability carried out by Disney et al. (1997) identified two important phenomena relevant to SCOM design: first, as the number of nodes increases, the probability of a stable operation decreases dramatically; and second, as system connectance increases, the network swiftly crosses the "switching" line and becomes unstable. Thus, the implications for supply chain performance are clear. The complexity inherent in a large supply chain network is likely to render it unstable, resulting in a major deterioration in performance. The complexity of the network also leads to an increase in coordination cost. Developing a SCOM to equip a firm to manage a global supply network needs to address the issue of how to accommodate and coordinate the needs and activities of multiple participants without undue complexity, cost, or formality. It should provide a level of governance sufficient to ensure that participants engage in collective and mutually supportive actions, such that any conflict can be addressed and the objectives of the firm's supply chain are met. As presented in Figure 8, we posit that the future global integrated supply chain model will be devolved, collaborative, supply chain clusters. This model is based on a series of self-governing clusters, each cluster comprising a network of suppliers and/or sub-contractors associated by type, product structure, or flow. All non-core activities are outsourced by the firm (or lead organization) across a range of clusters. Collaboration within and across each cluster is based on goal consensus, whereby the goals for each cluster are aligned and managed in accordance with the goals of the firm.
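The size/connectance/stability relationship noted above (cf. Disney et al., 1997) can be illustrated with a Monte Carlo sketch in the spirit of random-network stability analysis: build random interaction matrices of increasing size and connectance, and check whether all eigenvalues have negative real parts (the standard linear-stability test). This is an assumption-laden illustration, not a reconstruction of Disney et al.'s model, and the parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def stability_probability(n_nodes, connectance, strength=0.5, trials=200):
    """Estimate P(stable) for random networks: each possible link exists with
    probability `connectance` and has N(0, strength) interaction weight;
    every node is self-stabilizing (diagonal = -1)."""
    stable = 0
    for _ in range(trials):
        a = np.zeros((n_nodes, n_nodes))
        mask = rng.random((n_nodes, n_nodes)) < connectance
        a[mask] = rng.normal(0.0, strength, mask.sum())
        np.fill_diagonal(a, -1.0)
        if np.linalg.eigvals(a).real.max() < 0:  # all eigenvalues in left half-plane
            stable += 1
    return stable / trials

for n in (5, 10, 20, 40):
    print(n, stability_probability(n, connectance=0.3))
```

Under these settings the estimated probability of stability falls sharply as the number of nodes grows (and similarly as connectance grows), mirroring the two phenomena described above and motivating the move to smaller, more manageable clusters.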
Operational coordination, planning, and governance across clusters are facilitated by the lead organization through an integrated collaboration and operations management and planning protocol supported by clear lines of responsibility and accountability and a visible performance management system. This operates in a network-wide culture where economy of scale and efficiency are subordinate to service, resilience, and effectiveness. Research into clusters is by no means a new phenomenon (cf. Porter, 1998; Sheffi, 2012). However, much of the previous work has focussed on the innovativeness of the cluster or the specialization of competences into an industrial district (e.g. Pinch et al., 2003), or has focussed upon knowledge management within the cluster (e.g. Miles and Snow, 2007). With devolved, collaborative supply chain clusters the focus moves from the cluster to clusters, and to the governance of the clusters. This is challenging as the management of the clusters is reliant upon "architectural knowledge" (cf. Tallman et al., 2004) which is external to the firms within the cluster. Architectural knowledge in the context of the devolved, collaborative supply chain clusters is related to understanding the network as a system and the structures and routines required to effectively coordinate it (cf. McGaughey, 2002). Within devolved, collaborative supply chain clusters, SCI moves away from being a monolithic approach to one that enables the "modular" connecting of the focal firm to the different clusters. This will be facilitated via shared values, agendas, thinking, and norms. We argue that these are required to "lubricate" the flow of information, knowledge, and insight between the devolved clusters and the lead organization. Table I contrasts the four different operating models depicted in Figure 6. The table summarizes the key characteristics of each dimension against the four primary stages of supply chain development. 
The change from one operating model to another will, we suggest, occur at each "punctuation" (cf. Gersick, 1991). Table I is indicative (or descriptive) rather than definitive (or prescriptive), illustrating changes to process, structure, relationships, and emphasis. The evolution of the SCOMs has been influenced by a number of factors. The growing realization that SCM is critical to a firm's success has made it more strategic, with a long-term focus. Firms have also focussed more on what is core to their success and have outsourced that which is not. This has been balanced by the need to understand and coordinate supply chain networks to increase effectiveness and reduce risks. The coordination costs of this are potentially high, and firms have built collaborative relationships with lead suppliers who coordinate specialized clusters whose capability is leveraged. Overall, this means that SCOMs have moved from an attempt to control the network toward a realization that they can, at best, coordinate it. Planning now takes place at a strategic level and considers not just materials and capacity, but capability and the long-term goals of the firm. This is facilitated by better use of information, knowledge, and insight so that pro-active decisions can be made. Further enablers are metrics and accounting systems that encourage collaborative behavior and focus on the efficacy of the network. Now that we have discussed changes to SCM and SCI since the original article, we turn to examining whether SCM has delivered on its promise. Advances in SCM, whether championed by technology providers, consultants, academics, or practitioners, have invariably been accompanied by the promise of improved business performance, notably reductions in inventory and operating costs and an improved customer experience. It is therefore appropriate, 25 years on, to ask whether SCM has delivered on its promise.
Horvath (2001) suggested that the most considerable benefits to a business with advanced SCM would be radically improved customer responsiveness, customer service and satisfaction, increased flexibility for changing market conditions, improved customer retention, and more effective marketing. Ellram and Liu (2002, p. 30) suggested: "Supply chain management can significantly affect a company's financial performance - both positively and negatively." Sales growth, operating profit margin, working capital investment, and fixed capital investment impact shareholder value, and all of these are influenced by SCM (Lambert and Burduroglu, 2000). However, it is difficult to empirically link the evolution of SCM with financial performance to evaluate whether SCM has delivered on its promise. In an attempt to assess the impact of SCM over time, and consistent with Ellinger et al.'s (2011, 2012) assessments of top SC performers' financial performance compared to that of industry rivals and industries, we examine the performance over time of a number of companies, across multiple sectors, on three indicators of SC performance. These indicators are Return on Net Assets (RONA), inventory turns, and the unified proxy for SC performance developed by Johnson and Templar (2011). We assess these indicators for ten companies from a range of industries, for the years 1997-2014, selected from the Fortune Global 500 Top 25 and companies that appear in the Gartner Supply Chain Top 25 rankings between 2010 and 2015. The convenience selection is intended to ensure representation of companies at the forefront of exploiting leading edge supply chain development, to assess whether acknowledged leaders in SCM practice have consistently and positively influenced financial performance over an extended time frame. The results are shown in Table II. The analysis suggests that the overall impact of improving SCM practices has been equivocal (13 positives to 17 negatives).
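For readers unfamiliar with the two conventional indicators, both can be computed from standard financial-statement lines. Definitions vary across studies, so the formulations below are one common choice, and the figures are purely hypothetical:

```python
def rona(operating_profit, fixed_assets, net_working_capital):
    """Return on net assets: operating profit over the net assets employed."""
    return operating_profit / (fixed_assets + net_working_capital)

def inventory_turns(cost_of_goods_sold, average_inventory):
    """How many times inventory is sold through and replaced in a period."""
    return cost_of_goods_sold / average_inventory

# Hypothetical figures (in $m), for illustration only
print(rona(120, fixed_assets=500, net_working_capital=100))  # 0.2
print(inventory_turns(400, average_inventory=80))            # 5.0
```

Both measures reward supply chain improvement directly: lower inventory raises turns and, by shrinking working capital, raises RONA, which is why they are natural proxies for SC performance over time.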
Inventory performance has improved for half the firms. The supply chain indicator suggests similar levels of improvement. Overall, RONA shows an adverse impact (three positives to seven negatives). The authors acknowledge the limitations of the analysis and the impact of recent changes to the global economy, but suggest that while it points to some firms realizing benefit from improved SCM, the majority have not realized the full potential of their supply chains. We suggest that this failure to realize benefits is due to four possible factors. The first is that firms are not recognizing that the SCOMs that have worked so well, for so long, may no longer be appropriate in today's volatile, uncertain, complex, and ambiguous world. The second is that the solutions promulgated for SC performance improvement by technology providers, consultants, academics, and practitioners could be regarded as the emperor's new clothes, unable to live up to the hype. Third, the users of the solutions are not equipped to realize the benefits, despite the robustness of the thinking; this can be due to the technical complexity of the solution or a lack of capability. The fourth, and we suggest most likely, is that the implementation of solutions to improve performance is complex and requires large-scale change (cf. van Hoek et al., 2010). The time-scale and complexity of such radical change may lead firms to abandon or only partially implement solutions before the performance benefits are realized. Understanding and removing the barriers that impede benefits realization will require a concerted effort by the supply chain community (academics, advisors, technology providers, and practitioners) to work collaboratively to operationalize SCM thinking and deliver measurable, sustainable benefit on a consistent basis. The current challenges presented by a global economy, accelerating rates of change, and the emergence of new and innovative competitors will undoubtedly persist.
The role of SCM as an enabler of business success will not go away; it is more likely that the pressure on the supply chain will increase. SCM's response needs, first, to find a more effective way of aligning thinking and practice and accelerating the flow of promising practices across the supply network and, second, to address the challenge of ever increasing complexity. The stages of SCI presented here represent what we think to be the next stages in the evolution of SCI. Goal directed supply networks evolved from external integration when firms realized that they existed within a network and that non-strategic suppliers could benefit from the sharing of demand data to facilitate planning. The next stage of evolution was devolved, collaborative clusters. Clusters arose as focal firms realized that the coordination of a network was burdensome and that lead suppliers could manage clusters to reduce these coordination costs. This brings us to the current state-of-the-art, but what could the next 25 years have in store for SCM? Changes to supply chains over the next quarter of a century will be driven by changes in the business environment, technology, economies, and customer preferences. There is no doubt that the business environment will become even more volatile, uncertain, complex, and ambiguous (cf. Bennett and Lemoine, 2014). As such, supply chains need to be configured to navigate the future environment and will move ever closer to becoming complex adaptive systems (cf. Choi et al., 2001). We are also seeing a rise in technologies that have promise for tomorrow's supply chains and the democratization of product and process knowledge (Anderson, 2012). These include big data and additive manufacturing technologies such as 3D printing (Brennan et al., 2015). Overlaid upon this are changes within developing economies as they industrialize and wages in those countries increase.
As countries move from developing to developed, they become less attractive as manufacturing destinations because the cost benefits are eroded. A reduction in cost benefits, coupled with higher logistics costs, long transport times, and increased risks, has influenced firms to move production closer to the point of consumption (Ellram et al., 2013); a phenomenon known as re-shoring or near-shoring. A further complicating factor is that customers will require even more differentiation and we will move toward "markets of one." We are already seeing this on a limited scale with the customization of sportswear through the MI-Adidas and Nike-ID initiatives, but these provide somewhat limited choices. We suggest that customers will require greater levels of customization. Given these changes, what will the supply chain of the future look like? We suggest that the SCOM of the future will be atomized, adaptive fulfillment communities. They will be atomized - rather than clustered - because the need for, and intensity of, collaboration will increase, leading to smaller, less intense clusters which we class as communities. The relationships within these communities (i.e. atomized clusters) will be underpinned by shared norms, values, and behaviors. They will be adaptive to both supply and demand as well as being reactive and pro-active to wider geo-political, business, economic, environmental, and social factors. The networks of the future will also be democratic to supply and demand, hence our use of the term "fulfillment." Integration will be philosophical and driven by behaviors, insight, and information, not processes and systems. SCM and SCI have undergone rapid evolution over the past quarter of a century; we look forward to the next 25 years.

Figure 1 Stages of supply chain development
Figure 2 A timeline of SCM strategies, tools, and techniques
Figure 3 Dimensions of a supply chain operating model
Figure 4 Supply chain operating model dynamics of change
Figure 5 The compound effect of the relentless pursuit of low-cost sourcing
Figure 6 Phases in supply chain management development
Figure 7 Networked supply chain
Figure 8 Devolved, collaborative supply chain clusters
Table I Comparison of the four operating models
Table II RONA, inventory turns, and supply chain proxy values for ten selected companies, 1997-2014
Twenty-five years ago IJPDLM published "Integrating the Supply Chain" (Stevens, 1989). The purpose of that original work was to examine the state-of-the-art in supply chain management (SCM). There have been substantial changes to the landscape within which supply chains function, and changes to supply chains themselves. Given these changes, it is appropriate to revisit the new state-of-the-art and determine whether the 1989 conceptualization requires extending. The authors also attempt to assess whether the evolution of SCM is associated with improved financial performance. The paper aims to discuss these issues.
[SECTION: Method] Some 25 years ago, one of the authors wrote an article (Stevens, 1989) that sought to explicate the state-of-the-art in supply chain management (SCM). This was at a time when SCM was still in its infancy and only starting to gain currency as an area of interest for practitioners and academics (Oliver and Webber, 1982). At the time, the organizational functions involved in managing the availability of products and satisfying customer orders operated with relative independence, often with conflicting agendas. The purpose of the original article was to facilitate understanding and encourage organizations to exploit the potential for managing their supply chains as part of a joined up (integrated) whole. The original article addressed the need to manage the supply chain at the strategic, tactical, and operational levels as well as recognizing that the scope of an organization's supply chain extended to the furthest reaches of its network of customer and supplier relationships. Stevens (1989) posited that achieving a state of "integration" required a firm to progress through a number of defined stages of development. The stages identified at the time and illustrated in the original article are shown in Figure 1. As Figure 1 shows, the original article argued that SCM developed from a baseline of functional (independent) silos and the first level of integration was across functions (akin to process integration). This then moved to full internal integration involving a seamless flow through the internal supply chain, and finally to external integration embracing suppliers and customers. The primary benefits were identified as improved customer service and reduced inventory and operating costs. Since the original article, much has changed. The world today is more complex and turbulent (Christopher and Holweg, 2011). The reach of many supply chains has increased in pursuit of growth and low-cost sourcing (Fredriksson and Jonsson, 2009). 
Technological advances have fueled the development of new business models and ways of working (Johnson and Mena, 2008). The advent of new and maturing supply chain strategies (Christopher and Towill, 2002), tools, and techniques, together with increased environmental and ethical concerns (Pagell and Wu, 2009), has increased the recognition of SCM as a driver and enabler of business performance (Johnson and Templar, 2011). This has led to the adoption of new supply chain practices that have elevated the role of SCM within many organizations. While much has changed, the fundamental need for "joined up" thinking and working and the need to integrate the supply chain has not. The Gartner Supply Chain Group (O'Marah and Hofman, 2010) places integration as one of the elements of creating a demand-driven supply chain strategy that leads to improved firm performance (Ellinger et al., 2011, 2012). Thus, the need for SCI is still the same, if not greater than before. What has changed since the original article is the context within which supply chains operate, and the enablers of change and performance improvement. As a result, the relevance of narrow, linear-based supply chain models has been challenged as firms have looked more and more toward networked and collaborative supply chain strategies to deliver superior performance. The original article reported on the state-of-the-art in SCM. We retain that objective with this invited work. The aim is therefore not to re-visit supply chain integration per se - as advanced in 1989 - but to explore what the future may hold and how that relates to SCI. Therefore, on the basis that 25 years on is a good time to reflect on the changes that have taken place, the aim of this invited work is to explicate developments in SCM and SCI, and ask the questions: has SCM delivered on its promise? And, what does the future hold?
Early on in the development of SCM, firms realized the limitations of isolated improvement initiatives and misaligned functional performance agendas and began managing internal processes and flows on a much more integrated basis (Stevens, 1989). This extended the scope of integration to include upstream suppliers and downstream customers. Since the original article, there has been a growing consensus concerning the importance of integrating internal processes and flows, suppliers, and customers (e.g. Tan et al., 1998; Frohlich and Westbrook, 2001). Despite research confirming the positive benefits of supply chain integration (Prajogo and Olhager, 2012), and its importance to a firm's success (Flynn et al., 2010), ambiguity remains as to what constitutes supply chain integration (Fabbe-Costes and Jahre, 2008; Autry et al., 2014). We posit that supply chain integration is the alignment, linkage and coordination of people, processes, information, knowledge, and strategies across the supply chain between all points of contact and influence to facilitate the efficient and effective flows of material, money, information, and knowledge in response to customer needs. SCI is the foundation of SCM (Pagell, 2004). SCI is characterized by "joined up thinking, working, and decision making," underpinned by principles of flow, simplicity, and the minimization of waste. SCI may be enabled by systems and technology such as e-commerce (Gunasekaran and Ngai, 2004), Manufacturing Resource Planning (MRPII), Enterprise Resource Planning (ERP) (Bagchi et al., 2005), and RFID (McFarlane and Sheffi, 2003), but SCI is not just about technology. Integrating the supply chain refers as much to the need for strategic and operational integration within and across the business (Swink et al., 2007) as it does to relational integration with customers and suppliers (Benton and Maloni, 2005). 
The scope of SCI therefore includes governance, organization structure, systems, relationship management, business strategy, process design, and performance management. SCM as a discipline has evolved rapidly. The early focus of SCM emerged as organizations sought to improve their inventory management and production planning and control. The aim of these practices was to improve production efficiencies and ensure that the capacity of capital assets and machinery was utilized efficiently. This extended upstream to include the management of transport of raw materials at a time when firms were relatively vertically integrated. The next phase in the evolution of SCM was the systematization of materials, production, and transport management. This began with material requirements planning (MRP), focussing on inventory control (Orlicky, 1975). MRP expanded to become MRPII by incorporating the planning and scheduling of the resources involved in manufacturing. Both MRP and MRPII were conceived in the 1960s but did not gain prominence until the 1980s (Wight, 1981). MRP and MRPII evolved to become ERP, in an attempt to gain greater visibility over the entire enterprise. The mid to late 1980s brought intense introspection from western firms concerning the threat of Japanese firms that were perceived to be more competitive due to higher productivity (Hayes and Wheelwright, 1984). This period led firms to implement "Japanese" practices such as total quality management (TQM) and lean (Womack et al., 1990). These practices focussed on reducing inventory through improving quality and flow and involving suppliers in product and process design. The next phase in the evolution of SCM included the introduction of other process improvement practices (e.g. six sigma) that sought to provide a more concrete improvement method than TQM or lean (Montgomery and Woodall, 2008).
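The gross-to-net requirements logic at the heart of MRP (Orlicky, 1975) can be illustrated with a minimal sketch. The function name, the lot-for-lot ordering policy, and all figures below are illustrative assumptions, not a reconstruction of any particular system:

```python
# Minimal sketch of MRP gross-to-net requirements logic (illustrative only).
# Period-by-period netting: projected on-hand stock absorbs gross requirements;
# any shortfall triggers a planned order, offset earlier by the lead time.

def mrp_net(gross_reqs, on_hand, scheduled_receipts, lead_time):
    """Return planned order releases per period (lot-for-lot policy)."""
    releases = [0] * len(gross_reqs)
    available = on_hand
    for t, demand in enumerate(gross_reqs):
        available += scheduled_receipts.get(t, 0)
        shortfall = demand - available
        if shortfall > 0:
            # Release the order lead_time periods earlier (floor at period 0).
            release_period = max(t - lead_time, 0)
            releases[release_period] += shortfall
            available = 0  # the planned order covers the shortfall exactly
        else:
            available -= demand
    return releases

# Example: 6 periods, 40 units on hand, a scheduled receipt of 20 due in
# period 1, and a 2-period lead time.
print(mrp_net([30, 30, 30, 0, 30, 30], 40, {1: 20}, 2))
# → [30, 0, 30, 30, 0, 0]
```

A real MRP run would additionally explode the bill of materials level by level, netting each component in the same way; MRPII then layers capacity and resource planning on top of this logic.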
As process improvement, and the standardization of products and processes that facilitated it, took place, there was increasing awareness that end customers were requiring ever increasing levels of choice and differentiation (Christopher, 2000). This led firms to consider that they had become too lean and rigid and should be focussing on creating agile supply chains able to adapt to changing demand (Aitken et al., 2002). The agile approach was blended with lean (Naylor et al., 1999) as demand could be decoupled into push and pull to create greater choice for the customer while still retaining some control (van Hoek, 2001). The 1990s also saw a focus upon core competences within firms (Prahalad and Hamel, 1990). This led to increased outsourcing of non-core activities to lower cost economies. Political factors such as unilateral liberalization measures and the removal of formal trade barriers have contributed to the growth of developing countries exporting to high wage economies (Gereffi, 1999), encouraging firms to source from lower cost economies. This, in turn, fueled both demand for products from developed economies and the competition to supply. It changed the topology of the supply chain as well as the magnitude, profile, and direction of material and information flows. Significant changes have also taken place in the understanding of how a firm secures a competitive position. Traditionally, superior competitive advantage was seen to be a function of how a firm organized its resources to differentiate itself from the competition (Barney, 1991) and its ability to operate at a lower cost (Porter, 2008). The prevailing tendency was for a firm to control as much of its upstream and downstream activities as possible, often leading to high levels of vertical integration (i.e. within a firm rather than with suppliers). At the time of the original article, firms focussed more on managing, in-house, core competences, i.e.
those competencies or capabilities that deliver value (as perceived by the customer), and outsourcing non-core activities to specialist - often lower cost - third parties. This resulted in the advent of 3PL providers and supply chain integrators. All of this points toward an explosion in SCM thinking over the last 25 years. Figure 2 presents a timeline of SCM strategies, tools, and techniques. The dates in the figure are based upon when, in our experience, these practices were popularized, not introduced. Supply chains are inherently unstable. A key role of SCM is to minimize the risks and uncertainty associated with the naturally occurring unstable state of the supply chain (Lee, 2002). Forrester's (1958) early work on supply chain dynamics highlighted the problem of the reliable "transmissivity" of information through the supply chain. Thus, Lee et al.'s (1997) characterization of the "bullwhip" effect demonstrates how demand and upstream load are both delayed and distorted as information progresses upstream, such that variation is amplified along the supply chain. This instability is compounded by the inevitable challenges of forecasting and data integrity. Technology has been used to good effect to improve information flows (Lee et al., 2000). However, the increased remoteness of a global market and supply base, together with the need to manage an increasingly complex network, has exacerbated the challenge. In addition to the issues caused by information distortion and a global supply base, the twenty-first century is a time when organizations are facing pressure - from consumers and other stakeholders - to have green and ethical supply chains (Srivastava, 2007). This requires organizations to become more transparent in disclosing their sources of supply, which increases costs and may place pressure on moving away from the lowest cost economies, where labor rights can be poor.
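The amplification described by Lee et al. (1997) can be reproduced with a simple simulation. The order-up-to rule, moving-average forecast, and all parameters below are hypothetical simplifications chosen for illustration, not a model of any study cited here:

```python
# Illustrative bullwhip-effect simulation (hypothetical parameters): each
# echelon forecasts downstream demand with a short moving average and places
# orders via an order-up-to rule; order variance grows as we move upstream.
import random
import statistics

def echelon_orders(demand, window=4, safety=1.0):
    """Orders placed upstream by a stage facing the given downstream demand."""
    orders, inventory = [], 0.0
    for t, d in enumerate(demand):
        history = demand[max(0, t - window):t] or [d]
        forecast = sum(history) / len(history)
        target = safety * window * forecast  # order-up-to level
        inventory -= d                       # ship against demand
        order = max(target - inventory, 0.0) # order back up to the target
        inventory += order
        orders.append(order)
    return orders

random.seed(42)
retail = [10 + random.gauss(0, 1) for _ in range(200)]   # stable end demand
wholesale = echelon_orders(retail)                       # one echelon up
factory = echelon_orders(wholesale)                      # two echelons up

# Ignoring the warm-up transient, variance grows at each upstream echelon.
print(statistics.variance(retail[10:])
      < statistics.variance(wholesale[10:])
      < statistics.variance(factory[10:]))
# → True
```

Even with perfectly stable end-customer demand, the interaction of forecasting and replenishment rules amplifies order variation upstream, which is precisely why improved information flows (Lee et al., 2000) matter.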
There are two major strategies for winning business: differentiation and cost advantage (Porter, 2008). Historically, the focus for securing differentiation has been product differentiation. With life cycles now measured in months, sometimes weeks, rather than years, the opportunities to secure sustained benefit through product differentiation are diminishing. Even when a product-based strategy prevails, the window of opportunity for maximizing profit is becoming shorter and more difficult to hit, such that a minor disruption to product availability has a major impact on financial return. The supply chain has, therefore, become either the driver or a critical enabler of differentiation. The role of the supply chain as a major driver of cost has long been recognized. Up to 75 percent of a product's cost is external to the focal firm (Trent, 2004). The supply chain, therefore, also offers considerable opportunity for delivering cost advantage. In addition to securing differentiation and cost advantage, the supply chain has taken on two further strategic imperatives: resilience and dynamism (agility, flexibility, and responsiveness) in an increasingly turbulent and uncertain world. Typically, the supply chain accounts for 50 percent of a company's assets. These comprise both fixed assets, such as buildings and machinery, and current assets, such as inventory. Assets, by their very nature, prescribe a limited range of working patterns and methods, thereby exposing an organization to significant changes in market structure. The nature and configuration of the asset base, the balance of fixed assets to current assets, and the profile of inventory and cash all influence the resilience of the supply chain and a firm's ability to mitigate risk. At an operational level, customers are becoming increasingly demanding in terms of both responsiveness and flexibility.
Accordingly, the agility of the supply chain, in terms of structure, management, systems, and processes, directly impacts the ability of an organization to respond to customer needs. The role of the supply chain, and the focus for SCM, can therefore be summarized as supporting an organization to win business competitively by addressing the strategic imperatives of differentiation, cost advantage, resilience, and dynamism (agility, flexibility, responsiveness). In the following section we discuss how these strategic imperatives, together with their drivers and enablers, influence the way in which the supply chain is configured and managed. A firm's supply chain operating model (SCOM) translates the firm's supply chain strategy, and the need to deliver the strategic imperatives, into operational terms. The design of the model needs to consider the external economic and competitive drivers, leverage current and likely future enablers, and deliver the required level of performance. Figure 3 provides an overview of a SCOM and its related dimensions. The operating model comprises a series of dimensions, each representing a distinct aspect of a firm's supply chain. The decisions a firm makes on the design of each dimension, the overall configuration, and how the dimensions interact to form an integrated supply chain determine the performance of the firm's supply chain. Firms operating in the same sector may have similar operating frameworks due to market, technological, and mimetic (i.e. the promulgation of "best" practice) influences. The detailed design and configuration will, however, be unique to each firm, reflecting localized decisions on how best to secure competitive advantage from its supply chain. A firm's SCOM is not fixed. It needs to develop in response to internal and external changes if the firm is to exploit the potential of new opportunities and maintain competitive performance.
Given the pressure to improve and the need for firms to continually challenge the performance and capabilities of their SCOM, the question is: how do firms develop their supply chains to secure and maintain value and competitive advantage? How do they adapt to changing economic drivers, take advantage of new technologies and enablers, and respond to the increasing need to deliver a differentiated offering and secure cost advantage, while ensuring a resilient and dynamic supply chain able to combat the risk of disruption and major disturbance? What change model operates? A review of supply chain development over the last 25 years suggests a model comprising periods of fundamental change followed by an ongoing focus on continuous improvement, based on a combination of process and capability improvement together with localized structural adjustments to the scope and/or topology of the supply base. Figure 4 illustrates the SCOM "dynamics of change." What is it that drives the need for fundamental change? Since the early 1990s, business process re-engineering (Hammer, 1990), lean (Womack et al., 1990), and many other improvement tools and techniques have provided valuable contributions to improving supply chain performance. Local, incremental process improvement can deliver benefits. However, ongoing reliance upon small, incremental process changes is unlikely to have a corresponding impact on the performance of the supply chain as a whole. Inevitably, continuous process improvement will be confronted by the "law of diminishing returns" as significant opportunities become scarcer and competitors copy early adopters. Process improvement is underpinned by process analysis, that is, breaking the process down into its constituent parts by mapping product and information flows in an attempt to improve understanding and expose opportunities to improve (Hines and Rich, 1997).
Such improvement is predicated on the supply chain being a repeatable process, but supply chains are inherently more complex. Supply chain performance arises from the interaction of processes within a "system"; performance is achieved through synthesis. Developing a supply chain's performance therefore requires a focus on the interaction of processes, not the optimization of isolated processes; significant improvement cannot be delivered by focussing exclusively on isolated processes. Globalization of supply chains has encouraged firms to pursue low-cost sourcing by increasing the reach of the supply base, "flipping" suppliers as cheaper alternatives emerge, chasing increased control by seeking to manage multiple tiers of supply, and splitting purchasing spend across multiple sources in an attempt to stimulate competition. Delivering short-term, localized reductions in purchase cost has significant consequences and implications, leading to increased complexity, uncertainty, and instability. As shown in Figure 5, the compound effect of the relentless pursuit of low-cost sourcing is an exponential increase in risk. Supply chain leaders relying on a strategy of continuous process and capability improvement, together with frequent structural adjustment to the supply base, to sustain their leadership position inevitably find that diminishing returns, coupled with increased risk, erode their leadership position as the performance gap over the competition reduces and the "followers" catch up. At this point a firm can be said to have hit the "Performance Frontier" (cf. Schmenner and Swink, 1998), whereby the cost and risk of further incremental change is more likely to have a destabilizing effect and a negative impact on relative or absolute performance. Securing advantage at this point requires fundamental change to the operating model, i.e.
a paradigm shift in supply chain design. Firms should seek to maintain a state of equilibrium until such time as the diminishing returns from striving to continually improve, combined with the ongoing pursuit of leveraging more out of the supply base, destabilize the supply chain, rendering the SCOM unstable. Thereafter, the way forward is to seek a step change in structure through fundamental change in order to secure a stable basis for continued growth. During the period of fundamental change there is likely to be a drop in performance while the new structures and ways of working are embedded and optimized. Supply chain development can be said to follow a path of "Punctuated Equilibrium" (cf. Gersick, 1991). This comprises an alternation between long periods of relative structural stability and brief periods of upheaval as a firm seeks competitive advantage through a process of fundamental structural change (a "paradigm shift"). During periods of stability the conceptual framework, basic organization, and operational principles of the operating model are stable and can be said to be in a state of equilibrium. The underlying activities are subject to incremental adjustments through a process of continuous improvement able to respond to changes in the external environment, competitive pressures, and operational capabilities. The state of equilibrium continues as long as the underlying changes deliver a positive contribution. Once the performance frontier has been reached, a firm needs to seek fundamental structural change to secure a competitive advantage and establish a platform for further continuous improvement. The principle of integrating the supply chain as a cornerstone of SCM was introduced in the early 1980s. Since then the business context has changed and the structure of SCOMs has developed accordingly. The limitations of supply chain models based on "linear" physical flows have been exposed (e.g.
Choi and Wu, 2009c; Bastl et al., 2012) and new phases of networked supply chains have developed. Figure 6 suggests the need to add two further stages to the development model proposed by Stevens in 1989. The additional stages are predicated on the need for integration but reflect the changes in context and capabilities. The transition between phases represents the point at which the extant phase begins to show diminishing returns for the focal firm. Internal supply chain integration transitioned to external supply chain integration as there was a limit to the performance improvement that could be achieved without involving suppliers and customers. External supply chain integration transitioned to goal directed networked supply chains as firms understood that supply chains were non-linear networks and that there would be benefit in non-strategic (or non-integrated) suppliers having visibility of demand. We suggest that - at the time of writing - we are undergoing a transition to devolved, collaborative supply chain clusters. This transition is occurring due to the increased complexity, risk, and costs that are borne by focal firms attempting to manage large networks. By effectively outsourcing elements of this management to lead suppliers, collaboration is devolved into clusters. Clusters are smaller networks that are more easily managed. For example, Zara has popularized the localized, collaborative cluster model (cf. Ghemawat, 2005), although this model currently tends to be implemented in industries with relatively simple products or services, or around a single industry (e.g. Silicon Valley). The automotive industry also uses lead suppliers to coordinate clusters. The early phases of development, internal and external integration, were addressed in the original article and are briefly revisited below.
Internal integration
Internal integration represents the evolution of a firm's SCOM from the functional separation of the 1970s to a model based on the "closed loop" business and resource planning of the late 1980s. Functional separation was characterized by individual functions having their own agendas with limited interaction, resulting in high unit costs, high levels of inventory, and poor customer service. The objective for most supply chains was inventory management based on aggregate inventory, with stock replenishment using re-order point and economic order quantity techniques, and limited recognition of the needs of production plans or customer demand. At this time the focus for SCM was to balance supply and demand within the constraints of the business plan. The scope of the supply chain model included commercial, production, technical, purchasing, finance, and materials management, and was underpinned by joined up thinking, working, and decision making.
External integration
External integration involves extending the scope of the integrated supply chain to include supplier integration, distribution integration, and customer integration. Supplier integration focusses on improving the performance of the supply chain between a firm and its supply base. It involves sharing information between both parties, enabling a firm to influence costs, quantities, and the timing of deliveries and production in order to streamline the product flow and move to a collaborative relationship. Supplier integration often involves a partnership model, with deeper, more long-term relationships with fewer vendors that, in turn, tend to have relationships with fewer customers. This helps build communication channels and trust, which facilitates more extensive knowledge sharing. Supplier integration involves suppliers taking increased responsibility for aspects of availability and product development.
It involves increased interaction between businesses and functions to increase productivity and availability and reduce the risk of non-compliance. Distribution integration focusses on detailed resource and flow management through the outbound logistics network in order to reduce logistics and distribution costs and provide increased demand visibility. The focus moves away from the efficient management of transport toward planning and controlling the efficient forward and reverse flows and storage of goods and related information as part of an integrated supply chain. Customer integration involves leveraging the supply chain's capabilities as part of the customer proposition, with a firm collaborating with customers to add value to both parties. The cornerstones of supply chain customer collaboration are cultural and process integration, whereby both parties contribute their unique insights and capabilities to develop a mutually agreed forecast of demand that meets the needs of the customer within the constraints of the firm. Customer integration is well operationalized by Collaborative Planning, Forecasting, and Replenishment (CPFR). The benefits of CPFR are well documented, typically in the order of a 10-40 percent inventory reduction across supply chains (Lapide, 2010). Despite the benefits of internal and external integration, the wider business landscape has changed, resulting in the need to conceptualize the new SCOMs of goal directed networked supply chains and devolved, collaborative supply chain clusters. We turn to these next.
Goal directed networked supply chain
Early SCOMs focussed on the linear relationships and flows between customers and suppliers. While the linear perspective may have reflected simplified material flows and aided firms in developing techniques for planning and controlling a physical supply chain, the approach quickly diverged from evolving reality.
The dramatic increase in access to information in the late 1990s, the advent of internet communication, and the pursuit of global trading and low-cost sourcing caused leading firms to revise their perception and management of supply chains from physical flows to information flows. Recognizing the supply chain as a network of relationships (e.g. Harland, 1996), not a sequence (or chain) of transactions, enabled leading firms to gain improved performance, operational efficiencies, and ultimately sustainable competitiveness (e.g. Choi and Hong, 2002). Figure 7 presents an illustration of a networked supply chain. This model is based on recognizing that the supply chain is a non-linear network with connections between firms. It acknowledges that there can be relationships between suppliers and customers, and that having visibility of the network can uncover potential risks (cf. Choi and Hong, 2002). The culture and organization of most early adopters of the network perspective was invariably based on a "traditional" command and control style of management, underpinned by a centrally based structure. This manifested itself in a desire to control the sourcing of the bill of material by engaging in directed sourcing, whereby the firm establishes relationships with second- and third-tier suppliers and directs the top-tier supplier to source material from them. This SCOM is referred to as a goal directed networked supply chain because supplier relationships and sourcing strategies are aligned with the firm's overall cost, quality, and service goals. One of the key challenges of managing within networks is the presence of indirect relationships (cf. Choi and Wu, 2009a). In Figure 7, an example of an indirect relationship is the one between the supplier and the customer, represented as a dashed line. For example, Amazon often uses a 3PL to fulfill customer orders. This creates a direct relationship between the 3PL and the customer.
The customer's satisfaction with Amazon thus becomes reliant upon the performance of the 3PL (cf. Choi and Wu, 2009b). This type of structural arrangement is referred to as a triad, with all firms within the triad being interdependent. The critical issue within the network is the management by the focal firm (e.g. Amazon) of the indirect relationship (e.g. between 3PL and customer). With a networked supply chain there is a significant burden in coordinating all of the direct and indirect relationships in order to meet the goal of the focal firm. This has led firms to create SCOMs that devolve coordination responsibilities to lead suppliers (occasionally known as "Tier 0.5"), who then coordinate collaborative clusters.
Devolved, collaborative, supply chain clusters
The next step in the evolution of SCOMs is the transition to devolved, collaborative supply chain clusters. Choi and Hong (2002) examined the traits of supply networks in terms of formalization, centralization, and complexity. Formalization is closely associated with standardization through rules and procedures as well as norms and values. Centralization addresses the degree to which authority or power of decision making is concentrated or dispersed across the network. Complexity refers to the structural differentiation or variety that exists in the network. The three dimensions form a useful basis for highlighting the limitations of goal directed networked supply chains and the emergence of devolved, collaborative, supply chain clusters. The centralized organization structure and underlying need for formality to support the central control of a goal directed networked supply chain gave rise to a rigid, inflexible structure unable to cope with the turbulent environment of the last ten years.
Similarly, the increase in reach, coupled with attempts to control the bill of materials, significantly increased the number of nodes and connections in the network, in addition to heavily impacting transaction costs within a firm. The work on the empirical relationship between system size, connectance, and stability carried out by Disney et al. (1997) identified two phenomena relevant to SCOM design: first, as the number of nodes increases, the probability of a stable operation decreases dramatically; and second, as system connectance increases, the network swiftly crosses the "switching" line and becomes unstable. The implications for supply chain performance are clear. The complexity inherent in a large supply chain network is likely to render it unstable, resulting in a major deterioration in performance. The complexity of the network also leads to an increase in coordination cost. Developing a SCOM to equip a firm to manage a global supply network needs to address the issue of how to accommodate and coordinate the needs and activities of multiple participants without undue complexity, cost, or formality. It should provide a level of governance sufficient to ensure that participants engage in collective and mutually supportive actions, such that any conflict can be addressed and the objectives of the firm's supply chain are met. As presented in Figure 8, we posit that the future global integrated supply chain model will be devolved, collaborative, supply chain clusters. This model is based on a series of self-governing clusters, each comprising a network of suppliers and/or sub-contractors associated by type, product structure, or flow. All non-core activities are outsourced by the firm (or lead organization) across a range of clusters. Collaboration within and across each cluster is based on goal consensus, whereby the goals for each cluster are aligned and managed in accordance with the goals of the firm.
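The coordination argument can be made concrete with back-of-envelope arithmetic. The supplier counts, function names, and the even split across clusters below are invented for illustration; connectance is expressed simply as the share of possible directed links actually present:

```python
# Back-of-envelope sketch (all numbers invented): how devolving coordination
# to lead ("Tier 0.5") suppliers reduces the focal firm's direct burden.

def connectance(nodes, links):
    """Share of possible directed links actually present in the network."""
    return links / (nodes * (nodes - 1))

def coordination_burden(suppliers, clusters=None):
    """Direct relationships each coordinating party must manage.

    clusters=None models a goal directed network in which the focal firm
    manages every supplier; otherwise suppliers are split evenly across
    lead-supplier clusters.
    """
    if clusters is None:
        return {"focal_firm": suppliers}
    per_cluster = (suppliers - clusters) // clusters
    return {"focal_firm": clusters, "each_lead_supplier": per_cluster}

print(coordination_burden(120))              # focal firm manages all 120 links
print(coordination_burden(120, clusters=6))  # focal firm manages only 6;
                                             # each lead supplier manages 19
print(round(connectance(121, 120), 4))       # hub-and-spoke network is sparse
```

The total number of relationships in the network is unchanged; what the clustered model changes is where the coordination load sits, which is the essence of the devolved, collaborative cluster argument.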
Operational coordination, planning, and governance across clusters are facilitated by the lead organization through an integrated collaboration, operations management, and planning protocol, supported by clear lines of responsibility and accountability and a visible performance management system. This operates within a network-wide culture where economy of scale and efficiency are subordinate to service, resilience, and effectiveness. Research into clusters is by no means new (cf. Porter, 1998; Sheffi, 2012). However, much of the previous work has focussed on the innovativeness of the cluster, the specialization of competences into an industrial district (e.g. Pinch et al., 2003), or knowledge management within the cluster (e.g. Miles and Snow, 2007). With devolved, collaborative supply chain clusters the focus moves from the cluster to clusters, and to the governance of the clusters. This is challenging as the management of the clusters relies upon "architectural knowledge" (cf. Tallman et al., 2004), which is external to the firms within the cluster. Architectural knowledge in the context of devolved, collaborative supply chain clusters relates to understanding the network as a system and the structures and routines required to coordinate it effectively (cf. McGaughey, 2002). Within devolved, collaborative supply chain clusters, SCI moves away from being a monolithic approach to one that enables the "modular" connecting of the focal firm to the different clusters. This is facilitated via shared values, agendas, thinking, and norms. We argue that these are required to "lubricate" the flow of information, knowledge, and insight between the devolved clusters and the lead organization. Table I contrasts the four operating models depicted in Figure 6. The table summarizes the key characteristics of each dimension against the four primary stages of supply chain development.
The change from one operating model to another will, we suggest, occur at each "punctuation" (cf. Gersick, 1991). Table I is indicative (or descriptive) rather than definitive (or prescriptive), illustrating changes to process, structure, relationships, and emphasis. The evolution of SCOMs has been influenced by a number of factors. The growing realization that SCM is critical to a firm's success has made it more strategic, with a long-term focus. Firms have also focussed more on what is core to their success and have outsourced that which is not. This has been balanced by the need to understand and coordinate supply chain networks to increase effectiveness and reduce risks. The coordination costs of this are potentially high, and firms have built collaborative relationships with lead suppliers who coordinate specialized clusters whose capability is leveraged. Overall, this means that SCOMs have moved from an attempt to control the network toward a realization that they can, at best, coordinate it. Planning now takes place at a strategic level and considers not just materials and capacity, but capability and the long-term goals of the firm. This is facilitated by better use of information, knowledge, and insight so that pro-active decisions can be made. A further enabler is metrics and accounting systems that support collaborative behavior and focus on the efficacy of the network. Now that we have discussed changes to SCM and SCI since the original article, we turn to examining whether SCM has delivered on its promise. Advances in SCM, whether championed by technology providers, consultants, academics, or practitioners, have invariably been accompanied by the promise of improved business performance - notably, reductions in inventory and operating costs and an improved customer experience. It is appropriate, therefore, after 25 years to ask whether SCM has delivered on its promise.
Horvath (2001) suggested that the most considerable benefits to a business with advanced SCM would be radically improved customer responsiveness, customer service and satisfaction, increased flexibility for changing market conditions, improved customer retention, and more effective marketing. Ellram and Liu (2002, p. 30) suggested: "Supply chain management can significantly affect a company's financial performance - both positively and negatively." Sales growth, operating profit margin, working capital investment, and fixed capital investment impact shareholder value, and all of these are influenced by SCM (Lambert and Burduroglu, 2000). However, it is difficult to empirically link the evolution of SCM with financial performance to evaluate whether SCM has delivered on its promise. In an attempt to assess the impact of SCM over time, and consistent with Ellinger et al.'s (2011, 2012) assessments of top SC performers' financial performance compared to that of industry rivals and their wider industries, we examine the performance over time of a number of companies, across multiple sectors, on three indicators of SC performance. These indicators are Return on Net Assets (RONA), inventory turns, and the unified proxy for SC performance developed by Johnson and Templar (2011). We assess these indicators for ten companies from a range of industries, for the years 1997-2014, selected from the Fortune Global 500 Top 25 and companies that appear in the Gartner Supply Chain Top 25 rankings between 2010 and 2015. The convenience selection is intended to ensure representation of companies at the forefront of exploiting leading edge supply chain development, to assess whether acknowledged leaders in SCM practice have consistently and positively influenced financial performance over an extended time frame. The results are shown in Table II. The analysis suggests that the overall impact of improving SCM practices has been equivocal (13 positives to 17 negatives).
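Two of the three indicators are standard financial ratios that can be computed directly from financial-statement items. The sketch below uses one common definition of each (definitions of RONA vary across studies, and Johnson and Templar's (2011) unified proxy is not reproduced here); all figures are invented for illustration:

```python
# Illustrative computation of two standard SC performance indicators
# (one common definition of each; all figures invented).

def rona(net_income, fixed_assets, net_working_capital):
    """Return on Net Assets: profit relative to the net asset base."""
    return net_income / (fixed_assets + net_working_capital)

def inventory_turns(cost_of_goods_sold, avg_inventory):
    """How many times inventory is sold and replaced in a period."""
    return cost_of_goods_sold / avg_inventory

# Hypothetical firm: $80m net income, $400m fixed assets, $100m net working
# capital, $600m cost of goods sold, $75m average inventory.
print(round(rona(80, 400, 100), 3))  # → 0.16
print(inventory_turns(600, 75))      # → 8.0
```

Tracking ratios such as these across years, rather than in a single period, is what allows the kind of longitudinal comparison against industry rivals reported in Table II.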
Inventory performance has improved for half the firms. The supply chain indicator suggests similar levels of improvement. Overall, RONA shows an adverse impact (three positives to seven negatives). The authors acknowledge the limitations of the analysis and the impact of recent changes to the global economy, but suggest that while it points to some firms realizing benefit from improved SCM, the majority have not realized the full potential of their supply chains. We suggest that this failure to realize benefits is due to four possible factors. The first is that firms are not recognizing that the SCOMs that have worked so well, for so long, may no longer be appropriate in today's volatile, uncertain, complex, and ambiguous world. The second is that the solutions promulgated for SC performance improvement by technology providers, consultants, academics, and practitioners could be regarded as the emperor's new clothes, unable to live up to the hype. The third is that the users of the solutions are not equipped to realize the benefits despite the robustness of the thinking; this can be due to the technical complexity of the solution or a lack of capability. The fourth, and we suggest most likely, is that the implementation of solutions to improve performance is complex and requires large-scale change (cf. van Hoek et al., 2010). The time-scale and complexity of such radical change may lead firms to abandon or partially implement solutions before the performance benefits are realized. Understanding and removing the barriers that impede benefits realization will require a concerted effort by the supply chain community (academics, advisors, technology providers, and practitioners) to work collaboratively to operationalize SCM thinking and deliver measurable, sustainable benefit on a consistent basis. The current challenges presented by a global economy, accelerating rates of change, and the emergence of new and innovative competitors will undoubtedly persist.
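Two of the three indicators used above are standard financial ratios that can be computed directly from published accounts; the third, Johnson and Templar's (2011) unified proxy, is defined in their paper and is not reproduced here. A minimal sketch of the first two follows; the figures and the particular RONA variant shown are illustrative assumptions, not values from Table II:

```python
def inventory_turns(cogs, avg_inventory):
    """Inventory turns: cost of goods sold over average inventory held."""
    return cogs / avg_inventory

def rona(operating_income, fixed_assets, current_assets, current_liabilities):
    """Return on Net Assets, taking net assets as fixed assets plus net
    working capital. This is one common definition; the article does not
    spell out the exact variant used in Table II."""
    net_assets = fixed_assets + current_assets - current_liabilities
    return operating_income / net_assets

# Illustrative figures (in $m), not drawn from Table II:
print(round(inventory_turns(cogs=800, avg_inventory=100), 1))  # 8.0
print(round(rona(operating_income=120, fixed_assets=400,
                 current_assets=300, current_liabilities=200), 2))  # 0.24
```

Tracking these two ratios year on year, as the analysis above does, separates inventory efficiency from the broader asset productivity that SCM is claimed to influence.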
The role of SCM as an enabler of business success will not go away. It is more likely that the pressure on the supply chain will increase. SCM's response needs, first, to find a more effective way of aligning thinking and practice and accelerating the flow of promising practices across the supply network and, second, to address the challenge of ever increasing complexity. The stages of SCI presented here represent what we believe to be the next stages in the evolution of SCI. Goal directed supply networks evolved from external integration when firms realized that they existed within a network and that non-strategic suppliers could benefit from the sharing of demand data to facilitate planning. The next stage of evolution was devolved, collaborative clusters. Clusters arose as focal firms realized that the coordination of a network was burdensome and that lead suppliers could manage clusters to reduce these coordination costs. This brings us to the current state-of-the-art, but what could the next 25 years have in store for SCM? Changes to supply chains over the next quarter of a century will be driven by changes in the business environment, technology, economies, and customer preferences. There is no doubt that the business environment will become even more volatile, uncertain, complex, and ambiguous (cf. Bennett and Lemoine, 2014). As such, supply chains need to be configured to navigate the future environment and will move ever closer to becoming complex adaptive systems (cf. Choi et al., 2001). We are also seeing a rise in technologies that hold promise for tomorrow's supply chains and the democratization of product and process knowledge (Anderson, 2012). These include big data and additive manufacturing technologies such as 3D printing (Brennan et al., 2015). Overlaid upon this are changes within developing economies as they industrialize and wages in those countries increase.
As countries move from developing to developed, they become less attractive as manufacturing destinations because the cost benefits are eroded. A reduction in cost benefits, coupled with higher logistics costs, long transport times, and increased risks, has influenced firms to move production closer to the point of consumption (Ellram et al., 2013); a phenomenon known as re-shoring or near-shoring. A further complicating factor is that customers will require even more differentiation and we will move toward "markets of one." We are already seeing this on a limited scale with the customization of sportswear through the MI-Adidas and Nike-ID initiatives, but these provide somewhat limited choices. We suggest that customers will require greater levels of customization. Given these changes, what will the supply chain of the future look like? We suggest that the SCOM of the future will be atomized, adaptive fulfillment communities. They will be atomized - rather than clustered - because the need for, and intensity of, collaboration will increase, leading to smaller, less intense clusters which we class as communities. The relationships within these communities (i.e. atomized clusters) will be underpinned by shared norms, values, and behaviors. They will be adaptive to both supply and demand as well as being reactive and pro-active to wider geo-political, business, economic, environmental, and social factors. The networks of the future will also be democratic to supply and demand, hence our use of the term "fulfillment." Integration will be philosophical and driven by behaviors, insight, and information, not processes and systems. SCM and SCI have undergone rapid evolution over the past quarter of a century; we look forward to the next 25 years.
Figure 1 Stages of supply chain development
Figure 2 A timeline of SCM strategies, tools, and techniques
Figure 3 Dimensions of a supply chain operating model
Figure 4 Supply chain operating model dynamics of change
Figure 5 The compound effect of the relentless pursuit of low-cost sourcing
Figure 6 Phases in supply chain management development
Figure 7 Networked supply chain
Figure 8 Devolved, collaborative supply chain clusters
Table I Comparison of the four operating models
Table II RONA, inventory turns, and supply chain proxy values for ten selected companies from 1997-2014
[SECTION: Design/methodology/approach] The authors take a conceptual approach to suggest that SCM is undergoing a transition to devolved, collaborative supply chain clusters. In addition, the authors consider imperatives and models for supply chain change and development. In line with the 1989 work, many of the observations in this invited paper are based on the primary author's experience. The authors use a selection of financial data from leading firms to assess whether benefits attributed to SCM and changes in supply chain operating models have affected financial performance.
[SECTION: Findings] Some 25 years ago, one of the authors wrote an article (Stevens, 1989) that sought to explicate the state-of-the-art in supply chain management (SCM). This was at a time when SCM was still in its infancy and only starting to gain currency as an area of interest for practitioners and academics (Oliver and Webber, 1982). At the time, the organizational functions involved in managing the availability of products and satisfying customer orders operated with relative independence, often with conflicting agendas. The purpose of the original article was to facilitate understanding and encourage organizations to exploit the potential for managing their supply chains as part of a joined up (integrated) whole. The original article addressed the need to manage the supply chain at the strategic, tactical, and operational levels as well as recognizing that the scope of an organization's supply chain extended to the furthest reaches of its network of customer and supplier relationships. Stevens (1989) posited that achieving a state of "integration" required a firm to progress through a number of defined stages of development. The stages identified at the time and illustrated in the original article are shown in Figure 1. As Figure 1 shows, the original article argued that SCM developed from a baseline of functional (independent) silos and the first level of integration was across functions (akin to process integration). This then moved to full internal integration involving a seamless flow through the internal supply chain, and finally to external integration embracing suppliers and customers. The primary benefits were identified as improved customer service and reduced inventory and operating costs. Since the original article, much has changed. The world today is more complex and turbulent (Christopher and Holweg, 2011). The reach of many supply chains has increased in pursuit of growth and low-cost sourcing (Fredriksson and Jonsson, 2009). 
Technological advances have fueled the development of new business models and ways of working (Johnson and Mena, 2008). The advent of new and maturing supply chain strategies (Christopher and Towill, 2002), tools and techniques, together with increased environmental and ethical concerns (Pagell and Wu, 2009), has increased the recognition of SCM as a driver and enabler of business performance (Johnson and Templar, 2011). This has led to the adoption of new supply chain practices that have elevated the role of SCM within many organizations. While much has changed, the fundamental need for "joined up" thinking and working and the need to integrate the supply chain has not. The Gartner Supply Chain Group (O'Marah and Hofman, 2010) places integration as one of the elements of creating a demand-driven supply chain strategy that leads to improved firm performance (Ellinger et al., 2011, 2012). Thus, the need for SCI is still the same, if not greater than before. What has changed since the original article is the context within which supply chains operate, and the enablers of change and performance improvement. As a result, the relevance of narrow, linear-based supply chain models has been challenged as firms have looked more and more toward networked and collaborative supply chain strategies to deliver superior performance. The original article reported on the state-of-the-art in SCM. We retain that objective with this invited work. The aim is therefore not to re-visit supply chain integration per se - as advanced in 1989 - but to explore what the future may hold and how that relates to SCI. Therefore, on the basis that 25 years on is a good time to reflect on the changes that have taken place, the aim of this invited work is to explicate developments in SCM and SCI, and ask the questions: has SCM delivered on its promise? And, what does the future hold?
Early on in the development of SCM, firms realized the limitations of isolated improvement initiatives and misaligned functional performance agendas and began managing internal processes and flows on a much more integrated basis (Stevens, 1989). This extended the scope of integration to include upstream suppliers and downstream customers. Since the original article, there has been a growing consensus concerning the importance of integrating internal processes and flows, suppliers, and customers (e.g. Tan et al., 1998; Frohlich and Westbrook, 2001). Despite research confirming the positive benefits of supply chain integration (Prajogo and Olhager, 2012), and its importance to a firm's success (Flynn et al., 2010), ambiguity remains as to what constitutes supply chain integration (Fabbe-Costes and Jahre, 2008; Autry et al., 2014). We posit that supply chain integration is the alignment, linkage and coordination of people, processes, information, knowledge, and strategies across the supply chain between all points of contact and influence to facilitate the efficient and effective flows of material, money, information, and knowledge in response to customer needs. SCI is the foundation of SCM (Pagell, 2004). SCI is characterized by "joined up thinking, working, and decision making," underpinned by principles of flow, simplicity, and the minimization of waste. SCI may be enabled by systems and technology such as e-commerce (Gunasekaran and Ngai, 2004), Manufacturing Resource Planning (MRPII), Enterprise Resource Planning (ERP) (Bagchi et al., 2005), and RFID (McFarlane and Sheffi, 2003), but SCI is not just about technology. Integrating the supply chain refers as much to the need for strategic and operational integration within and across the business (Swink et al., 2007) as it does to relational integration with customers and suppliers (Benton and Maloni, 2005). 
The scope of SCI therefore includes governance, organization structure, systems, relationship management, business strategy, process design, and performance management. SCM as a discipline has evolved rapidly. The early focus of SCM emerged as organizations began to improve their inventory management and production planning and control. The aim of these practices was to improve production efficiencies and ensure that the capacity of capital assets and machinery was utilized efficiently. This extended upstream to include the management of transport of raw materials at a time when firms were relatively vertically integrated. The next phase in the evolution of SCM was the systematization of materials, production, and transport management. This began with material requirements planning (MRP), focussing on inventory control (Orlicky, 1975). MRP expanded to become MRPII by incorporating the planning and scheduling of the resources involved in manufacturing. Both MRP and MRPII were conceived in the 1960s but did not gain prominence until the 1980s (Wight, 1981). MRP and MRPII evolved to become ERP, in an attempt to gain greater visibility over the entire enterprise. The mid to late 1980s brought intense introspection from western firms concerning the threat of Japanese firms that were perceived to be more competitive due to higher productivity (Hayes and Wheelwright, 1984). This period led to the implementation by firms of "Japanese" practices such as total quality management (TQM) and lean (Womack et al., 1990). These practices focussed on reducing inventory through improving quality and flow and involving suppliers in product and process design. The next phase in the evolution of SCM included the introduction of other process improvement practices (e.g. six sigma) that sought to provide a more concrete improvement method compared to TQM or lean (Montgomery and Woodall, 2008).
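The gross-to-net logic at the heart of MRP, mentioned above, can be sketched in a few lines. The item data below are illustrative assumptions; real MRP systems layer lot-sizing policies, safety stock, and multi-level bill-of-material explosion on top of this single-item record:

```python
def mrp_plan(gross, on_hand, receipts, lot_size, lead_time):
    """Single-item MRP record: compute planned order releases.

    Textbook gross-to-net netting (cf. Orlicky, 1975): in each period,
    net requirement = gross requirement minus projected available
    balance plus scheduled receipts; any shortfall is covered by a
    planned order, rounded up to a lot multiple and released
    lead_time periods earlier.
    """
    releases = [0] * len(gross)
    pab = on_hand  # projected available balance carried forward
    for t in range(len(gross)):
        net = gross[t] - (pab + receipts[t])
        planned = 0
        if net > 0:
            planned = -(-net // lot_size) * lot_size  # round up to lot
            if t - lead_time >= 0:
                releases[t - lead_time] = planned
        pab = pab + receipts[t] + planned - gross[t]
    return releases

# Illustrative item: alternating demand, 30 on hand, one scheduled
# receipt of 20 in period 1, lot size 50, two-period lead time.
print(mrp_plan([10, 40, 10, 40, 10, 40], 30,
               [0, 20, 0, 0, 0, 0], 50, 2))  # → [50, 0, 50, 0, 0, 0]
```

The lead-time offset is what MRPII later extended to machine and labor capacity, and ERP to the enterprise as a whole.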
As process improvement, and the standardization of products and processes that facilitated it, took place, there was increasing awareness that end customers were requiring ever increasing levels of choice and differentiation (Christopher, 2000). This led firms to consider that they had become too lean and rigid and should be focussing on creating agile supply chains to adapt to changing demand (Aitken et al., 2002). The agile approach was blended with lean (Naylor et al., 1999) as demand could be decoupled into push and pull to create greater choice for the customer while still retaining some control (van Hoek, 2001). The 1990s also saw a focus upon core competences within firms (Prahalad and Hamel, 1990). This led to a rise in the outsourcing of non-core activities to lower cost economies. Political factors such as unilateral liberalization measures and the removal of formal free trade barriers have contributed to the growth of developing countries exporting to high wage economies (Gereffi, 1999), encouraging firms to source from lower cost economies. This, in turn, fueled both demand for products from developed economies and the competition to supply. This changed the topology of the supply chain as well as the magnitude, profile, and direction of material and information flows. Significant changes have also taken place around the understanding of how a firm secures a competitive position. Traditionally, superior competitive advantage was seen to be a function of how a firm organized its resources to differentiate itself from the competition (Barney, 1991) and its ability to operate at a lower cost (Porter, 2008). The prevailing tendency was for a firm to control as much of its upstream and downstream activities as possible, often leading to high levels of vertical integration (i.e. within a firm rather than with suppliers). At the time of the original article, firms focussed more on managing, in-house, core competences, i.e.
those competencies or capabilities that deliver value (as perceived by the customer), and outsourcing non-core activities to specialist - often lower cost - third parties. This resulted in the advent of 3PL providers and supply chain integrators. This all points toward an explosion in SCM thinking over the last 25 years. Figure 2 presents a timeline of SCM strategies, tools, and techniques. The dates in the figure are based upon when, in our experience, these practices were popularized, not introduced. Supply chains are inherently unstable. A key role of SCM is to minimize the risks and uncertainty associated with the naturally occurring unstable state of the supply chain (Lee, 2002). Forrester's (1958) early work on supply chain dynamics highlighted the problem of the reliable "transmissivity" of information through the supply chain. Thus, Lee et al.'s (1997) characterization of the "bullwhip" effect demonstrates how demand and upstream load are both delayed and distorted as information progresses upstream, such that variation is amplified along the SC. These dynamics, coupled with the inevitable challenges of forecasting and data integrity, render the supply chain unstable. Technology has been used to good effect to improve information flows (Lee et al., 2000). However, the increased remoteness of a global market and supply base, together with the need to manage an increasingly complex network, has exacerbated the challenge. In addition to the issues caused by information distortion and a global supply base, the twenty-first century is a time when organizations are facing pressure - from consumers and other stakeholders - to have green and ethical supply chains (Srivastava, 2007). This requires organizations to become more transparent in terms of disclosing their sources of supply, which increases costs and may place pressure on moving away from the lowest cost economies, where labor rights can be poor.
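The amplification that Lee et al. (1997) characterize can be illustrated with a toy serial supply chain in which each echelon forecasts by exponential smoothing and places order-up-to orders, so that its orders become the "demand" seen by the next echelon upstream. All parameter values are illustrative assumptions, not drawn from the article:

```python
import random
import statistics

def simulate_bullwhip(periods=200, echelons=3, lead_time=2,
                      alpha=0.3, seed=42):
    """Propagate a noisy demand signal up a serial supply chain.

    Each echelon smooths the demand it observes, sets an order-up-to
    level covering the lead time, and orders demand plus the change in
    that level. Returns the variance of the signal at each stage
    (consumer demand first, then each upstream echelon's orders).
    Illustrative sketch of bullwhip amplification, not a calibrated model.
    """
    rng = random.Random(seed)
    demand = [100 + rng.gauss(0, 10) for _ in range(periods)]
    series = [demand]
    signal = demand
    for _ in range(echelons):
        forecast = signal[0]
        prev_target = (lead_time + 1) * forecast
        orders = []
        for d in signal:
            # Update the smoothed forecast with the latest observation.
            forecast = alpha * d + (1 - alpha) * forecast
            target = (lead_time + 1) * forecast  # order-up-to level
            # Order = observed demand + change in the order-up-to level.
            orders.append(max(0.0, d + (target - prev_target)))
            prev_target = target
        series.append(orders)
        signal = orders
    return [statistics.pvariance(s) for s in series]

variances = simulate_bullwhip()
print([round(v, 1) for v in variances])
```

With these settings the variance of orders grows at each upstream echelon even though consumer demand is stationary: forecasting and order-up-to replenishment alone are enough to delay and distort the signal as it travels upstream.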
There are two major strategies for winning business: differentiation and cost advantage (Porter, 2008). Historically, the focus for securing differentiation has been product differentiation. With life cycles now measured in months, sometimes weeks, rather than years, the opportunities to secure sustained benefit through product differentiation are diminishing. Even when a product-based strategy prevails, the window of opportunity for maximizing profit is becoming shorter and more difficult to hit, such that a minor disruption to product availability has a major impact on financial return. The supply chain has, therefore, become either the driver or the critical enabler of differentiation. The role of the supply chain as a major driver of cost has long been recognized. Up to 75 percent of a product's cost is external to the focal firm (Trent, 2004). The supply chain, therefore, also offers considerable opportunity for delivering cost advantage. In addition to securing differentiation and cost advantage, the supply chain has taken on two further strategic imperatives arising from the need to ensure resilience, responsiveness, agility, and flexibility in an increasingly turbulent and uncertain world. Typically, the supply chain accounts for 50 percent of a company's assets. These comprise both fixed assets, such as buildings and machinery, and current assets, such as inventory. Assets, by their very nature, prescribe a limited range of working patterns and methods, thereby exposing an organization to significant changes in market structure. The nature and configuration of the asset base, the balance of fixed assets to current assets, and the profile of inventory and cash all influence the resilience of the supply chain and a firm's ability to mitigate risk. At an operational level, customers are becoming increasingly demanding in terms of both responsiveness and flexibility.
Accordingly, the agility of the supply chain, in terms of structure, management, systems, and processes, directly impacts the ability of an organization to respond to customer needs. The role of the supply chain, and the focus for SCM, can therefore be summarized as supporting an organization to win business competitively by addressing the strategic imperatives of differentiation, cost advantage, resilience, and dynamism (agility, flexibility, responsiveness). In the following section we discuss how these strategic imperatives, together with their drivers and enablers, influence the way in which the supply chain is configured and managed. A firm's SCOM is a translation of the firm's supply chain strategy, and of the need to deliver the strategic imperatives, into operational terms. The design of the model needs to consider the external economic and competitive drivers, leverage current and likely future enablers, and deliver the required level of performance. Figure 3 provides an overview of a SCOM and its related dimensions. The operating model comprises a series of dimensions, each representing a distinct aspect of a firm's supply chain. The decisions a firm makes on the design of each dimension, the overall configuration, and how the dimensions interact to form an integrated supply chain determine the performance of a firm's supply chain. Firms operating in the same sector may have similar operating frameworks - due to market, technological, and mimetic (i.e. the promulgation of "best" practice) influences. The detailed design and configuration will be unique to each firm, reflecting localized decisions on how best to secure a competitive advantage from its supply chain. A firm's SCOM is not fixed. It needs to develop in response to internal and external changes if a firm is to exploit the potential from new opportunities and maintain competitive performance.
Given the pressure to improve and the need for firms to continually challenge the performance and capabilities of their SCOM, the question is: how do firms develop their supply chains to secure and maintain value and competitive advantage? How do they adapt to changing economic drivers, take advantage of new technologies and enablers, and respond to the increasing need to deliver a differentiated offering and secure cost advantage, while ensuring a resilient and dynamic supply chain able to combat the risk of disruption and major disturbance? What change model operates? A review of supply chain development over the last 25 years suggests a model comprising periods of fundamental change followed by an ongoing focus on continuous improvement, based on a combination of process and capability improvement together with localized structural adjustments to the scope and/or topology of the supply base. Figure 4 illustrates the SCOM "dynamics of change." What is it that drives the need for fundamental change? Since the early 1990s business process re-engineering (Hammer, 1990), lean (Womack et al., 1990), and many other improvement tools and techniques have provided valuable contributions to improving supply chain performance. Local, incremental process improvement can deliver benefits. However, the very nature of the benefit emanating from ongoing reliance upon small incremental process changes is unlikely to have a corresponding impact on the performance of the supply chain as a whole. Inevitably, continuous process improvement will be confronted by the "law of diminishing returns" as significant opportunities become fewer and competitors copy early adopters. Process improvement is underpinned by process analysis, that is, breaking the process down into its constituent parts by mapping product and information flows, in an attempt to improve understanding and expose opportunities to improve (Hines and Rich, 1997).
Such improvement is predicated on the supply chain being a repeatable process, but supply chains are inherently more complex. Supply chain performance is based on the interaction of processes from the perspective of a "system"; thus performance comes through synthesis. Developing a supply chain's performance requires focus on the interaction of processes, not the optimization of isolated processes. Significant change to supply chain performance cannot be delivered by focussing exclusively on improving isolated processes; improvement will only come through improved interaction of processes. Globalization of supply chains has encouraged firms to pursue low-cost sourcing by increasing the reach of the supply base, "flipping" suppliers as cheaper alternatives emerge, chasing increased control by seeking to manage multiple tiers of supply, and splitting purchasing spend across multiple sources in an attempt to stimulate competition. Delivering short term, localized reductions in purchase cost has significant consequences, leading to increased complexity, uncertainty, and instability. As shown in Figure 5, the compound effect of the relentless pursuit of low-cost sourcing is an exponential increase in risk. Supply chain leaders relying on a strategy of continuous process and capability improvement, together with frequent structural adjustment to the supply base, to sustain their leadership position inevitably find that diminishing returns, coupled with increased risk, erode that position as the performance gap over the competition reduces and the "followers" catch up. At this point a firm can be said to have hit the "Performance Frontier" (cf. Schmenner and Swink, 1998), whereby the cost and risk of further incremental change is more likely to have a destabilizing effect and a negative impact on relative or absolute performance. Securing advantage at this point requires fundamental change to the operating model, i.e.
a paradigm shift in the fundamental design of the SC. Firms should seek to maintain a state of equilibrium until such time as the diminishing return from striving to continually improve, combined with an ongoing pursuit of leveraging more out of the supply base, destabilizes the supply chain, rendering the SCOM unstable. Thereafter, the way forward is to seek a step change in structure through fundamental change in order to secure a stable basis for continued growth. During the period of fundamental change there is likely to be a drop in performance while the new structures and ways of working are embedded and optimized. Supply chain development can be said to follow a path of "Punctuated Equilibrium" (cf. Gersick, 1991). This comprises an alternation between long periods of relative structural stability and brief periods of upheaval as a firm seeks competitive advantage through a process of fundamental structural change (a "paradigm shift"). During periods of stability the conceptual framework, basic organization, and operational principles of the operating model are stable and can be said to be in a state of equilibrium. The underlying activities are subject to incremental adjustments through a process of continuous improvement able to respond to changes in the external environment, competitive pressures, and operational capabilities. The state of equilibrium continues as long as the underlying changes deliver a positive contribution. Once the performance frontier has been reached, a firm needs to seek fundamental structural change to secure a competitive advantage and establish a platform for further continuous improvement. The principle of integrating the supply chain as a cornerstone of SCM was introduced in the early 1980s. Since then the business context has changed and the structure of SCOMs has developed accordingly. The limitations of supply chain models based on "linear" physical flows have been exposed (e.g.
Choi and Wu, 2009c; Bastl et al., 2012) and new phases of networked supply chains have developed. Figure 6 suggests the need to add two further stages to the development model proposed by Stevens in 1989. The additional stages are predicated on the need for integration but reflect the changes in context and capabilities. The transition between phases represents the point at which the extant phase begins to show diminishing returns for the focal firm. Internal supply chain integration transitioned to external supply chain integration because there was a limit to the performance improvement that could be achieved without involving suppliers and customers. External supply chain integration transitioned to goal directed network supply chains as firms understood that supply chains were non-linear networks and that there would be benefit in non-strategic (or non-integrated) suppliers having visibility of demand. We suggest that - at the time of writing - we are undergoing a transition to devolved, collaborative supply chain clusters. We suggest that this transition is occurring due to the increased complexity, risk, and costs being borne by focal firms attempting to manage large networks. By effectively outsourcing elements of this management to lead suppliers, firms devolve collaboration into clusters. Clusters are smaller networks that are more easily managed. For example, Zara has popularized the localized, collaborative cluster model (cf. Ghemawat, 2005), although this model currently tends to be implemented in industries with relatively simple products or services, or around a single industry (e.g. Silicon Valley). The automotive industry also uses lead suppliers to coordinate clusters. The early phases of development, internal and external integration, were addressed in the original article and are briefly revisited below.
Internal integration
Internal integration represents the evolution of a firm's SCOM from the functional separation of the 1970s to a model based on the "closed loop" business and resource planning of the late 1980s. Functional separation was characterized by individual functions having their own agendas with limited interaction, resulting in high unit costs, high levels of inventory, and poor customer service. The objective for most supply chains was inventory management based on aggregate inventory and stock replenishment using re-order point and economic order quantity techniques, with limited recognition of the needs of production plans or customer demand. At this time the focus for SCM was to balance supply and demand within the constraints of the business plan. The scope of the supply chain model included commercial, production, technical, purchasing, finance, and materials management and was underpinned by joined up thinking, working, and decision making.
External integration
External integration involves extending the scope of the integrated supply chain to include supplier integration, distribution integration, and customer integration. Supplier integration focusses on improving the performance of the supply chain between a firm and its supply base. It involves sharing information between both parties, enabling a firm to influence costs, quantities, and timing of deliveries and production in order to streamline the product flow and to move to a collaborative relationship. Supplier integration often involves a partnership model, with deeper, more long-term relationships with fewer vendors that, in turn, tend to have relationships with fewer customers. This helps build communication channels and trust, which facilitates more extensive knowledge sharing. Supplier integration involves suppliers taking increased responsibility for aspects of availability and product development.
It involves increased interactions between businesses and functions to increase productivity and availability and reduce the risk of non-compliance. Distribution integration focusses on detailed resource and flow management through the outbound logistics network in order to reduce logistics and distribution costs and provide increased demand visibility. The focus moves away from the efficient management of transport to planning and controlling the efficient forward and reverse flows and storage of goods and related information as part of an integrated supply chain. Customer integration involves leveraging the supply chain's capabilities as part of the customer proposition, with a firm collaborating with customers to add value to both parties. The cornerstones of supply chain customer collaboration are cultural and process integration, whereby both parties contribute their unique insights and capabilities to develop a mutually agreed forecast of demand that meets the needs of the customer within the constraints of the firm. Customer integration is well operationalized by Collaborative Planning, Forecasting, and Replenishment (CPFR). The benefits of CPFR are well documented, typically in the order of a 10-40 percent inventory reduction in supply chains (Lapide, 2010). Despite the benefits of internal and external integration, the wider business landscape has changed, resulting in the need to conceptualize the new SCOMs of goal directed networked supply chains and devolved, collaborative supply chain clusters. We turn to these next.

Goal directed networked supply chain

Early SCOMs focussed on the linear relationships and flows between customers and suppliers. While the linear perspective may have reflected simplified material flows and aided firms to develop techniques for planning and controlling a physical supply chain, the approach quickly diverged from evolving reality.
The dramatic increase in access to information in the late 1990s, the advent of internet communication, and the pursuit of global trading and low-cost sourcing caused leading firms to revise their perception and management of supply chains from physical flows to information flows. Recognizing the supply chain as a network of relationships (e.g. Harland, 1996), not a sequence (or chain) of transactions, enabled leading firms to gain improved performance, operational efficiencies, and ultimately sustainable competitiveness (e.g. Choi and Hong, 2002). Figure 7 presents an illustration of a networked supply chain. This model is based on recognizing that the supply chain is a non-linear network with connections between firms. It acknowledges that there can be relationships between suppliers and customers and that having visibility of the network can uncover potential risks (cf. Choi and Hong, 2002). The culture and organization of most early adopters of the network perspective were invariably based on a "traditional" command and control style of management, underpinned by a centrally based structure. This manifested itself in a desire to control the sourcing of the bill of material by engaging in directed sourcing, whereby the firm established relationships with second and third tier suppliers and directed the top-tier supplier to source material from them. This SCOM is referred to as a goal directed networked supply chain because supplier relationships and sourcing strategies are aligned with the firm's overall cost, quality, and service goals. One of the key challenges of managing within networks is the presence of indirect relationships (cf. Choi and Wu, 2009a). In Figure 7, an example of an indirect relationship is the one between the supplier and the customer, represented as a dashed line. For example, Amazon often uses a 3PL to fulfill customer orders. This creates a direct relationship between the 3PL and the customer.
The customer's satisfaction with Amazon thus becomes reliant upon the performance of the 3PL (cf. Choi and Wu, 2009b). This type of structural arrangement is referred to as a triad, with all firms within the triad being interdependent. The critical issue within the network, however, is the management by the focal firm (e.g. Amazon) of the indirect relationship (e.g. between 3PL and customer). With a networked supply chain there is a significant burden in coordinating all of the direct and indirect relationships in order to meet the goal of the focal firm. This has led firms to create SCOMs that devolve coordination responsibilities to lead suppliers (occasionally known as "Tier 0.5") who then coordinate collaborative clusters.

Devolved, collaborative, supply chain clusters

The next step in the evolution of SCOMs is the transition to devolved, collaborative supply chain clusters. Choi and Hong (2002) examined the traits of supply networks in terms of formalization, centralization, and complexity. Formalization is closely associated with standardization through rules and procedures as well as norms and values. Centralization addresses the degree to which authority or power of decision making is concentrated or dispersed across the network. Complexity refers to the structural differentiation or variety that exists in the network. These three dimensions form a useful basis for highlighting the limitations of goal directed networked supply chains and the emergence of devolved, collaborative, supply chain clusters. The centralized organization structure and underlying need for formality to support the central control of a goal directed networked supply chain gave rise to a rigid, inflexible structure unable to cope with the turbulent environment of the last ten years.
Similarly, the increase in reach, coupled with attempts to control the bill of materials, significantly increased the number of nodes and connections in the network, in addition to heavily impacting transaction costs within a firm. The work on the empirical relationship between system size, connectance, and stability carried out by Disney et al. (1997) identified two phenomena relevant to SCOM design: first, as the number of nodes increases, the probability of a stable operation decreases dramatically; and second, as system connectance increases, the network swiftly crosses the "switching" line and becomes unstable. The implications for supply chain performance are clear: the complexity inherent in a large supply chain network is likely to render it unstable, resulting in a major deterioration in performance. The complexity of the network also leads to an increase in coordination cost. Developing a SCOM to equip a firm to manage a global supply network needs to address the issue of how to accommodate and coordinate the needs and activities of multiple participants without undue complexity, cost, or formality. It should provide a level of governance sufficient to ensure that participants engage in collective and mutually supportive actions, such that any conflict can be addressed and the objectives of the firm's supply chain are met. As presented in Figure 8, we posit that the future global integrated supply chain model will be devolved, collaborative, supply chain clusters. This model is based on a series of self-governing clusters, each cluster comprising a network of suppliers and/or sub-contractors associated by type, product structure, or flow. All non-core activities are outsourced by the firm (or lead organization) across a range of clusters. Collaboration within and across each cluster is based on goal consensus, whereby the goals for each cluster are aligned and managed in accordance with the goals of the firm.
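The size-connectance-stability relationship noted above can be illustrated with a toy simulation. This is our own sketch, not Disney et al.'s model: it generates random linear systems in which each node damps itself and couples to other nodes with a given connectance, then checks whether the state stays bounded. All parameter values are arbitrary illustrative assumptions.

```python
import random

def random_system(n, connectance, strength=0.3, rng=random):
    """Random n-node linear system: each node damps itself (diagonal 0.5)
    and couples to each other node with probability `connectance`,
    with coupling weights drawn from N(0, strength)."""
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        a[i][i] = 0.5
        for j in range(n):
            if i != j and rng.random() < connectance:
                a[i][j] = rng.gauss(0.0, strength)
    return a

def is_stable(a, steps=100, blowup=1e6):
    """Iterate x <- A x; call the system 'stable' if the state has not
    blown up within the horizon."""
    n = len(a)
    x = [1.0] * n
    for _ in range(steps):
        x = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(v) for v in x) > blowup:
            return False
    return True

def prob_stable(n, connectance, trials=200, seed=7):
    """Estimate the probability that a random (n, connectance) system is stable."""
    rng = random.Random(seed)
    hits = sum(is_stable(random_system(n, connectance, rng=rng)) for _ in range(trials))
    return hits / trials
```

With these settings a small, sparse network (e.g. `prob_stable(4, 0.2)`) is almost always stable, while a larger, denser one (e.g. `prob_stable(20, 0.5)`) almost never is, mirroring both phenomena described above.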
Operational coordination, planning, and governance across clusters are facilitated by the lead organization through an integrated collaboration and operations management and planning protocol, supported by clear lines of responsibility and accountability and a visible performance management system. This operates in a network-wide culture where economy of scale and efficiency are subordinate to service, resilience, and effectiveness. Research into clusters is by no means new (cf. Porter, 1998; Sheffi, 2012). However, much of the previous work has focussed on the innovativeness of the cluster or the specialization of competences into an industrial district (e.g. Pinch et al., 2003), or upon knowledge management within the cluster (e.g. Miles and Snow, 2007). With devolved, collaborative supply chain clusters the focus moves from the single cluster to multiple clusters, and to the governance of those clusters. This is challenging because the management of the clusters relies upon "architectural knowledge" (cf. Tallman et al., 2004), which is external to the firms within the cluster. Architectural knowledge in the context of devolved, collaborative supply chain clusters relates to understanding the network as a system and the structures and routines required to coordinate it effectively (cf. McGaughey, 2002). Within devolved, collaborative supply chain clusters, SCI moves away from being a monolithic approach to one that enables the "modular" connecting of the focal firm to the different clusters. This will be facilitated via shared values, agendas, thinking, and norms. We argue that these are required to "lubricate" the flow of information, knowledge, and insight between the devolved clusters and the lead organization. Table I contrasts the four operating models depicted in Figure 6, summarizing the key characteristics of each dimension against the four primary stages of supply chain development.
The change from one operating model to another will, we suggest, occur at each "punctuation" (cf. Gersick, 1991). Table I is indicative (or descriptive) rather than definitive (or prescriptive), illustrating changes to process, structure, relationships, and emphasis. The evolution of SCOMs has been influenced by a number of factors. The growing realization that SCM is critical to a firm's success has made it more strategic, with a long-term focus. Firms have also focussed more on what is core to their success and have outsourced that which is not. This has been balanced by the need to understand and coordinate supply chain networks to increase effectiveness and reduce risks. The coordination costs of this are potentially high, and firms have built collaborative relationships with lead suppliers who coordinate specialized clusters whose capability is leveraged. Overall, this means that SCOMs have moved from an attempt to control the network toward a realization that they can, at best, coordinate it. Planning now takes place at a strategic level and considers not just materials and capacity, but capability and the long-term goals of the firm. This is facilitated by better use of information, knowledge, and insight so that pro-active decisions can be made. Further enablers are metrics and accounting systems that encourage collaborative behavior and focus on the efficacy of the network. Now that we have discussed changes to SCM and SCI since the original article, we turn to examining whether SCM has delivered on its promise. Advances in SCM, whether championed by technology providers, consultants, academics, or practitioners, have invariably been accompanied by the promise of improved business performance: notably, reductions in inventory and operating costs and improved customer experience. It is appropriate, therefore, after 25 years to ask whether SCM has delivered on its promise.
Horvath (2001) suggested that the most considerable benefits to a business with advanced SCM would be radically improved customer responsiveness, customer service and satisfaction, increased flexibility for changing market conditions, improved customer retention, and more effective marketing. Ellram and Liu (2002, p. 30) suggested: "Supply chain management can significantly affect a company's financial performance - both positively and negatively." Sales growth, operating profit margin, working capital investment, and fixed capital investment all impact shareholder value, and all are influenced by SCM (Lambert and Burduroglu, 2000). However, it is difficult to empirically link the evolution of SCM with financial performance to evaluate whether SCM has delivered on its promise. In an attempt to assess the impact of SCM over time, and consistent with Ellinger et al.'s (2011, 2012) assessments of top supply chain performers' financial performance relative to industry rivals and industry averages, we examine the performance over time of a number of companies, across multiple sectors, on three indicators of supply chain performance. These indicators are Return on Net Assets (RONA), inventory turns, and the unified proxy for supply chain performance developed by Johnson and Templar (2011). We assess these indicators for ten companies from a range of industries, for the years 1997-2014, selected from the Fortune Global 500 Top 25 and companies that appear in the Gartner Supply Chain Top 25 rankings between 2010 and 2015. This convenience selection is intended to ensure representation of companies at the forefront of exploiting leading edge supply chain development, so as to assess whether acknowledged leaders in SCM practice have consistently and positively influenced financial performance over an extended time frame. The results are shown in Table II. The analysis suggests that the overall impact of improving SCM practices has been equivocal (13 positives to 17 negatives).
Inventory performance has improved for half the firms, and the supply chain indicator suggests similar levels of improvement. Overall, RONA shows an adverse impact (three positives to seven negatives). The authors acknowledge the limitations of the analysis and the impact of recent changes to the global economy, but suggest that while it points to some firms realizing benefit from improved SCM, the majority have not realized the full potential of their supply chains. We suggest that this failure to realize benefits may be due to four possible factors. The first is that firms are not recognizing that the SCOMs that have worked so well, for so long, may no longer be appropriate in today's volatile, uncertain, complex, and ambiguous world. The second is that the solutions promulgated for supply chain performance improvement by technology providers, consultants, academics, and practitioners could be regarded as the emperor's new clothes, unable to live up to the hype. Third, the users of the solutions are not equipped to realize the benefits despite the robustness of the thinking; this can be due to the technical complexity of the solution or a lack of capability. The fourth, and we suggest most likely, is that the implementation of solutions to improve performance is complex and requires large-scale change (cf. van Hoek et al., 2010). The time-scale and complexity of such radical change may lead firms to abandon or only partially implement solutions before the performance benefits are realized. Understanding and removing the barriers that impede benefits realization will require a concerted effort by the supply chain community (academics, advisors, technology providers, and practitioners) to work collaboratively to operationalize SCM thinking and deliver measurable, sustainable benefit on a consistent basis. The current challenges presented by a global economy, accelerating rates of change, and the emergence of new and innovative competitors will undoubtedly persist.
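For reference, the two conventional indicators used in the analysis above can be computed directly from financial-statement items. The following is a minimal sketch using simplified textbook definitions; the exact formulations in the study, and the Johnson and Templar (2011) proxy, may differ, and the figures shown are purely illustrative, not drawn from Table II.

```python
def inventory_turns(cogs, average_inventory):
    """How many times inventory is sold through per year:
    annual cost of goods sold divided by average inventory held."""
    return cogs / average_inventory

def rona(net_income, fixed_assets, net_working_capital):
    """Return on Net Assets: profit relative to the asset base employed
    (fixed assets plus net working capital)."""
    return net_income / (fixed_assets + net_working_capital)

# Illustrative figures only (in, say, millions of a currency unit)
print(inventory_turns(cogs=600.0, average_inventory=100.0))              # 6 turns per year
print(rona(net_income=50.0, fixed_assets=400.0, net_working_capital=100.0))  # 0.1, i.e. 10 percent
```

Higher turns indicate leaner inventory; tracking both together over 1997-2014, as the study does, shows whether inventory gains translated into returns on the asset base.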
The role of SCM as an enabler of business success will not go away; it is more likely that the pressure on the supply chain will increase. SCM's response needs, first, to find a more effective way of aligning thinking and practice and of accelerating the flow of promising practices across the supply network and, second, to address the challenge of ever increasing complexity. The stages of SCI presented here represent what we believe to be the next stages in the evolution of SCI. Goal directed networked supply chains evolved from external integration when firms realized that they existed within a network and that non-strategic suppliers could benefit from the sharing of demand data to facilitate planning. The next stage of evolution was devolved, collaborative clusters. Clusters arose as focal firms realized that the coordination of a network was burdensome and that lead suppliers could manage clusters to reduce these coordination costs. This brings us to the current state-of-the-art, but what could the next 25 years have in store for SCM? Changes to supply chains over the next quarter of a century will be driven by changes in the business environment, technology, economies, and customer preferences. There is no doubt that the business environment will become even more volatile, uncertain, complex, and ambiguous (cf. Bennett and Lemoine, 2014). As such, supply chains need to be configured to navigate the future environment and will move ever closer to becoming complex adaptive systems (cf. Choi et al., 2001). We are also seeing a rise in technologies that hold promise for tomorrow's supply chains and the democratization of product and process knowledge (Anderson, 2012). These include big data and additive manufacturing technologies such as 3D printing (Brennan et al., 2015). Overlaid upon this are changes within developing economies as they industrialize and wages in those countries increase.
As countries move from developing to developed, they become less attractive as manufacturing destinations because the cost benefits are eroded. A reduction in cost benefits, coupled with higher logistics costs, long transport times, and increased risks, has influenced firms to move production closer to the point of consumption (Ellram et al., 2013); a phenomenon known as re-shoring or near-shoring. A further complicating factor is that customers will require even more differentiation and we will move toward "markets of one." We are already seeing this on a limited scale with the customization of sportswear through the MI-Adidas and Nike-ID initiatives, but these provide somewhat limited choices. We suggest that customers will require greater levels of customization. Given these changes, what will the supply chain of the future look like? We suggest that the SCOM of the future will be atomized, adaptive fulfillment communities. They will be atomized - rather than clustered - because the need for, and intensity of, collaboration will increase, leading to smaller, less intense clusters which we class as communities. The relationships within these communities (i.e. atomized clusters) will be underpinned by shared norms, values, and behaviors. They will be adaptive to both supply and demand as well as being reactive and pro-active to wider geo-political, business, economic, environmental, and social factors. The networks of the future will also be democratic to supply and demand, hence our use of the term "fulfillment." Integration will be philosophical and driven by behaviors, insight, and information, not processes and systems. SCM and SCI have undergone rapid evolution over the past quarter of a century; we look forward to the next 25 years.

Figure 1 Stages of supply chain development
Figure 2 A timeline of SCM strategies, tools, and techniques
Figure 3 Dimensions of a supply chain operating model
Figure 4 Supply chain operating model dynamics of change
Figure 5 The compound effect of the relentless pursuit of low-cost sourcing
Figure 6 Phases in supply chain management development
Figure 7 Networked supply chain
Figure 8 Devolved, collaborative supply chain clusters
Table I Comparison of the four operating models
Table II RONA, inventory turns, and supply chain proxy values for ten selected companies from 1997-2014
The authors formalize a model for the dynamics of SCM change. The authors also synthesize a number of models of SCM that extend the original, highly cited work. These include goal-oriented networks and devolved, collaborative supply chain clusters. The authors also find the associations between the evolution of SCM and measures of firm financial performance over time to be equivocal.
[SECTION: Value] Some 25 years ago, one of the authors wrote an article (Stevens, 1989) that sought to explicate the state-of-the-art in supply chain management (SCM). This was at a time when SCM was still in its infancy and only starting to gain currency as an area of interest for practitioners and academics (Oliver and Webber, 1982). At the time, the organizational functions involved in managing the availability of products and satisfying customer orders operated with relative independence, often with conflicting agendas. The purpose of the original article was to facilitate understanding and encourage organizations to exploit the potential for managing their supply chains as part of a joined up (integrated) whole. The original article addressed the need to manage the supply chain at the strategic, tactical, and operational levels as well as recognizing that the scope of an organization's supply chain extended to the furthest reaches of its network of customer and supplier relationships. Stevens (1989) posited that achieving a state of "integration" required a firm to progress through a number of defined stages of development. The stages identified at the time and illustrated in the original article are shown in Figure 1. As Figure 1 shows, the original article argued that SCM developed from a baseline of functional (independent) silos and the first level of integration was across functions (akin to process integration). This then moved to full internal integration involving a seamless flow through the internal supply chain, and finally to external integration embracing suppliers and customers. The primary benefits were identified as improved customer service and reduced inventory and operating costs. Since the original article, much has changed. The world today is more complex and turbulent (Christopher and Holweg, 2011). The reach of many supply chains has increased in pursuit of growth and low-cost sourcing (Fredriksson and Jonsson, 2009). 
Technological advances have fueled the development of new business models and ways of working (Johnson and Mena, 2008). The advent of new and maturing supply chain strategies (Christopher and Towill, 2002), tools, and techniques, together with increased environmental and ethical concerns (Pagell and Wu, 2009), has increased the recognition of SCM as a driver and enabler of business performance (Johnson and Templar, 2011). This has led to the adoption of new supply chain practices that have elevated the role of SCM within many organizations. While much has changed, the fundamental need for "joined up" thinking and working and the need to integrate the supply chain has not. The Gartner Supply Chain Group (O'Marah and Hofman, 2010) places integration as one of the elements of creating a demand-driven supply chain strategy that leads to improved firm performance (Ellinger et al., 2011, 2012). Thus, the need for SCI is still the same, if not greater than before. What has changed since the original article is the context within which supply chains operate, and the enablers of change and performance improvement. As a result, the relevance of narrow, linear-based supply chain models has been challenged as firms have looked more and more toward networked and collaborative supply chain strategies to deliver superior performance. The original article reported on the state-of-the-art in SCM. We retain that objective with this invited work. The aim is therefore not to re-visit supply chain integration per se - as advanced in 1989 - but to explore what the future may hold and how that relates to SCI. Therefore, on the basis that 25 years on is a good time to reflect on the changes that have taken place, the aim of this invited work is to explicate developments in SCM and SCI, and to ask: has SCM delivered on its promise? And what does the future hold?
Early on in the development of SCM, firms realized the limitations of isolated improvement initiatives and misaligned functional performance agendas and began managing internal processes and flows on a much more integrated basis (Stevens, 1989). This extended the scope of integration to include upstream suppliers and downstream customers. Since the original article, there has been a growing consensus concerning the importance of integrating internal processes and flows, suppliers, and customers (e.g. Tan et al., 1998; Frohlich and Westbrook, 2001). Despite research confirming the positive benefits of supply chain integration (Prajogo and Olhager, 2012), and its importance to a firm's success (Flynn et al., 2010), ambiguity remains as to what constitutes supply chain integration (Fabbe-Costes and Jahre, 2008; Autry et al., 2014). We posit that supply chain integration is the alignment, linkage and coordination of people, processes, information, knowledge, and strategies across the supply chain between all points of contact and influence to facilitate the efficient and effective flows of material, money, information, and knowledge in response to customer needs. SCI is the foundation of SCM (Pagell, 2004). SCI is characterized by "joined up thinking, working, and decision making," underpinned by principles of flow, simplicity, and the minimization of waste. SCI may be enabled by systems and technology such as e-commerce (Gunasekaran and Ngai, 2004), Manufacturing Resource Planning (MRPII), Enterprise Resource Planning (ERP) (Bagchi et al., 2005), and RFID (McFarlane and Sheffi, 2003), but SCI is not just about technology. Integrating the supply chain refers as much to the need for strategic and operational integration within and across the business (Swink et al., 2007) as it does to relational integration with customers and suppliers (Benton and Maloni, 2005). 
The scope of SCI therefore includes governance, organization structure, systems, relationship management, business strategy, process design, and performance management. SCM as a discipline has evolved rapidly. The early focus of SCM emerged as organizations began to improve their inventory management and production planning and control. The aim of these practices was to improve production efficiencies and ensure that the capacity of capital assets and machinery was utilized efficiently. This extended upstream to include the management of transport of raw materials at a time when firms were relatively vertically integrated. The next phase in the evolution of SCM was the systematization of materials, production, and transport management. This began with material requirements planning (MRP), focussing on inventory control (Orlicky, 1975). MRP expanded to become MRPII by incorporating the planning and scheduling of the resources involved in manufacturing. Both MRP and MRPII were conceived in the 1960s but did not gain prominence until the 1980s (Wight, 1981). MRP and MRPII evolved to become ERP, in an attempt to gain greater visibility over the entire enterprise. The mid to late 1980s brought intense introspection among western firms concerning the threat of Japanese firms that were perceived to be more competitive due to higher productivity (Hayes and Wheelwright, 1984). This period led to the implementation by firms of "Japanese" practices such as total quality management (TQM) and lean (Womack et al., 1990). These practices focussed on reducing inventory through improving quality and flow and involving suppliers in product and process design. The next phase in the evolution of SCM included the introduction of other process improvement practices (e.g. six sigma) that sought to provide a more concrete improvement method compared to TQM or lean (Montgomery and Woodall, 2008).
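At its core, the MRP logic mentioned above reduces to a period-by-period gross-to-net calculation. The following is an illustrative sketch only, not any specific vendor implementation: it assumes lot-for-lot ordering and zero lead time, and omits lot sizing, safety stock, and pegging.

```python
def net_requirements(gross, on_hand, scheduled_receipts):
    """Time-phased gross-to-net MRP calculation: for each period, net off
    projected inventory and scheduled receipts against gross requirements,
    and plan an order (lot-for-lot, zero lead time) for any shortfall."""
    planned_orders = []
    available = on_hand
    for period, demand in enumerate(gross):
        available += scheduled_receipts.get(period, 0)
        shortfall = max(0, demand - available)   # the net requirement this period
        planned_orders.append(shortfall)
        available += shortfall - demand          # projected on-hand carried forward
    return planned_orders
```

For example, `net_requirements([10, 40, 10], on_hand=25, scheduled_receipts={1: 10})` yields `[0, 15, 10]`: period 0 is covered by stock, period 1 needs 15 on top of stock plus the scheduled receipt, and period 2 must be ordered in full. MRPII and ERP wrap this same netting logic with capacity, finance, and enterprise-wide data.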
As process improvement, and the standardization of products and processes that facilitated it, took place, there was increasing awareness that end customers were requiring ever increasing levels of choice and differentiation (Christopher, 2000). This led firms to consider that they had become too lean and rigid and that they should be focussing on creating agile supply chains to adapt to changing demand (Aitken et al., 2002). The agile approach was blended with lean (Naylor et al., 1999), as demand could be decoupled into push and pull to create greater choice for the customer while still retaining some control (van Hoek, 2001). The 1990s also saw a focus upon core competences within firms (Hamel and Prahalad, 1990). This led to increased outsourcing of non-core activities to lower cost economies. Political factors such as unilateral liberalization measures and the removal of formal free trade barriers have contributed to the growth of developing countries exporting to high wage economies (Gereffi, 1999), encouraging firms to source from lower cost economies. This, in turn, fueled both demand for products from developed economies and the competition to supply, changing the topology of the supply chain as well as the magnitude, profile, and direction of material and information flows. Significant changes have also taken place in the understanding of how a firm secures a competitive position. Traditionally, superior competitive advantage was seen to be a function of how a firm organized its resources to differentiate itself from the competition (Barney, 1991) and its ability to operate at a lower cost (Porter, 2008). The prevailing tendency was for a firm to control as much of its upstream and downstream activities as possible, often leading to high levels of vertical integration (i.e. within the firm rather than with suppliers). At the time of the original article, firms focussed more on managing, in-house, core competences, i.e.
those competencies or capabilities that deliver value (as perceived by the customer), and outsourcing non-core activities to specialist - often lower cost - third parties. This resulted in the advent of 3PL providers and supply chain integrators. All of this points toward an explosion in SCM thinking over the last 25 years. Figure 2 presents a timeline of SCM strategies, tools, and techniques. The dates in the figure are based upon when, in our experience, these practices were popularized, not when they were introduced. Supply chains are inherently unstable. A key role of SCM is to minimize the risks and uncertainty associated with the naturally occurring unstable state of the supply chain (Lee, 2002). Forrester's (1958) early work on supply chain dynamics highlighted the problem of the reliable "transmissivity" of information through the supply chain. Lee et al.'s (1997) characterization of the "bullwhip" effect demonstrates how demand and upstream load are both delayed and distorted as information progresses upstream, such that variation is amplified along the supply chain. These dynamics, coupled with the inevitable challenges of forecasting and data integrity, render the supply chain unstable. Technology has been used to good effect to improve information flows (Lee et al., 2000). However, the increased remoteness of a global market and supply base, together with the need to manage an increasingly complex network, has exacerbated the challenge. In addition to the issues caused by information distortion and a global supply base, the twenty-first century is a time when organizations are facing pressure - from consumers and other stakeholders - to have green and ethical supply chains (Srivastava, 2007). This requires organizations to become more transparent in disclosing their sources of supply, which increases costs and may place pressure on moving away from the lowest cost economies, where labor rights can be poor.
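The bullwhip amplification described above can be demonstrated with a minimal simulation. This is a stylized sketch of one well-known mechanism (an order-up-to replenishment policy with moving-average forecasting), not Lee et al.'s model; echelon count, demand range, and policy parameters are arbitrary illustrative assumptions.

```python
import random
import statistics

def echelon_orders(incoming, window=4, cover=2.0):
    """One echelon with an order-up-to policy: forecast incoming orders
    by moving average, set a target inventory of `cover` x forecast, and
    order enough to meet demand and close the gap to target."""
    history, orders = [], []
    inventory = 20.0
    for demand in incoming:
        history.append(demand)
        forecast = statistics.mean(history[-window:])
        target = cover * forecast
        order = max(0.0, demand + (target - inventory))
        inventory += order - demand      # replenishment assumed immediate
        orders.append(order)
    return orders

def upstream_variances(levels=3, periods=200, seed=1):
    """Push a noisy retail demand stream through several echelons and
    record how order variance grows at each tier."""
    rng = random.Random(seed)
    stream = [rng.uniform(8.0, 12.0) for _ in range(periods)]
    variances = [statistics.pvariance(stream)]
    for _ in range(levels):
        stream = echelon_orders(stream)
        variances.append(statistics.pvariance(stream))
    return variances
```

Running `upstream_variances()` shows each tier's order variance exceeding that of the tier below: modest retail demand noise becomes large order swings by the time it reaches the upstream supplier, which is precisely the delay-and-distortion effect the text describes.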
There are two major strategies for winning business: differentiation and cost advantage (Porter, 2008). Historically, the focus for securing differentiation has been product differentiation. With life cycles now measured in months, sometimes weeks, rather than years, the opportunities to secure sustained benefit through product differentiation are diminishing. Even when a product-based strategy prevails, the window of opportunity for maximizing profit is becoming shorter and more difficult to hit, such that a minor disruption to product availability has a major impact on financial return. The supply chain has, therefore, become either the driver or the critical enabler of differentiation. The role of the supply chain as a major driver of cost has long been recognized: up to 75 percent of a product's cost is external to the focal firm (Trent, 2004). The supply chain, therefore, also offers considerable opportunity for delivering cost advantage. In addition to securing differentiation and cost advantage, the supply chain has taken on two further strategic imperatives arising from the need to ensure resilience, responsiveness, agility, and flexibility in an increasingly turbulent and uncertain world. Typically, the supply chain accounts for 50 percent of a company's assets. These comprise both fixed assets, such as buildings and machinery, and current assets, such as inventory. Assets, by their very nature, prescribe a limited range of working patterns and methods, thereby exposing an organization to risk when market structures change significantly. The nature and configuration of the asset base, the balance of fixed assets to current assets, and the profile of inventory and cash all influence the resilience of the supply chain and a firm's ability to mitigate risk. At an operational level, customers are becoming increasingly demanding in terms of both responsiveness and flexibility.
Accordingly, the agility of the supply chain, in terms of structure, management, systems, and processes, directly impacts the ability of an organization to respond to customer needs. The role of the supply chain, and the focus for SCM, can therefore be summarized as supporting an organization to win business competitively by addressing the strategic imperatives of differentiation, cost advantage, resilience, and dynamism (agility, flexibility, responsiveness). In the following section we discuss how these strategic imperatives, together with their drivers and enablers, influence the way in which the supply chain is configured and managed. A firm's SCOM translates the firm's supply chain strategy, and its need to deliver the strategic imperatives, into operational terms. The design of the model needs to consider the external economic and competitive drivers, leverage current and likely future enablers, and deliver the required level of performance. Figure 3 provides an overview of a SCOM and its related dimensions. The operating model comprises a series of dimensions, each representing a distinct aspect of a firm's supply chain. The decisions a firm makes on the design of each dimension, the overall configuration, and how the dimensions interact to form an integrated supply chain determine the performance of a firm's supply chain. Firms operating in the same sector may have similar operating frameworks - due to market, technological, and mimetic (i.e. the promulgation of "best" practice) influences. The detailed design and configuration will, however, be unique to each firm, reflecting localized decisions on how best to secure a competitive advantage from its supply chain. A firm's SCOM is not fixed. It needs to develop in response to internal and external changes if a firm is to exploit the potential from new opportunities and maintain competitive performance.
Given the pressure to improve and the need for firms to continually challenge the performance and capabilities of their SCOM, the question is: how do firms develop their supply chains to secure and maintain value and competitive advantage? How do they adapt to changing economic drivers, take advantage of new technologies and enablers, and respond to the increasing need to deliver a differentiated offering and secure cost advantage, while ensuring a resilient and dynamic supply chain able to combat the risk of disruption and major disturbance? What change model operates? A review of supply chain development over the last 25 years suggests a model comprising periods of fundamental change followed by an ongoing focus on continuous improvement, based on a combination of process and capability improvement together with localized structural adjustments to the scope and/or topology of the supply base. Figure 4 illustrates the SCOM "dynamics of change." What is it that drives the need for fundamental change? Since the early 1990s, business process re-engineering (Hammer, 1990), lean (Womack et al., 1990), and many other improvement tools and techniques have provided valuable contributions to improving supply chain performance. Local, incremental process improvement can deliver benefits. However, the very nature of the benefit emanating from ongoing reliance upon small incremental process changes is unlikely to have a corresponding impact on the performance of the supply chain as a whole. Inevitably, continuous process improvement will be confronted by the "law of diminishing returns" as significant opportunities become scarcer and competitors copy early adopters. Process improvement is underpinned by process analysis, that is, breaking the process down into its constituent parts by mapping product and information flows in an attempt to improve understanding and expose opportunities to improve (Hines and Rich, 1997).
Such improvement is predicated on the supply chain as a repeatable process, but supply chains are inherently more complex. Supply chain performance is based on the interaction of processes from the perspective of a "system"; performance is thus achieved through synthesis. Developing a supply chain's performance requires focus on the interaction of processes, not the optimization of isolated processes. Significant change to supply chain performance cannot be delivered by focussing exclusively on improving isolated processes; improvement will only come through improved interaction of processes. Globalization of supply chains has encouraged firms to pursue low-cost sourcing by increasing the reach of the supply base, "flipping" suppliers as cheaper alternatives emerge, chasing increased control by seeking to manage multiple tiers of supply, and splitting purchasing spend across multiple sources in an attempt to stimulate competition. Delivering short-term, localized reductions in purchase cost has significant consequences and implications, leading to increased complexity, uncertainty, and instability. As shown in Figure 5, the compound effect of the relentless pursuit of low-cost sourcing is an exponential increase in risk. Supply chain leaders relying on a strategy of continuous process and capability improvement, together with frequent structural adjustment to the supply base, to sustain their leadership position inevitably find that diminishing returns, coupled with increased risk, erode their leadership position as the performance gap over the competition reduces and the "followers" catch up. At this point a firm can be said to have hit the "Performance Frontier" (cf. Schmenner and Swink, 1998), whereby the cost and risk of further incremental change is more likely to have a destabilizing effect and a negative impact on relative or absolute performance. Securing advantage at this point requires fundamental change to the operating model, i.e.
a paradigm shift in supply chain design. Firms should seek to maintain a state of equilibrium until such time as the diminishing returns from striving to continually improve, combined with the ongoing pursuit of leveraging more out of the supply base, destabilize the supply chain, rendering the SCOM unstable. Thereafter, the way forward is to seek a step change in structure through fundamental change in order to secure a stable basis for continued growth. During the period of fundamental change there is likely to be a drop in performance while the new structures and ways of working are embedded and optimized. Supply chain development can be said to follow a path of "Punctuated Equilibrium" (cf. Gersick, 1991). This comprises an alternation between long periods of relative structural stability and brief periods of upheaval, as a firm seeks competitive advantage through a process of fundamental structural change (a "paradigm shift"). During periods of stability the conceptual framework, basic organization, and operational principles of the operating model are stable and can be said to be in a state of equilibrium. The underlying activities are subject to incremental adjustments through a process of continuous improvement, able to respond to changes in the external environment, competitive pressures, and operational capabilities. The state of equilibrium continues as long as the underlying changes deliver a positive contribution. Once the performance frontier has been reached, a firm needs to seek fundamental structural change to secure a competitive advantage and establish a platform for further continuous improvement. The principle of integrating the supply chain as a cornerstone of SCM was introduced in the early 1980s. Since then the business context has changed and the structure of SCOMs has developed accordingly. The limitations of supply chain models based on "linear" physical flows have been exposed (e.g.
Choi and Wu, 2009c; Bastl et al., 2012) and new phases of networked supply chains have developed. Figure 6 suggests the need to add two further stages to the development model proposed by Stevens in 1989. The additional stages are predicated on the need for integration but reflect the changes in context and capabilities. The transition between phases represents the point at which the extant phase begins to show diminishing returns for the focal firm. Internal supply chain integration transitioned to external supply chain integration as there was a limited amount of performance improvement that could be achieved without involving suppliers and customers. External supply chain integration transitioned to goal directed network supply chains as firms understood that supply chains were non-linear networks and that there would be benefit for non-strategic (or non-integrated) suppliers to have visibility of demand. We suggest that - at the time of writing - we are undergoing a transition to devolved, collaborative supply chain clusters. We suggest that this transition is occurring due to the increased complexity, risk and costs that are being borne by focal firms who are attempting to manage large networks. By effectively outsourcing elements of this management to lead suppliers, there is devolvement of the collaboration into clusters. Clusters are smaller networks that are more easily managed. For example, Zara has popularized the localized, collaborative cluster model (cf. Ghemawat, 2005) although this model currently has a tendency to be implemented in industries with relatively simple products or services, or around a single industry (e.g. Silicon Valley). The automotive industry also uses lead suppliers to coordinate clusters. The early phases of development, internal, and external integration, were addressed in the original article, and are briefly revisited below. 
Internal integration

Internal integration represents the evolution of a firm's SCOM from the functional separation of the 1970s to a model based on the "closed loop" business and resource planning of the late 1980s. Functional separation was characterized by individual functions having their own agendas with limited interaction, resulting in high unit costs, high levels of inventory, and poor customer service. The objective for most supply chains was inventory management based on aggregate inventory and stock replenishment using re-order point and economic order quantity techniques, with limited recognition of the needs of production plans or customer demand. At this time the focus for SCM was to balance supply and demand within the constraints of the business plan. The scope of the supply chain model included commercial, production, technical, purchasing, finance, and materials management, and was underpinned by joined-up thinking, working, and decision making.

External integration

External integration involves extending the scope of the integrated supply chain to include supplier integration, distribution integration, and customer integration. Supplier integration focusses on improving the performance of the supply chain between a firm and its supply base. It involves sharing information between both parties, enabling a firm to influence costs, quantities, and the timing of deliveries and production in order to streamline the product flow and to move to a collaborative relationship. Supplier integration often involves a partnership model, with deeper, more long-term relationships with fewer vendors that, in turn, tend to have relationships with fewer customers. This helps build communication channels and trust, which facilitates more extensive knowledge sharing. Supplier integration involves suppliers taking increased responsibility for aspects of availability and product development.
It involves increased interactions between businesses and functions to increase productivity and availability and reduce the risk of non-compliance. Distribution integration focusses on detailed resource and flow management through the outbound logistics network in order to reduce logistics and distribution costs and provide increased demand visibility. The focus moves away from the efficient management of transport to planning and controlling the efficient forward and reverse flows and storage of goods and related information as part of an integrated supply chain. Customer integration involves leveraging the supply chain's capabilities as part of the customer proposition, with a firm collaborating with customers to add value for both parties. The cornerstones of supply chain customer collaboration are cultural and process integration, whereby both parties contribute their unique insights and capabilities to develop a mutually agreed forecast of demand that meets the needs of the customer, within the constraints of the firm. Customer integration is well operationalized by Collaborative Planning, Forecasting, and Replenishment (CPFR). The benefits of CPFR are well documented, typically in the order of a 10-40 percent inventory reduction in supply chains (Lapide, 2010). Despite the benefits of internal and external integration, the wider business landscape has changed, resulting in the need to conceptualize the new SCOMs of goal directed networked supply chains and devolved, collaborative supply chain clusters. We turn to these next.

Goal directed networked supply chain

Early SCOMs focussed on the linear relationships and flows between customers and suppliers. While the linear perspective may have reflected simplified material flows and aided firms to develop techniques for planning and controlling a physical supply chain, the approach quickly diverged from evolving reality.
The dramatic increase in access to information in the late 1990s, the advent of internet communication and the pursuit of global trading and low-cost sourcing, caused leading firms to revise their perception and management of supply chains from physical flows to information flows. Recognizing the supply chain as a network of relationships (e.g. Harland, 1996) not a sequence (or chain) of transactions enabled leading firms to gain improved performance, operational efficiencies, and ultimately sustainable competitiveness (e.g. Choi and Hong, 2002). Figure 7 presents an illustration of a networked supply chain. This model is based on recognizing that the supply chain is a non-linear network with connections between firms. It acknowledges that there can be relationships between suppliers and customers and having visibility of the network can uncover potential risks (cf. Choi and Hong, 2002). The culture and organization of most early adopters of the network perspective was invariably based on a "traditional" command and control style of management, underpinned by a centrally based structure. This manifested itself in a desire to control the sourcing of the bill of material by engaging in directed sourcing. This is where the firm established relationships with second and third tier suppliers and directed the top-tier supplier to source material from them. This SCOM is referred to as a goal directed networked supply chain as supplier relationships and sourcing strategies are aligned with the firm's overall cost, quality, and service goals. One of the key challenges of managing within networks is the presence of indirect relationships (cf. Choi and Wu, 2009a). From Figure 7, an example of an indirect relationship is the one between the supplier and the customer represented as a dashed line. For example, Amazon often uses a 3PL to fulfill customer orders. This creates a direct relationship between the 3PL and the customer. 
The customer's satisfaction with Amazon thus becomes reliant upon the performance of the 3PL (cf. Choi and Wu, 2009b). This type of structural arrangement is referred to as a triad, with all firms within the triad being interdependent. However, the critical issue within the network is the management by the focal firm (e.g. Amazon) of the indirect relationship (e.g. between the 3PL and the customer). With a networked supply chain there is a significant burden in coordinating all of the direct and indirect relationships in order to meet the goal of the focal firm. This has led firms to create SCOMs that devolve coordination responsibilities to lead suppliers (occasionally known as "Tier 0.5") who then coordinate collaborative clusters.

Devolved, collaborative, supply chain clusters

The next step in the evolution of SCOMs is the transition to devolved, collaborative supply chain clusters. Choi and Hong (2002) examined the traits of supply networks in terms of formalization, centralization, and complexity. Formalization is closely associated with standardization through rules and procedures as well as norms and values. Centralization addresses the degree to which authority or power of decision making is concentrated or dispersed across the network. Complexity refers to the structural differentiation or variety that exists in the network. The three dimensions form a useful basis for highlighting the limitations of goal directed networked supply chains and the emergence of devolved, collaborative, supply chain clusters. The centralized organization structure and underlying need for formality to support the central control of a goal directed networked supply chain gave rise to a rigid, inflexible structure unable to cope with the turbulent environment of the last ten years.
Similarly, the increase in reach, coupled with attempts to control the bill of materials, significantly increased the number of nodes and connections in the network, in addition to heavily impacting transaction costs within a firm. The work on the empirical relationship between system size, connectance, and stability carried out by Disney et al. (1997) identified two phenomena relevant to SCOM design: first, as the number of nodes increases, the probability of a stable operation decreases dramatically; and second, as system connectance increases, the network swiftly crosses the "switching" line and becomes unstable. The implications for supply chain performance are clear. The complexity inherent in a large supply chain network is likely to render it unstable, resulting in a major deterioration in performance. The complexity of the network also leads to an increase in coordination cost. Developing a SCOM to equip a firm to manage a global supply network needs to address the issue of how to accommodate and coordinate the needs and activities of multiple participants without undue complexity, cost, or formality. It should provide a level of governance sufficient to ensure that participants engage in collective and mutually supportive actions, such that any conflict can be addressed and the objectives of the firm's supply chain are met. As presented in Figure 8, we posit that the future global integrated supply chain model will be devolved, collaborative, supply chain clusters. This model is based on a series of self-governing clusters, each comprising a network of suppliers and/or sub-contractors associated by type, product structure, or flow. All non-core activities are outsourced by the firm (or lead organization) across a range of clusters. Collaboration within and across each cluster is based on goal consensus, whereby the goals for each cluster are aligned and managed in accordance with the goals of the firm.
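The node-count result cited above - that the probability of stable operation falls sharply as a network grows - can be illustrated numerically. The Monte Carlo sketch below is our own construction in the spirit of the classic connectance-stability experiments (cf. Gardner and Ashby; May), not a reproduction of Disney et al.'s (1997) model; the network sizes and connectance value are illustrative assumptions:

```python
import numpy as np

def stability_probability(n, connectance, trials=200, seed=0):
    """Estimate the probability that a randomly wired linear system
    dx/dt = A.x is stable (all eigenvalues have negative real part).

    Each node damps itself (diagonal = -1); each off-diagonal
    interaction is present with probability `connectance` and has a
    random strength in [-1, 1]. Purely illustrative parameters.
    """
    rng = np.random.default_rng(seed)
    stable = 0
    for _ in range(trials):
        A = rng.uniform(-1.0, 1.0, size=(n, n))
        mask = rng.random((n, n)) < connectance   # wire each link with prob C
        A = np.where(mask, A, 0.0)
        np.fill_diagonal(A, -1.0)                 # self-damping on every node
        if np.real(np.linalg.eigvals(A)).max() < 0.0:
            stable += 1
    return stable / trials

# More nodes at the same connectance -> stability collapses
p_small = stability_probability(n=4, connectance=0.5)
p_large = stability_probability(n=40, connectance=0.5)
print(p_small, p_large)
```

With these settings the small network is stable in most trials while the large one is almost never stable, echoing the qualitative claim in the text that network growth alone is enough to undermine stable operation.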
Operational coordination, planning, and governance across clusters are facilitated by the lead organization through an integrated collaboration and operations management and planning protocol supported by clear lines of responsibility and accountability and a visible performance management system. This operates in a network-wide culture where economy of scale and efficiency are subordinate to service, resilience, and effectiveness. Research into clusters is by no means a new phenomenon (cf. Porter, 1998; Sheffi, 2012). However, much of the previous work has focussed on the innovativeness of the cluster or the specialization of competences into an industrial district (e.g. Pinch et al., 2003), or has focussed upon knowledge management within the cluster (e.g. Miles and Snow, 2007). With devolved, collaborative supply chain clusters the focus moves from the cluster to clusters, and to the governance of the clusters. This is challenging as the management of the clusters is reliant upon "architectural knowledge" (cf. Tallman et al., 2004) which is external to the firms within the cluster. Architectural knowledge in the context of the devolved, collaborative supply chain clusters is related to understanding the network as a system and the structures and routines required to effectively coordinate it (cf. McGaughey, 2002). Within devolved, collaborative supply chain clusters, SCI moves away from being a monolithic approach to one that enables the "modular" connecting of the focal firm to the different clusters. This will be facilitated via shared values, agendas, thinking, and norms. We argue that these are required to "lubricate" the flow of information, knowledge, and insight between the devolved clusters and the lead organization. Table I contrasts the four different operating models depicted in Figure 6. The table summarizes the key characteristics of each dimension against the four primary stages of supply chain development. 
The change from one operating model to another will, we suggest, occur at each "punctuation" (cf. Gersick, 1991). Table I is indicative (or descriptive) rather than definitive (or prescriptive), illustrating changes to process, structure, relationships, and emphasis. The evolution of SCOMs has been influenced by a number of factors. The growing realization that SCM is critical to a firm's success has made it more strategic, with a long-term focus. Firms have also focussed more on what is core to their success and have outsourced that which is not. This has been balanced by the need to understand and coordinate supply chain networks to increase effectiveness and reduce risks. The coordination costs of this are potentially high, and firms have built collaborative relationships with lead suppliers who coordinate specialized clusters whose capability is leveraged. Overall, this means that SCOMs have moved from an attempt to control the network toward a realization that firms can, at best, coordinate it. Planning now takes place at a strategic level and considers not just materials and capacity, but capability and the long-term goals of the firm. This is facilitated by better use of information, knowledge, and insight so that pro-active decisions can be made. Further enablers are metrics and accounting systems that encourage collaborative behavior and focus on the efficacy of the network. Now that we have discussed changes to SCM and SCI since the original article, we turn to examining whether SCM has delivered on its promise. Advances in SCM, whether championed by technology providers, consultants, academics, or practitioners, have invariably been accompanied by the promise of improved business performance, notably reductions in inventory and operating costs and improved customer experience. It is appropriate, therefore, after 25 years to ask whether SCM has delivered on its promise.
Horvath (2001) suggested that the most considerable benefits to a business with advanced SCM would be radically improved customer responsiveness, customer service and satisfaction, increased flexibility for changing market conditions, improved customer retention, and more effective marketing. Ellram and Liu (2002, p. 30) suggested: "Supply chain management can significantly affect a company's financial performance - both positively and negatively." Sales growth, operating profit margin, working capital investment, and fixed capital investment impact shareholder value, and all of these are influenced by SCM (Lambert and Burduroglu, 2000). However, it is difficult to empirically link the evolution of SCM with financial performance to evaluate whether SCM has delivered on its promise. In an attempt to assess the impact of SCM over time, and consistent with Ellinger et al.'s (2011, 2012) assessments of top SC performers' financial performance compared to that of industry rivals and industries, we examine the performance over time of a number of companies, across multiple sectors, on three indicators of SC performance. These indicators are Return on Net Assets (RONA), inventory turns, and the unified proxy for SC performance developed by Johnson and Templar (2011). We assess these indicators against ten companies from a range of industries, for the years 1997-2014, selected from the Fortune Global 500 Top 25 and from companies that appear in the Gartner Supply Chain Top 25 rankings between 2010 and 2015. The convenience selection is intended to ensure representation of companies at the forefront of exploiting leading-edge supply chain development, to assess whether acknowledged leaders in SCM practice have consistently and positively influenced financial performance over an extended time frame. The results are shown in Table II. The analysis suggests that the overall impact of improving SCM practices has been equivocal (13 positives to 17 negatives).
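The first two indicators above are standard financial ratios computable from published accounts. A minimal sketch of the conventional formulas follows; the figures used are hypothetical and are not drawn from Table II:

```python
def inventory_turns(cogs, avg_inventory):
    """Inventory turns: cost of goods sold divided by average inventory."""
    return cogs / avg_inventory

def rona(net_income, fixed_assets, net_working_capital):
    """Return on Net Assets, as commonly defined:
    net income over (fixed assets + net working capital)."""
    return net_income / (fixed_assets + net_working_capital)

# Hypothetical figures in $m, purely for illustration
print(inventory_turns(cogs=800.0, avg_inventory=100.0))                        # 8.0
print(rona(net_income=120.0, fixed_assets=500.0, net_working_capital=100.0))   # 0.2
```

Higher turns indicate leaner inventory for a given volume of sales, while RONA ties operating profitability back to the asset base the supply chain consumes, which is why the two are used together in the analysis.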
Inventory performance has improved for half the firms. The supply chain indicator suggests similar levels of improvement. Overall, RONA shows an adverse impact (three positives to seven negatives). The authors acknowledge the limitations of the analysis and the impact of recent changes to the global economy, but suggest that while it points to some firms realizing benefit from improved SCM, the majority have not realized the full potential of their supply chains. We suggest that this failure to realize benefits is due to four possible factors. The first is that firms are not recognizing that the SCOMs that have worked so well, for so long, may no longer be appropriate in today's volatile, uncertain, complex, and ambiguous world. The second is that the solutions promulgated for SC performance improvement by technology providers, consultants, academics, and practitioners could be regarded as the emperor's new clothes, unable to live up to the hype. Third, the users of the solutions may not be equipped to realize the benefits despite the robustness of the thinking; this can be due to the technical complexity of the solution or a lack of capability. The fourth, and we suggest most likely, is that the implementation of solutions to improve performance is complex and requires large-scale change (cf. van Hoek et al., 2010). The time-scale and complexity of such radical change may lead firms to abandon or only partially implement solutions before the performance benefits are realized. Understanding and removing the barriers that impede benefits realization will require a concerted effort by the supply chain community (academics, advisors, technology providers, and practitioners) to work collaboratively to operationalize SCM thinking and deliver measurable, sustainable benefit on a consistent basis. The current challenges presented by a global economy, accelerating rates of change, and the emergence of new and innovative competitors will undoubtedly persist.
The role of SCM as an enabler of business success will not go away; it is more likely that the pressure on the supply chain will increase. SCM's response needs, first, to find a more effective way of aligning thinking and practice and accelerating the flow of promising practices across the supply network and, second, to address the challenge of ever-increasing complexity. The stages of SCI presented here represent what we think to be the next stages in the evolution of SCI. Goal directed supply networks evolved from external integration when firms realized that they existed within a network and that non-strategic suppliers could benefit from the sharing of demand data to facilitate planning. The next stage of evolution was devolved, collaborative clusters. Clusters arose as focal firms realized that the coordination of a network was burdensome and that lead suppliers could manage clusters to reduce these coordination costs. This brings us to the current state of the art, but what could the next 25 years have in store for SCM? Changes to supply chains over the next quarter of a century will be driven by changes in the business environment, technology, economies, and customer preferences. There is no doubt that the business environment will become even more volatile, uncertain, complex, and ambiguous (cf. Bennett and Lemoine, 2014). As such, supply chains need to be configured to navigate the future environment and will move ever closer to becoming complex adaptive systems (cf. Choi et al., 2001). We are also seeing a rise in technologies that hold promise for tomorrow's supply chains and the democratization of product and process knowledge (Anderson, 2012). These include big data and additive manufacturing technologies such as 3D printing (Brennan et al., 2015). Overlaid upon this are changes within developing economies as they industrialize and wages in those countries increase.
As countries move from developing to developed, they become less attractive as manufacturing destinations because the cost benefits are eroded. A reduction in cost benefits, coupled with higher logistics costs, long transport times, and increased risks, has influenced firms to move production closer to the point of consumption (Ellram et al., 2013); a phenomenon known as re-shoring or near-shoring. A further complicating factor is that customers will require even more differentiation and we will move toward "markets of one." We are already seeing this on a limited scale with the customization of sportswear through the MI-Adidas and Nike-ID initiatives, but these provide somewhat limited choices. We suggest that customers will require greater levels of customization. Given these changes, what will the supply chain of the future look like? We suggest that the SCOM of the future will be atomized, adaptive fulfillment communities. They will be atomized - rather than clustered - because the need for, and intensity of, collaboration will increase, leading to smaller, less intense clusters which we class as communities. The relationships within these communities (i.e. atomized clusters) will be underpinned by shared norms, values, and behaviors. They will be adaptive to both supply and demand, as well as being reactive and pro-active to wider geo-political, business, economic, environmental, and social factors. The networks of the future will also be democratic to supply and demand, hence our use of the term "fulfillment." Integration will be philosophical and driven by behaviors, insight, and information, not processes and systems. SCM and SCI have undergone rapid evolution over the past quarter of a century; we look forward to the next 25 years.

Figure 1 Stages of supply chain development
Figure 2 A timeline of SCM strategies, tools, and techniques
Figure 3 Dimensions of a supply chain operating model
Figure 4 Supply chain operating model dynamics of change
Figure 5 The compound effect of the relentless pursuit of low-cost sourcing
Figure 6 Phases in supply chain management development
Figure 7 Networked supply chain
Figure 8 Devolved, collaborative supply chain clusters
Table I Comparison of the four operating models
Table II RONA, inventory turns, and supply chain proxy values for ten selected companies from 1997-2014
This work proposes two additional operating models that firms can implement in order to improve the efficacy of their supply chains.
[SECTION: Purpose] We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers - people able to put together the right information at the right time, think critically about it, and make important choices wisely (Edward O. Wilson). I have the privilege of working with key leaders in contemporary business and society, helping them resolve difficult problems, clarify meaning and purpose, think through complex strategic challenges, and navigate tricky emotional situations. From this viewpoint I observe the workings of the CEO mind in the CEO environment, and note leaders who are constantly striving to understand, to make sense, to figure it out, to have a breakthrough, to be persuasive, creative, and innovative. Facing the pressure to run a mental marathon every single day - at a sprinting pace - they usually lack effective thinking time and rarely have an effective thinking model that can be applied in any situation. Many of the business books which line the shelves range across strategy, management, and leadership, although few address the question of what thinking is and offer a simple model that can be readily practised at the hectic pace at which leaders work. This paper seeks to redress this issue.

1.1 Thinking in organisations - the challenge

The ability to marshal and understand data and form correct judgements is a critical business skill, since these directly influence the decisions taken in the organisation. Many firms focus on action, execution, and getting things done, without paying sufficient attention to the underlying thinking processes. Thinking precedes doing, and excellent thinking contributes to excellent outcomes.
On the other hand, poor thinking contributes to poor outcomes. Some of the myriad examples that I have observed, which may be attributable to poor thinking, include:
* inappropriate hires and promotions;
* erroneous, and sometimes foolish, investments and resource allocation;
* decisions driven by ego, with little contribution from key members of a team;
* acquisitions that fail to deliver the mooted benefits;
* creation of unintended consequences from ill-informed decision making;
* staff, customers, suppliers, and the public treated badly, damaging engagement, productivity, share price, and profit;
* destruction of brand equity, corporate profiles, and personal reputations;
* money repeatedly wasted on doomed projects, which would have been saved by better planning, with a subsequent faster return on capital;
* suboptimal productivity;
* interpersonal conflict and failed relationships;
* failure to act in a timely manner; and
* overreliance on data, leading to decision paralysis and diminished personal initiative.

The quality of thinking in organisations sometimes lacks the reflection and rigour required for management in the contemporary environment. Initial impressions become corporate reality without further reflection. People develop fixed views and have little disposition to hear counter-arguments or alternative perspectives. Firms ask for ever more data and analysis, but find no one is able to take responsibility for making a decision. Rear-looking data, such as benchmarking and case studies, is used to analyse a path to the future, often with disastrous results. Decisions are made with no real understanding of the underlying thinking.
The crush of ambiguity and complexity leads to over-simplification and avoidance of the hard mental work that is required. The use of a relatively narrow set of terms to talk about knowing - most often confused with sense perception or understanding - and a lack of coherent cognitive theory make it difficult for leaders to engage with those who assume an air of expertise, or argue a case strongly. The framework proposed in this paper addresses that need, providing a set of example questions that can be asked in any meeting, of any advice or report, or of one's own thinking, to arrive at greater insight and better judgement, which in turn facilitates better decisions and actions. It may also put into perspective, and provide a rationale for, some of the logic and processes currently employed by managers.

Two leading contemporary writers - Howard Gardner and Roger Martin - are actively addressing the question of how leaders think in today's enterprise. With a background in psychology, Gardner (2006) suggests we need "five minds" to succeed in the twenty-first century. These include:
(1) a disciplined mind, which "has mastered at least one way of thinking - a distinctive mode of cognition [...] (in order to) succeed at any demanding workplace" (Gardner, 2006);
(2) a synthesising mind, which "takes information from disparate sources, understands and evaluates that information objectively and puts it together in ways that make sense to the synthesizer and other persons" (Gardner, 2006);
(3) a creating mind, that "puts forth new ideas, poses unfamiliar questions, conjures up fresh ways of thinking, arrives at unexpected answers" (Gardner, 2006);
(4) a respectful mind, which "welcomes differences between human individuals and human groups [...] and seeks to work effectively with them" (Gardner, 2006); and
(5) an ethical mind, to "ponder the nature of one's work and the needs and desires of the society in which one lives" (Gardner, 2006).

This paper proposes an underlying model which is beneficial for each of these minds, and which will aid the effort to acquire them.

Martin, on the other hand, starts with leaders facing complex challenges and wanting to understand how they think. He bases his theory of "integrative thinking" (Martin, 2007) on examples of, and conversations with, successful leaders who demonstrate this type of thinking. He is particularly interested in the mental processes employed by successful leaders, and says integrative thinkers have an ability to comfortably hold opposing ideas in their mind and arrive at new insights or approaches that either allow for both, or create an entirely new solution (Martin, 2007). Barack Obama showed evidence of this integrative thinking capability in his 2010 State of the Union address when he said "Let's reject the false choice between protecting our people and upholding our values" (Obama, 2010). President Obama "consistently lays out the opposing models, not to set up an either/or choice, but to begin the thinking process toward an integrative solution" (Martin, 2010).

1.2 Thinking in business - a solution

This paper addresses the question of what goes on in the human mind when we are thinking, identifies ways we can improve our individual and organisational thinking, and hence foster a thinking organisation. It is based primarily on the work of Bernard Lonergan, a Canadian philosopher who has been called one of the most influential thinkers of the twentieth century (Time Magazine, 1970). Lonergan starts with the thinking person and expounds a dynamic structure of knowing that is common to humanity, irrespective of culture, status, success, or other markers (Lonergan, 1957/1992).
Since learning of his work more than 20 years ago I have found his simple model readily applicable to any conversation or decision-making process, and usually find it to be of great help to clients. Lonergan's cognitional theory provides a method for thinking and knowing that can be used in any endeavour and in any discipline. His approach can incorporate Gardner's five minds into one thinking model, provide a philosophical foundation for Martin's work, and lay the foundation for becoming a thinking organisation. Whether discussing political, commercial or personal issues, economics, strategy or people, systems thinking or chaos theory, this approach can serve as an effective model for structuring one's thinking, and contribute to generating further insights.

The model explained here can become the grounds for a common approach to thinking at both the individual and organisational level. It offers a consistent way of managing the thinking challenges that confront us: creativity and innovation, clarity of thought for effective decision making, resolution of seemingly intractable problems, enabling honesty and the discovery of truth in leadership conversations, conflict resolution, and so on. Although the list is endless, the common starting point is inside the human mind, and understanding the way we think. Recognising this, and having a familiar language to talk about thinking, increases the potential for listening and learning from one another, arriving at new solutions, and avoiding dead ends, blind alleys, and unnecessary conflict.

Before talking about the model, we must first discuss the process of thinking and identify a common trap people fall into - confusing what one's senses perceive with knowing what actually is. One attribute shared by thinking people is the drive to know - a deep-seated desire to understand, to make sense of our world, and to find answers to the questions we face (Lonergan, 1957/1992).
Archimedes' cry of "Eureka", and his subsequent headlong dash through the streets when he solved the problem of measuring the gold content of the king's crown, is a classic example of the power of this desire. Although we may be disinclined to run naked through the streets after solving a bothersome question, the sense of elation and relief we experience when we "get it", and solve the problem at hand, may lead us to jump up and down, pump our fist in the air, or rush out to tell our colleagues.

By taking a moment to reflect on those times when we "got the point", or found a solution to a problem, or worked out which way to proceed, we are able to observe that this understanding is accompanied by a shift in consciousness. This shift is from a state of puzzlement, of frustration, of not "getting it", to a state of awe, of elation, and of understanding. This is an "ah-ha" moment. Insight - coming to understand - brings about this shift in our mental and emotional states.

Most of us remember being a novice in a new field listening to an expert explain something. The expert describes, points, and expounds on connections between this and that, while we struggle to keep up with what is being said. Two people - the novice and the expert - are having quite different experiences, because one understands and the other does not.

Though the act of understanding is central to coming to know, it can easily be neglected. It is much easier to simply "see what is in front of us", to go along with the current view, to rely only upon what we have been told is happening. A desire to leave things uncomplicated can fall into the confusion of assuming that what is obvious in knowing (i.e. looking) is what knowing obviously is. This "cognitional myth" (Ogilvie, 2002) confuses looking with knowing, and is a danger for both thinkers and thinking organisations.

How the myth can operate and mislead is illustrated by the responses to a series of photos that emerged from Abu Ghraib prison in 2004.
Most people would readily recall the graphic nature of the photographs, showing what appeared to be guards torturing, abusing, and degrading prisoners. "The photographs tell it all", stated Seymour Hersh (2004). The filmmaker Errol Morris started with the view that it is not obvious what the photos depict. Quoting Specialist Megan Ambuhl, one of the guards featured in the photos, Morris observed that photographs do not allow people to see "outside the frame" (Burrell, 2008). With a background as a private investigator, Morris "scrutinises data (and) unravels preconceptions" (Burrell, 2008).

The timelines reconstructed from the digital metadata by the prosecution in the case brought against the soldiers only provide further empirical data. They do not reveal what people were thinking, why they were acting in certain ways, why the photos were taken, or what the proximate events were. Empirical data, although appearing to provide compelling visual proof, leaves many questions unanswered. Morris went to work to answer those questions, examining the contents of the photos in great detail and speaking to soldiers, prisoners, and relevant policy makers. The outcome of this process is captured in Morris' (2008) film, Standard Operating Procedure.

Data and fact are two entirely different aspects of knowing. The data contained in the photographs - soldiers, prisoners, prison cells, and so forth - are what we observe. The question of fact, of what is occurring, arises when we try to organise the data into an intelligible whole, from which we can form a hypothesis, which we then test by asking further relevant questions. When the hypothesis has withstood persistent questioning, and no further questions arise, we have arrived at fact and can reasonably agree that "this is so".

The cognitional myth confuses seeing with knowing. Other people can see the same photograph and arrive at different conclusions.
But in the absence of more data we can only hypothesise about the contents, not draw conclusions. The visual alone is not proof. The solution to the cognitional myth is a cognitional model, a "procedure of the human mind [...] that is, a basic pattern of operations employed in every cognitional enterprise" (Lonergan, 1972/1979). These operations "are contained in questions prior to answers [...] move us from ignorance to knowledge [...] (and) go beyond what we know to seek what we do not know" (Lonergan, 1972/1979).

In order to understand the structure of knowing, we need to ask "what is it that I am doing when I am knowing?" The operations of the mind include "seeing, hearing, touching, tasting, smelling, inquiring, imagining, understanding, conceiving, formulating, reflecting, marshalling the evidence, judging, deliberating, evaluating, deciding, speaking, writing" (Lonergan, 1972/1979). These operations occur on four levels of consciousness (Lonergan, 1972/1979), three of which constitute knowing, with the fourth pertaining to the application of knowledge.

The first dimension of consciousness is that of experience and the empirical. The second dimension is the level of intellect, and the effort to understand experience. "A third dimension of rationality emerges when the content of our acts of understanding is regarded as, of itself, a mere bright idea and we endeavour to settle what really is so" (Lonergan, 1972/1979). A fourth dimension of consciousness "comes to the fore when judgement on the facts is followed by deliberation on what we are to do about them" (Lonergan, 1972/1979).

Table I captures the essence of the structure of knowing as depicted by Lonergan.
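The four levels and their guiding questions lend themselves to a simple checklist. The sketch below is a hypothetical illustration only - the class, constant, and function names are mine, not Lonergan's or the paper's - assuming one wanted a programmatic aide-memoire for which level of questioning is in play:

```python
from dataclasses import dataclass

# Hypothetical encoding of the four levels of consciousness, the
# operations on each level, and the kind of question each level
# answers, as described in this paper. All names are illustrative.
@dataclass(frozen=True)
class Level:
    name: str
    operations: tuple
    guiding_question: str

LEVELS = (
    Level("empirical", ("seeing", "hearing", "touching", "tasting", "smelling"),
          "What is it? (a name or label)"),
    Level("intellectual", ("inquiring", "imagining", "understanding", "formulating"),
          "Why is it so? (an explanation)"),
    Level("rational", ("reflecting", "marshalling the evidence", "judging"),
          "Is it so? (yes or no)"),
    Level("responsible", ("deliberating", "evaluating", "deciding"),
          "Will I? (an action)"),
)

def checklist() -> list:
    """Return (level name, guiding question) pairs in order of consciousness."""
    return [(lvl.name, lvl.guiding_question) for lvl in LEVELS]

if __name__ == "__main__":
    for name, question in checklist():
        print(f"{name}: {question}")
```

Working through the checklist in order mirrors the paper's claim that the first three levels together constitute knowing, while the fourth applies that knowledge in a decision.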
Table I summarises the four levels of consciousness, the operations that occur on each level, and the description Lonergan used for the principal occurrence on that level. This is the cognitional model presented in this paper: knowing is not mere looking, but a compound of experiencing, understanding, and judging. On this basis, to say one "knows" is to say that all the necessary data has been considered, relevant questions asked, and sound judgement passed on one's understanding. In the absence of relevant data, or if more questions remain, one does not yet "know" but only supposes or hypothesises.

None of these operations alone constitutes knowing, but all are needed in order to know. The data of sense, or experience, provokes inquiry - not for more data, but in order to make sense, to organise, to understand. Understanding leads to judgement about the veracity of what is understood. Having then made a judgement about what one knows, we seek to apply that knowledge in the decisions that are made.

The type of question being asked helps one to distinguish the level of operation. On the first level one seeks a name or label in the answer. "The second level of questions seeks an explanation; the third level seeks a very short answer - yes or no. The fourth level seeks an answer to the question 'will I?'" (Little, 2010), resulting in an action. This model is readily applicable to any situation which calls for sound judgement, or where we find ourselves challenged to make sense of disparate data.

3.1 Observing the cognitional model

As a young midshipman on a cargo ship I was assigned to examine the contents of the hold to ensure nothing had come adrift during a particularly fierce storm. The ship had recently left a port rife with rumours about waterfront wars and disappearing bodies. Descending a vertical shaft into inky blackness, surrounded by the groans and shudders of a large ship in a storm, can be an unnerving experience.
Vehicles, industrial machinery, and various pallets of goods covered the floor of the hold. I quickly checked the bolts and shackles by torchlight to ensure everything remained fast. Curiosity got the better of me as I focused the torch inside the rear of a small truck. Imagine my surprise and horror when I saw the limbs and torso of a body spreadeagled on the floor. The darkness of the hold was transformed into a tomb, as my heart raced and I fled the scene.

The captain insisted I return with a colleague to confirm my macabre finding. Tentatively we approached the truck, each encouraging the other to go inside and look more closely. Marshalling our combined courage we opened the doors and approached the body. A full-body diving wetsuit greeted us, to our great relief, and my chagrin.

In this frightening event, the darkness, limbs, and torso compounded rapidly to form a concept in my mind with which I readily agreed: "there is a dead body in the hold". Only by asking further questions - approaching the "body" and examining it more closely - and modifying my original understanding could a correct judgement be made: "this is not a dead body". The "proof" was overwhelming at first pass, and my emotional state allowed irrelevant data (the darkness, and the rumours from the last port) to inform my understanding.

We can recognise the cognitional model in this example. Sense experience provided raw material for the intellect, and as a result of active questioning one begins to understand, fulfilling the human drive to know. Insight provides the relationships between the data, enabling us to "join the dots", creating an "intelligible unity" (Ogilvie, 2002). Having had an insight, we then form a concept, a general expression of that insight, setting aside the data which is irrelevant. For example, the fact that the ship's hold is dark is irrelevant to the questions posed.
Having formed a concept, we test its veracity, since there may still be further relevant questions that need to be asked which would render our concept invalid. We endeavour to discover the accuracy of our understanding. When we have asked all the relevant questions, and formed a correct insight or understanding, we are then able to assent to truth, as there is only one answer to the question "is it so?", and that answer is "yes".

Judgement also relies on insight, grasping the "sufficiency of evidence" (Ogilvie, 2002). It is not one's perception of reality, but an assertion of reality. Failure to grasp all the evidence means that one is only guessing; having grasped the evidence and refused to accept or deny it, one is demonstrating foolishness, blindsightedness, or bias. "Ignorance, error, negligence, malice that blocks this dynamic structure is obscurantism in its most radical form" (Lonergan, 1972/1979). If there is insufficient data to form an accurate understanding, then the wise person acknowledges that their judgement is limited, and remains prepared to modify their conclusion as more data comes to hand.

The tendency to immediately accept our perception as being truly the case constitutes a failure to ask sufficient questions. Much of that arises because of the mental models we carry around. One of the biggest challenges for effective thinking is our existing mental models, which remain largely hidden and unarticulated. A mental model is "an ingrained way of thinking" (Senge, 1990/2006), or "an intelligible interlocking set of terms and relations that (we use when) describing reality or forming hypotheses" (Lonergan, 1972/1979). We are inclined to "assume that our models of reality are identical to reality itself" (Martin, 2007), making it difficult to understand and make sense of reality.
Familiarity with the cognitional operations detailed in this paper can help identify and grasp the mental models that influence our thinking, and provide a framework for sense making. The presence of mental models creates a tendency to see what serves our purpose, or to interpret data in a way that confirms our prior understanding. Steven Johnson's (2006) account of the cholera epidemic in London in 1854, and the struggle between opposing mental models to understand the disease, is a vivid example of this tendency. The accepted scientific explanation for the spread of cholera was known as the miasma theory, which held that cholera was an airborne disease. The observation that the disease appeared to spread rapidly among those living in packed, squalid conditions seemed to confirm this view. As long as the miasma theory held, any cure would focus on improving air quality and circulation.

John Snow, the main protagonist in the story, had formed a hypothesis during an earlier outbreak that cholera was waterborne. During the 1854 epidemic he slowly and painstakingly assembled data - testing water from different pumps, discovering the habits of victims from interviews with surviving relatives, mapping the times and locations where people contracted the illness. This data supported his hypothesis, leading to his judgement that cholera was waterborne, and that this outbreak was emanating from one pump in particular. On this basis a decision was made to remove the pump handle and deny access to that pump. The number of cholera cases rapidly declined, but Snow realised this did not prove his theory, as the epidemic may have been near the end of its lifecycle. Johnson's work reads like a detective novel as Snow continued assembling the data required to confirm his hypothesis and the veracity of his judgement.

The authorities agreed that a case of someone in a remote location drinking water delivered from a suspect well would prove the role of water in spreading the disease.
When Susannah Eley died in Hampstead, after drinking water brought to her from the Broad Street well some distance away, it seemed Snow had his conclusive proof. Existing mental models, however, blinded the Cholera Commission, which used her death to prove its theory of an airborne disease, saying "the atmosphere must be so poisoned that it has infected the water as well" (Johnson, 2006).

The tendency to become "conceptually mired in the prevailing model" (Johnson, 2006) is not limited to nineteenth-century England, nor just to what serves our purpose. We are also blinded by our field of expertise. Johnson's story demonstrates clearly that our view of what is relevant is coloured by our mental models. In the face of Snow's compelling evidence that cholera was waterborne, the General Board of Health and influential media commentators refused to be swayed, so firmly were they wedded to the miasma theory. The event offers "a brilliant case study in how dominant intellectual paradigms can make it more difficult for the truth to be established" (Johnson, 2006). It shows the dangers of accepting too readily what is immediately presented and unquestioned as being the true state of affairs.

Operating from our field of expertise explains why the finance director tends to search for and give added weight to financial data, having less understanding of (say) production or supply chain data, while the director of human resources will focus on data relating to people, performance, and culture. This also helps account for why specialisation can be an obstacle to being an effective CEO, and why functional or business silos can be an obstacle to organisational growth and performance. Both encourage lenses which unknowingly and unintentionally filter out relevant data.
In this situation an authentic leader who is aware of their own mental models, and comfortable with their limitations, can bring out the best from a team by inviting and welcoming diverse perspectives, working together to generate new insights. Such a process facilitates new understanding from wider data sets, promotes better judgement, and lays the foundation for a thinking organisation. In order to generate better results, corporations need to adopt and foster a culture of continuous organisational learning (Senge, 1990/2006). But in order to be "learning organisations" they first need to become effective thinking organisations. And in order to become a thinking organisation, they need to become an organisation of thinkers, since organisational thinking is a compound of individual thinking.

5.1 Organisational culture

The culture in which we find ourselves, whether a community, a nation, or a firm, has as profound an impact on thinking as our mental models. If the environment welcomes open questions, delights in inquiry, and fosters conversation, we are much more likely to observe innovation and creativity. An environment which stifles debate and shuts down inquiry will almost certainly make poor decisions and diminish stakeholder engagement. Leaders who persevere with data, encourage discussion, and reward ideas will foster greater openness, lift organisational performance, and enhance the potential for breakthrough thinking.

5.2 Inquiry as the driving force of innovation

Questions turn raw data into something of value - knowledge, insight, and eventually wisdom. Inquiry, via questioning, is like a driveshaft powering our mind to constantly add value to data. We form an image from experiences, form a concept based on understanding, give assent to judgement, and then act on decisions that are made. Questions provide power to our thinking. Better questions provide more efficient power.
A repeated series of ever better questions can foster creativity and innovation, minimise the time to breakthrough thinking, and maximise the opportunity to generate competitive advantage. Restricting the questioning process, whether in the laboratory or the management meeting, shuts down the power of the mind and limits insight and innovation. Clarity about the level of consciousness being employed - empirical, intellectual, rational, responsible - and the relevant operation at that level - sense perception, inquiry and understanding, reflection and judgement, application of knowledge - drives the process forward. Whether facing Newton's question of why objects fall to earth, or Ray Kroc's question of how to systemise and replicate a fast food restaurant, or an organisational question about creating blue ocean strategy, this simple model allows one to appreciate the relevant operation of our inquiry and the questions best suited to that level.

5.3 Speed of inquiry as competitive advantage

Recognising that we will face a challenge in December and finding a solution the following January is only helpful for subsequent occurrences of the same problem. Competitive advantage is a function of being able to turn data into innovation faster than the competition. Solving the dilemma six months sooner both ameliorates the problem and creates advantage. The power of inquiry is only realised when the timeframe to innovation is less than the time required for resolution, and is amplified when it significantly shortens the cycle. In order to survive - let alone excel or outperform - the speed with which we learn has to exceed the rate of change in our environment (Hames, 2007). Speed of learning refers to the "time taken to optimally engage in the process of transforming information into purposeful change" (Hames, 2007). Using the power of inquiry to push data along the data value chain accelerates learning, innovation, and time to market.
Lonergan's model of knowing is a simple framework for driving this inquiry.

5.4 Reverse engineering a decision

Questions can help us to mine data, review decisions, clarify understanding, and test judgements. We can "reverse engineer" (Martin, 2007) a decision by working backwards to review, analyse, and establish what data must exist to support our understanding and subsequent judgement. Lonergan's cognitional model provides a robust approach for reviewing a decision prior to acting, as shown in the following example.

In one of our regular sessions a CEO mentioned the challenge he faced restructuring his senior leadership team (SLT), and wanted to test his thinking prior to taking action. We reverse engineered the proposed decision to identify what assumptions were being made and what data needed to exist, identify any gaps in thinking, and perhaps modify the proposal. Peter, the CEO, had decided to restructure the SLT by creating a new head of services role and promoting Bob and Ian to the team. As we talked through the proposed decision Peter explained the judgements he was making:
* Bob and Ian ran the two biggest divisions of the firm, and had been reporting via a COO who was retiring. Promoting them was the right thing to do since each managed major profit centres for the firm and already contributed at SLT level. Promotion would acknowledge this reality, give them added authority to match their responsibilities, and foster better teamwork and resource allocation.
* Creating a new head of services would foster greater teamwork between divisions. They could liaise directly with Bob and Ian as peers on the SLT.
* Getting the structure set up for future growth was vital at this time.

These judgements were based on his understanding that if Bob and Ian were successful in their new roles it would minimise his own workload and enable him to focus on his key strategic challenges. It would also balance out some of the distorted work allocations.
Expectations from head office meant he had to improve efficiencies in the local firm, to prepare for growth. This understanding arose from a range of data or experiences:
* Peter was constantly distracted by the urgent rather than the important.
* Bob was overworked and Ian underutilised.
* There was often conflict, confusion, and at times outright competition for resources between divisions.
* The global firm was looking to his division to contribute significantly to group profit within five years, and had allocated considerable funds for the development of new operations to achieve this goal.

Having reverse engineered Peter's proposed decision, it seemed reasonable in the circumstances. But because his focus had been on Bob and Ian, whose divisions would contribute the expected profit, he had failed to ask further relevant questions. One vital question concerned the aspirations and potential of his team, and succession planning for his own role. He quickly explained that Bob and Ian were highly competent, but he had reservations about their potential to succeed him.

Mary had the single most important strategic role in the firm at that time, although she was not part of the SLT. She was highly competent, got things done quickly and efficiently, demonstrated considerable potential, and was the leading internal candidate for the CEO role. Since she would play a major role in the success of the group investment, Peter wanted Mary to stay focused on the task at hand. He did not remember, until questioned, that 18 months previously she had committed two years to the role and was quite clear that she would leave without further opportunity.
It rapidly became clear to Peter that acting on his proposed decision would block any potential moves for Mary, in which case she would probably leave within six months, putting the entire project at risk and leaving him without an internal successor. As we continued to test each component - the data, the understanding, the judgement, and the subsequent decision - a resolution was achieved when Peter realised he could promote Mary to the SLT as head of services, playing to her aspirations, developing her capability, and positioning her for eventual succession. At the same time he could provide additional support to Mary in her current role, so she could develop her own successor over the next six months. Although this solution may seem self-evident to the reader when presented with the story in this way, the approach provided Peter with an innovative solution to his problem, and was a breakthrough in his thinking.

The dominant organisational questions seem to revolve around action. Countless books have been written on execution, or getting things done, or doing what matters, arguing that the ability to execute is a source of competitive advantage (Bossidy and Charan, 2002). Execution links a firm's people, strategy, and operations (Bossidy and Charan, 2002). As presented in this paper, thinking precedes doing, and poor thinking contributes to poor outcomes. Hence the way we think at work is crucial to success. While many firms may focus on execution, few take the time to ask the better question - "How should we think?" - in order to link thinking to execution.

6.1 Learning to think

Discovering the "facts" about an event is just as important, and just as difficult, in business as it is in photographs - and perhaps more so. It is hard for management to obtain objective information or solid data that has not been coloured by bias, or filtered by fear. Vested interest is a powerful determinant of action.
Failure to speak out and give one's perspective - particularly when it deviates from the strongly held views of others - is endemic, to the detriment of productivity and profit (Kakabadse, 2009). Lonergan's cognitional model provides a way to critique our thinking, and the thinking of others. When presented with a report, analysis, or decision, we can use this approach to ask questions like:
* Is this actually raw data, with no real analysis?
* Does the writer confuse looking with knowing?
* Is there sufficient data for reaching a proper understanding?
* Is the writer merely presenting data and implying that it is truth?
* Has the writer presumed their understanding will be borne out by the data?
* Have they asked all the relevant questions and tested their understanding with the reflective question "Is it so?"
* Is their assent to the question expressed with confidence, or do they demonstrate lingering doubt, implying that yet more questions exist?

The reader will do well to examine the language used. Does the writer distinguish the different operations of experience, understanding, judging, and deciding, and recognise that all are required for knowing?

6.2 Learning from thinking

Critical incidents reveal much about our thinking patterns (Martin, 2007). As global citizens we witness history unfolding in the consequences of significant decisions - the bailout of banks during the financial crisis, military action in Iraq, Copenhagen climate action, and so on. Corporate leaders are party to decisions about mergers and acquisitions, redundancies, offshoring and outsourcing, environmental and social impact, growth and profit targets, and people policies. With the passage of time we are able to see the outcomes of those decisions with greater clarity. But in order both to understand the decisions that were made, and to make better quality decisions in the future, we need to unpack the original thinking process, not just review the outcomes.
In order to learn from our past decisions we must document not only the decision but also the underlying judgement, understanding and data, including specific expectations regarding outcomes:

* What is the key question we are asking?
* What is our aim in this inquiry?
* What is the data upon which we base our understanding?
* What data did we discard, or were unable to obtain?
* Was any data excluded, whether through ignorance, prejudice, bias, or malice?
* Did we acknowledge shortcomings in the data and seek to address those limitations, either at the time or in the future, to modify our decisions?
* Did we take sufficient time - as the circumstances allowed - to examine and understand the data?
* Did we allow those with different views and perspectives to challenge our understanding?
* Did we listen to all points of view, not just those which buttressed our position?
* What is the understanding we have arrived at?
* What is the concept we have formed?
* Based on the judgement we have made (about the correctness of our concept), what decision/s did we take?
* What specific outcomes, and in what timeframes, do we expect from the decision/s?
* What outcomes actually occurred, and in what timeframe?
* How does this contribute further data and hence modify our understanding?

The human tendency, having arrived at an outcome, is to revise - whether knowingly or unknowingly - one's original thinking and justify the outcome. This happens because we have not written down, at the time of taking the decision, the outcome we expected. Only by carefully documenting our thought process can we learn from experience, and recognise our blind spots and biases, our haste or our caution. This is a crucial practice for individuals, teams, and enterprises.

As human beings we ask many questions and derive great satisfaction from discovering the answers. We are united by the questions we ask, but divided by the answers - usually because we have not fully understood the other person or people.
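The documentation discipline described above can be sketched as a simple decision journal. The Python sketch below is purely illustrative - the class and field names are my own, not part of Lonergan's or any published framework - but it captures the essential practice: record the data, understanding, judgement, decision, and expected outcomes at the time of deciding, so that later review compares expectations against what actually occurred rather than against revised memory.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    """One journal entry, written at the time the decision is taken."""
    question: str                 # the key question being asked
    data: List[str]               # the evidence considered
    data_gaps: List[str]          # data discarded or unobtainable
    understanding: str            # the concept formed from the data
    judgement: str                # the verdict on that concept: "is it so?"
    decision: str                 # the action taken
    expected_outcomes: List[str]  # written down BEFORE acting
    actual_outcomes: List[str] = field(default_factory=list)  # filled in later

    def review(self) -> List[str]:
        """Return expected outcomes not yet observed, prompting revision
        of the original understanding rather than of our memory of it."""
        return [e for e in self.expected_outcomes
                if e not in self.actual_outcomes]

# Usage: record the expectation at decision time, review later.
entry = DecisionRecord(
    question="Should we restructure the senior leadership team?",
    data=["Bob overworked", "Ian underutilised", "COO retiring"],
    data_gaps=["Mary's career aspirations"],
    understanding="Promoting Bob and Ian balances workload",
    judgement="Yes, given the data gathered so far",
    decision="Promote Bob and Ian to the SLT",
    expected_outcomes=["CEO workload falls within six months"],
)
entry.actual_outcomes.append("CEO workload unchanged")
print(entry.review())  # unmet expectations surface for re-examination
```

The point of the sketch is the `expected_outcomes` field: committing expectations to writing before acting is what makes the later review an honest test of the original thinking.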
The climate change debate is a classic example of this problem. We are united by the questions, and by the desire to find answers. This paper suggests we are divided by the answers because we have failed to identify a common cognitional approach. Would a more robust approach to the data about weapons of mass destruction - interpreting and understanding that data, and judging the accuracy of our understanding after ensuring all relevant questions were asked - have avoided the ensuing conflict, destabilisation, and human suffering?

Complexity shows no sign of abating. Ambiguity and paradox are on the increase. Poor decisions are amplified quickly through global systems. Society and our planet have little tolerance for getting it wrong. If ever there was a time to improve our thinking, this is it. The influence of the cognitional myth - that to know anything one need only look at it - should not be underestimated. Few people or organisations take the time in this fast-paced world to ask sufficient questions of the data, to come to an insight about that data, and then to question that insight for verification.

Although time always seems in short supply, taking the time to ask all the relevant questions, and documenting our decision-making processes, can only prove beneficial in the long run. Actively reviewing those processes is a key to learning about thinking, improving our thinking, and becoming a thinking organisation. In order to be effective learning organisations, we need first to become thinking organisations. We become thinking organisations by becoming organisations of thinkers. We become thinkers by understanding the operations of the human mind, and applying a rigorous thinking process to all that we do. Better thinking can only lead to better outcomes as we wade through the challenges confronting our organisations and our world.

Table I. Lonergan's structure of knowing

* Empirical level - experiencing: seeing, hearing, touching, tasting, smelling
* Intellectual level - understanding: inquiring, imagining, understanding, conceiving, formulating
* Rational level - judging: reflecting, marshalling the evidence, judging
* Responsible level - deciding: deliberating, evaluating, deciding, speaking, writing
[SECTION: Purpose] The purpose of this paper is to provide a model of thinking for managers that is readily applicable in their situation and which will foster effective decision making.
[SECTION: Method] We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers - people able to put together the right information at the right time, think critically about it, and make important choices wisely (Edward O. Wilson).

I have the privilege of working with key leaders in contemporary business and society, helping them resolve difficult problems, clarify meaning and purpose, think through complex strategic challenges, and navigate tricky emotional situations. From this viewpoint I observe the workings of the CEO mind in the CEO environment, and note leaders who are constantly striving to understand, to make sense, to figure it out, to have a breakthrough, to be persuasive, creative, and innovative. Facing the pressure to run a mental marathon every single day - at a sprinting pace - they usually lack effective thinking time and rarely have an effective thinking model that can be applied in any situation. The business books which line the shelves range across strategy, management, and leadership, yet few address the question of what thinking is, or offer a simple model that can be readily practised at the hectic pace at which leaders work. This paper seeks to redress this gap.

1.1 Thinking in organisations - the challenge

The ability to marshal and understand data and form correct judgements is a critical business skill, since these directly influence the decisions taken in the organisation. Many firms focus on action, execution, and getting things done, without paying sufficient attention to the underlying thinking processes. Thinking precedes doing, and excellent thinking contributes to excellent outcomes.
On the other hand, poor thinking contributes to poor outcomes. Some of the myriad examples I have observed, which may be attributable to poor thinking, include:

* inappropriate hires and promotions;
* erroneous, and sometimes foolish, investments and resource allocation;
* decisions driven by ego, with little contribution from key members of a team;
* acquisitions that fail to deliver the mooted benefits;
* creation of unintended consequences from ill-informed decision making;
* staff, customers, suppliers, and the public treated badly, damaging engagement, productivity, share price, and profit;
* destruction of brand equity, corporate profiles, and personal reputations;
* money repeatedly wasted on doomed projects, which better planning would have saved, with a faster subsequent return on capital;
* suboptimal productivity;
* interpersonal conflict and failed relationships;
* failure to act in a timely manner; and
* overreliance on data, leading to decision paralysis and diminished personal initiative.

The quality of thinking in organisations sometimes lacks the reflection and rigour required for management in the contemporary environment. Initial impressions become corporate reality without further reflection. People develop fixed views and have little disposition towards hearing counter-arguments or alternative perspectives. Firms ask for ever more data and analysis, but find no one able to take responsibility for making a decision. Rear-looking data, such as benchmarking and case studies, is used to analyse a path to the future, often with disastrous results. Decisions are made with no real understanding of the underlying thinking.
The crush of ambiguity and complexity leads to over-simplification and avoidance of the hard mental work that is required. The use of a relatively narrow set of terms to talk about knowing - most often confused with sense perception or understanding - and the lack of a coherent cognitive theory make it difficult for leaders to engage with those who assume an air of expertise, or who argue a case strongly. The framework proposed in this paper addresses that need, providing a set of example questions that can be asked in any meeting, of any advice or report, or of one's own thinking, to arrive at greater insight and better judgement, which will in turn facilitate better decisions and actions. It may also put into perspective, and provide a rationale for, some of the logic and processes currently employed by managers.

Two leading contemporary writers - Howard Gardner and Roger Martin - are actively addressing the question of how leaders think in today's enterprise. With a background in psychology, Gardner (2006) suggests we need "five minds" to succeed in the twenty-first century. These include:

(1) a disciplined mind, which "has mastered at least one way of thinking - a distinctive mode of cognition [...] (in order to) succeed at any demanding workplace" (Gardner, 2006);

(2) a synthesising mind, which "takes information from disparate sources, understands and evaluates that information objectively and puts it together in ways that make sense to the synthesizer and other persons" (Gardner, 2006);

(3) a creating mind, that "puts forth new ideas, poses unfamiliar questions, conjures up fresh ways of thinking, arrives at unexpected answers" (Gardner, 2006);

(4) a respectful mind, which "welcomes differences between human individuals and human groups [...]
and seeks to work effectively with them" (Gardner, 2006); and

(5) an ethical mind, to "ponder the nature of one's work and the needs and desires of the society in which one lives" (Gardner, 2006).

This paper proposes an underlying model which is beneficial for each of these minds, and which will aid the effort to acquire them. Martin, on the other hand, starts with leaders facing complex challenges and wanting to understand how they think. He bases his theory of "integrative thinking" (Martin, 2007) on examples of, and conversations with, successful leaders who demonstrate this type of thinking. He is particularly interested in the mental processes employed by successful leaders, and says integrative thinkers have the ability to hold opposing ideas comfortably in their minds and arrive at new insights or approaches that either allow for both, or create an entirely new solution (Martin, 2007). Barack Obama showed evidence of this integrative thinking capability in his 2010 State of the Union address when he said, "Let's reject the false choice between protecting our people and upholding our values" (Obama, 2010). President Obama "consistently lays out the opposing models, not to set up an either/or choice, but to begin the thinking process toward an integrative solution" (Martin, 2010).

1.2 Thinking in business - a solution

This paper addresses the question of what goes on in the human mind when we are thinking, identifies ways we can improve our individual and organisational thinking, and hence foster a thinking organisation. It is based primarily on the work of Bernard Lonergan, a Canadian philosopher who has been called one of the most influential thinkers of the twentieth century (Time Magazine, 1970). Lonergan starts with the thinking person and expounds a dynamic structure of knowing that is common to humanity, irrespective of culture, status, success, or other markers (Lonergan, 1957/1992).
Since learning of his work more than 20 years ago I have found his simple model readily applicable to any conversation or decision-making process, and usually find it to be of great help to clients. Lonergan's cognitional theory provides a method for thinking and knowing that can be used in any endeavour and in any discipline. His approach can incorporate Gardner's five minds into one thinking model, provide a philosophical foundation for Martin's work, and lay the foundation for becoming a thinking organisation. Whether discussing political, commercial or personal issues, economics, strategy or people, systems thinking or chaos theory, this approach can serve as an effective model for structuring one's thinking, and contribute to generating further insights.

The model explained here can become the grounds for a common approach to thinking at both the individual and the organisational level. It offers a consistent way of managing the thinking challenges that confront us: creativity and innovation, clarity of thought for effective decision making, resolution of seemingly intractable problems, the enabling of honesty and discovery of truth in leadership conversations, conflict resolution, and more. Although the list is endless, the common starting point is inside the human mind, and understanding the way we think. Recognising this, and having a familiar language in which to talk about thinking, increases the potential for listening to and learning from one another, arriving at new solutions, and avoiding dead ends, blind alleys, and unnecessary conflict.

Before talking about the model, we must first discuss the process of thinking and identify a common trap people fall into - confusing what one's senses perceive with knowing what actually is. One attribute shared by thinking people is the drive to know - a deep-seated desire to understand, to make sense of our world, and to find answers to the questions we face (Lonergan, 1957/1992).
Archimedes' cry of "Eureka" and subsequent headlong dash through the street when he solved the problem of measuring the gold content of the king's crown is a classic example of the power of this desire. Although we may be disinclined to run naked through the streets after solving a bothersome question, the sense of elation and relief we experience when we "get it", and solve the problem at hand, may lead us to jump up and down, pump our fist in the air, or rush out to tell our colleagues.

By taking a moment to reflect on those times when we "got the point", or found a solution to a problem, or worked out which way to proceed, we are able to observe that this understanding is accompanied by a shift in consciousness. The shift is from a state of puzzlement, of frustration, of not "getting it", to a state of awe, of elation, and of understanding. This is an "ah-ha" moment. Insight - coming to understand - brings about this shift in our mental and emotional states.

Most of us remember being a novice in a new field, listening to an expert explain something. The expert describes, points, and expounds on connections between this and that, while we struggle to keep up with what is being said. Two people - the novice and the expert - have quite different experiences, because one understands and the other does not. Though the act of understanding is central in coming to know, it can easily be neglected. It is much easier simply to "see what is in front of us", to go along with the current view, to rely only upon what we have been told is happening. A desire to leave things uncomplicated can fall into the confusion of assuming that what is obvious in knowing (i.e. looking) is what knowing obviously is. This "cognitional myth" (Ogilvie, 2002) confuses looking with knowing, and is a danger for both thinkers and thinking organisations.

How the myth can operate and mislead is illustrated by the responses to a series of photos that emerged from Abu Ghraib prison in 2004.
Most people would readily recall the graphic nature of the photographs, showing what appeared to be guards torturing, abusing, and degrading prisoners. "The photographs tell it all," stated Seymour Hersh (2004). The filmmaker Errol Morris started with the view that it is not obvious what the photos depict. Quoting Specialist Megan Ambuhl, one of the guards featured in the photos, Morris observed that photographs do not allow people to see "outside the frame" (Burrell, 2008). With a background as a private investigator, Morris "scrutinises data (and) unravels preconceptions" (Burrell, 2008).

The timelines reconstructed from the digital metadata by the prosecution in the case brought against the soldiers provide only further empirical data. They do not reveal what people were thinking, why they were acting in certain ways, why the photos were taken, or what the proximate events were. Empirical data, although appearing to provide compelling visual proof, leaves many questions unanswered. Morris went to work to answer those questions, examining the contents of the photos in great detail, and speaking to soldiers, prisoners, and relevant policy makers. The outcome of this process is captured in Morris' (2008) film, Standard Operating Procedure.

Data and fact are two entirely different aspects of knowing. The data contained in the photographs - soldiers, prisoners, prison cells, and so forth - are what we observe. The question of fact, of what is occurring, arises when we try to organise the data into an intelligible whole, from which we can form a hypothesis, which we then test by asking further relevant questions. When the hypothesis has withstood persistent questioning, and no further questions arise, we have arrived at fact and can reasonably agree that "this is so". The cognitional myth confuses seeing with knowing. Other people can see the same photograph and arrive at different conclusions.
But in the absence of more data we can only hypothesise about the contents, not draw conclusions. The visual alone is not proof. The solution to the cognitional myth is a cognitional model, a "procedure of the human mind [...] that is, a basic pattern of operations employed in every cognitional enterprise" (Lonergan, 1972/1979). These operations "are contained in questions prior to answers [...] move us from ignorance to knowledge [...] (and) go beyond what we know to seek what we do not know" (Lonergan, 1972/1979).

In order to understand the structure of knowing, we need to ask: "What is it that I am doing when I am knowing?" The operations of the mind include "seeing, hearing, touching, tasting, smelling, inquiring, imagining, understanding, conceiving, formulating, reflecting, marshalling the evidence, judging, deliberating, evaluating, deciding, speaking, writing" (Lonergan, 1972/1979). These operations occur on four levels of consciousness (Lonergan, 1972/1979), three of which constitute knowing, with the fourth pertaining to the application of knowledge. The first dimension of consciousness is that of experience and the empirical. The second dimension is the level of intellect, and the effort to understand experience. "A third dimension of rationality emerges when the content of our acts of understanding is regarded as, of itself, a mere bright idea and we endeavour to settle what really is so" (Lonergan, 1972/1979). A fourth dimension of consciousness "comes to the fore when judgement on the facts is followed by deliberation on what we are to do about them" (Lonergan, 1972/1979). Table I captures the essence of the structure of knowing as depicted by Lonergan.
It summarises the four levels of consciousness, the operations that occur on each level, and the description Lonergan used for the principal occurrence on that level. This is the cognitional model being presented in this paper: knowing is not mere looking, but a compound of experiencing, understanding, and judging. On this basis, to say one "knows" is to say that all the necessary data has been considered, relevant questions asked, and sound judgement passed on one's understanding. In the absence of relevant data, or if more questions remain, one does not yet "know" but only supposes or hypothesises.

None of these operations alone constitutes knowing; all are needed in order to know. The data of sense, or experience, provokes inquiry - not for more data, but in order to make sense, to organise, to understand. Understanding leads to judgement about the veracity of what is understood. Having then made a judgement about what one knows, we seek to apply that knowledge in the decisions that are made. The type of question being asked helps one to distinguish the level of operation. On the first level one seeks a name or label in the answer. "The second level of questions seeks an explanation; the third level seeks a very short answer - yes or no. The fourth level seeks an answer to the question 'will I?'" (Little, 2010), resulting in an action. The model is readily applicable to any situation which calls for sound judgement, or where we find ourselves challenged to make sense of disparate data.

3.1 Observing the cognitional model

As a young midshipman on a cargo ship I was assigned to examine the contents of the hold to ensure nothing had come adrift during a particularly fierce storm. The ship had recently left a port rife with rumours of waterfront wars and disappearing bodies. Descending a vertical shaft into inky blackness, surrounded by the groans and shudders of a large ship in a storm, can be an unnerving experience.
Vehicles, industrial machinery, and various pallets of goods covered the floor of the hold. I quickly checked the bolts and shackles by torchlight to ensure everything remained fast. Curiosity got the better of me as I focused the torch inside the rear of a small truck. Imagine my surprise and horror when I saw the limbs and torso of a body spreadeagled on the floor. The darkness of the hold was transformed into a tomb, as my heart raced and I fled the scene.

The captain insisted I return with a colleague to confirm my macabre finding. Tentatively we approached the truck, each encouraging the other to go inside and look more closely. Marshalling our combined courage, we opened the doors and approached the body. A full-body diving wetsuit greeted us, to our great relief, and my chagrin. In this frightening event, the darkness, limbs and torso compounded rapidly to form a concept in my mind with which I readily agreed: "there is a dead body in the hold". Only by asking further questions - approaching the "body" and examining it more closely - and modifying my original understanding could a correct judgement be made: "this is not a dead body". The "proof" was overwhelming at first pass, and my emotional state allowed irrelevant data (the darkness, and the rumours from the last port) to inform my understanding.

We can recognise the cognitional model in this example. Sense experience provided raw material for the intellect, and as a result of active questioning one begins to understand, fulfilling the human drive to know. Insight provides the relationships between the data, enabling us to "join the dots", creating an "intelligible unity" (Ogilvie, 2002). Having had an insight, we then form a concept, a general expression of that insight, setting aside the data which is irrelevant. For example, the fact that the ship's hold was dark is irrelevant to the questions posed.
Having formed a concept, we test its veracity, since there may still be further relevant questions that need to be asked which would render our concept invalid. We endeavour to discover the accuracy of our understanding. When we have asked all the relevant questions, and formed a correct insight or understanding, we are then able to assent to truth, as there is only one answer to the question "is it so?", and that answer is "yes". Judgement also relies on insight, grasping the "sufficiency of evidence" (Ogilvie, 2002). It is not one's perception of reality, but an assertion of reality. Failure to grasp all the evidence means that one is only guessing, or, having grasped the evidence and refused to accept or deny it, is demonstrating foolishness, blindsightedness, or bias. "Ignorance, error, negligence, malice that blocks this dynamic structure is obscurantism in its most radical form" (Lonergan, 1972/1979). If there is insufficient data to form an accurate understanding, then the wise person acknowledges that their judgement is limited, and remains prepared to modify their conclusion as more data comes to hand.

The tendency immediately to accept our perception as being truly the case constitutes a failure to ask sufficient questions. Much of that arises because of the mental models we carry around. One of the biggest challenges for effective thinking is our existing mental models, which remain largely hidden and unarticulated. A mental model is "an ingrained way of thinking" (Senge, 1990/2006), or "an intelligible interlocking set of terms and relations that (we use when) describing reality or forming hypotheses" (Lonergan, 1972/1979). We are inclined to "assume that our models of reality are identical to reality itself" (Martin, 2007), making it difficult to understand and make sense of reality.
Familiarity with the cognitional operations detailed in this paper can help us identify and grasp the mental models that influence our thinking, and provide a framework for sense making. The presence of mental models creates a tendency to see what serves our purpose, or to interpret data in a way that confirms our prior understanding. Steven Johnson's (2006) account of the cholera epidemic in London in 1854, and the struggle between opposing mental models to understand the disease, is a vivid example of this tendency. The accepted scientific explanation for the spread of cholera was known as the miasma theory, which held that cholera was an airborne disease. The observation that the disease appeared to spread rapidly among those living in packed, squalid conditions seemed to confirm this view. As long as the miasma theory held, any cure would focus on improving air quality and circulation.

John Snow, the main protagonist in the story, had formed a hypothesis during an earlier outbreak that cholera was waterborne. During the 1854 epidemic he slowly and painstakingly assembled data - testing water from different pumps, discovering the habits of victims from interviews with surviving relatives, mapping the times and locations where people contracted the illness. This data supported his hypothesis, leading to his judgement that cholera was waterborne, and that this outbreak was emanating from one pump in particular. On this basis a decision was made to remove the pump handle and deny access to that pump. The number of cholera cases rapidly declined, but Snow realised this did not prove his theory, as the epidemic may have been near the end of its lifecycle. Johnson's work reads like a detective novel as Snow continued assembling the data required to confirm his hypothesis and the veracity of his judgement.

The authorities agreed that a case of someone in a remote location drinking water delivered from a suspect well would prove the role of water in spreading the disease.
When Susannah Eley died in Hampstead, after drinking water brought to her from the Broad Street well some distance away, it seemed Snow had his conclusive proof. Existing mental models, however, blinded the Cholera Commission, which used her death to prove its theory of an airborne disease, saying, "the atmosphere must be so poisoned that it has infected the water as well" (Johnson, 2006).

The tendency to become "conceptually mired in the prevailing model" (Johnson, 2006) is not limited to nineteenth-century England, nor just to what serves our purpose. We are also blinded by our field of expertise. Johnson's story demonstrates clearly that our view of what is relevant is coloured by our mental models. In the face of Snow's compelling evidence that cholera was waterborne, the General Board of Health and influential media commentators refused to be swayed, so firmly were they wedded to the miasma theory. The event offers "a brilliant case study in how dominant intellectual paradigms can make it more difficult for the truth to be established" (Johnson, 2006). It shows the dangers of too readily accepting what is immediately presented, and unquestioned, as being the true state of affairs.

Operating from our field of expertise explains why the finance director tends to search for and give added weight to financial data, having less understanding of (say) production or supply chain data, while the director of human resources will focus on data relating to people, performance, and culture. This also helps account for why specialisation can be an obstacle to being an effective CEO, and why functional or business silos can be an obstacle to organisational growth and performance. Both encourage lenses which unknowingly and unintentionally filter out relevant data.
In this situation an authentic leader, aware of their own mental models and comfortable with their limitations, can bring out the best in a team by inviting and welcoming diverse perspectives, working together to generate new insights. Such a process facilitates new understanding from wider data sets, promotes better judgement, and lays the foundation for a thinking organisation. In order to generate better results, corporations need to adopt and foster a culture of continuous organisational learning (Senge, 1990/2006). But in order to be "learning organisations" they first need to become effective thinking organisations. And in order to become a thinking organisation, they need to become an organisation of thinkers, since organisational thinking is a compound of individual thinking.

5.1 Organisational culture

The culture in which we find ourselves, whether a community, a nation, or a firm, has as profound an impact on thinking as our mental models do. If the environment welcomes open questions, delights in inquiry and fosters conversation, we are much more likely to observe innovation and creativity. An environment which stifles debate and shuts down inquiry will almost certainly make poor decisions and diminish stakeholder engagement. Leaders who persevere with data, encourage discussion and reward ideas will foster greater openness, lift organisational performance, and enhance the potential for breakthrough thinking.

5.2 Inquiry as the driving force of innovation

Questions turn raw data into something of value - knowledge, insight, and eventually wisdom. Inquiry, via questioning, is like a driveshaft powering our minds to constantly add value to data. We form an image from experiences, form a concept based on understanding, give assent to judgement, and then act on the decisions that are made. Questions provide power to our thinking. Better questions provide more efficient power.
A repeated series of ever better questions can foster creativity and innovation, minimise the time to breakthrough thinking, and maximise the opportunity to generate competitive advantage. Restricting the questioning process, whether in the laboratory or the management meeting, shuts down the power of the mind and limits insight and innovation. Clarity about the level of consciousness being employed - empirical, intellectual, rational, responsible - and the relevant operation at that level - sense perception, inquiry and understanding, reflection and judgement, application of knowledge - drives the process forward. Whether facing Newton's question of why objects fall to earth, or Ray Kroc's question of how to systemise and replicate a fast food restaurant, or an organisational question about creating blue ocean strategy, this simple model allows one to appreciate the relevant operation of our inquiry and the questions best suited to that level.

5.3 Speed of inquiry as competitive advantage

Recognising that we will face a challenge in December and finding a solution the following January is only helpful for subsequent occurrences of the same problem. Competitive advantage is a function of being able to turn data into innovation faster than the competition. Solving the dilemma six months sooner both ameliorates the problem and creates advantage. The power of inquiry is only realised when the timeframe to innovation is less than the time required for resolution, and is amplified when it significantly shortens the cycle. In order to survive - let alone excel or outperform - the speed with which we learn has to exceed the rate of change in our environment (Hames, 2007). Speed of learning refers to the "time taken to optimally engage in the process of transforming information into purposeful change" (Hames, 2007). Using the power of inquiry to push data along the data value chain accelerates learning, innovation, and time to market.
Lonergan's model of knowing is a simple framework for driving this inquiry.

5.4 Reverse engineering a decision

Questions can help us to mine data, review decisions, clarify understanding, and test judgements. We can "reverse engineer" (Martin, 2007) a decision by working backwards to review, analyse, and establish what data must exist to support our understanding and subsequent judgement. Lonergan's cognitional model provides a robust approach for reviewing a decision prior to acting, as shown in the following example.

In one of our regular sessions a CEO mentioned the challenge he faced in restructuring his senior leadership team (SLT), and wanted to test his thinking prior to taking action. We reverse engineered the proposed decision to identify what assumptions were being made and what data needed to exist, to identify any gaps in thinking, and perhaps to modify the proposal. Peter, the CEO, had decided to restructure the SLT by creating a new head of services role and promoting Bob and Ian to the team. As we talked through the proposed decision, Peter explained the judgements he was making:

* Bob and Ian ran the two biggest divisions of the firm, and had been reporting via a COO who was retiring. Promoting them was the right thing to do since each managed major profit centres for the firm and already contributed at SLT level. Promotion would acknowledge this reality, give them added authority to match their responsibilities, and foster better teamwork and resource allocation.
* Creating a new head of services would foster greater teamwork between divisions. The new head could liaise directly with Bob and Ian as peers on the SLT.
* Getting the structure set up for future growth was vital at this time.

These judgements were based on his understanding that if Bob and Ian were successful in their new roles it would minimise his own workload and enable him to focus on his key strategic challenges. It would also balance out some of the distorted work allocations.
Expectations from head office meant he had to improve efficiencies in the local firm, to prepare for growth. This understanding arose from a range of data or experiences:

* Peter was constantly distracted by the urgent rather than the important.
* Bob was overworked and Ian underutilised.
* There was often conflict, confusion, and at times outright competition for resources between divisions.
* The global firm was looking to his division to contribute significantly to the group profit within five years and had allocated considerable funds for development of new operations to achieve this goal.

Having reverse engineered Peter's proposed decision, it seemed reasonable in the circumstances. But because his focus had been on Bob and Ian, whose divisions would contribute the expected profit, he had failed to ask further relevant questions. One vital question concerned the aspirations and potential of his team and succession planning for his own role. He quickly explained that Bob and Ian were highly competent but that he had reservations about their potential to succeed him.

Mary had the single most important strategic role in the firm at that time, although she was not part of the SLT. She was highly competent, got things done quickly and efficiently, demonstrated considerable potential, and was the leading internal candidate for the CEO role. Since she would play a major role in the success of the group investment Peter wanted Mary to stay focused on the task at hand. He did not remember, until questioned, that 18 months previously she had committed two years to the role and was quite clear that she would leave without further opportunity.
It rapidly became clear to Peter that acting on his proposed decision would block any potential moves for Mary, in which case she would probably leave in six months, putting the entire project at risk, and that he would be without an internal successor. As we continued to test each component - the data, the understanding, the judgement, and the subsequent decision - a resolution was achieved when Peter realised he could promote Mary to the SLT as head of services, playing to her aspirations, developing her capability, and positioning her for eventual succession. At the same time he could provide additional support to Mary in her current role, so she could develop her own successor over the next six months. Although this solution may seem self-evident to the reader when presented with the story in this way, this approach provided Peter with an innovative solution to his problem, and was a breakthrough in his thinking.

The dominant organisational questions seem to revolve around action. Countless books have been written on execution, or getting things done, or doing what matters, arguing that the ability to execute is a source of competitive advantage (Bossidy and Charan, 2002). Execution links a firm's people, strategy, and operations (Bossidy and Charan, 2002). As presented in this paper, thinking precedes doing, and poor thinking contributes to poor outcomes. Hence the way we think at work is crucial to success. While many firms may focus on execution, few take the time to ask the better question - "How should we think?" - in order to link thinking to execution.

6.1 Learning to think

Discovering the "facts" about an event is just as important, and just as difficult, in business as it is in photographs - and perhaps more so. It is hard for management to obtain objective information or solid data that has not been coloured by bias, or filtered by fear. Vested interest is a powerful determinant of action.
Failure to speak out and give one's perspective - particularly when it deviates from the strongly held views of others - is endemic, to the detriment of productivity and profit (Kakabadse, 2009). Lonergan's cognitional model provides a way to critique our thinking, and the thinking of others. When presented with a report, analysis, or decision, we can use this approach to ask questions like:

* Is this actually raw data, with no real analysis?
* Does the writer confuse looking with knowing?
* Is there sufficient data for reaching a proper understanding?
* Is the writer merely presenting data and implying that it is truth?
* Has the writer presumed their understanding will be borne out by the data?
* Have they asked all the relevant questions and tested their understanding with the reflective question "Is it so?"
* Is their assent to the question expressed with confidence, or do they demonstrate lingering doubt, implying yet more questions exist?

The reader will do well to examine the language used. Does the writer distinguish the different operations of experience, understanding, judging and deciding, and recognise that all are required for knowing?

6.2 Learning from thinking

Critical incidents reveal much about our thinking patterns (Martin, 2007). As global citizens we witness history unfolding in the consequences of significant decisions - the bailout of banks during the financial crisis, military action in Iraq, Copenhagen climate action, etc. Corporate leaders are party to decisions about mergers and acquisitions, redundancies, offshoring and outsourcing, environmental and social impact, growth and profit targets, and people policies. With the passage of time we are able to see the outcomes of those decisions with greater clarity. But in order both to understand the decisions that were made and to make better quality decisions in the future, we need to unpack the original thinking process, not just review the outcomes.
In order to learn from our past decisions we must document not only the decision but also the underlying judgement, understanding and data, including specific expectations regarding outcomes:

* What is the key question we are asking?
* What is our aim in this inquiry?
* What is the data upon which we base our understanding?
* What data did we discard, or were we unable to obtain?
* Was any data excluded, whether through ignorance, prejudice, bias, or malice?
* Did we acknowledge shortcomings in the data and seek to address those limitations, either at the time, or in the future to modify our decisions?
* Did we take sufficient time - as the circumstances allowed - to examine and understand the data?
* Did we allow those with different views and perspectives to challenge our understanding?
* Did we listen to all points of view, not just those which buttressed our position?
* What is the understanding we have arrived at?
* What is the concept we have formed?
* Based on the judgement we have made (about the correctness of our concept), what decision/s did we take?
* What specific outcomes, and in what timeframes, do we expect from the decision/s?
* What outcomes actually occurred, and in what timeframe?
* How does this contribute further data and hence modify our understanding?

The human tendency, having arrived at an outcome, is to revise - whether knowingly or unknowingly - one's original thinking and justify the outcome. This happens because we have not written down, at the time of taking the decision, the outcome we expected. Only by carefully documenting our thought process can we learn from experience, and recognise our blind spots and biases, our haste or our caution. This is a crucial practice for individuals, teams, and enterprises.

As human beings we ask many questions and derive great satisfaction from discovering the answers. We are united by the questions we ask, but divided by the answers - and usually because we have not fully understood the other person or people.
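For teams that keep a decision journal, the documentation checklist above can be sketched as a simple record structure. This is a minimal illustrative sketch; the class and field names are my own, not the paper's:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One decision-journal entry, mirroring the documentation checklist."""
    question: str                  # the key question being asked
    aim: str                       # what the inquiry is for
    data: list[str]                # experiences and evidence considered
    data_excluded: list[str]       # data discarded or unobtainable, and why
    understanding: str             # the concept formed from the data
    judgement: str                 # verdict on the concept: is it so?
    decision: str                  # the action taken on that judgement
    # outcome -> expected timeframe, written down BEFORE acting
    expected_outcomes: dict[str, str] = field(default_factory=dict)
    # outcome -> what actually happened, filled in on review
    actual_outcomes: dict[str, str] = field(default_factory=dict)

    def review(self) -> list[str]:
        """Expected outcomes not yet matched by reality; surprises are new data."""
        return [o for o in self.expected_outcomes if o not in self.actual_outcomes]
```

Writing `expected_outcomes` at decision time is the point: it blocks the tendency, noted above, to quietly revise one's original thinking once the outcome is known.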
The climate change debate is a classic example of this problem. We are united by the questions, and the desire to find answers. This paper suggests we are divided by the answers because we have failed to identify a common cognitional approach. Would a more robust approach to the data about weapons of mass destruction, interpreting and understanding that data, and judging the accuracy of understanding after ensuring all relevant questions were asked, have avoided the ensuing conflict, destabilisation, and human suffering?

Complexity shows no sign of abating. Ambiguity and paradox are on the increase. Poor decisions are amplified quickly through global systems. Society and our planet have little tolerance for getting it wrong. If ever there was a time to improve our thinking then this is it. The influence of the cognitional myth - that to know anything one only needs to look at it - should not be underestimated. Few people or organisations take the time in this fast-paced world to ask sufficient questions of the data, to come to an insight about that data, and then to question that insight for verification. Although time always seems in short supply, taking the time to ask all the relevant questions, and documenting our decision-making processes, can only prove beneficial in the long run. And actively reviewing those processes is a key to learning about thinking, improving our thinking, and becoming a thinking organisation.

In order to be effective learning organisations, we need first to become thinking organisations. We become thinking organisations by becoming an organisation of thinkers. We become thinkers by understanding the operations of the human mind, and applying a rigorous thinking process to all that we do. Better thinking can only lead to better outcomes as we wade through the challenges confronting our organisations and our world.

Table I Lonergan's structure of knowing
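As described in the text, each of Lonergan's four levels of consciousness pairs a characteristic operation with a characteristic type of question. A minimal sketch of that structure - the labels are Lonergan's, as reported in the article; the encoding itself is illustrative, not the paper's own notation:

```python
# Lonergan's structure of knowing: each level of consciousness, its
# characteristic operation, and the type of question asked at that level.
LEVELS = [
    {"level": "empirical",    "operation": "sense perception (experiencing)",
     "question": "seeks a name or label"},
    {"level": "intellectual", "operation": "inquiry and understanding",
     "question": "seeks an explanation"},
    {"level": "rational",     "operation": "reflection and judgement",
     "question": "seeks a yes or no: 'is it so?'"},
    {"level": "responsible",  "operation": "application of knowledge (deciding)",
     "question": "seeks an action: 'will I?'"},
]

def question_at(level: str) -> str:
    """Return the characteristic question type for a level of consciousness."""
    return next(row["question"] for row in LEVELS if row["level"] == level)
```

Tagging which level a discussion is currently operating on - naming, explaining, verifying, or deciding - is one practical way to apply the model in a meeting.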
The paper examines some of the thinking challenges facing contemporary business leaders and provides a sound philosophical basis for a cognitional theory.
[SECTION: Findings] We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers - people able to put together the right information at the right time, think critically about it, and make important choices wisely (Edward O. Wilson).

I have the privilege of working with key leaders in contemporary business and society, helping them resolve difficult problems, clarify meaning and purpose, think through complex strategic challenges, and navigate tricky emotional situations. From this viewpoint I observe the workings of the CEO mind in the CEO environment, and note leaders who are constantly striving to understand, to make sense, to figure it out, to have a breakthrough, to be persuasive, creative, and innovative. Facing the pressure to run a mental marathon every single day - at a sprinting pace - they usually lack effective thinking time and rarely have an effective thinking model that can be applied in any situation. Many of the business books which line the shelves range across strategy, management, and leadership, although few address the question of what thinking is, or offer a simple model that can be readily practised at the hectic pace at which leaders work. This paper seeks to redress this issue.

1.1 Thinking in organisations - the challenge

The ability to marshal and understand data and form correct judgements is a critical business skill, since these directly influence the decisions taken in the organisation. Many firms focus on action, execution, and getting things done, without paying sufficient attention to the underlying thinking processes. Thinking precedes doing, and excellent thinking contributes to excellent outcomes.
On the other hand, poor thinking contributes to poor outcomes. Some of the myriad examples that I have observed, which may be attributable to poor thinking, include:

* inappropriate hires and promotions;
* erroneous, and sometimes foolish, investments and resource allocation;
* decisions driven by ego, with little contribution from key members of a team;
* acquisitions that fail to deliver the mooted benefits;
* creation of unintended consequences from ill-informed decision making;
* staff, customers, suppliers, and the public treated badly, damaging engagement, productivity, share price, and profit;
* destruction of brand equity, corporate profiles, and personal reputations;
* money repeatedly wasted on doomed projects, which would have been saved by better planning, with a subsequent faster return on capital;
* suboptimal productivity;
* interpersonal conflict and failed relationships;
* failure to act in a timely manner; and
* overreliance on data, leading to decision paralysis and diminished personal initiative.

The quality of thinking in organisations sometimes lacks the reflection and rigour required for management in the contemporary environment. Initial impressions become corporate reality without further reflection. People develop fixed views and have little disposition to hearing counter-arguments or alternative perspectives. Firms ask for ever more data and analysis, but find no one is able to take responsibility for making a decision. Backward-looking data, such as benchmarking and case studies, is used to analyse a path to the future, often with disastrous results. Decisions are made with no real understanding of the underlying thinking.
The crush of ambiguity and complexity leads to over-simplification and avoidance of the hard mental work that is required. The use of a relatively narrow set of terms to talk about knowing - most often confused with sense perception or understanding - and a lack of coherent cognitive theory make it difficult for leaders to engage with those who assume an air of expertise, or argue a case strongly. The framework proposed in this paper addresses that need, providing a set of example questions that can be asked in any meeting, of any advice or report, or of one's own thinking, to arrive at greater insight and better judgement which will facilitate better decisions and actions. It may also put into perspective, and provide a rationale for, some of the logic and processes currently employed by managers.

Two leading contemporary writers - Howard Gardner and Roger Martin - are actively addressing the question of how leaders think in today's enterprise. With a background in psychology, Gardner (2006) suggests we need "five minds" to succeed in the twenty-first century. These include:

(1) a disciplined mind, which "has mastered at least one way of thinking - a distinctive mode of cognition [...] (in order to) succeed at any demanding workplace" (Gardner, 2006);
(2) a synthesising mind which "takes information from disparate sources, understands and evaluates that information objectively and puts it together in ways that make sense to the synthesizer and other persons" (Gardner, 2006);
(3) a creating mind that "puts forth new ideas, poses unfamiliar questions, conjures up fresh ways of thinking, arrives at unexpected answers" (Gardner, 2006);
(4) a respectful mind which "welcomes differences between human individuals and human groups [...] and seeks to work effectively with them" (Gardner, 2006); and
(5) an ethical mind to "ponder the nature of one's work and the needs and desires of the society in which one lives" (Gardner, 2006).

This paper proposes an underlying model which is beneficial for each of these minds, and which will aid the effort to acquire them. Martin, on the other hand, starts with leaders facing complex challenges and wanting to understand how they think. He bases his theory of "integrative thinking" (Martin, 2007) on examples of, and conversations with, successful leaders who demonstrate this type of thinking. He is particularly interested in the mental processes employed by successful leaders, and says integrative thinkers have an ability to comfortably hold opposing ideas in their mind and arrive at new insights or approaches that either allow for both, or create an entirely new solution (Martin, 2007). Barack Obama shows evidence of this integrative thinking capability in his 2010 State of the Union address when he says "Let's reject the false choice between protecting our people and upholding our values" (Obama, 2010). President Obama "consistently lays out the opposing models, not to set up an either/or choice, but to begin the thinking process toward an integrative solution" (Martin, 2010).

1.2 Thinking in business - a solution

This paper addresses the question of what goes on in the human mind when we are thinking, identifies ways we can improve our individual and organisational thinking, and hence foster a thinking organisation. It is based primarily on the work of Bernard Lonergan, a Canadian philosopher who has been called one of the most influential thinkers of the twentieth century (Time Magazine, 1970). Lonergan starts with the thinking person and expounds a dynamic structure of knowing that is common to humanity, irrespective of culture, status, success, or other markers (Lonergan, 1957/1992).
Since learning of his work more than 20 years ago I have found his simple model readily applicable to any conversation or decision-making process, and usually find it to be of great help to clients. Lonergan's cognitional theory provides a method for thinking and knowing that can be used in any endeavour and in any discipline. His approach can incorporate Gardner's five minds into one thinking model, provide a philosophical foundation to Martin's work, and lay the foundation for becoming a thinking organisation. Whether discussing political, commercial or personal issues, economics, strategy or people, systems thinking or chaos theory, this approach can serve as an effective model for structuring one's thinking, and contribute to generating further insights.

The model explained here can become the grounds for a common approach to thinking at both the individual and organisational level. It offers a consistent way of managing the thinking challenges that confront us: creativity and innovation, clarity of thought for effective decision making, resolution of seemingly intractable problems, enabling of honesty and discovery of truth in leadership conversations, conflict resolution, etc. Although the list is endless, the common starting point is inside the human mind, and understanding the way we think. Recognising this, and having a familiar language to talk about thinking, increases the potential for listening and learning from one another, arriving at new solutions, and avoiding dead ends, blind alleys, and unnecessary conflict.

Before talking about the model, we must first discuss the process of thinking and identify a common trap people fall into - confusing what one's senses perceive with knowing what actually is. One attribute shared by thinking people is the drive to know - a deep-seated desire to understand, to make sense of our world, and to find answers to the questions we face (Lonergan, 1957/1992).
Archimedes' cry of "Eureka" and subsequent headlong dash through the street when he solved the problem of measuring the gold content in the king's crown is a classic example of the power of this desire. Although we may be disinclined to run naked through the streets after solving a bothersome question, the sense of elation and relief we experience when we "get it", and solve the problem at hand, may lead us to jump up and down, pump our fist in the air, or rush out to tell our colleagues.

By taking a moment to reflect on those times when we "got the point", or found a solution to a problem, or worked out which way to proceed, we are able to observe that this understanding is accompanied by a shift in consciousness. This shift is from a state of puzzlement, of frustration, of not "getting it", to a state of awe, of elation, and of understanding. This is an "ah-ha" moment. Insight - coming to understand - brings about this shift in our mental and emotional states.

Most of us remember being a novice in a new field listening to an expert explain something. The expert describes, points, and expounds about connections between this and that, while we struggle to keep up with what is being said. Two people - the novice and the expert - have quite different experiences, because one understands and the other does not. Though the act of understanding is central in coming to know, it can easily be neglected. It is much easier to simply "see what is in front of us", to go along with the current view, to rely only upon what we have been told is happening. A desire to leave things uncomplicated can fall into the confusion of assuming that what is obvious in knowing (i.e. looking) is what knowing obviously is. This "cognitional myth" (Ogilvie, 2002) confuses looking with knowing, and is a danger for both thinkers and thinking organisations.

How the myth can operate and mislead is illustrated in the responses to a series of photos that emerged from Abu Ghraib prison in 2004.
Most people would readily recall the graphic nature of the photographs, showing what appeared to be guards torturing, abusing, and degrading prisoners. "The photographs tell it all," stated Seymour Hersh (2004). The filmmaker Errol Morris started with the view that it is not obvious what the photos depict. Quoting Specialist Megan Ambuhl, one of the guards featured in the photos, Morris observed that photographs do not allow people to see "outside the frame" (Burrell, 2008). With a background as a private investigator, Morris "scrutinises data (and) unravels preconceptions" (Burrell, 2008).

The timelines reconstructed from the digital metadata by the prosecution in the case brought against the soldiers only provide further empirical data. They do not reveal what people were thinking, why they were acting in certain ways, why the photos were taken, or what the proximate events were. Empirical data, although appearing to provide compelling visual proof, leaves many questions unanswered. Morris went to work to answer those questions, examining the contents of the photos in great detail, and speaking to soldiers, prisoners, and relevant policy makers. The outcome of this process is captured in Morris' (2008) film, Standard Operating Procedure.

Data and fact are two entirely different aspects of knowing. The data contained in the photographs - soldiers, prisoners, prison cells, and so forth - are what we observe. The question of fact, of what is occurring, arises when we try to organise the data into an intelligible whole, from which we can form a hypothesis, which we then test by asking further relevant questions. When the hypothesis has withstood persistent questioning, and no further questions arise, we have arrived at fact and can reasonably agree that "this is so". The cognitional myth confuses seeing with knowing. Other people can see the same photograph and arrive at different conclusions.
But in the absence of more data we can only hypothesise about the contents, not draw conclusions. The visual alone is not proof. The solution to the cognitional myth is a cognitional model, a "procedure of the human mind [...] that is, a basic pattern of operations employed in every cognitional enterprise" (Lonergan, 1972/1979). These operations "are contained in questions prior to answers [...] move us from ignorance to knowledge [...] (and) go beyond what we know to seek what we do not know" (Lonergan, 1972/1979).

In order to understand the structure of knowing, we need to ask "what is it that I am doing when I am knowing?" The operations of the mind include "seeing, hearing, touching, tasting, smelling, inquiring, imagining, understanding, conceiving, formulating, reflecting, marshalling the evidence, judging, deliberating, evaluating, deciding, speaking, writing" (Lonergan, 1972/1979). These operations occur on four levels of consciousness (Lonergan, 1972/1979), three of which constitute knowing, and the fourth pertaining to the application of knowledge.

The first dimension of consciousness is that of experience and the empirical. The second dimension is the level of intellect, and the effort to understand experience. "A third dimension of rationality emerges when the content of our acts of understanding is regarded as, of itself, a mere bright idea and we endeavour to settle what really is so" (Lonergan, 1972/1979). A fourth dimension of consciousness "comes to the fore when judgement on the facts is followed by deliberation on what we are to do about them" (Lonergan, 1972/1979). Table I captures the essence of the structure of knowing as depicted by Lonergan.
It summarises the four levels of consciousness, the operations that occur on each level, and the description Lonergan used for the principal occurrence on that level. This is the cognitional model being presented in this paper: knowing is not mere looking, but a compound of experiencing, understanding, and judging. On this basis, to say one "knows" is to say that all the necessary data has been considered, relevant questions asked, and sound judgement passed on one's understanding. In the absence of relevant data, or if more questions remain, one does not yet "know" but only supposes or hypothesises.

None of these operations alone constitutes knowing, but all are needed in order to know. The data of sense, or experience, provokes inquiry - not for more data, but in order to make sense, to organise, to understand. Understanding leads to judgement about the veracity of what is understood. Having then made a judgement about what one knows, we seek to apply that knowledge in the decisions that are made. The type of question being asked helps one to distinguish the level of operation. On the first level one seeks a name or label in the answer. "The second level of questions seeks an explanation; the third level seeks a very short answer - yes or no. The fourth level seeks an answer to the question 'will I?'" (Little, 2010), resulting in an action. This model is readily applicable to any situation which calls for sound judgement, or where we find ourselves challenged to make sense of disparate data.

3.1 Observing the cognitional model

As a young midshipman on a cargo ship I was assigned to examine the contents of the hold to ensure nothing had come adrift during a particularly fierce storm. The ship had recently left a port rife with rumours about waterfront wars and disappearing bodies. Descending a vertical shaft into inky blackness, surrounded by the groans and shudders of a large ship in a storm, can be an unnerving experience.
Vehicles, industrial machinery, and various pallets of goods covered the floor of the hold. I quickly checked the bolts and shackles by torchlight to ensure everything remained fast. Curiosity got the better of me as I focused the torch inside the rear of a small truck. Imagine my surprise and horror when I saw the limbs and torso of a body spreadeagled on the floor. The darkness of the hold was transformed into a tomb, as my heart raced and I fled the scene.

The captain insisted I return with a colleague to confirm my macabre finding. Tentatively we approached the truck, each encouraging the other to go inside and look more closely. Marshalling our combined courage we opened the doors and approached the body. A full body diving wetsuit greeted us, to our great relief, and my chagrin.

In this frightening event, the darkness, limbs and torso compounded rapidly to form a concept in my mind with which I readily agreed: "there is a dead body in the hold". Only by asking further questions - approaching the "body" and examining it more closely - and modifying my original understanding could a correct judgement be made: "this is not a dead body". The "proof" was overwhelming at first pass, and my emotional state allowed irrelevant data (darkness and rumours from the last port) to inform my understanding.

We can recognise the cognitional model in this example. Sense experience provided raw material for the intellect, and as a result of active questioning one begins to understand, fulfilling the human drive to know. Insight provides the relationships between the data, enabling us to "join the dots", creating an "intelligible unity" (Ogilvie, 2002). Having had an insight, we then form a concept, a general expression of that insight, setting aside the data which is irrelevant. For example, the fact that the ship's hold is dark is irrelevant to the questions posed.
Having formed a concept, we test its veracity, since there may be still further relevant questions that need to be asked which would render our concept invalid. We endeavour to discover the accuracy of our understanding. When we have asked all the relevant questions, and formed a correct insight or understanding, we are then able to assent to truth, as there is only one answer to the question "is it so?" and that answer is "yes".

Judgement also relies on insight, grasping the "sufficiency of evidence" (Ogilvie, 2002). It is not one's perception of reality, but an assertion of reality. Failure to grasp all the evidence means that one is only guessing; having grasped the evidence and refused to accept or deny it, one is demonstrating foolishness, blindness, or bias. "Ignorance, error, negligence, malice that blocks this dynamic structure is obscurantism in its most radical form" (Lonergan, 1972/1979). If there is insufficient data to form an accurate understanding, then the wise person acknowledges that their judgement is limited, and remains prepared to modify their conclusion as more data comes to hand. The tendency to immediately accept our perception as being truly the case constitutes a failure to ask sufficient questions. And much of that arises because of the mental models we carry around.

One of the biggest challenges for effective thinking is our existing mental models, which remain largely hidden and unarticulated. A mental model is "an ingrained way of thinking" (Senge, 1990/2006), or "an intelligible interlocking set of terms and relations that (we use when) describing reality or forming hypotheses" (Lonergan, 1972/1979). We are inclined to "assume that our models of reality are identical to reality itself" (Martin, 2007), making it difficult to understand and make sense of reality.
Familiarity with the cognitional operations detailed in this paper can help identify and grasp mental models that influence our thinking, and provide a framework for sense-making. The presence of mental models creates a tendency to see what serves our purpose, or to interpret data in a way that confirms our prior understanding. Steven Johnson's (2006) account of the cholera epidemic in London in 1854, and the struggle between opposing mental models to understand the disease, is a vivid example of this tendency. The accepted scientific explanation for the spread of cholera was known as the miasma theory, which held that cholera was an airborne disease. The observation that the disease appeared to spread rapidly among those living in packed, squalid conditions seemed to confirm this view. As long as the miasma theory held, any cure would focus on improving air quality and circulation.

John Snow, the main protagonist in the story, had formed a hypothesis during an earlier outbreak that cholera was waterborne. During the 1854 epidemic he slowly and painstakingly assembled data - testing water from different pumps, discovering the habits of victims from interviews with surviving relatives, mapping the times and locations where people contracted the illness. This data supported his hypothesis, leading to his judgement that cholera was waterborne, and that this outbreak was emanating from one pump in particular. On this basis a decision was made to remove the pump handle and deny access to that pump. The number of cholera cases rapidly declined, but Snow realised this did not prove his theory, as the epidemic may have been near the end of its lifecycle. Johnson's work reads like a detective novel as Snow continued assembling the data required to confirm his hypothesis and the veracity of his judgement.

The authorities agreed that a case of someone in a remote location drinking water delivered from a suspect well would prove the role of water in spreading the disease.
When Susannah Eley died in Hampstead, after drinking water brought to her from the Broad Street well some distance away, it seemed Snow had his conclusive proof. Existing mental models, however, blinded the Cholera Commission, which used her death to prove their theory of an airborne disease, saying, "the atmosphere must be so poisoned that it has infected the water as well" (Johnson, 2006).

The tendency to become "conceptually mired in the prevailing model" (Johnson, 2006) is not limited to nineteenth-century England, nor just to what serves our purpose. We are also blinded by our field of expertise. Johnson's story demonstrates so clearly that our view of what is relevant is coloured by our mental models. In the face of Snow's compelling evidence that cholera was waterborne, the General Board of Health, and influential media commentators, refused to be swayed, so firmly were they wedded to the miasma theory. The event offers "a brilliant case study in how dominant intellectual paradigms can make it more difficult for the truth to be established" (Johnson, 2006). It shows the dangers of too readily accepting what is immediately presented and unquestioned as being the true state of affairs.

Operating from our field of expertise explains why the finance director tends to search for and give added weight to financial data, having less understanding of (say) production or supply chain data, while the director of human resources will focus on data relating to people, performance, and culture. This also helps account for why specialisation can be an obstacle to being an effective CEO, and why functional or business silos can be an obstacle to organisational growth and performance. Both encourage lenses which unknowingly and unintentionally filter out relevant data.
In this situation an authentic leader who is aware of their own mental models and comfortable with their limitations can bring out the best from a team by inviting and welcoming diverse perspectives, working together to generate new insights. Such a process facilitates new understanding from wider data sets, promotes better judgement, and lays the foundation for a thinking organisation. In order to generate better results corporations need to adopt and foster a culture of continuous organisational learning (Senge, 1990/2006). But in order to be "learning organisations" they first need to become effective thinking organisations. And in order to become a thinking organisation, they need to become an organisation of thinkers, since organisational thinking is a compound of individual thinking.

5.1 Organisational culture

The culture in which we find ourselves, whether a community, a nation, or a firm, has an equally profound impact on thinking as our mental models. If the environment welcomes open questions, delights in inquiry and fosters conversation, we are much more likely to observe innovation and creativity. An environment which stifles debate and shuts down inquiry will almost certainly make poor decisions and diminish stakeholder engagement. Leaders who persevere with data, encourage discussion and reward ideas will foster greater openness, lift organisational performance, and enhance the potential for breakthrough thinking.

5.2 Inquiry as the driving force of innovation

Questions turn raw data into something of value - knowledge, insight, and eventually wisdom. Inquiry, via questioning, is like a driveshaft powering our mind to constantly add value to data. We form an image from experiences, form a concept based on understanding, give assent to judgement, and then act on decisions that are made. Questions provide power to our thinking. Better questions provide more efficient power.
A repeated series of ever better questions can foster creativity and innovation, minimise the time for breakthrough thinking and maximise the opportunity to generate competitive advantage. Restricting the questioning process, whether in the laboratory or the management meeting, shuts down the power of the mind and limits insight and innovation. Clarity about the level of consciousness being employed - empirical, intellectual, rational, responsible - and the relevant operation at that level - sense perception, inquiry and understanding, reflection and judgement, application of knowledge - drives the process forward. Whether facing Newton's question of why objects fall to earth, or Ray Kroc's question of how to systemise and replicate a fast food restaurant, or an organisational question about creating blue ocean strategy, this simple model allows one to appreciate the relevant operation of our inquiry and the questions best suited to that level.

5.3 Speed of inquiry as competitive advantage

Recognising we will face a challenge in December and finding a solution the following January is only helpful for subsequent occurrences of the same problem. Competitive advantage is a function of being able to turn data into innovation faster than the competition. Solving the dilemma six months sooner both ameliorates the problem and creates advantage. The power of inquiry is only realised when the timeframe to innovation is less than the time required for resolution, and is amplified when it significantly shortens the cycle. In order to survive - let alone excel or outperform - the speed with which we learn has to exceed the rate of change in our environment (Hames, 2007). Speed of learning refers to the "time taken to optimally engage in the process of transforming information into purposeful change" (Hames, 2007). Using the power of inquiry to push data along the data value chain accelerates learning, innovation, and time to market.
Lonergan's model of knowing is a simple framework for driving this inquiry.

5.4 Reverse engineering a decision

Questions can help us to mine data, review decisions, clarify understanding, and test judgements. We can "reverse engineer" (Martin, 2007) a decision by working backwards to review, analyse, and establish what data must exist to support our understanding and subsequent judgement. Lonergan's cognitional model provides a robust approach for reviewing a decision prior to acting, as shown in the following example.

In one of our regular sessions a CEO mentioned the challenge he faced restructuring his senior leadership team (SLT), and wanted to test his thinking prior to taking action. We reverse engineered the proposed decision to identify what assumptions were being made and what data needed to exist, identify any gaps in thinking, and perhaps modify the proposal. Peter, the CEO, had decided to restructure the SLT by creating a new head of services role and promoting Bob and Ian to the team. As we talked through the proposed decision Peter explained the judgements he was making:

* Bob and Ian ran the two biggest divisions of the firm, and had been reporting via a COO who was retiring. Promoting them was the right thing to do since each managed major profit centres for the firm and already contributed at SLT level. Promotion would acknowledge this reality, give added authority to match their responsibilities, and foster better teamwork and resource allocation.
* Creating a new head of services would foster greater teamwork between divisions. They could liaise directly with Bob and Ian as peers on the SLT.
* Getting the structure set up for future growth was vital at this time.

These judgements were based on his understanding that if Bob and Ian were successful in their new roles it would minimise his own workload and enable him to focus on his key strategic challenges. It would also balance out some of the distorted work allocations.
Expectations from head office meant he had to improve efficiencies in the local firm, to prepare for growth. This understanding arose from a range of data or experiences:

* Peter was constantly distracted by the urgent rather than the important.
* Bob was overworked and Ian underutilised.
* There was often conflict, confusion, and at times outright competition for resources between divisions.
* The global firm was looking to his division to contribute significantly to the group profit within five years and had allocated considerable funds for development of new operations to achieve this goal.

Having reverse engineered Peter's proposed decision, it seemed reasonable in the circumstances. But because his focus had been on Bob and Ian, whose divisions would contribute the expected profit, he had failed to ask further relevant questions. One vital question concerned the aspirations and potential of his team and succession planning for his own role. He quickly explained that Bob and Ian were highly competent but that he had reservations about their potential to succeed him.

Mary had the single most important strategic role in the firm at that time, although she was not part of the SLT. She was highly competent, got things done quickly and efficiently, demonstrated considerable potential, and was the leading internal candidate for the CEO role. Since she would play a major role in the success of the group investment, Peter wanted Mary to stay focused on the task at hand. He did not remember, until questioned, that 18 months previously she had committed two years to the role and was quite clear that she would leave without further opportunity.
It rapidly became clear to Peter that acting on his proposed decision would block any potential moves for Mary, in which case she would probably leave in six months, putting the entire project at risk, and that he would be without an internal successor. As we continued to test each component - the data, the understanding, the judgement, and the subsequent decision - a resolution was achieved when Peter realised he could promote Mary to the SLT as head of services, playing to her aspirations, developing her capability, and positioning her for eventual succession. At the same time he could provide additional support to Mary in her current role, so she could develop her own successor over the next six months. Although this solution may seem self-evident to the reader when presented with the story in this way, this approach provided Peter with an innovative solution to his problem, and was a breakthrough in his thinking.

The dominant organisational questions seem to revolve around action. Countless books have been written on execution, or getting things done, or doing what matters, arguing that the ability to execute is a source of competitive advantage (Bossidy and Charan, 2002). Execution links a firm's people, strategy, and operations (Bossidy and Charan, 2002). As presented in this paper, thinking precedes doing, and poor thinking contributes to poor outcomes. Hence the way we think at work is crucial to success. While many firms may focus on execution, few take the time to ask the better question - "How should we think?" - in order to link thinking to execution.

6.1 Learning to think

Discovering the "facts" about an event is just as important, and just as difficult, in business as it is in photographs - and perhaps more so. It is hard for management to obtain objective information or solid data that has not been coloured by bias, or filtered by fear. Vested interest is a powerful determinant of action.
Failure to speak out and give one's perspective - particularly when it deviates from the strongly held views of others - is endemic, to the detriment of productivity and profit (Kakabadse, 2009). Lonergan's cognitional model provides a way to critique our thinking, and the thinking of others. When presented with a report, analysis, or decision, we can use this approach to ask questions like:

* Is this actually raw data, with no real analysis?
* Does the writer confuse looking with knowing?
* Is there sufficient data for reaching a proper understanding?
* Is the writer merely presenting data and implying that it is truth?
* Has the writer presumed their understanding will be borne out by the data?
* Have they asked all the relevant questions and tested their understanding with the reflective question "Is it so?"
* Is their assent to the question expressed with confidence, or do they demonstrate lingering doubt, implying yet more questions exist?

The reader will do well to examine the language used. Does the writer distinguish the different operations of experience, understanding, judging and deciding, and recognise that all are required for knowing?

6.2 Learning from thinking

Critical incidents reveal much about our thinking patterns (Martin, 2007). As global citizens we witness history unfolding in the consequences of significant decisions - the bailout of banks during the financial crisis, military action in Iraq, Copenhagen climate action, and so on. Corporate leaders are party to decisions about mergers and acquisitions, redundancies, offshoring and outsourcing, environmental and social impact, growth and profit targets, and people policies. With the passage of time we are able to see the outcomes of those decisions with greater clarity. But in order both to understand the decisions that were made, and to make better quality decisions in the future, we need to unpack the original thinking process, not just review the outcomes.
In order to learn from our past decisions we must document not only the decision but also the underlying judgement, understanding and data, including specific expectations regarding outcomes:

* What is the key question we are asking?
* What is our aim in this inquiry?
* What is the data upon which we base our understanding?
* What data did we discard, or were unable to obtain?
* Was any data excluded, whether through ignorance, prejudice, bias, or malice?
* Did we acknowledge shortcomings in the data and seek to address those limitations, either at the time, or in the future to modify our decisions?
* Did we take sufficient time - as the circumstances allowed - to examine and understand the data?
* Did we allow those with different views and perspectives to challenge our understanding?
* Did we listen to all points of view, not just those which buttressed our position?
* What is the understanding we have arrived at?
* What is the concept we have formed?
* Based on the judgement we have made (about the correctness of our concept), what decision/s did we take?
* What specific outcomes, and in what timeframes, do we expect from the decision/s?
* What outcomes actually occurred, and in what timeframe?
* How does this contribute further data and hence modify our understanding?

The human tendency, having arrived at an outcome, is to revise - whether knowingly or unknowingly - one's original thinking and justify the outcome. This happens because we have not written down, at the time of taking the decision, the outcome we expected. Only by carefully documenting our thought process can we learn from experience, and recognise our blind spots and biases, our haste or our caution. This is a crucial practice for individuals, teams, and enterprises.

As human beings we ask many questions and derive great satisfaction from discovering the answers. We are united by the questions we ask, but divided by the answers - and usually because we have not fully understood the other person or people.
The climate change debate is a classic example of this problem. We are united by the questions, and the desire to find answers. This paper suggests we are divided by the answers because we have failed to identify a common cognitional approach. Would a more robust approach to the data about weapons of mass destruction, interpreting and understanding that data, and judging the accuracy of understanding after ensuring all relevant questions were asked, have avoided the ensuing conflict, destabilisation, and human suffering?

Complexity shows no sign of abating. Ambiguity and paradox are on the increase. Poor decisions are amplified quickly through global systems. Society and our planet have little tolerance for getting it wrong. If ever there was a time to improve our thinking, this is it. The influence of the cognitional myth - that to know anything one only needs to look at it - can hardly be overestimated. Few people or organisations take the time in this fast paced world to ask sufficient questions of the data, to come to an insight about that data, and then to question that insight for verification.

Although time always seems in short supply, taking the time to ask all the relevant questions, and documenting our decision-making processes, can only prove beneficial in the long run. And actively reviewing those processes is a key to learning about thinking, improving our thinking, and becoming a thinking organisation. In order to be effective learning organisations, we need first to become thinking organisations. We become thinking organisations by becoming an organisation of thinkers. We become thinkers by understanding the operations of the human mind, and applying a rigorous thinking process to all that we do. Better thinking can only lead to better outcomes as we wade through the challenges confronting our organisations and our world.

Table I Lonergan's structure of knowing
The paper demonstrates that effective execution results from effective thinking; that a learning organisation is the result of becoming a thinking organisation, which is a collection of thinking people; and that people and organisations benefit from having a common cognitional method which can help overcome embedded mental models.
[SECTION: Value] We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers - people able to put together the right information at the right time, think critically about it, and make important choices wisely (Edward O. Wilson).

I have the privilege of working with key leaders in contemporary business and society, helping them resolve difficult problems, clarify meaning and purpose, think through complex strategic challenges, and navigate tricky emotional situations. From this viewpoint I observe the workings of the CEO mind in the CEO environment, and note leaders who are constantly striving to understand, to make sense, to figure it out, to have a breakthrough, to be persuasive, creative, and innovative. Facing the pressure to run a mental marathon every single day - at a sprinting pace - they usually lack effective thinking time and rarely have an effective thinking model that can be applied in any situation. Many of the business books which line the shelves range across strategy, management, and leadership, although few address the question of what thinking is, or offer a simple model that can be readily practised at the hectic pace at which leaders work. This paper seeks to redress this issue.

1.1 Thinking in organisations - the challenge

The ability to marshal and understand data and form correct judgements is a critical business skill, since these directly influence the decisions taken in the organisation. Many firms focus on action, execution, and getting things done, without paying sufficient attention to the underlying thinking processes. Thinking precedes doing, and excellent thinking contributes to excellent outcomes.
On the other hand, poor thinking contributes to poor outcomes. Some of the myriad examples that I have observed, which may be attributable to poor thinking, include:

* inappropriate hires and promotions;
* erroneous, and sometimes foolish, investments and resource allocation;
* decisions driven by ego, with little contribution from key members of a team;
* acquisitions that fail to deliver the mooted benefits;
* creation of unintended consequences from ill-informed decision making;
* staff, customers, suppliers, and the public treated badly, damaging engagement, productivity, share price, and profit;
* destruction of brand equity, corporate profiles, and personal reputations;
* money repeatedly wasted on doomed projects, which would have been saved by better planning, with a subsequent faster return on capital;
* suboptimal productivity;
* interpersonal conflict and failed relationships;
* failure to act in a timely manner; and
* overreliance on data, leading to decision paralysis and diminished personal initiative.

The quality of thinking in organisations sometimes lacks the reflection and rigour required for management in the contemporary environment. Initial impressions become corporate reality without further reflection. People develop fixed views and have little disposition to hearing counter-arguments or alternative perspectives. Firms ask for ever more data and analysis, but find no one is able to take responsibility for making a decision. Rear-looking data, such as benchmarking and case studies, is used to analyse a path to the future, often with disastrous results. Decisions are made with no real understanding of the underlying thinking.
The crush of ambiguity and complexity leads to over-simplification and avoidance of the hard mental work that is required. The use of a relatively narrow set of terms to talk about knowing - most often confused with sense perception or understanding - and a lack of coherent cognitive theory make it difficult for leaders to engage with those who assume an air of expertise, or argue a case strongly. The framework proposed in this paper addresses that need, providing a set of example questions that can be asked in any meeting, of any advice or report, or of one's own thinking, to arrive at greater insight and better judgement, which will facilitate better decisions and actions. It may also put into perspective, and provide a rationale for, some of the logic and processes currently employed by managers.

Two leading contemporary writers - Howard Gardner and Roger Martin - are actively addressing the question of how leaders think in today's enterprise. With a background in psychology, Gardner (2006) suggests we need "five minds" to succeed in the twenty-first century. These include:

(1) a disciplined mind, which "has mastered at least one way of thinking - a distinctive mode of cognition [...] (in order to) succeed at any demanding workplace" (Gardner, 2006);
(2) a synthesising mind which "takes information from disparate sources, understands and evaluates that information objectively and puts it together in ways that make sense to the synthesizer and other persons" (Gardner, 2006);
(3) a creating mind that "puts forth new ideas, poses unfamiliar questions, conjures up fresh ways of thinking, arrives at unexpected answers" (Gardner, 2006);
(4) a respectful mind which "welcomes differences between human individuals and human groups [...] and seeks to work effectively with them" (Gardner, 2006); and
(5) an ethical mind to "ponder the nature of one's work and the needs and desires of the society in which one lives" (Gardner, 2006).

This paper proposes an underlying model which is beneficial for each of these minds, and which will aid the effort to acquire them.

Martin, on the other hand, starts with leaders facing complex challenges and wanting to understand how they think. He bases his theory of "integrative thinking" (Martin, 2007) on examples of, and conversations with, successful leaders who demonstrate this type of thinking. He is particularly interested in the mental processes employed by successful leaders, and says integrative thinkers have an ability to comfortably hold opposing ideas in their mind and arrive at new insights or approaches that either allow for both, or create an entirely new solution (Martin, 2007). Barack Obama showed evidence of this integrative thinking capability in his 2010 State of the Union address when he said, "Let's reject the false choice between protecting our people and upholding our values" (Obama, 2010). President Obama "consistently lays out the opposing models, not to set up an either/or choice, but to begin the thinking process toward an integrative solution" (Martin, 2010).

1.2 Thinking in business - a solution

This paper addresses the question of what goes on in the human mind when we are thinking, identifies ways we can improve our individual and organisational thinking, and hence foster a thinking organisation. It is based primarily on the work of Bernard Lonergan, a Canadian philosopher who has been called one of the most influential thinkers of the twentieth century (Time Magazine, 1970). Lonergan starts with the thinking person and expounds a dynamic structure of knowing that is common to humanity, irrespective of culture, status, success, or other markers (Lonergan, 1957/1992).
Since learning of his work more than 20 years ago I have found his simple model readily applicable to any conversation or decision-making process, and usually find it to be of great help to clients. Lonergan's cognitional theory provides a method for thinking and knowing that can be used in any endeavour and in any discipline. His approach can incorporate Gardner's five minds into one thinking model, provide a philosophical foundation to Martin's work, and lay the foundation for becoming a thinking organisation. Whether discussing political, commercial or personal issues, economics, strategy or people, systems thinking or chaos theory, this approach can serve as an effective model for structuring one's thinking, and contribute to generating further insights.

The model explained here can become the grounds for a common approach to thinking at both the individual and organisational level. It offers a consistent way of managing the thinking challenges that confront us: creativity and innovation, clarity of thought for effective decision making, resolution of seemingly intractable problems, enabling of honesty and discovery of truth in leadership conversations, conflict resolution, etc. Although the list is endless, the common starting point is inside the human mind, and understanding the way we think. Recognising this, and having a familiar language to talk about thinking, increases the potential for listening and learning from one another, arriving at new solutions, and avoiding dead ends, blind alleys, and unnecessary conflict.

Before talking about the model, we must first discuss the process of thinking and identify a common trap people fall into - confusing what one's senses perceive with knowing what actually is. One attribute shared by thinking people is the drive to know - a deep seated desire to understand, to make sense of our world, and to find answers to the questions we face (Lonergan, 1957/1992).
Archimedes' cry of "Eureka" and subsequent headlong dash through the street when he solved the problem of measuring the gold content in the king's crown is a classic example of the power of this desire. Although we may be disinclined to run naked through the streets after solving a bothersome question, the sense of elation and relief we experience when we "get it", and solve the problem at hand, may lead us to jump up and down, pump our fist in the air, or rush out to tell our colleagues. By taking a moment to reflect on those times when we "got the point", or found a solution to a problem, or worked out which way to proceed, we are able to observe that this understanding is accompanied by a shift in consciousness. This shift is from a state of puzzlement, of frustration, of not "getting it", to a state of awe, of elation, and of understanding. This is an "ah-ha" moment. Insight - coming to understand - brings about this shift in our mental and emotional states.

Most of us remember being a novice in a new field listening to an expert explain something. The expert describes, points, and expounds about connections between this and that, while we struggle to keep up with what is being said. Two people - the novice and the expert - are having quite different experiences, because one understands and the other does not. Though the act of understanding is central in coming to know, it can easily be neglected. It is much easier to simply "see what is in front of us", to go along with the current view, to rely only upon what we have been told is happening. A desire to leave things uncomplicated can fall into the confusion of assuming that what is obvious in knowing (i.e. looking) is what knowing obviously is. This "cognitional myth" (Ogilvie, 2002) confuses looking with knowing, and is a danger for both thinkers and thinking organisations.

How the myth can operate and mislead is illustrated in the responses to a series of photos that emerged from Abu Ghraib prison in 2004.
Most people would readily recall the graphic nature of the photographs, showing what appeared to be guards torturing, abusing, and degrading prisoners. "The photographs tell it all," stated Seymour Hersh (2004). The filmmaker Errol Morris started with the view that it is not obvious what the photos depict. Quoting Specialist Megan Ambuhl, one of the guards featured in the photos, Morris observed that photographs do not allow people to see "outside the frame" (Burrell, 2008). With a background as a private investigator, Morris "scrutinises data (and) unravels preconceptions" (Burrell, 2008).

The timelines reconstructed from the digital metadata by the prosecution in the case brought against the soldiers only provide further empirical data. They do not reveal what people were thinking, why they were acting in certain ways, why the photos were taken, or what the proximate events were. Empirical data, although appearing to provide compelling visual proof, leaves many questions unanswered. Morris went to work to answer those questions, examining the contents of the photos in great detail and speaking to soldiers, prisoners, and relevant policy makers. The outcome of this process is captured in Morris' (2008) film, Standard Operating Procedure.

Data and fact are two entirely different aspects of knowing. The data contained in the photographs - soldiers, prisoners, prison cells, and so forth - are what we observe. The question of fact, of what is occurring, arises when we try to organise the data into an intelligible whole, from which we can form a hypothesis, which we then test by asking further relevant questions. When the hypothesis has withstood persistent questioning, and no further questions arise, we have arrived at fact and can reasonably agree that "this is so". The cognitional myth confuses seeing with knowing. Other people can see the same photograph and arrive at different conclusions.
But in the absence of more data we can only hypothesise about the contents, not draw conclusions. The visual alone is not proof. The solution to the cognitional myth is a cognitional model, a "procedure of the human mind [...] that is, a basic pattern of operations employed in every cognitional enterprise" (Lonergan, 1972/1979). These operations "are contained in questions prior to answers [...] move us from ignorance to knowledge [...] (and) go beyond what we know to seek what we do not know" (Lonergan, 1972/1979).

In order to understand the structure of knowing, we need to ask "what is it that I am doing when I am knowing?" The operations of the mind include "seeing, hearing, touching, tasting, smelling, inquiring, imagining, understanding, conceiving, formulating, reflecting, marshalling the evidence, judging, deliberating, evaluating, deciding, speaking, writing" (Lonergan, 1972/1979). These operations occur on four levels of consciousness (Lonergan, 1972/1979), three of which constitute knowing, with the fourth pertaining to the application of knowledge.

The first dimension of consciousness is that of experience and the empirical. The second dimension is the level of intellect, and the effort to understand experience. "A third dimension of rationality emerges when the content of our acts of understanding is regarded as, of itself, a mere bright idea and we endeavour to settle what really is so" (Lonergan, 1972/1979). A fourth dimension of consciousness "comes to the fore when judgement on the facts is followed by deliberation on what we are to do about them" (Lonergan, 1972/1979).

Table I captures the essence of the structure of knowing as depicted by Lonergan.
It summarises the four levels of consciousness, the operations that occur on each level, and the description Lonergan used for the principal occurrence on that level. This is the cognitional model presented in this paper: that knowing is not mere looking, but a compound of experiencing, understanding, and judging. On this basis, to say one "knows" is to say that all the necessary data has been considered, relevant questions asked, and sound judgement passed on one's understanding. In the absence of relevant data, or if more questions remain, one does not yet "know" but only supposes or hypothesises.

None of these operations alone constitutes knowing, but all are needed in order to know. The data of sense, or experience, provokes inquiry - not for more data, but in order to make sense, to organise, to understand. Understanding leads to judgement about the veracity of what is understood. Having then made a judgement about what one knows, we seek to apply that knowledge in the decisions that are made. The type of question being asked helps one to distinguish the level of operation. On the first level one seeks a name or label in the answer. "The second level of questions seeks an explanation; the third level seeks a very short answer - yes or no. The fourth level seeks an answer to the question 'will I?'" (Little, 2010), resulting in an action. This model is readily applicable to any situation which calls for sound judgement, or where we find ourselves challenged to make sense of disparate data.

3.1 Observing the cognitional model

As a young midshipman on a cargo ship I was assigned to examine the contents of the hold to ensure nothing had come adrift during a particularly fierce storm. The ship had recently left a port rife with rumours about waterfront wars and disappearing bodies. Descending a vertical shaft into inky blackness, surrounded by the groans and shudders of a large ship in a storm, can be an unnerving experience.
Vehicles, industrial machinery, and various pallets of goods covered the floor of the hold. I quickly checked the bolts and shackles by torchlight to ensure everything remained fast. Curiosity got the better of me as I focused the torch inside the rear of a small truck. Imagine my surprise and horror when I saw the limbs and torso of a body spreadeagled on the floor. The darkness of the hold was transformed into a tomb, as my heart raced and I fled the scene.

The captain insisted I return with a colleague to confirm my macabre finding. Tentatively we approached the truck, each encouraging the other to go inside and look more closely. Marshalling our combined courage we opened the doors and approached the body. A full body diving wetsuit greeted us, to our great relief, and my chagrin.

In this frightening event, the darkness, limbs and torso compounded rapidly to form a concept in my mind with which I readily agreed: "there is a dead body in the hold". Only by asking further questions - approaching the "body" and examining it more closely - and modifying my original understanding could a correct judgement be made: "this is not a dead body". The "proof" was overwhelming at first pass, and my emotional state allowed irrelevant data (darkness and rumours from the last port) to inform my understanding.

We can recognise the cognitional model in this example. Sense experience provided raw material for the intellect, and as a result of active questioning one begins to understand, fulfilling the human drive to know. Insight provides the relationships between the data, enabling us to "join the dots", creating an "intelligible unity" (Ogilvie, 2002). Having had an insight, we then form a concept, a general expression of that insight, setting aside the data which is irrelevant. For example, the fact that the ship's hold is dark is irrelevant to the questions posed.
Having a concept we test the veracity, since there may be still further relevant questions that need to be asked which would render our concept invalid. We endeavour to discover the accuracy of our understanding. When we have asked all the relevant questions, and formed a correct insight or understanding, we are then able to assent to truth, as there is only one answer to the question "is it so?" and that answer is "yes".

Judgement also relies on insight, grasping the "sufficiency of evidence" (Ogilvie, 2002). It is not one's perception of reality, but an assertion of reality. Failure to grasp all the evidence means that one is only guessing, or, having grasped the evidence and refused to accept or deny, is demonstrating foolishness, blindsightedness, or bias. "Ignorance, error, negligence, malice that blocks this dynamic structure is obscurantism in its most radical form" (Lonergan, 1972/1979). If there is insufficient data to form an accurate understanding, then the wise person acknowledges that their judgement is limited, and remains prepared to modify their conclusion as more data comes to hand. The tendency to immediately accept our perception as being truly the case constitutes a failure to ask sufficient questions. And much of that arises because of the mental models we carry around.

One of the biggest challenges for effective thinking is our existing mental models, which remain largely hidden and unarticulated. A mental model is "an ingrained way of thinking" (Senge, 1990/2006), or "an intelligible interlocking set of terms and relations that (we use when) describing reality or forming hypotheses" (Lonergan, 1972/1979). We are inclined to "assume that our models of reality are identical to reality itself" (Martin, 2007), making it difficult to understand and make sense of reality.
Familiarity with the cognitional operations detailed in this paper can help identify and grasp mental models that influence our thinking, and provide a framework for sense making.

The presence of mental models creates a tendency to see what serves our purpose, or to interpret data in a way that confirms our prior understanding. Steven Johnson's (2006) account of the cholera epidemic in London in 1854, and the struggle between opposing mental models to understand the disease, is a vivid example of this tendency. The accepted scientific explanation for the spread of cholera was known as the miasma theory, which held that cholera was an airborne disease. The observation that the disease appeared to spread rapidly among those living in packed squalid conditions seemed to confirm this view. As long as the miasma theory held, then any cure would focus on improving air quality and circulation.

John Snow, the main protagonist in the story, formed an hypothesis during an earlier outbreak that cholera was waterborne. During the 1854 epidemic he slowly and painstakingly assembled data - testing water from different pumps, discovering the habits of victims from interviews with surviving relatives, mapping the times and locations where people contracted the illness. This data supported his hypothesis, leading to his judgement that cholera was waterborne, and that this outbreak was emanating from one pump in particular. On this basis a decision was made to remove the tap handle and deny access to that pump. The number of cholera cases rapidly declined, but Snow realised this did not prove his theory, as the epidemic may have been near the end of its lifecycle. Johnson's work reads like a detective novel as Snow continued assembling the data required to confirm his hypothesis and the veracity of his judgement.

The authorities agreed that a case of someone in a remote location drinking water delivered from a suspect well would prove the role of water in spreading the disease.
When Susannah Eley died in Hampstead, after drinking water brought to her from the Broad Street well some distance away, it seemed Snow had his conclusive proof. Existing mental models, however, blinded the Cholera Commission, which used her death to prove their theory of an airborne disease, saying, "the atmosphere must be so poisoned that it has infected the water as well" (Johnson, 2006).

The tendency to become "conceptually mired in the prevailing model" (Johnson, 2006) is not limited to nineteenth century England, nor just to what serves our purpose. We are also blinded by our field of expertise. Johnson's story demonstrates so clearly that our view of what is relevant is coloured by our mental models. In the face of Snow's compelling evidence that cholera was waterborne the General Board of Health, and influential media commentators, refused to be swayed, so firmly were they wedded to the miasma theory. The event offers "a brilliant case study in how dominant intellectual paradigms can make it more difficult for the truth to be established" (Johnson, 2006). It shows the dangers of too readily accepting what is immediately presented and unquestioned as being the true state of affairs.

Operating from our field of expertise explains why the finance director tends to search for and give added weight to financial data, having less understanding of (say) production or supply chain data, while the director of human resources will focus on data relating to people, performance, and culture. This also helps account for why specialisation can be an obstacle to being an effective CEO, and why functional or business silos can be an obstacle to organisational growth and performance. Both encourage lenses which unknowingly and unintentionally filter out relevant data.
In this situation an authentic leader who is aware of their own mental models and comfortable with their limitations can bring out the best from a team by inviting and welcoming diverse perspectives, working together to generate new insights. Such a process facilitates new understanding from wider data sets, promotes better judgement, and lays the foundation for a thinking organisation.

In order to generate better results corporations need to adopt and foster a culture of continuous organisational learning (Senge, 1990/2006). But in order to be "learning organisations" they first need to become effective thinking organisations. And in order to become a thinking organisation, they need to become an organisation of thinkers, since organisational thinking is a compound of individual thinking.

5.1 Organisational culture

The culture in which we find ourselves, whether a community, a nation, or a firm, has an equally profound impact on thinking as our mental models. If the environment welcomes open questions, delights in inquiry and fosters conversation, we are much more likely to observe innovation and creativity. An environment which stifles debate and shuts down inquiry will almost certainly make poor decisions and diminish stakeholder engagement. Leaders who persevere with data, encourage discussion and reward ideas will foster greater openness, lift organisational performance, and enhance the potential for breakthrough thinking.

5.2 Inquiry as the driving force of innovation

Questions turn raw data into something of value - knowledge, insight, and eventually wisdom. Inquiry, via questioning, is like a driveshaft powering our mind to constantly add value to data. We form an image from experiences, form a concept based on understanding, give assent to judgement, and then act on decisions that are made.

Questions provide power to our thinking. Better questions provide more efficient power.
A repeated series of ever better questions can foster creativity and innovation, minimise the time for breakthrough thinking and maximise the opportunity to generate competitive advantage. Restricting the questioning process, whether in the laboratory or the management meeting, shuts down the power of the mind and limits insight and innovation.

Clarity about the level of consciousness being employed - empirical, intellectual, rational, responsible - and the relevant operation at that level - sense perception, inquiry and understanding, reflection and judgement, application of knowledge - drives the process forward. Whether facing Newton's question of why objects fall to earth, or Ray Kroc's question of how to systemise and replicate a fast food restaurant, or an organisational question about creating blue ocean strategy, this simple model allows one to appreciate the relevant operation of our inquiry and the questions best suited to that level.

5.3 Speed of inquiry as competitive advantage

Recognising we will face a challenge in December and finding a solution the following January is only helpful for subsequent occurrences of the same problem. Competitive advantage is a function of being able to turn data into innovation faster than the competition. Solving the dilemma six months sooner both ameliorates the problem and creates advantage. The power of inquiry is only realised when the timeframe to innovation is less than the time required for resolution, and is amplified when it significantly shortens the cycle.

In order to survive - let alone excel or outperform - the speed with which we learn has to exceed the rate of change in our environment (Hames, 2007). Speed of learning refers to the "time taken to optimally engage in the process of transforming information into purposeful change" (Hames, 2007). Using the power of inquiry to push data along the data value chain accelerates learning, innovation, and time to market.
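As a rough illustrative sketch (not part of Lonergan's own presentation), the four levels and their characteristic operations and question forms, as enumerated above, could be encoded as a simple data structure for use in a decision-review checklist. The field names and question wordings below are paraphrases, not Lonergan's exact terms:

```python
# Hypothetical encoding of Lonergan's four levels of consciousness,
# paraphrased from the levels and operations described in the text.
LEVELS = [
    {"level": "empirical",    "operation": "sense perception (experience)",
     "question": "What is it called?"},
    {"level": "intellectual", "operation": "inquiry and understanding",
     "question": "What is it, and why is it so?"},
    {"level": "rational",     "operation": "reflection and judgement",
     "question": "Is it so? (yes/no)"},
    {"level": "responsible",  "operation": "deliberation and decision",
     "question": "Will I act on it?"},
]

def next_level(current):
    """Return the name of the level that follows `current`, or None at the top."""
    names = [entry["level"] for entry in LEVELS]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None
```

Walking a question through `next_level` mirrors the progression described in the text: data provokes inquiry, inquiry yields understanding, understanding is judged, and judgement informs decision.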
Lonergan's model of knowing is a simple framework for driving this inquiry.

5.4 Reverse engineering a decision

Questions can help us to mine data, review decisions, clarify understanding, and test judgements. We can "reverse engineer" (Martin, 2007) a decision by working backwards to review, analyse, and establish what data must exist to support our understanding and subsequent judgement. Lonergan's cognitional model provides a robust approach for reviewing a decision prior to acting, as shown in the following example.

In one of our regular sessions a CEO mentioned the challenge he faced restructuring his senior leadership team (SLT), and wanted to test his thinking prior to taking action. We reverse engineered the proposed decision to identify what assumptions were being made and what data needed to exist, identify any gaps in thinking, and perhaps modify the proposal.

Peter, the CEO, had decided to restructure the SLT by creating a new head of services role and promoting Bob and Ian to the team. As we talked through the proposed decision Peter explained the judgements he was making:

* Bob and Ian ran the two biggest divisions of the firm, and had been reporting via a COO who was retiring. Promoting them was the right thing to do since each managed major profit centres for the firm and already contributed at SLT level. Promotion would acknowledge this reality, give added authority to match their responsibilities, and foster better team work and resource allocation.
* Creating a new head of services would foster greater teamwork between divisions. They could liaise directly with Bob and Ian as peers on the SLT.
* Getting the structure set up for future growth was vital at this time.

These judgements were based on his understanding that if Bob and Ian were successful in their new roles it would minimise his own workload and enable him to focus on his key strategic challenges. It would also balance out some of the distorted work allocations.
Expectations from head office meant he had to improve efficiencies in the local firm, to prepare for growth.

This understanding arose from a range of data or experiences:

* Peter was constantly distracted by the urgent rather than the important.
* Bob was overworked and Ian underutilised.
* There was often conflict, confusion, and at times outright competition for resources between divisions.
* The global firm was looking to his division to contribute significantly to the group profit within five years and had allocated considerable funds for development of new operations to achieve this goal.

Having reverse engineered Peter's proposed decision, it seemed reasonable in the circumstances. But because his focus had been on Bob and Ian, whose divisions would contribute the expected profit, he had failed to ask further relevant questions. One vital question concerned the aspirations and potential of his team and succession planning for his own role. He quickly explained that Bob and Ian were highly competent, but he had reservations about their potential to succeed him.

Mary had the single most important strategic role in the firm at that time, although she was not part of the SLT. She was highly competent, got things done quickly and efficiently, demonstrated considerable potential, and was the leading internal candidate for the CEO role. Since she would play a major role in the success of the group investment Peter wanted Mary to stay focused on the task at hand. He did not remember, until questioned, that 18 months previously she had committed two years to the role and was quite clear that she would leave without further opportunity.
It rapidly became clear to Peter that acting on his proposed decision would block any potential moves for Mary, in which case she would probably leave in six months, putting the entire project at risk and leaving him without an internal successor.

As we continued to test each component - the data, the understanding, the judgement, and subsequent decision - a resolution was achieved when Peter realised he could promote Mary to the SLT as head of services, playing to her aspirations, developing her capability, and positioning her for eventual succession. At the same time he could provide additional support to Mary in her current role, so she could develop her own successor over the next six months.

Although this solution may seem self-evident to the reader when presented with the story in this way, this approach provided Peter with an innovative solution to his problem, and was a breakthrough in his thinking.

The dominant organisational questions seem to revolve around action. Countless books have been written on execution, or getting things done, or doing what matters, arguing that the ability to execute is a source of competitive advantage (Bossidy and Charan, 2002). Execution links a firm's people, strategy, and operations (Bossidy and Charan, 2002).

As presented in this paper, thinking precedes doing, and poor thinking contributes to poor outcomes. Hence the way we think at work is crucial to success. While many firms may focus on execution, few take the time to ask the better question - "How should we think?" - in order to link thinking to execution.

6.1 Learning to think

Discovering the "facts" about an event is just as important, and just as difficult, in business as it is in photographs - and perhaps more so. It is hard for management to obtain objective information or solid data that has not been coloured by bias, or filtered by fear. Vested interest is a powerful determinant of action.
Failure to speak out and give one's perspective - particularly when it deviates from the strongly held views of others - is endemic, to the detriment of productivity and profit (Kakabadse, 2009).

Lonergan's cognitional model provides a way to critique our thinking, and the thinking of others. When presented with a report, analysis, or decision, we can use this approach to ask questions like:

* Is this actually raw data, with no real analysis?
* Does the writer confuse looking with knowing?
* Is there sufficient data for reaching a proper understanding?
* Is the writer merely presenting data and implying that it is truth?
* Has the writer presumed their understanding will be borne out by the data?
* Have they asked all the relevant questions and tested their understanding with the reflective question "Is it so?"
* Is their assent to the question expressed with confidence, or do they demonstrate lingering doubt, implying yet more questions exist?

The reader will do well to examine the language used. Does the writer distinguish the different operations of experience, understanding, judging and deciding, and recognise that all are required for knowing?

6.2 Learning from thinking

Critical incidents reveal much about our thinking patterns (Martin, 2007). As global citizens we witness history unfolding in the consequences of significant decisions - the bailout of banks during the financial crisis, military action in Iraq, Copenhagen climate action, etc. Corporate leaders are party to decisions about mergers and acquisitions, redundancies, offshoring and outsourcing, environmental and social impact, growth and profit targets, and people policies.

With the passage of time we are able to see the outcomes of those decisions with greater clarity. But in order to both understand the decisions that were made, and to make better quality decisions in the future, we need to unpack the original thinking process, not just review the outcomes.
In order to learn from our past decisions we must document not only the decision but also the underlying judgement, understanding and data, including specific expectations regarding outcomes:

* What is the key question we are asking?
* What is our aim in this inquiry?
* What is the data upon which we base our understanding?
* What data did we discard, or were unable to obtain?
* Was any data excluded, whether through ignorance, prejudice, bias, or malice?
* Did we acknowledge shortcomings in the data and seek to address those limitations, either at the time, or in the future to modify our decisions?
* Did we take sufficient time - as the circumstances allowed - to examine and understand the data?
* Did we allow those with different views and perspectives to challenge our understanding?
* Did we listen to all points of view, not just those which buttressed our position?
* What is the understanding we have arrived at?
* What is the concept we have formed?
* Based on the judgement we have made (about the correctness of our concept), what decision/s did we take?
* What specific outcomes, and in what timeframes, do we expect from the decision/s?
* What outcomes actually occurred, and in what timeframe?
* How does this contribute further data and hence modify our understanding?

The human tendency, having arrived at an outcome, is to revise - whether knowingly or unknowingly - one's original thinking and justify the outcome. This happens because we have not written down, at the time of taking the decision, the outcome we expected. Only by carefully documenting our thought process can we learn from experience, recognise our blind spots and biases, our haste or our caution. This is a crucial practice for individuals, teams, and enterprises.

As human beings we ask many questions and derive great satisfaction from discovering the answers. We are united by the questions we ask, but divided by the answers - and usually because we have not fully understood the other person or people.
The climate change debate is a classic example of this problem. We are united by the questions, and the desire to find answers. This paper suggests we are divided by the answers because we have failed to identify a common cognitional approach. Would a more robust approach to the data about weapons of mass destruction, interpreting and understanding that data, and judging the accuracy of understanding after ensuring all relevant questions were asked, have avoided the ensuing conflict, destabilisation, and human suffering?

Complexity shows no sign of abating. Ambiguity and paradox are on the increase. Poor decisions are amplified quickly through global systems. Society and our planet have little tolerance for getting it wrong. If ever there was a time to improve our thinking then this is it.

The influence of the cognitional myth - that to know anything one only needs to look at it - should not be underestimated. Few people or organisations take the time in this fast paced world to ask sufficient questions of the data, to come to an insight about that data, and then to question that insight for verification.

Although time always seems in short supply, taking the time to ask all the relevant questions, and documenting our decision-making processes, can only prove beneficial in the long run. And actively reviewing those processes is a key to learning about thinking, improving our thinking, and becoming a thinking organisation.

In order to be effective learning organisations, we need to first become thinking organisations. We become thinking organisations by becoming an organisation of thinkers. We become thinkers by understanding the operations of the human mind, and applying a rigorous thinking process to all that we do. Better thinking can only lead to better outcomes as we wade through the challenges confronting our organisations and our world.

Table I Lonergan's structure of knowing
[SECTION: Purpose] The paper introduces readers to the cognitional model of Bernard Lonergan, shows the application of that model to contemporary business challenges, and provides an easily-learned model for thinking, which will aid managers at every level and lead to better decisions.
[SECTION: Purpose] The "equalization of bargaining power" (Kaufman, 1993, p. 5) between relatively powerless employees and increasingly large and corporate concentrations of employer prowess was a fundamental goal of early twentieth century reformers and theorists addressing "the labor problem" - the numerous abuses of employees such as child labor, low wages, long hours, and inhumane treatment, and broader disruptions of employees' lives in the rapidly industrializing American workplace of that time. The goal of balancing employee and employer power was officially enshrined in US federal law with the Wagner Act (or National Labor Relations Act [NLRA]) in 1935.

Congress expressly cited bargaining power imbalances as a major policy concern, a perceived cause of strife, depressed wages, and commercial disruptions (US Code, Title 29, Ch. 7, or Wagner Act, Section 1):

The inequality of bargaining power between employees who do not possess full freedom of association or actual liberty of contract and employers who are organized in the corporate or other forms of ownership association substantially burdens and affects the flow of commerce, and tends to aggravate recurrent business depressions.

The Act went on to prescribe "restoring the equality of bargaining power between employees and employers" (US Code, Title 29, Ch. 7, or Wagner Act, Section 1), and to specify principles and mechanisms to implement that goal.

Traditionally, the focus of employment relations (and its traditional namesake, "industrial relations") and its study of power was the formal bargaining or negotiating level, at the organizational level where the employer's and employee's formal representatives negotiated wages, hours, and other terms of employment.
Roughly 25 years ago, however, theorists began to recognize that this focus tended to encourage neglect of other important levels of employer-employee interaction, including public policy or societal, strategic, and, most importantly for immediate purposes, "workplace and individual/organization relationships" (Kochan et al., 1986, pp. 16-20). This latter "micro" level remains an area of relative neglect despite some progress. This study aims to address one part of this neglect by proposing and testing hypotheses on the relationship between perceived union strength, a micro- or workplace-level analog of union bargaining power, and perceptions of shared leader-member expectations, using supervisor-subordinate dyads as the unit of analysis.

Theory of power-dependence relations and coalition formation

The theory of power-dependence relations (Emerson, 1962) posits that power and dependence are equivalent. In other words, entity A depending on entity B is equivalent to B having power over A. The theory also suggests that, when power relations are imbalanced, people seek to bring those relations into balance via balancing operations. One such balancing operation is coalition formation. Coalitions are basically organized individuals that form a single collective entity. Coalitions increase member power levels via processes such as group identification and internalization of collective demands (Emerson, 1962, p. 38). Role prescriptions and group norms demand that members behave in ways that benefit the group. Labor unions, which are coalitions, can provide a source of power for their constituents.

Of course, coalitions will vary in their overall effectiveness. For example, some labor unions are stronger than others. The more effective a coalition, the more empowering it will be for its constituents. Hence, a crucial variable for study is union strength.
Therefore, simply determining whether there is union representation is necessary, but insufficient, for a complete understanding of the association between unions and employee empowerment.

A key relationship at work is between employees and their direct supervisors (e.g. Ferris et al., 2009; Ferris et al., 2008). Some organizational researchers have argued that this relationship normally represents employee conceptions about their relationships with their entire employing organization (e.g. Shore and Tetrick, 1994; Shore et al., 2004). For this reason, we wish to specifically investigate how union strength corresponds with power relations in the supervisor-subordinate relationship. Shared-leadership expectation (SLX) is a measure of employees' expectations of sharing power with their supervisors (e.g. Graen, 2009; Pearce and Conger, 2003).

Pre-union, post-union, and non-union power relation contexts

It is important to distinguish between pre-union, post-union, and non-union power relation contexts. If any particular work unit is represented by a labor union, then typically there must have been a pre-union context that led to that unionization[1], and a post-union context that informs about the overall effectiveness of that union. On the other hand, work units without unions probably have different power relation contexts. Of course, there are non-union work units that possess power relation contexts similar to pre-union contexts, but on the whole they will likely differ (i.e. employees probably find them less intolerable or more satisfactory).

The fact that any particular work unit formed a union suggests that the pre-union environment was probably intolerably imbalanced. Union formations are coalition formations intended to balance power between labor and management groups (e.g. Kaufman, 1993). Therefore, we assume that intolerable power imbalances between labor and management groups pre-existed any union formation.
Correspondingly, and on the whole, non-union units are probably more balanced than pre-union units, or at least, the employees are likely more tolerant of any such imbalances.

The effectiveness or ineffectiveness of the union will determine whether the post-union context has improved. A weak union implies that the union has not achieved improved balance. Therefore, work units with weak unions are essentially equivalent to pre-union work units (i.e. intolerably imbalanced). On the other hand, work units with strong unions imply that the unions have achieved improved balance. Last, we argue that strong unions likely create more balance than non-union units because they likely foster more employee power than management would typically offer without unions (e.g. Kaufman, 1993). Therefore, we formulate the following set of hypotheses:

H1. Non-union employees tend to have higher SLX than constituents of weak unions.

H2. Constituents of strong unions tend to have higher SLX than constituents of weak unions.

H3. Constituents of strong unions tend to have higher SLX than non-union employees.

Data

The sample consisted of working adults across the United States (N=347). We purchased responses from a survey software company that makes survey panels commercially available. The survey company compensated all participants who completed our online survey. Respondents were racially/ethnically diverse: 25.4 percent White, 24.8 percent Black, 22.8 percent Hispanic, 23.1 percent Asian, and 4 percent other. The mean age was approximately 40.9 years, and ranged from 18 to over 62 years. About 65.4 percent of the respondents were female. Approximately 26.5 percent were High School graduates (or less); 24.2 percent possessed only 2-year college degrees; 32 percent had only four-year college degrees; 13.5 percent possessed only master's degrees; and 3.7 percent held a doctoral degree. About 70.6 percent of the respondents were full-time employees.
Roughly 41.2 percent worked for private employers; 46.1 percent for public employers; and 12.7 percent were self-employed. The average job tenure was about 4.3 years. The average company tenure was 5.1 years. Roughly 13.5 percent were members of a labor union. The average income was $50,000 per year, and ranged from less than $25,000 to over $100,000 per year.

Measures

The outcome variable of interest was shared-leadership expectation (SLX). The central independent variable was the union status of the work unit (i.e. workplace with no union, a weak union, or a strong union). In addition, important control variables were employed in order to rule out competing hypotheses. We controlled for occupational self-efficacy, occupational level, and tenure with supervisor. Rationales for using these particular control variables are provided below.

Shared-leadership expectation (SLX). This six-item measure assesses the extent to which employees can expect to develop a shared-leadership relationship with their supervisors (Graen et al., 2006; Graen, 2009). Graen et al. (2006) originally developed this measure and borrowed four items from the well-known LMX-7 scale (e.g. see Graen and Uhl-Bien, 1995 for an extensive review of LMX-7); the remaining two items correlated strongly with those four LMX-7 items. Their six-item measure of SLX correlated significantly, and in predicted directions, with measures of team fairness, process, and success (Graen et al., 2006). Representative items include "I have an excellent working relationship with my supervisor" and "My supervisor has respect for my capabilities." The Cronbach's alpha internal consistency reliability estimate was 0.88.

Union status. This categorical variable included three groups:
1. work unit with no labor union;
2. work unit with a weak labor union; and
3. work unit with a strong labor union.

Classifications were obtained from a single-item measure: When dealing with company management, your labor union is typically ...
weak (ineffective); powerful (effective); not applicable - there's no labor union at my workplace.About 64.5 percent worked in units with no union; 11.0 percent in a unit with a weak union; and 24.5 percent worked in units with strong unions.Occupational self-efficacy. The six-item scale developed by Rigotti et al. (2008) was used to measure occupational self-efficacy. Rigotti et al. described occupational self-efficacy as an employee's felt competence with regards to successfully performing job tasks. As part of their assessment of construct validity, Rigotti et al. demonstrated that their measure correlated significantly, and in anticipated directions, with job satisfaction, commitment, performance, and job insecurity - in five samples from different countries:1. Germany;2. Sweden;3. Belgium;4. United Kingdom; and5. Spain.Representative items include "I feel prepared for most of the demands of my job" and "When I m confronted with a problem in my job, I can usually find several solutions." This variable was used as a control because supervisors are more inclined to share power with more competent employees (e.g. Graen, 2009). The Cronbach's alpha reliability estimate was 0.90.Occupational level. The question "How would you describe your occupation?" permitted six occupational type responses:1. unskilled or semi-skilled manual worker;2. generally trained office worker or secretary;3. vocationally trained craftsperson, technician, IT-specialist, nurse, artist or equivalent;4. academically trained professional or equivalent (but not a manager of people);5. manager of one or more subordinates (non-managers); and6. manager of one or more managers.This question, along with its response categories, was borrowed from Hofstede (2008).Hofstede and Hofstede (2005) explained that power distance (i.e. "the extent that the less powerful members of institutions and organizations expect and accept that power is distributed unequally," p. 
46) varies by occupation within some nations; where blue-collar occupations tend to have employees with higher power distance than white-collar occupations. Hence, controlling for occupation may indirectly help account for cultural differences with regards to tolerances for power imbalances. The six occupational categories are basically ordered from the most blue-collar to the most white-collar, so we treated it as an ordinal variable.Tenure with supervisor. One item captured supervisor-subordinate dyad tenure: "How long have you worked with your current supervisor?" Response options were: less than 1 year, between 1 and 3 years, between 3 and 5 years, between 5 and 10 years, between 10 and 15 years, between 15 and 20 years, and more than 20 years. Supervisor-subordinate relationship tenure has shown to positively correlate with overall relationship quality (e.g. Wayne et al., 1997).Data analyses techniques A univariate analysis of variance (ANOVA) model was conducted. Pair-wise comparisons were assessed via Bonferroni's method. Descriptive statistics and correlations (and Cronbach's alphas) are presented in Tables I and II, respectively. As seen in Table I, the variables in this study possessed significant variation, hence, did not appear to be range-restricted. Multicollinearity did not seem to pose a substantial threat to this study - the occupational level variable had only small-to-moderate correlations with the other independent variables.The results of the ANOVA procedure, with SLX as the dependent variable, are summarized in Table III. Based on the sampling procedure employed, it was appropriate to assume that the cases were independent. The Levene's test failed to reject the null hypothesis that there was equality of error variances (F (2, 344)=1.46, p=0.23). Finally, the distribution of residuals appeared normal. 
Therefore, the major assumptions required for ANOVA procedures were adequately met in this study.The analysis revealed a significant main effect for the union status variable, F(2, 341)=4.00; p=0.019. Bonferroni comparisons, in Table IV, quantify the nature of the relationship between union status and SLX. Employees in non-union units demonstrated significantly higher SLX than those in weak union units (d=0.27, p=0.04), providing support for H1. Furthermore, employees in strong union units exhibited significantly higher SLX than those in weak union units (d=0.39, p < 0.01), providing support for H2. While those in strong union units seemed to show slightly higher SLX than employees in non-union units, the difference was not statistically significant (d=0.12, p=0.37), providing no support for H3. Unions perceived as strong produce more empowered constituents relative to unions perceived as weak, as predicted (H2). This finding is consistent with the theory of power-dependence relations because coalition building is a power balancing operation, and unions are coalitions. In this study, we found that unions vary in their perceived effectiveness in dealing with management, and this variable was significantly linked with the nature of supervisor-subordinate dyads in host organizations.Specifically, employees who belonged to more powerful unions possessed increased shared-leadership expectations with their supervisors, compared to employees who belonged to less powerful unions. Consistent with our prediction (H1), non-union employees also possessed increased shared leadership expectations in comparison to union workers where the union was perceived as weak. 
Non-union employees did not differ significantly in shared-leadership expectations from employees perceiving strong unions, contrary to expectations (H3).Implications of study This study suggests that organizational researchers who study or control for the impacts of labor unions should routinely consider the "strength" of unions. Most researchers who sample workers via surveys typically ask respondents if they belong to a union or not, but this data alone is insufficient because not all unions are equal; therefore, a next step would be to ask respondents how powerful they think their unions are. For example, one could use the one-item variable employed in this study (i.e. union's perceived effectiveness in dealing with management). The construct is similar to that of "union instrumentality" that is often used in studies of non-union workers' voting intentions in union representation elections (e.g. Kochan, 1979), but it is rarely assessed in studies using union member subjects. In short, unions are power-balancing tools, so their power levels should be customarily assessed.Another implication of the study is that unionized employees may become more interested in helping to make their unions stronger if they understood the connections that stronger unions have on their relationships with direct supervisors. Employees should be informed about the importance of union strength so that they avoid lumping all unions together. For instance, in terms of shared leadership, if one lumps all unions together, then the stronger and weaker unions will "average out" to some level below non-union workplaces. But, we now know that in terms of shared leadership; stronger unions can perform just as well or better than non-union workplaces.A noteworthy societal implication of this study is that belonging to an effective coalition may correspond with improved relations with important others outside, but related to, the coalition. 
Hence, groups who desire to improve their standing in any larger social structure may organize themselves into effective coalitions. Unions are simply one kind of coalition, but, as we have seen, not all unions are effective. Perhaps workers can (and do) form other kinds of coalitions (formal or informal) to achieve their ends, and this could be another fruitful area of future research. For example, in the US Postal Service the National Alliance of Postal and Federal Employees, has since 1913 championed non-discrimination for postal employees, but holds limited bargaining rights. More generally, it has been suggested that no workplace is ever truly unorganized (Dunlop, 1958; pp. 7-8):The hierarchy of workers does not necessarily imply formal organizations; they may be said to be "unorganized" in popular usage, but the fact is, that wherever they work together for any considerable period, at least an informal organization comes to be formulated among the workers with norms of conduct and attitudes toward the hierarchy of managers. In this sense, workers in a continuing enterprise are never unorganized.As Freeman and Medoff (1984, p. 19) noted in their comprehensive study of union effects, "Our most far-reaching conclusion is that, in addition to well advertised effects on wages, unions alter nearly every other measurable aspect of the operations of workplaces ... ". Our study extends past research by examining possible effects on individual perceptions of shared leadership expectations, but also by documenting how relations between union status and outcomes may vary with union strength. At a more macro-level, recent research showed that declining union membership density explains from one-fifth to one-third of growing wage inequality over the period from 1973-2007 (Western and Rosenfeld, 2011). 
In a sense, our results illustrate one of many possible micro-level outcomes that parallel this macro-level result.Strengths and limitations of study A notable strength of the study was the use of a large national sample composed of diverse respondents (e.g. diverse in race/ethnicity, age, occupation, gender, etc.). This sample enabled the results to be more generalizable. Also, the inclusion of important control variables (i.e. occupational self-efficacy, occupational level, and tenure with supervisor) enabled the elimination of many obvious competing hypotheses.There were some noteworthy limitations of the study. One limitation was that the study utilized a cross-sectional design, which made it impossible to make an affirmative conclusion about the directionality. In other words, it might have been the case that strong unions were caused by more empowered constituents. However, our hypotheses were grounded in theory, so we had good reasons to believe that strong unions led to empowered constituents.Another limitation was the use of self-reports for all measures. This put our study at risk with regards to common method bias. However, the self-report survey was the most practical data collection procedure for assessing the large national sample; the constructs possessed adequate validity; and multicollinearity was minimal. Therefore, the benefits of using the self-report questionnaires probably outweighed the risks.Future research directions An obvious direction for future research would be to develop a more comprehensive union strength measure. Perhaps a scale with several key dimensions could be developed. Better yet, a global union strength scale, with just a few reliable items, could be developed. A shorter scale would be most useful when including union strength as a control variable, or as part of a larger survey.Another area for promising research includes longitudinal research designs. 
For instance, measuring important outcome variables, such as shared-leadership expectations, in work units before and after union certification election wins. This type of study might reveal the actual changes in the outcomes due to the union and its characteristics (i.e. union strength). It has been noted that union effects arise gradually in newly-unionized workplaces (Freeman and Kleiner, 1990), and that "voice effects" (such as empowering employees through shared leadership), rather than economic effects, appear to emerge first. These points add further motivation for longitudinal study. Of course, countless other outcome variables, besides shared-leadership expectations (e.g. performance measures), could be investigated. Fundamentally, labor unions are power-balancing tools, hence, they should be measured as such. Perhaps it is intuitive that stronger unions empower their constituents. However, it is less obvious how union strength relates to supervisor-subordinate dyads. In this study, we found that those employees associated with stronger unions had improved power relations with their direct supervisors. Union strength is connected to important employee outcomes, such as shared-leadership expectations; thus, it might behoove researchers, organizations, employees, and unions to more routinely consider union strength as a key union attribute. Opens in a new window.Table I Descriptive statistics for union strength study Opens in a new window.Table II Pearson correlation matrix for union strength study Opens in a new window.Table III ANOVA table for SLX Opens in a new window.Table IV Bonferroni comparisons for SLX
|
[SECTION: Abstract] A labor union's strength is a crucial factor when considering outcomes such as its constituents' empowerment. One of the most important goals of any labor union is to achieve an increased balance of power between the labor and management groups; union strength thus reflects progress toward this fundamental aim. It follows that stronger unions, measured by their perceived effectiveness in dealing with management, will contain more empowered constituents. Previous union-related research typically considered employee empowerment at the group level of analysis (e.g. improved work rules, pay, and benefits for entire groups of employees). The purpose of this paper is to propose and test hypotheses on the relationship between perceived union strength, a micro- or workplace-level analog of union bargaining power, and perceptions of shared leader-member expectations, using supervisor-subordinate dyads as the unit of analysis.
[SECTION: Method] The "equalization of bargaining power" (Kaufman, 1993, p. 5) between relatively powerless employees and increasingly large corporate concentrations of employer power was a fundamental goal of early twentieth-century reformers and theorists addressing "the labor problem" - the numerous abuses of employees, such as child labor, low wages, long hours, and inhumane treatment, and the broader disruptions of employees' lives in the rapidly industrializing American workplace of that time. The goal of balancing employee and employer power was officially enshrined in US federal law with the Wagner Act (or National Labor Relations Act [NLRA]) in 1935. Congress expressly cited bargaining-power imbalances as a major policy concern, a perceived cause of strife, depressed wages, and commercial disruptions (US Code, Title 29, Ch. 7, or Wagner Act, Section 1):

The inequality of bargaining power between employees who do not possess full freedom of association or actual liberty of contract and employers who are organized in the corporate or other forms of ownership association substantially burdens and affects the flow of commerce, and tends to aggravate recurrent business depressions.

The Act went on to prescribe "restoring the equality of bargaining power between employees and employers" (US Code, Title 29, Ch. 7, or Wagner Act, Section 1), and to specify principles and mechanisms to implement that goal. Traditionally, the study of power in employment relations (and its traditional namesake, "industrial relations") focused on the formal bargaining or negotiating level: the organizational level at which the employer's and employees' formal representatives negotiated wages, hours, and other terms of employment.
Roughly 25 years ago, however, theorists began to recognize that this focus tended to encourage neglect of other important levels of employer-employee interaction, including the public-policy or societal level, the strategic level, and, most importantly for immediate purposes, "workplace and individual/organization relationships" (Kochan et al., 1986, pp. 16-20). This latter "micro" level remains an area of relative neglect despite some progress. This study aims to address one part of this neglect by proposing and testing hypotheses on the relationship between perceived union strength, a micro- or workplace-level analog of union bargaining power, and perceptions of shared leader-member expectations, using supervisor-subordinate dyads as the unit of analysis.

Theory of power-dependence relations and coalition formation

The theory of power-dependence relations (Emerson, 1962) posits that power and dependence are equivalent: entity A depending on entity B is equivalent to B having power over A. The theory also suggests that, when power relations are imbalanced, people seek to bring those relations into balance via balancing operations. One such balancing operation is coalition formation. Coalitions are, in essence, individuals organized into a single collective entity. Coalitions increase member power levels via processes such as group identification and internalization of collective demands (Emerson, 1962, p. 38). Role prescriptions and group norms demand that members behave in ways that benefit the group. Labor unions, which are coalitions, can therefore provide a source of power for their constituents. Of course, coalitions vary in their overall effectiveness; for example, some labor unions are stronger than others. The more effective a coalition, the more empowering it will be for its constituents. Hence, a crucial variable for study is union strength.
Therefore, simply determining whether there is union representation is necessary, but insufficient, for a complete understanding of the association between unions and employee empowerment. A key relationship at work is that between employees and their direct supervisors (e.g. Ferris et al., 2008, 2009). Some organizational researchers have argued that this relationship typically represents employees' conceptions of their relationships with their entire employing organization (e.g. Shore and Tetrick, 1994; Shore et al., 2004). For this reason, we specifically investigate how union strength corresponds with power relations in the supervisor-subordinate relationship. Shared-leadership expectation (SLX) is a measure of employees' expectations of sharing power with their supervisors (e.g. Graen, 2009; Pearce and Conger, 2003).

Pre-union, post-union, and non-union power relation contexts

It is important to distinguish between pre-union, post-union, and non-union power relation contexts. If a particular work unit is represented by a labor union, then typically there must have been a pre-union context that led to that unionization[1], and there is a post-union context that reflects the overall effectiveness of that union. Work units without unions, on the other hand, probably have different power relation contexts. Of course, some non-union work units possess power relation contexts similar to pre-union contexts, but on the whole they will likely differ (i.e. employees probably find them less intolerable, or more satisfactory). The fact that a particular work unit formed a union suggests that the pre-union environment was probably intolerably imbalanced. Union formations are coalition formations intended to balance power between labor and management groups (e.g. Kaufman, 1993). Therefore, we assume that intolerable power imbalances between labor and management groups pre-existed any union formation.
Correspondingly, and on the whole, non-union units are probably more balanced than pre-union units, or at least, their employees are likely more tolerant of any such imbalances. The effectiveness or ineffectiveness of the union determines whether the post-union context has improved. A weak union implies that the union has not achieved improved balance; therefore, work units with weak unions are essentially equivalent to pre-union work units (i.e. intolerably imbalanced). On the other hand, work units with strong unions imply that the unions have achieved improved balance. Last, we argue that strong unions likely create more balance than non-union units because they likely foster more employee power than management would typically offer without unions (e.g. Kaufman, 1993). Therefore, we formulate the following set of hypotheses:

H1. Non-union employees tend to have higher SLX than constituents of weak unions.

H2. Constituents of strong unions tend to have higher SLX than constituents of weak unions.

H3. Constituents of strong unions tend to have higher SLX than non-union employees.

Data

The sample consisted of working adults across the United States (N=347). We purchased responses from a survey software company that makes survey panels commercially available. The survey company compensated all participants who completed our online survey. Respondents were racially/ethnically diverse: 25.4 percent White, 24.8 percent Black, 22.8 percent Hispanic, 23.1 percent Asian, and 4 percent other. The mean age was approximately 40.9 years, with ages ranging from 18 to over 62 years. About 65.4 percent of the respondents were female. Approximately 26.5 percent were high school graduates (or less); 24.2 percent possessed only two-year college degrees; 32 percent had only four-year college degrees; 13.5 percent possessed only master's degrees; and 3.7 percent held a doctoral degree. About 70.6 percent of the respondents were full-time employees.
Roughly 41.2 percent worked for private employers; 46.1 percent for public employers; and 12.7 percent were self-employed. The average job tenure was about 4.3 years, and the average company tenure was 5.1 years. Roughly 13.5 percent were members of a labor union. The average income was $50,000 per year, ranging from less than $25,000 to over $100,000 per year.

Measures

The outcome variable of interest was shared-leadership expectation (SLX). The central independent variable was the union status of the work unit (i.e. workplace with no union, a weak union, or a strong union). In addition, important control variables were employed in order to rule out competing hypotheses: occupational self-efficacy, occupational level, and tenure with supervisor. Rationales for using these particular control variables are provided below.

Shared-leadership expectation (SLX). This six-item measure assesses the extent to which employees can expect to develop a shared-leadership relationship with their supervisors (Graen et al., 2006; Graen, 2009). Graen et al. (2006) originally developed this measure, borrowing four items from the well-known LMX-7 scale (see Graen and Uhl-Bien, 1995 for an extensive review of LMX-7); the remaining two items correlated strongly with those four LMX-7 items. Their six-item measure of SLX correlated significantly, and in predicted directions, with measures of team fairness, process, and success (Graen et al., 2006). Representative items include "I have an excellent working relationship with my supervisor" and "My supervisor has respect for my capabilities." The Cronbach's alpha internal consistency reliability estimate was 0.88.

Union status. This categorical variable included three groups:

1. work unit with no labor union;
2. work unit with a weak labor union; and
3. work unit with a strong labor union.

Classifications were obtained from a single-item measure: When dealing with company management, your labor union is typically ...
weak (ineffective); powerful (effective); not applicable - there's no labor union at my workplace. About 64.5 percent worked in units with no union; 11.0 percent in units with a weak union; and 24.5 percent in units with a strong union.

Occupational self-efficacy. The six-item scale developed by Rigotti et al. (2008) was used to measure occupational self-efficacy, which Rigotti et al. described as an employee's felt competence with regard to successfully performing job tasks. As part of their assessment of construct validity, Rigotti et al. demonstrated that their measure correlated significantly, and in anticipated directions, with job satisfaction, commitment, performance, and job insecurity in five samples from different countries:

1. Germany;
2. Sweden;
3. Belgium;
4. United Kingdom; and
5. Spain.

Representative items include "I feel prepared for most of the demands of my job" and "When I'm confronted with a problem in my job, I can usually find several solutions." This variable was used as a control because supervisors are more inclined to share power with more competent employees (e.g. Graen, 2009). The Cronbach's alpha reliability estimate was 0.90.

Occupational level. The question "How would you describe your occupation?" permitted six occupational type responses:

1. unskilled or semi-skilled manual worker;
2. generally trained office worker or secretary;
3. vocationally trained craftsperson, technician, IT-specialist, nurse, artist or equivalent;
4. academically trained professional or equivalent (but not a manager of people);
5. manager of one or more subordinates (non-managers); and
6. manager of one or more managers.

This question, along with its response categories, was borrowed from Hofstede (2008). Hofstede and Hofstede (2005) explained that power distance (i.e. "the extent that the less powerful members of institutions and organizations expect and accept that power is distributed unequally," p.
46) varies by occupation within some nations, with blue-collar occupations tending to have employees with higher power distance than white-collar occupations. Hence, controlling for occupation may indirectly help account for cultural differences in tolerance for power imbalances. The six occupational categories are roughly ordered from the most blue-collar to the most white-collar, so we treated occupational level as an ordinal variable.

Tenure with supervisor. One item captured supervisor-subordinate dyad tenure: "How long have you worked with your current supervisor?" Response options were: less than 1 year, between 1 and 3 years, between 3 and 5 years, between 5 and 10 years, between 10 and 15 years, between 15 and 20 years, and more than 20 years. Supervisor-subordinate relationship tenure has been shown to correlate positively with overall relationship quality (e.g. Wayne et al., 1997).

Data analysis techniques

A univariate analysis of variance (ANOVA) model was conducted, with pairwise comparisons assessed via Bonferroni's method. Descriptive statistics and correlations (with Cronbach's alphas) are presented in Tables I and II, respectively. As seen in Table I, the variables in this study possessed significant variation and hence did not appear to be range-restricted. Multicollinearity did not seem to pose a substantial threat to this study: the occupational level variable had only small-to-moderate correlations with the other independent variables. The results of the ANOVA procedure, with SLX as the dependent variable, are summarized in Table III. Based on the sampling procedure employed, it was appropriate to assume that the cases were independent. Levene's test failed to reject the null hypothesis of equality of error variances (F(2, 344)=1.46, p=0.23). Finally, the distribution of residuals appeared normal.
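The Cronbach's alpha reliabilities reported above (0.88 for SLX, 0.90 for occupational self-efficacy) follow the standard variance-based formula. As background, a minimal sketch of that computation; the item responses here are simulated stand-ins, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 6-item Likert responses driven by one shared latent trait,
# for illustration only (N matches the paper's sample size of 347).
rng = np.random.default_rng(42)
trait = rng.normal(3.5, 0.7, size=(347, 1))
responses = np.clip(np.rint(trait + rng.normal(0, 0.5, size=(347, 6))), 1, 5)
print(round(cronbach_alpha(responses), 2))
```

Because the simulated items share a strong common trait, the resulting alpha lands in the high range typical of well-constructed scales; with real survey data one would pass the raw item matrix instead.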
Therefore, the major assumptions required for ANOVA procedures were adequately met in this study. The analysis revealed a significant main effect for the union status variable, F(2, 341)=4.00, p=0.019. The Bonferroni comparisons in Table IV quantify the nature of the relationship between union status and SLX. Employees in non-union units demonstrated significantly higher SLX than those in weak union units (d=0.27, p=0.04), supporting H1. Furthermore, employees in strong union units exhibited significantly higher SLX than those in weak union units (d=0.39, p < 0.01), supporting H2. While those in strong union units seemed to show slightly higher SLX than employees in non-union units, the difference was not statistically significant (d=0.12, p=0.37), providing no support for H3.

Unions perceived as strong produce more empowered constituents relative to unions perceived as weak, as predicted (H2). This finding is consistent with the theory of power-dependence relations because coalition building is a power-balancing operation, and unions are coalitions. In this study, we found that unions vary in their perceived effectiveness in dealing with management, and this variable was significantly linked with the nature of supervisor-subordinate dyads in host organizations. Specifically, employees who belonged to more powerful unions possessed higher shared-leadership expectations with their supervisors, compared to employees who belonged to less powerful unions. Consistent with our prediction (H1), non-union employees also possessed higher shared-leadership expectations in comparison to union workers where the union was perceived as weak.
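The reported analysis pipeline (Levene's homogeneity check, a one-way ANOVA on SLX by union status, and Bonferroni-adjusted pairwise comparisons with Cohen's d effect sizes) can be sketched roughly as follows. The scores and group means below are invented for illustration; only the group proportions mirror the reported 64.5/11.0/24.5 percent split of N=347:

```python
import numpy as np
from itertools import combinations
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                     / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Hypothetical SLX scores per union-status group (means are assumptions).
rng = np.random.default_rng(7)
groups = {
    "no_union":     rng.normal(3.6, 0.8, 224),
    "weak_union":   rng.normal(3.3, 0.8, 38),
    "strong_union": rng.normal(3.7, 0.8, 85),
}

# Homogeneity-of-variance check (Levene) and omnibus one-way ANOVA.
lev_stat, lev_p = stats.levene(*groups.values())
f_stat, f_p = stats.f_oneway(*groups.values())
print(f"Levene p={lev_p:.3f}; ANOVA F={f_stat:.2f}, p={f_p:.4f}")

# Pairwise t-tests with a simple Bonferroni correction and effect sizes.
pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni: multiply by number of pairs
    print(f"{g1} vs {g2}: d={cohens_d(groups[g1], groups[g2]):.2f}, p={p_adj:.3f}")
```

The manual `p * len(pairs)` adjustment is the textbook Bonferroni correction; packages such as statsmodels offer equivalent helpers if more comparison families are involved.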
Non-union employees did not differ significantly in shared-leadership expectations from employees perceiving strong unions, contrary to expectations (H3).

Implications of study

This study suggests that organizational researchers who study, or control for, the impacts of labor unions should routinely consider the "strength" of unions. Most researchers who sample workers via surveys typically ask respondents only whether they belong to a union, but these data alone are insufficient because not all unions are equal; a next step, therefore, would be to ask respondents how powerful they think their unions are. For example, one could use the one-item variable employed in this study (i.e. the union's perceived effectiveness in dealing with management). The construct is similar to the "union instrumentality" construct often used in studies of non-union workers' voting intentions in union representation elections (e.g. Kochan, 1979), but it is rarely assessed in studies using union member subjects. In short, unions are power-balancing tools, so their power levels should be customarily assessed.

Another implication of the study is that unionized employees may become more interested in helping to make their unions stronger if they understood the effects that stronger unions can have on their relationships with direct supervisors. Employees should be informed about the importance of union strength so that they avoid lumping all unions together. For instance, in terms of shared leadership, if one lumps all unions together, then the stronger and weaker unions will "average out" to some level below non-union workplaces; but, as shown here, stronger unions can perform just as well as, or better than, non-union workplaces.

A noteworthy societal implication of this study is that belonging to an effective coalition may correspond with improved relations with important others outside, but related to, the coalition.
Hence, groups who desire to improve their standing in any larger social structure may organize themselves into effective coalitions. Unions are simply one kind of coalition and, as we have seen, not all unions are effective. Perhaps workers can (and do) form other kinds of coalitions (formal or informal) to achieve their ends; this could be another fruitful area of future research. For example, in the US Postal Service, the National Alliance of Postal and Federal Employees has, since 1913, championed non-discrimination for postal employees, but it holds limited bargaining rights. More generally, it has been suggested that no workplace is ever truly unorganized (Dunlop, 1958, pp. 7-8):

The hierarchy of workers does not necessarily imply formal organizations; they may be said to be "unorganized" in popular usage, but the fact is, that wherever they work together for any considerable period, at least an informal organization comes to be formulated among the workers with norms of conduct and attitudes toward the hierarchy of managers. In this sense, workers in a continuing enterprise are never unorganized.

As Freeman and Medoff (1984, p. 19) noted in their comprehensive study of union effects, "Our most far-reaching conclusion is that, in addition to well advertised effects on wages, unions alter nearly every other measurable aspect of the operations of workplaces ...". Our study extends past research not only by examining possible effects on individual perceptions of shared-leadership expectations, but also by documenting how relations between union status and outcomes may vary with union strength. At a more macro level, recent research showed that declining union membership density explains from one-fifth to one-third of growing wage inequality over the period 1973-2007 (Western and Rosenfeld, 2011).
In a sense, our results illustrate one of many possible micro-level outcomes that parallel this macro-level result.

Strengths and limitations of study

A notable strength of the study was the use of a large national sample of diverse respondents (e.g. diverse in race/ethnicity, age, occupation, and gender), which makes the results more generalizable. Also, the inclusion of important control variables (i.e. occupational self-efficacy, occupational level, and tenure with supervisor) helped rule out many obvious competing hypotheses.

There were some noteworthy limitations of the study. One limitation was the cross-sectional design, which precluded firm conclusions about the direction of causality; it might have been the case that strong unions were caused by more empowered constituents. However, our hypotheses were grounded in theory, so we had good reasons to believe that strong unions led to empowered constituents.

Another limitation was the use of self-reports for all measures, which exposed the study to common method bias. However, the self-report survey was the most practical data collection procedure for assessing the large national sample; the constructs possessed adequate validity; and multicollinearity was minimal. Therefore, the benefits of using self-report questionnaires probably outweighed the risks.

Future research directions

An obvious direction for future research would be to develop a more comprehensive union strength measure. Perhaps a scale with several key dimensions could be developed; better yet, a global union strength scale with just a few reliable items. A shorter scale would be most useful when including union strength as a control variable, or as part of a larger survey.

Another promising area of research is longitudinal research designs.
For instance, researchers could measure important outcome variables, such as shared-leadership expectations, in work units before and after union certification election wins. This type of study might reveal the actual changes in the outcomes due to the union and its characteristics (i.e. union strength). It has been noted that union effects arise gradually in newly-unionized workplaces (Freeman and Kleiner, 1990), and that "voice effects" (such as empowering employees through shared leadership), rather than economic effects, appear to emerge first. These points add further motivation for longitudinal study. Of course, countless other outcome variables besides shared-leadership expectations (e.g. performance measures) could be investigated.

Fundamentally, labor unions are power-balancing tools; hence, they should be measured as such. Perhaps it is intuitive that stronger unions empower their constituents. However, it is less obvious how union strength relates to supervisor-subordinate dyads. In this study, we found that employees associated with stronger unions had improved power relations with their direct supervisors. Union strength is connected to important employee outcomes, such as shared-leadership expectations; thus, it might behoove researchers, organizations, employees, and unions to more routinely consider union strength as a key union attribute.

Table I Descriptive statistics for union strength study
Table II Pearson correlation matrix for union strength study
Table III ANOVA table for SLX
Table IV Bonferroni comparisons for SLX
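The short-scale idea above can be made concrete with Cronbach's alpha, the internal-consistency statistic reported for the scales in this study: any brief union-strength measure would need an acceptable alpha. Below is a minimal, stdlib-only Python sketch on simulated data; the three-item scale, the latent-signal model, and all numeric parameters are hypothetical illustrations, not the study's instrument or data.

```python
import random

random.seed(42)

def variance(xs):
    """Sample variance (ddof=1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """Cronbach's alpha for rows of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])
    item_vars = [variance([row[i] for row in rows]) for i in range(k)]
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical three-item short union-strength scale: each respondent's item
# scores share a latent "strength" signal plus item-specific noise.
rows = []
for _ in range(300):
    latent = random.gauss(0, 1)
    rows.append([latent + random.gauss(0, 0.7) for _ in range(3)])

alpha = cronbach_alpha(rows)
```

By convention, alpha values above roughly 0.7 are considered acceptable; the 0.88 and 0.90 reported in this study for SLX and occupational self-efficacy are comfortably above that threshold.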
|
- Working adults across the USA were sampled (n=347), through the use of a survey software company that makes survey panels commercially available. Respondents were racially/ethnically diverse, with a mean age of about 41 years (range of 18 to over 62 years), and more females than males (about 65 percent female). Also, about 13.5 percent were members of a labor union.
|
[SECTION: Findings] The "equalization of bargaining power" (Kaufman, 1993, p. 5) between relatively powerless employees and increasingly large corporate concentrations of employer power was a fundamental goal of early twentieth-century reformers and theorists addressing "the labor problem" - the numerous abuses of employees, such as child labor, low wages, long hours, and inhumane treatment, and the broader disruptions of employees' lives in the rapidly industrializing American workplace of that time. The goal of balancing employee and employer power was officially enshrined in US federal law with the Wagner Act (or National Labor Relations Act [NLRA]) in 1935. Congress expressly cited bargaining power imbalances as a major policy concern, a perceived cause of strife, depressed wages, and commercial disruptions (US Code, Title 29, Ch. 7, or Wagner Act, Section 1):

The inequality of bargaining power between employees who do not possess full freedom of association or actual liberty of contract and employers who are organized in the corporate or other forms of ownership association substantially burdens and affects the flow of commerce, and tends to aggravate recurrent business depressions.

The Act went on to prescribe "restoring the equality of bargaining power between employees and employers" (US Code, Title 29, Ch. 7, or Wagner Act, Section 1), and to specify principles and mechanisms to implement that goal. Traditionally, the focus of employment relations (and its traditional namesake, "industrial relations") in its study of power was the formal bargaining or negotiating level - the organizational level at which employers' and employees' formal representatives negotiated wages, hours, and other terms of employment.
Roughly 25 years ago, however, theorists began to recognize that this focus tended to encourage neglect of other important levels of employer-employee interaction, including the public policy or societal level, the strategic level, and, most importantly for immediate purposes, "workplace and individual/organization relationships" (Kochan et al., 1986, pp. 16-20). This latter "micro" level remains an area of relative neglect despite some progress. This study aims to address one part of this neglect by proposing and testing hypotheses on the relationship between perceived union strength, a micro- or workplace-level analog of union bargaining power, and perceptions of shared leader-member expectations, using supervisor-subordinate dyads as the unit of analysis.

Theory of power-dependence relations and coalition formation

The theory of power-dependence relations (Emerson, 1962) posits that power and dependence are equivalent: entity A depending on entity B is equivalent to B having power over A. The theory also suggests that, when power relations are imbalanced, people seek to bring those relations into balance via balancing operations. One such balancing operation is coalition formation. Coalitions are, essentially, individuals organized into a single collective entity. Coalitions increase members' power via processes such as group identification and internalization of collective demands (Emerson, 1962, p. 38). Role prescriptions and group norms demand that members behave in ways that benefit the group. Labor unions, which are coalitions, can provide a source of power for their constituents.

Of course, coalitions vary in their overall effectiveness; some labor unions are stronger than others. The more effective a coalition, the more empowering it will be for its constituents. Hence, a crucial variable for study is union strength.
Therefore, simply determining whether there is union representation is necessary, but it is insufficient for a complete understanding of the association between unions and employee empowerment.

A key relationship at work is that between employees and their direct supervisors (e.g. Ferris et al., 2009; Ferris et al., 2008). Some organizational researchers have argued that this relationship typically represents employees' conceptions of their relationships with their entire employing organization (e.g. Shore and Tetrick, 1994; Shore et al., 2004). For this reason, we specifically investigate how union strength corresponds with power relations in the supervisor-subordinate relationship. Shared-leadership expectation (SLX) is a measure of employees' expectations of sharing power with their supervisors (e.g. Graen, 2009; Pearce and Conger, 2003).

Pre-union, post-union, and non-union power relation contexts

It is important to distinguish between pre-union, post-union, and non-union power relation contexts. If any particular work unit is represented by a labor union, then typically there must have been a pre-union context that led to that unionization[1], and a post-union context that informs about the overall effectiveness of that union. Work units without unions, on the other hand, probably have different power relation contexts. Of course, some non-union work units possess power relation contexts similar to pre-union contexts, but on the whole they will likely differ (i.e. employees probably find them less intolerable, or more satisfactory).

The fact that any particular work unit formed a union suggests that the pre-union environment was probably intolerably imbalanced. Union formations are coalition formations intended to balance power between labor and management groups (e.g. Kaufman, 1993). Therefore, we assume that intolerable power imbalances between labor and management groups pre-existed any union formation.
Correspondingly, and on the whole, non-union units are probably more balanced than pre-union units, or at least the employees are likely more tolerant of any such imbalances. The effectiveness or ineffectiveness of the union determines whether the post-union context has improved. A weak union implies that the union has not achieved improved balance; work units with weak unions are therefore essentially equivalent to pre-union work units (i.e. intolerably imbalanced). On the other hand, work units with strong unions imply that the unions have achieved improved balance. Last, we argue that strong unions likely create more balance than non-union units because they likely foster more employee power than management would typically offer without unions (e.g. Kaufman, 1993). Therefore, we formulate the following set of hypotheses:

H1. Non-union employees tend to have higher SLX than constituents of weak unions.
H2. Constituents of strong unions tend to have higher SLX than constituents of weak unions.
H3. Constituents of strong unions tend to have higher SLX than non-union employees.

Data

The sample consisted of working adults across the United States (N=347). We purchased responses from a survey software company that makes survey panels commercially available. The survey company compensated all participants who completed our online survey. Respondents were racially/ethnically diverse: 25.4 percent White, 24.8 percent Black, 22.8 percent Hispanic, 23.1 percent Asian, and 4 percent other. The mean age was approximately 40.9 years, with a range from 18 to over 62 years. About 65.4 percent of the respondents were female. Approximately 26.5 percent were high school graduates (or less); 24.2 percent possessed only two-year college degrees; 32 percent had only four-year college degrees; 13.5 percent possessed only master's degrees; and 3.7 percent held a doctoral degree. About 70.6 percent of the respondents were full-time employees.
Roughly 41.2 percent worked for private employers; 46.1 percent for public employers; and 12.7 percent were self-employed. The average job tenure was about 4.3 years, and the average company tenure was 5.1 years. Roughly 13.5 percent were members of a labor union. The average income was $50,000 per year, ranging from less than $25,000 to over $100,000 per year.

Measures

The outcome variable of interest was shared-leadership expectation (SLX). The central independent variable was the union status of the work unit (i.e. workplace with no union, a weak union, or a strong union). In addition, important control variables were employed in order to rule out competing hypotheses. We controlled for occupational self-efficacy, occupational level, and tenure with supervisor. Rationales for using these particular control variables are provided below.

Shared-leadership expectation (SLX). This six-item measure assesses the extent to which employees can expect to develop a shared-leadership relationship with their supervisors (Graen et al., 2006; Graen, 2009). Graen et al. (2006) originally developed this measure and borrowed four items from the well-known LMX-7 scale (see Graen and Uhl-Bien, 1995 for an extensive review of LMX-7); the remaining two items correlated strongly with those four LMX-7 items. Their six-item measure of SLX correlated significantly, and in predicted directions, with measures of team fairness, process, and success (Graen et al., 2006). Representative items include "I have an excellent working relationship with my supervisor" and "My supervisor has respect for my capabilities." The Cronbach's alpha internal consistency reliability estimate was 0.88.

Union status. This categorical variable included three groups:
1. work unit with no labor union;
2. work unit with a weak labor union; and
3. work unit with a strong labor union.
Classifications were obtained from a single-item measure: When dealing with company management, your labor union is typically ...
weak (ineffective); powerful (effective); not applicable - there's no labor union at my workplace. About 64.5 percent worked in units with no union; 11.0 percent in units with a weak union; and 24.5 percent in units with strong unions.

Occupational self-efficacy. The six-item scale developed by Rigotti et al. (2008) was used to measure occupational self-efficacy. Rigotti et al. described occupational self-efficacy as an employee's felt competence with regard to successfully performing job tasks. As part of their assessment of construct validity, Rigotti et al. demonstrated that their measure correlated significantly, and in anticipated directions, with job satisfaction, commitment, performance, and job insecurity in five samples from different countries: Germany, Sweden, Belgium, the United Kingdom, and Spain. Representative items include "I feel prepared for most of the demands of my job" and "When I'm confronted with a problem in my job, I can usually find several solutions." This variable was used as a control because supervisors are more inclined to share power with more competent employees (e.g. Graen, 2009). The Cronbach's alpha reliability estimate was 0.90.

Occupational level. The question "How would you describe your occupation?" permitted six occupational type responses:
1. unskilled or semi-skilled manual worker;
2. generally trained office worker or secretary;
3. vocationally trained craftsperson, technician, IT-specialist, nurse, artist or equivalent;
4. academically trained professional or equivalent (but not a manager of people);
5. manager of one or more subordinates (non-managers); and
6. manager of one or more managers.
This question, along with its response categories, was borrowed from Hofstede (2008). Hofstede and Hofstede (2005) explained that power distance (i.e. "the extent that the less powerful members of institutions and organizations expect and accept that power is distributed unequally," p. 46) varies by occupation within some nations, with blue-collar occupations tending to have employees with higher power distance than white-collar occupations. Hence, controlling for occupation may indirectly help account for cultural differences in tolerance for power imbalances. The six occupational categories are essentially ordered from the most blue-collar to the most white-collar, so we treated occupational level as an ordinal variable.

Tenure with supervisor. One item captured supervisor-subordinate dyad tenure: "How long have you worked with your current supervisor?" Response options were: less than 1 year, between 1 and 3 years, between 3 and 5 years, between 5 and 10 years, between 10 and 15 years, between 15 and 20 years, and more than 20 years. Supervisor-subordinate relationship tenure has been shown to correlate positively with overall relationship quality (e.g. Wayne et al., 1997).

Data analyses techniques

A univariate analysis of variance (ANOVA) model was conducted, with pair-wise comparisons assessed via Bonferroni's method. Descriptive statistics and correlations (with Cronbach's alphas) are presented in Tables I and II, respectively. As seen in Table I, the variables in this study possessed significant variation and thus did not appear to be range-restricted. Multicollinearity did not seem to pose a substantial threat to this study: the occupational level variable had only small-to-moderate correlations with the other independent variables.

The results of the ANOVA procedure, with SLX as the dependent variable, are summarized in Table III. Based on the sampling procedure employed, it was appropriate to assume that the cases were independent. Levene's test failed to reject the null hypothesis of equality of error variances (F(2, 344)=1.46, p=0.23). Finally, the distribution of residuals appeared normal.
Therefore, the major assumptions required for ANOVA procedures were adequately met in this study. The analysis revealed a significant main effect for the union status variable, F(2, 341)=4.00, p=0.019. Bonferroni comparisons, in Table IV, quantify the nature of the relationship between union status and SLX. Employees in non-union units demonstrated significantly higher SLX than those in weak-union units (d=0.27, p=0.04), providing support for H1. Furthermore, employees in strong-union units exhibited significantly higher SLX than those in weak-union units (d=0.39, p < 0.01), providing support for H2. While those in strong-union units seemed to show slightly higher SLX than employees in non-union units, the difference was not statistically significant (d=0.12, p=0.37), providing no support for H3.

Unions perceived as strong produce more empowered constituents relative to unions perceived as weak, as predicted (H2). This finding is consistent with the theory of power-dependence relations: coalition building is a power-balancing operation, and unions are coalitions. In this study, we found that unions vary in their perceived effectiveness in dealing with management, and this variable was significantly linked with the nature of supervisor-subordinate dyads in host organizations. Specifically, employees who belonged to more powerful unions possessed higher shared-leadership expectations with their supervisors than employees who belonged to less powerful unions. Consistent with our prediction (H1), non-union employees also possessed higher shared-leadership expectations than union workers where the union was perceived as weak.
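The analysis just described (a one-way ANOVA of SLX across the three union-status groups, followed by pairwise comparisons with effect sizes) can be sketched in a few lines of stdlib Python. This is an illustration on simulated scores, not the study's data: the group sizes mirror the reported proportions of n=347, but the means and standard deviations are invented, and the sketch omits the study's covariates, which is why its error degrees of freedom are 344 rather than the reported 341.

```python
import math
import random

random.seed(0)

def simulate(mean, sd, size):
    """Simulated SLX scores for one group (illustrative parameters only)."""
    return [random.gauss(mean, sd) for _ in range(size)]

# Group sizes follow the reported percentages (64.5% / 11.0% / 24.5% of 347).
groups = {
    "non_union": simulate(3.6, 0.8, 224),
    "weak_union": simulate(3.2, 0.8, 38),
    "strong_union": simulate(3.7, 0.8, 85),
}

def one_way_anova_f(samples):
    """F = MS_between / MS_within for a list of group score lists."""
    k = len(samples)
    n = sum(len(g) for g in samples)
    grand_mean = sum(sum(g) for g in samples) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in samples)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in samples)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

f_stat, df1, df2 = one_way_anova_f(list(groups.values()))
d_strong_vs_weak = cohens_d(groups["strong_union"], groups["weak_union"])

# Bonferroni correction for the three pairwise tests: compare each raw
# p-value against alpha / 3 (or, equivalently, multiply raw p by 3).
bonferroni_alpha = 0.05 / 3
```

A design note: Bonferroni's method controls the family-wise error rate across the three pairwise comparisons at the cost of power, which is a conservative but standard companion to a significant ANOVA main effect.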
|
- Employees who belonged to more powerful unions (i.e. compared to employees who belonged to less powerful unions) demonstrated increased shared-leadership expectations with their supervisors. In support of Hypothesis 1, non-union employees also possessed increased shared leadership expectations in comparison to union workers where the union was perceived as weak. As proposed in Hypothesis 2, unions perceived as strong produced more empowered constituents relative to unions perceived as weak. Finally, non-union employees did not appear to differ in shared-leadership expectations from employees perceiving strong unions, contrary to Hypothesis 3.
|
[SECTION: Value] The "equalization of bargaining power" (Kaufman, 1993, p. 5) between relatively powerless employees and increasingly large and corporate concentrations of employer prowess was a fundamental goal of early twentieth century reformers and theorists addressing "the labor problem" - the numerous abuses of employees such as child labor, low wages, long hours, and inhumane treatment, and broader disruptions of employees' lives in the rapidly industrializing American workplace of that time. The goal of balancing employee and employer power was officially enshrined in US federal law with the Wagner Act (or National Labor Relations Act [NLRA]) in 1935.Congress expressly cited bargaining power imbalances as a major policy concern, a perceived cause of strife, depressed wages, and commercial disruptions (US Code, Title 29, Ch. 7, or Wagner Act, Section 1):The inequality of bargaining power between employees who do not possess full freedom of association or actual liberty of contract and employers who are organized in the corporate or other forms of ownership association substantially burdens and affects the flow of commerce, and tends to aggravate recurrent business depressions.The Act went on to prescribe "restoring the equality of bargaining power between employees and employers" (US Code, Title 29, Ch. 7, or Wagner Act, Section 1), and to specify principles and mechanism to implement that goal.Traditionally, the focus of employment relations (and its traditional namesake, "industrial relations") and its study of power was the formal bargaining or negotiating level, at the organizational level where the employer's and employee's formal representatives negotiated wages, hours, and other terms of employment. 
Roughly 25 years ago, however, theorists began to recognize that this focus tended to encourage neglect of other important levels of employer-employee interaction, including public policy or societal, strategic, and most importantly for immediate purposes, "workplace and individual/organization relationships" (Kochan et al., 1986, pp. 16-20). This latter "micro" level remains an area of relative neglect despite some progress. This study aims to address one part of this neglect by proposing and testing hypotheses on the relationship between perceived union strength, a micro- or workplace-level analog of union bargaining power, and perceptions of shared leader-member expectations using supervisor-subordinate dyads as a unit of analysis. Theory of power-dependence relations and coalition formation The theory of power-dependence relations (Emerson, 1962) posits that power and dependence are equivalent. In other words, entity A depending on entity B is equivalent to B having power over A. The theory also suggests that, when power relations are imbalanced, people seek to bring those relations into balance via balancing operations. One such balancing operation is coalition formation. Coalitions are basically organized individuals that form a single collective entity. Coalitions increase member power levels via processes such as group identification and internalization of collective demands (Emerson, 1962, p. 38). Role prescriptions and group norms demand that members behave in ways that benefit the group. Labor unions, which are coalitions, can provide a source of power for their constituents.Of course, coalitions will vary in their overall effectiveness. For example, some labor unions are stronger than others. The more effective a coalition, the more empowering it will be for its constituents. Hence, a crucial variable for study is union strength. 
Therefore, simply determining whether there is union representation is necessary, but it is insufficient for a complete understanding of the association between unions and employee empowerment.A key relationship at work is between employees and their direct supervisors (e.g. Ferris et al., 2009; Ferris et al., 2008). Some organizational researchers have argued that this relationship normally represents employee conceptions about their relationships with their entire employing organization (e.g. Shore and Tetrick, 1994; Shore et al., 2004). For this reason, we wish to specifically investigate how union strength corresponds with power relations in the supervisor-subordinate relationship. Shared-leadership expectation (SLX) is a measure of employees' expectations of sharing power with their supervisors (e.g. Graen, 2009; Pearce and Conger, 2003).Pre-union, post-union, and non-union power relation contexts It is important to distinguish between pre-union, post-union, and non-union power relation contexts. If any particular work unit is represented by a labor union, then typically there must have been a pre-union context that led to that unionization[1], and a post-union context that informs about the overall effectiveness of that union. On the other hand, work units without unions probably have power relation contexts that are different. Of course, there are non-union work units that possess power relation contexts similar to pre-union contexts, but on the whole they will likely differ (i.e. employees probably find them generally less intolerable or more satisfactory).The fact that any particular work unit formed a union suggests that the pre-union environment was probably intolerably imbalanced. Union formations are coalition formations intended to balance power between labor and management groups (e.g. Kaufman, 1993). Therefore, we assume that intolerable power imbalances between labor and management groups pre-existed any union formation. 
Correspondingly, and on the whole, non-union units are probably more balanced than pre-union units, or at least, the employees are likely more tolerant of any such imbalances. Whether the post-union context has improved depends on the effectiveness of the union. A weak union implies that the union has not achieved improved balance. Therefore, work units with weak unions are essentially equivalent to pre-union work units (i.e. intolerably imbalanced). On the other hand, work units with strong unions imply that the unions have achieved improved balance. Last, we argue that strong unions likely create more balance than non-union units because they likely foster more employee power than management typically would offer without unions (e.g. Kaufman, 1993). Therefore, we formulate the following set of hypotheses: H1. Non-union employees tend to have higher SLX than constituents of weak unions. H2. Constituents of strong unions tend to have higher SLX than constituents of weak unions. H3. Constituents of strong unions tend to have higher SLX than non-union employees. Data The sample consisted of working adults across the United States (N=347). We purchased responses from a survey software company that makes survey panels commercially available. The survey company compensated all participants who completed our online survey. Respondents were racially/ethnically diverse: 25.4 percent White, 24.8 percent Black, 22.8 percent Hispanic, 23.1 percent Asian, and 4 percent other. The mean age was approximately 40.9 years; ages ranged from 18 to over 62 years. About 65.4 percent of the respondents were female. Approximately 26.5 percent were High School graduates (or less); 24.2 percent possessed only 2-year college degrees; 32 percent had only four-year college degrees; 13.5 percent possessed only master's degrees; and 3.7 percent held a doctoral degree. About 70.6 percent of the respondents were full-time employees.
Roughly 41.2 percent worked for private employers; 46.1 percent for public employers; and 12.7 percent were self-employed. The average job tenure was about 4.3 years. The average company tenure was 5.1 years. Roughly 13.5 percent were members of a labor union. The average income was $50,000 per year, and ranged from less than $25,000 to over $100,000 per year. Measures The outcome variable of interest was shared-leadership expectation (SLX). The central independent variable was the union status of the work unit (i.e. workplace with no union, a weak union, or a strong union). In addition, important control variables were employed in order to rule out competing hypotheses. We controlled for occupational self-efficacy, occupational level, and tenure with supervisor. Rationales for using these particular control variables are provided below. Shared-leadership expectation (SLX). This six-item measure assesses the extent to which employees can expect to develop a shared-leadership relationship with their supervisors (Graen et al., 2006; Graen, 2009). Graen et al. (2006) originally developed this measure and borrowed four items from the well-known LMX-7 scale (e.g. see Graen and Uhl-Bien, 1995 for an extensive review of LMX-7); the remaining two items correlated strongly with those four LMX-7 items. Their six-item measure of SLX correlated significantly, and in predicted directions, with measures of team fairness, process, and success (Graen et al., 2006). Representative items include "I have an excellent working relationship with my supervisor" and "My supervisor has respect for my capabilities." The Cronbach's alpha internal consistency reliability estimate was 0.88. Union status. This categorical variable included three groups: 1. work unit with no labor union; 2. work unit with a weak labor union; and 3. work unit with a strong labor union. Classifications were obtained from a single-item measure: When dealing with company management, your labor union is typically ...
weak (ineffective); powerful (effective); not applicable - there's no labor union at my workplace. About 64.5 percent worked in units with no union; 11.0 percent in a unit with a weak union; and 24.5 percent worked in units with strong unions. Occupational self-efficacy. The six-item scale developed by Rigotti et al. (2008) was used to measure occupational self-efficacy. Rigotti et al. described occupational self-efficacy as an employee's felt competence with regard to successfully performing job tasks. As part of their assessment of construct validity, Rigotti et al. demonstrated that their measure correlated significantly, and in anticipated directions, with job satisfaction, commitment, performance, and job insecurity - in five samples from different countries: 1. Germany; 2. Sweden; 3. Belgium; 4. United Kingdom; and 5. Spain. Representative items include "I feel prepared for most of the demands of my job" and "When I'm confronted with a problem in my job, I can usually find several solutions." This variable was used as a control because supervisors are more inclined to share power with more competent employees (e.g. Graen, 2009). The Cronbach's alpha reliability estimate was 0.90. Occupational level. The question "How would you describe your occupation?" permitted six occupational type responses: 1. unskilled or semi-skilled manual worker; 2. generally trained office worker or secretary; 3. vocationally trained craftsperson, technician, IT-specialist, nurse, artist or equivalent; 4. academically trained professional or equivalent (but not a manager of people); 5. manager of one or more subordinates (non-managers); and 6. manager of one or more managers. This question, along with its response categories, was borrowed from Hofstede (2008). Hofstede and Hofstede (2005) explained that power distance (i.e. "the extent that the less powerful members of institutions and organizations expect and accept that power is distributed unequally," p.
46) varies by occupation within some nations, where blue-collar occupations tend to have employees with higher power distance than white-collar occupations. Hence, controlling for occupation may indirectly help account for cultural differences with regard to tolerances for power imbalances. The six occupational categories are basically ordered from the most blue-collar to the most white-collar, so we treated it as an ordinal variable. Tenure with supervisor. One item captured supervisor-subordinate dyad tenure: "How long have you worked with your current supervisor?" Response options were: less than 1 year, between 1 and 3 years, between 3 and 5 years, between 5 and 10 years, between 10 and 15 years, between 15 and 20 years, and more than 20 years. Supervisor-subordinate relationship tenure has been shown to correlate positively with overall relationship quality (e.g. Wayne et al., 1997). Data analysis techniques A univariate analysis of variance (ANOVA) model was conducted. Pair-wise comparisons were assessed via Bonferroni's method. Descriptive statistics and correlations (and Cronbach's alphas) are presented in Tables I and II, respectively. As seen in Table I, the variables in this study possessed substantial variation and, hence, did not appear to be range-restricted. Multicollinearity did not seem to pose a substantial threat to this study - the occupational level variable had only small-to-moderate correlations with the other independent variables. The results of the ANOVA procedure, with SLX as the dependent variable, are summarized in Table III. Based on the sampling procedure employed, it was appropriate to assume that the cases were independent. Levene's test failed to reject the null hypothesis that there was equality of error variances (F(2, 344)=1.46, p=0.23). Finally, the distribution of residuals appeared normal.
Therefore, the major assumptions required for ANOVA procedures were adequately met in this study. The analysis revealed a significant main effect for the union status variable, F(2, 341)=4.00, p=0.019. Bonferroni comparisons, in Table IV, quantify the nature of the relationship between union status and SLX. Employees in non-union units demonstrated significantly higher SLX than those in weak union units (d=0.27, p=0.04), providing support for H1. Furthermore, employees in strong union units exhibited significantly higher SLX than those in weak union units (d=0.39, p < 0.01), providing support for H2. While those in strong union units seemed to show slightly higher SLX than employees in non-union units, the difference was not statistically significant (d=0.12, p=0.37), providing no support for H3. Unions perceived as strong produce more empowered constituents relative to unions perceived as weak, as predicted (H2). This finding is consistent with the theory of power-dependence relations because coalition building is a power balancing operation, and unions are coalitions. In this study, we found that unions vary in their perceived effectiveness in dealing with management, and this variable was significantly linked with the nature of supervisor-subordinate dyads in host organizations. Specifically, employees who belonged to more powerful unions possessed increased shared-leadership expectations with their supervisors, compared to employees who belonged to less powerful unions. Consistent with our prediction (H1), non-union employees also possessed increased shared leadership expectations in comparison to union workers where the union was perceived as weak.
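The analysis pipeline reported above (Levene's test, a one-way ANOVA on SLX by union status, and Bonferroni-corrected pairwise comparisons with Cohen's d) can be sketched in a few lines. The data below are synthetic: group sizes follow the reported percentages of N=347, but the means and standard deviations are invented for illustration, not the authors' data.

```python
# Illustrative re-creation of the reported analysis on synthetic data.
# Group sizes mirror the reported shares of N=347; means/SDs are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical SLX scale scores for the three union-status groups.
groups = {
    "non_union":    rng.normal(3.6, 0.7, 224),  # ~64.5 percent
    "weak_union":   rng.normal(3.3, 0.7, 38),   # ~11.0 percent
    "strong_union": rng.normal(3.7, 0.7, 85),   # ~24.5 percent
}

# Levene's test for homogeneity of error variances.
lev_F, lev_p = stats.levene(*groups.values())

# One-way ANOVA for a main effect of union status.
F, p = stats.f_oneway(*groups.values())

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Pairwise comparisons with a Bonferroni correction (three comparisons).
pairs = [("non_union", "weak_union"), ("strong_union", "weak_union"),
         ("strong_union", "non_union")]
for g1, g2 in pairs:
    t, p_raw = stats.ttest_ind(groups[g1], groups[g2])
    p_adj = min(1.0, p_raw * len(pairs))  # Bonferroni-adjusted p-value
    print(f"{g1} vs {g2}: d={cohens_d(groups[g1], groups[g2]):.2f}, "
          f"Bonferroni p={p_adj:.3f}")
```

Note that the study ran the ANOVA with covariates (self-efficacy, occupational level, tenure with supervisor), which this bare sketch omits; a full replication would use a general linear model.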
Non-union employees did not differ significantly in shared-leadership expectations from employees perceiving strong unions, contrary to expectations (H3). Implications of study This study suggests that organizational researchers who study or control for the impacts of labor unions should routinely consider the "strength" of unions. Most researchers who sample workers via surveys typically ask respondents if they belong to a union or not, but these data alone are insufficient because not all unions are equal; therefore, a next step would be to ask respondents how powerful they think their unions are. For example, one could use the one-item variable employed in this study (i.e. union's perceived effectiveness in dealing with management). The construct is similar to that of "union instrumentality" that is often used in studies of non-union workers' voting intentions in union representation elections (e.g. Kochan, 1979), but it is rarely assessed in studies using union member subjects. In short, unions are power-balancing tools, so their power levels should be customarily assessed. Another implication of the study is that unionized employees may become more interested in helping to make their unions stronger if they understood the connection between union strength and their relationships with direct supervisors. Employees should be informed about the importance of union strength so that they avoid lumping all unions together. For instance, in terms of shared leadership, if one lumps all unions together, then the stronger and weaker unions will "average out" to some level below non-union workplaces. But we now know that, in terms of shared leadership, stronger unions can perform just as well as or better than non-union workplaces. A noteworthy societal implication of this study is that belonging to an effective coalition may correspond with improved relations with important others outside, but related to, the coalition.
Hence, groups who desire to improve their standing in any larger social structure may organize themselves into effective coalitions. Unions are simply one kind of coalition, but, as we have seen, not all unions are effective. Perhaps workers can (and do) form other kinds of coalitions (formal or informal) to achieve their ends, and this could be another fruitful area of future research. For example, in the US Postal Service the National Alliance of Postal and Federal Employees has, since 1913, championed non-discrimination for postal employees, but it holds limited bargaining rights. More generally, it has been suggested that no workplace is ever truly unorganized (Dunlop, 1958, pp. 7-8): The hierarchy of workers does not necessarily imply formal organizations; they may be said to be "unorganized" in popular usage, but the fact is, that wherever they work together for any considerable period, at least an informal organization comes to be formulated among the workers with norms of conduct and attitudes toward the hierarchy of managers. In this sense, workers in a continuing enterprise are never unorganized. As Freeman and Medoff (1984, p. 19) noted in their comprehensive study of union effects, "Our most far-reaching conclusion is that, in addition to well advertised effects on wages, unions alter nearly every other measurable aspect of the operations of workplaces ... ". Our study extends past research by examining possible effects on individual perceptions of shared leadership expectations, but also by documenting how relations between union status and outcomes may vary with union strength. At a more macro-level, recent research showed that declining union membership density explains from one-fifth to one-third of growing wage inequality over the period from 1973 to 2007 (Western and Rosenfeld, 2011).
In a sense, our results illustrate one of many possible micro-level outcomes that parallel this macro-level result. Strengths and limitations of study A notable strength of the study was the use of a large national sample composed of diverse respondents (e.g. diverse in race/ethnicity, age, occupation, gender, etc.). This sample enabled the results to be more generalizable. Also, the inclusion of important control variables (i.e. occupational self-efficacy, occupational level, and tenure with supervisor) enabled the elimination of many obvious competing hypotheses. There were some noteworthy limitations of the study. One limitation was that the study utilized a cross-sectional design, which made it impossible to draw firm conclusions about directionality. In other words, it might have been the case that strong unions were caused by more empowered constituents. However, our hypotheses were grounded in theory, so we had good reasons to believe that strong unions led to empowered constituents. Another limitation was the use of self-reports for all measures. This put our study at risk of common method bias. However, the self-report survey was the most practical data collection procedure for assessing the large national sample; the constructs possessed adequate validity; and multicollinearity was minimal. Therefore, the benefits of using the self-report questionnaires probably outweighed the risks. Future research directions An obvious direction for future research would be to develop a more comprehensive union strength measure. Perhaps a scale with several key dimensions could be developed. Better yet, a global union strength scale, with just a few reliable items, could be developed. A shorter scale would be most useful when including union strength as a control variable, or as part of a larger survey. Another promising area for research involves longitudinal designs.
For instance, researchers could measure important outcome variables, such as shared-leadership expectations, in work units before and after union certification election wins. This type of study might reveal the actual changes in the outcomes due to the union and its characteristics (i.e. union strength). It has been noted that union effects arise gradually in newly-unionized workplaces (Freeman and Kleiner, 1990), and that "voice effects" (such as empowering employees through shared leadership), rather than economic effects, appear to emerge first. These points add further motivation for longitudinal study. Of course, countless other outcome variables, besides shared-leadership expectations (e.g. performance measures), could be investigated. Fundamentally, labor unions are power-balancing tools, hence, they should be measured as such. Perhaps it is intuitive that stronger unions empower their constituents. However, it is less obvious how union strength relates to supervisor-subordinate dyads. In this study, we found that those employees associated with stronger unions had improved power relations with their direct supervisors. Union strength is connected to important employee outcomes, such as shared-leadership expectations; thus, it might behoove researchers, organizations, employees, and unions to more routinely consider union strength as a key union attribute. Table I Descriptive statistics for union strength study. Table II Pearson correlation matrix for union strength study. Table III ANOVA table for SLX. Table IV Bonferroni comparisons for SLX.
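The internal-consistency figures quoted for the study's scales (Cronbach's alpha of 0.88 for SLX and 0.90 for occupational self-efficacy) follow the standard alpha formula, alpha = (k/(k-1))(1 - sum of item variances / variance of the total score). A minimal sketch, run on fabricated six-item Likert-style responses rather than the authors' data:

```python
# Minimal Cronbach's alpha sketch on fabricated item responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Six correlated 1-5 items: a shared latent factor plus item-specific noise.
latent = rng.normal(0, 1, (347, 1))
items = np.clip(np.rint(3 + latent + rng.normal(0, 0.8, (347, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Because the fabricated items share a strong common factor, the computed alpha lands in the same high range as the reported estimates.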
- A contribution of the present study is to show that unions also have significant connections with supervisor-subordinate relations (i.e. shared leadership), and that simply having a unionized workplace does not guarantee increased employee empowerment; unions must also be strong.
[SECTION: Purpose] The considerable amount of regulatory attention given to corporate governance issues in recent years suggests that stronger governance mechanisms would reduce opportunistic management behavior, thus improving the quality and reliability of financial reporting. Regulators believe that this in turn will help to maintain and enhance investors' confidence in the integrity of capital markets. In contrast, some critics argue that the enhanced governance and litigation environment may change the balance of business and information risk for many firms, with the predictable and undesirable result that many firms will become more cautious, and forgo promising opportunities. Thus, shareholder wealth may ultimately be reduced. Although studies in the literature have examined the association between the attributes of governance mechanisms and firm performance, as well as the information content of the financial reporting process, much less is known about the impact of the recent changes in corporate governance codes on earnings quality internationally (Beekes et al., 2004)[1]. The purpose of this study is to provide insight into the ongoing debate in the regulatory and academic communities on the effectiveness of the new governance regulations in Canada. Since the late 1990s, publicly traded firms in Canada have been subject to stricter corporate governance rules and guidelines. These changes in expectations regarding corporate governance were motivated, to a large extent, by some large corporate scandals in the USA and Canada. Because many Canadian companies also rely on the US capital market, the dramatic changes in the US corporate governance regulations and practices (e.g.
the Sarbanes-Oxley Act and the new SEC regulations) have also had a significant impact in Canada. For several reasons, Canada is a unique and interesting setting in which to assess the sensitivity of the relation between governance and the integrity of the financial reporting process to new governance initiatives. First, although Canadian securities laws are substantially similar to those in the USA, unlike the USA, Canada does not have a centralized securities commission. Securities regulation is enforced at the provincial and territorial level (Rosen, 1995). Therefore, any nationwide governance agreement must obtain strong support from large provinces, such as Ontario and Quebec. Second, Canada uses a flexible method to address matters of corporate governance, which is distinct from the mandatory approach adopted by the USA. Moreover, a much higher percentage of Canadian public companies have a controlling shareholder, as compared to US public companies (La Porta et al., 1999). These controlling shareholders have a natural incentive to be represented on the board of directors, which raises issues about the appropriate definition of independence. Finally, many Canadian public corporations are relatively small firms with a limited capacity to attract large numbers of completely independent directors; therefore, for these companies, complying with a strict set of corporate governance rules would be a significant financial and administrative burden. These institutional features raise questions about the effectiveness of governance control practice in improving the financial reporting process in Canada. This paper examines the effect of governance on the quality of the financial reporting process by linking governance attributes to the quality of accounting earnings. The focus on earnings is appropriate since it is a summary performance measure that is frequently quoted, analyzed and discussed in the literature and in the financial community.
In this paper, the quality of earnings is measured in two ways: 1. an accounting-based measure of earnings management (the magnitude of abnormal accruals); and 2. a market-based measure of earnings informativeness (the return-earnings association)[2]. Employing both measures can provide corroborating evidence, since enhanced regulations may provide fewer incentives for managers to manage earnings, and thus the magnitude of abnormal accruals may be lower; on the other hand, highly significant legislative changes to financial practice and corporate governance may encourage firms to undertake less optimal yet safer investment opportunities. Therefore, financial information may become a less clear representation of the economic resources and changes in economic resources of the firm. Accordingly, all else being equal, earnings informativeness may be reduced. Using data on corporate governance practice for Canadian firms comprising the S&P/TSX composite index for 2002-2005, the paper finds that overall governance quality is negatively related to the level of abnormal accruals and positively influences the return-earnings association. This suggests that good corporate governance mechanisms provide greater monitoring of the financial accounting process and are associated with reported earnings that are more informative. The study also finds that governance attributes do not affect the quality of earnings equally. Specifically, the magnitude of abnormal accruals is negatively associated with the level of independence of board composition, the extent of alignment of management compensation with interests of shareholders and the strength of shareholder rights. The results from the returns and earnings analysis further support these inferences. Studies in the literature typically focus on the effects of particular aspects of governance, such as board composition, shareholder activism, executive compensation, or insider ownership, on firms' market value or performance (e.g.
Morck et al., 1988; Warfield et al., 1995). For example, using a sample of Canadian companies during the years 1991-1997, Park and Shin (2004) examine the relationship between the proportion of outside board members and the level of accrual management, and find that only outside directors from financial intermediaries or institutional shareholders reduce earnings management. They conjecture, but do not test, that the insignificant results may be due to Canadian directors' lack of ownership interest in the firms they monitor and the presence of dominant shareholders. By examining the relation between corporate governance (including board composition, management shareholding, shareholders' rights and the extent of disclosure of governance practices), and the quality of earnings (measured by both accrual management and earnings informativeness), this study can contribute to a more comprehensive understanding of the significance of governance. The evidence suggests that enhanced governance initiatives are accompanied by an improved quality of earnings. The paper also differs from Park and Shin (2004) in another important aspect, since the current study investigates the years 2001-2004, a period where significant governance initiatives were imposed after the accounting scandals; thus, the evidence can provide more relevant and useful insights to the current policy debate regarding governance effectiveness. The results from this study support the notion that enhanced governance practices, especially independent boards and committees, effective management compensation, and powerful shareholders, are important in constraining management from managing earnings and in ensuring a higher quality of earnings. Given the increasing interest in corporate governance, the evidence provides additional support for continuing regulatory initiatives on corporate governance throughout much of the world concerning board independence and managerial ownership.
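The accrual-management measure discussed above (the magnitude of abnormal accruals) is conventionally estimated in this literature as the residual of a Jones-type regression of total accruals on change in revenues (adjusted for receivables) and gross property, plant and equipment, all deflated by lagged assets. The sketch below uses fabricated firm-year data and a generic modified-Jones specification; it is an illustration of the approach, not the paper's exact research design.

```python
# Hedged sketch of a modified-Jones-style abnormal accruals estimate on
# fabricated firm-year data; the paper's exact specification may differ.
import numpy as np

rng = np.random.default_rng(7)
n = 200  # hypothetical firm-years in one estimation group

assets_lag = rng.uniform(50, 500, n)   # lagged total assets (deflator)
d_rev = rng.normal(10, 20, n)          # change in revenues
d_rec = rng.normal(2, 5, n)            # change in receivables
ppe = rng.uniform(20, 300, n)          # gross property, plant and equipment
accruals = 0.02 * assets_lag + 0.1 * d_rev - 0.05 * ppe + rng.normal(0, 5, n)

# Deflate by lagged assets and regress:
#   TA/A = a*(1/A) + b*((dREV - dREC)/A) + c*(PPE/A) + e
y = accruals / assets_lag
X = np.column_stack([1.0 / assets_lag,
                     (d_rev - d_rec) / assets_lag,
                     ppe / assets_lag])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Abnormal (discretionary) accruals are the residuals; studies of this
# kind typically analyze their absolute magnitude.
abnormal = y - X @ coef
magnitude = np.abs(abnormal)
print(f"mean |abnormal accruals| = {magnitude.mean():.4f}")
```

In practice the model is estimated by industry-year group, and the resulting magnitudes are then regressed on the governance attributes of interest.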
It also calls for more actively involved shareholders to play a greater role in firms' accounting reporting processes. The rest of the paper proceeds as follows. The next section describes recent corporate governance initiatives in Canada. Section 3 develops hypotheses, while Section 4 describes the research design and variable measurements. Sample selection and empirical results are presented in Section 5. Section 6 provides additional analyses and conducts sensitivity tests. Section 7 provides concluding remarks. Canada has placed an emphasis on corporate governance for a number of years. Significant governance initiatives date back to 1995, when the Toronto Stock Exchange (TSX) adopted 14 voluntary corporate governance best practices, and required Canadian-incorporated listed companies to disclose annually their corporate governance practices, and compare their practices to the 14 best practices (Labelle, 2003; Park and Shin, 2004)[3]. In late 2001, the Joint Committee on Corporate Governance, established by the TSX, the Canadian Venture Exchange and the Canadian Institute of Chartered Accountants, issued a report, which led to new TSX proposals. In response to the passing of the Sarbanes-Oxley Act in the USA, Canadian regulators adopted a set of corporate governance rules, which are similar to some provisions of the Sarbanes-Oxley Act, the SEC and various US stock exchange listing standards. Canadian securities regulators believe that it is essential that Canadian public firms are subject to corporate governance rules that are as strict as those in the USA, but tailored to Canadian markets. These rules can be classified into several categories. The first set of rules relates to CEO/CFO certifications of annual and quarterly reports.
Canadian companies also have to adopt disclosure controls and procedures that provide reasonable assurances that material information required to be disclosed by the company is made known to the CEO/CFO and is disclosed within the periods required by Canadian securities laws. Like the amended SEC rule, issuers need to design internal controls that provide reasonable assurances that their financial statements are fairly presented in accordance with GAAP. The second set of rules deals with audit committee independence, financial literacy and expertise. Major Canadian public companies must have fully independent and financially literate audit committees; thus, the education and experience of all committee members should be disclosed, so that investors can judge the committee's expertise. The third set of rules relates to the auditing process. To oversee the auditing profession, Canada established the Canadian Public Accountability Board (CPAB). Under the proposed auditing regulations, independent auditors are prohibited from performing various non-audit services for their audit clients. In addition, Canadian provincial securities regulators have proposed regulating other aspects of governance that are enforced through the stock exchange listing standards in the USA. For example, the Ontario Securities Commission (OSC), together with the majority of the other provincial and territorial securities commissions, proposed 18 recommended best practices and accompanying disclosure rules. These guidelines address such topics as the composition of a company's board and job descriptions for directors and officers.
Although complying with these guidelines is again voluntary, companies that issue securities in Ontario are required to disclose whether or not they have adopted the guidelines, and if not, they need to explain why in the annual reports to be filed with the OSC. The governance initiatives in Canada have underlined the need for more evidence on corporate governance and its impact on quality of reporting issues. The governance data used in this paper are obtained from a survey on governance practices of Canadian firms in the S&P/TSX index, while the survey data are derived from a company's most recent proxy circulars. The survey has been conducted independently by the Report on Business in the Globe and Mail (a leading newspaper in Canada) on an annual basis since 2002 and the results have been published since then. The scores of corporate governance are based on a set of practices identified by regulators and investor groups which are considered critical to corporate governance effectiveness, and can be classified broadly into the following categories: * board composition; * shareholding and compensation of directors and management; * shareholder rights; and * disclosures of corporate governance practice. Each category contains several criteria, with corresponding weights for each criterion. This measurement of governance is relevant for assessing the degree of independence, objectivity, and attentiveness the board exercises in overseeing management performance, and the degree to which they hold management accountable to stakeholders for its actions[4]. Details about the criteria used by the survey are provided in the Appendix. The importance of corporate governance has been a question of substantial interest to regulators, financial institutions, investors, and the media. Governance problems arise from divergent incentives and asymmetric information between shareholders and managers.
These conflicts of interests, coupled with the impossibility of writing explicit contracts on all future contingencies, lead to unresolved agency problems that affect firm valuation (Hart, 1995). Corporate governance mechanisms are intended to mitigate agency costs by increasing the monitoring of management's actions and limiting managers' opportunistic behavior (Ashbaugh et al., 2004). In this section, several hypotheses are developed that identify and link specific elements of governance to accounting earnings. 3.1 Board composition and the quality of earnings One of the most important factors influencing the integrity of the financial accounting process involves the board of directors, whose responsibility is to provide independent oversight of management performance and to hold management accountable to shareholders for its actions (DeFond and Jiambalvo, 1994; Dichev and Skinner, 2002). Prior research examining the association between the corporate governance mechanisms concerning the board of directors (e.g. independence of board or board size, expertise of directors or board members, and stock ownership of board members) and the extent of earnings manipulation finds inconclusive results. While the empirical results concerning board attributes are mixed due to different research designs and empirical settings, a general belief is that boards are more effective in their monitoring of management when there is a strong base of independent directors on the board (e.g. Beasley, 1996; Peasnell et al., 2000; Klein, 2002; Xie et al., 2003). For example, Beasley (1996) finds that the presence of outside directors reduced the probability of fraud in the presentation of financial statements during the period of 1980-1991. Similarly, Klein (2002) provides evidence concerning board independence and earnings manipulation and finds that companies with independent boards are less likely to report abnormal accruals. Xie et al.
(2003) find similar results with respect to the relationship between earnings management and the independence of boards, as well as the financial sophistication of board members. On the other hand, there are some counter-arguments proposing that completely independent boards may not be effective in monitoring management, since management is more likely to cooperate with board members with whom they are better acquainted. Indeed, Agrawal and Knoeber (1996) find a significant negative relationship between outside membership on the board and firm performance, leading them to conclude that boards that have too many outsiders lose the expertise associated with officers serving on the board. The reliability of financial reporting is also due, in part, to the independence and integrity of the audit process. Audit committees are responsible for recommending the selection of external auditors to the board, ensuring the soundness and quality of internal accounting and control practices, and monitoring external auditor independence from management. Empirical evidence generally supports the positive effect of independent audit committees. For example, Carcello and Neal (2000) document a relation between greater audit committee independence and the quality of financial reporting. Similarly, Xie et al. (2003) report a negative association between earnings management and the independence of audit committees. Finally, the presence of an independent nomination committee is also important for board effectiveness and monitoring ability, since the manager's power to nominate new members to the board can be removed.
Overall, to the extent that independent boards and committees are superior monitors of management, limit managers' discretion over earnings and reduce managerial incentives to adopt aggressive earnings management strategies in the financial reporting process, we expect the quality of earnings to increase with the independence and functionality of the board and its key committees. Hence, the first hypothesis is as follows (in alternate form):

H1. Firms with more independent boards and subcommittees have smaller abnormal accruals and more informative earnings.

3.2 Shareholding by managers or directors and the quality of earnings

Another element of governance that affects the incentives for directors to actively monitor management, and for managers to perform in the best interests of shareholders, is the compensation of directors and managers. There are two opposing views in the literature regarding the relationship between board or management shareholding and the quality of financial reporting. Morck et al. (1988) show that high stockholding may cause a moral hazard and an information-asymmetry problem between insiders (management and directors) and outside investors. Under this managerial entrenchment hypothesis, managers may have more incentives to exercise discretion in accounting reporting, and monitoring and disciplining will be more difficult for directors with an equity stake in the firm. As a result, the quality of the financial reporting process may be compromised when stockholding by directors is high. On the other hand, agency theory (Jensen and Meckling, 1976) predicts that managers with lower firm ownership have greater incentives to manipulate accounting numbers in order to relieve the constraints imposed by accounting-based compensation contracts. In addition, Jensen (1989) argues that outside directors with little equity stake in the firm cannot effectively monitor and discipline managers.
Indeed, many firms require their directors to increase shareholding in their firms (Hambrick and Jackson, 2000). Consistent with this theory, Warfield et al. (1995) find a negative relation between managerial stockholdings and the absolute value of abnormal accruals. They interpret their results as consistent with the belief that managerial shareholdings act as a disciplining mechanism. Under this alignment-of-interest hypothesis, mandatory shareholding by the board and management can effectively motivate managers' performance and create incentives for independent directors to monitor management more closely, a scenario under which a positive association between mandatory shareholding and the quality of accounting earnings is expected. This discussion leads to the following hypothesis:

H2. Firms with a higher level of board (management) share ownership have smaller abnormal accruals and more informative earnings.

3.3 Shareholder rights and the quality of earnings

An important aspect of best practices in corporate governance deals with shareholder rights, which reflect shareholders' ability to exercise control over firm assets, remove ineffective or opportunistic management, monitor the conduct of the board of directors or initiate ownership changes that increase firm valuation (Ashbaugh et al., 2004). One of the most effective means of controlling management's behavior is to grant shareholders the right to vote on major issues, such as electing directors and the chairperson, approving senior executive appointments, and important changes affecting the firm such as mergers or liquidation. Normally these rights are proportionate to the shareholder's equity ownership.
However, these rights are often severely limited under a governance system that allows dual-class share structures, which are very common in Canada[5]. Recent research also indicates that the existence of stronger shareholders may improve internal control, and thus may be an effective monitoring device for improving financial reporting quality. To the extent that an appropriate power-sharing relationship between shareholders and managers reduces the moral hazard problems that lower overall firm value and allows shareholders to effectively monitor financial reporting practice, we predict a positive association between shareholder rights and the quality of earnings. Hence:

H3. Firms with stronger shareholder rights have smaller abnormal accruals and more informative earnings.

3.4 Disclosure of corporate governance practice and quality of earnings

Prior research indicates that corporate disclosure reduces information asymmetry between investors and managers (e.g. Lang and Lundholm, 1996; Welker, 1995). For instance, Lang and Lundholm (1996) provide evidence that firms with more informative disclosure policies have a larger analyst following, more accurate analyst earnings forecasts, less dispersion among individual analyst forecasts, and less volatility in forecast revisions. Similarly, Welker (1995) finds that information asymmetry, measured as the bid-ask spread, is reduced and market liquidity increased as the level of disclosure increases. Prior research also demonstrates a relationship between information asymmetry and earnings quality (e.g. Dye, 1988; Trueman and Titman, 1988).
For example, Dye (1988) and Trueman and Titman (1988) show analytically that the existence of information asymmetry between management and shareholders is a necessary condition for earnings management. The above two lines of research suggest that enhanced corporate disclosures may benefit a firm in many ways; however, managers wishing to retain the flexibility to engage in earnings management may have incentives to limit disclosure. To the extent that disclosure of governance practice may reduce information asymmetry and enable the board and investors to effectively monitor management decisions and performance, we predict that a higher quality of disclosure of governance practice is associated with a higher quality of earnings. Hence:

H4. Firms with higher quality corporate governance disclosures have smaller abnormal accruals and more informative earnings.

4.1 Measuring abnormal accruals

Since managers may have incentives to manage earnings either upward or downward, we use the absolute value of abnormal accruals as a proxy for earnings quality (DeFond and Park, 1997; Bartov et al., 2000). To the extent that better monitoring of the financial reporting process leads to greater financial transparency, the firm is expected to exhibit a lesser degree of earnings management, and thus smaller abnormal accruals.
Accordingly, a negative relationship between governance quality and the absolute value of abnormal accruals is predicted. Abnormal accruals are calculated using the modified Jones model (Dechow et al., 1995), given in equation (1), where TA is total accruals, defined as net income before extraordinary items (Compustat #123) minus cash flow from operations (Compustat #308), scaled by beginning-of-fiscal-year total assets (Compustat #6); DREV is the change in sales (Compustat #12) from year t-1 to year t, scaled by beginning-of-fiscal-year total assets; DREC is the change in accounts receivable (Compustat #302) from year t-1 to year t, scaled by beginning-of-fiscal-year total assets; PPE is gross property, plant and equipment (Compustat #7) scaled by beginning-of-fiscal-year total assets; BM is the book value (Compustat #60) to market value of common equity (Compustat #25 x #199) for the year; and OCF is current operating cash flow (Compustat #308), scaled by beginning-of-fiscal-year total assets.

The model assumes that normal accruals are positively related to the change in revenues, less the change in accounts receivable, and negatively related to the capital intensity of the firm. Following Larcker and Richardson (2004), the book-to-market ratio (BM) is used as a proxy for growth, and we expect it to be positively related to total accruals. We also include current operating cash flow (OCF) as an additional variable to control for extreme performance (Dechow et al., 1995), and expect OCF to be negatively associated with total accruals. The model is estimated separately for each two-digit SIC group, using all Compustat Canadian firms with available data, with at least eight firms in each group. To reduce the impact of influential observations, the independent variables in the model are winsorized to be no greater than 1 in absolute value, and the book-to-market ratio is winsorized at the extreme 2 percent tails.
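The winsorization rules described above can be sketched as follows. This is an illustrative implementation, not the authors' code; only the cutoffs (|x| ≤ 1 for the scaled regressors, 2 percent tails for book-to-market) come from the text.

```python
import numpy as np

def winsorize_abs(x, cap=1.0):
    # Clip a scaled regressor so that |x| <= cap (the text caps at 1)
    return np.clip(np.asarray(x, dtype=float), -cap, cap)

def winsorize_tails(x, pct=0.02):
    # Winsorize at the pct and (1 - pct) quantiles (2 percent for BM)
    x = np.asarray(x, dtype=float)
    lo, hi = np.quantile(x, pct), np.quantile(x, 1 - pct)
    return np.clip(x, lo, hi)

# Illustrative values: scaled change-in-sales and book-to-market ratios
drev_w = winsorize_abs([0.1, -1.8, 2.5])          # -> [0.1, -1.0, 1.0]
bm_w = winsorize_tails(np.linspace(0.1, 12.0, 50))
```

Winsorizing (rather than deleting) extreme observations keeps the industry-group samples large enough to estimate equation (1) with at least eight firms per two-digit SIC group.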
Discretionary accruals (DA) for each firm i in each industry are defined as the difference between total accruals (TA) and the fitted value from equation (1), as shown in equation (2). Prior research documents that discretionary accrual estimates are correlated with firm performance (Kothari et al., 2005). To mitigate this misspecification problem, we control for firm performance by using the performance-matched discretionary accruals model. Specifically, following Kothari et al. (2005), each firm-year observation is matched with another firm from the same two-digit SIC code and year with the closest ROA in the current year. The performance-matched discretionary accrual (PMDA) is the discretionary accrual (DA) calculated from equation (2), minus the matched firm's DA for the year. To test the hypotheses, the pooled cross-sectional and time-series regression model in equation (3) is then estimated, where Board is the governance quality score on board composition for firm i at year t; Comp is the governance quality score on shareholding and compensation of board or management for firm i at year t; Share is the governance quality score on shareholder rights for firm i at year t; Disc is the governance quality score on corporate governance disclosures for firm i at year t; Audit_S is an indicator (1 if the auditor audits at least 20 percent of the industry revenue, and 0 otherwise); SIZE is the logarithm of total assets for firm i at year t; and LEV is total liabilities to total assets for firm i at year t. Each firm's performance score for an individual governance category is used as a proxy for the quality of that governance feature, and a higher score is predicted to be associated with a lesser extent of earnings management.
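Since equations (1)-(3) survive only as placeholders, a plausible reconstruction from the variable definitions above is the following. The coefficient labels are ours, and the original equation (1) may additionally scale the intercept by lagged total assets, as in Dechow et al. (1995):

```latex
% Equation (1): modified Jones model augmented with BM and OCF
% (Larcker and Richardson, 2004), estimated by two-digit SIC industry
TA_{it} = \alpha_0 + \alpha_1 (\Delta REV_{it} - \Delta REC_{it})
        + \alpha_2 PPE_{it} + \alpha_3 BM_{it} + \alpha_4 OCF_{it}
        + \varepsilon_{it}

% Equation (2): discretionary accruals as the estimation residual
DA_{it} = TA_{it} - \widehat{TA}_{it}

% Equation (3): governance attributes and the absolute value of
% performance-matched discretionary accruals
|PMDA_{it}| = a_0 + a_1 Board_{it} + a_2 Comp_{it} + a_3 Share_{it}
            + a_4 Disc_{it} + a_5 Audit\_S_{it} + a_6 SIZE_{it}
            + a_7 LEV_{it} + \epsilon_{it}
```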
Thus, consistent with H1 to H4, we expect a1<0, a2<0, a3<0, and a4<0. Given that corporate governance is not the sole factor affecting discretionary accruals, several control variables are introduced to isolate other contracting incentives that have been found to influence managers' accounting choices. For example, we control for auditor specialization (Audit_S), since prior research (Dunn and Mayhew, 2004; Myers et al., 2003) finds that industry-specialist audit firms assist clients in enhancing disclosures and in reducing earnings management. Firm size (SIZE) and financial leverage (LEV) are also controlled for, since Press and Weintrop (1990) indicate that these factors may affect managers' discretionary accounting choices. Although there is no prediction for SIZE, due to the ambiguity of the link between firm size and discretionary accruals, LEV is expected to have a positive coefficient[6].

4.2 Measuring return-earnings association

To provide corroborating evidence on the market's perception of the impact of governance on financial reports, the return-earnings association is used as an additional measure of earnings quality. To the extent that better governance systems can effectively align managers' interests with those of shareholders, and actively monitor and control management, the transparency and reliability of a firm's financial reporting process, and consequently the informativeness of earnings, would increase. Therefore, a positive relation between governance quality and the return-earnings association is expected. Following Warfield et al.
(1995), we estimate the pooled time-series and cross-sectional regression model in equation (4), where Rit is the stock return of firm i over the 12 months beginning nine months before and ending three months after the fiscal year-end, calculated as (Pit - Pit-1 + Dit)/Pit-1, where Pit is the stock price of firm i at time t and Dit is the dividend of firm i at year t; Eit is earnings per share (before extraordinary items) of firm i for year t; SIZE is a proxy for size, measured as the logarithm of sales revenue; LEVE is a proxy for leverage, measured as total debt divided by total assets; and GWTH is a proxy for the growth prospects of the firm (Tobin's Q), defined as the market value of equity divided by the book value of equity. To control for other factors that may affect return-earnings associations, we include a proxy for firm size (SIZE) and a proxy for financial leverage (LEVE), since highly levered firms are associated with higher risk and hence a weaker earnings-return relation (Watts and Zimmerman, 1990). Following prior studies (e.g. Collins and Kothari, 1989), we also include a proxy for growth (GWTH), which is expected to be positively associated with returns. The coefficient a1 measures the traditional return-earnings relation. The coefficients a2 to a5 measure differential earnings informativeness according to the effectiveness of governance controls. Consistent with H1 to H4, which predict that earnings informativeness increases with the quality of a firm's governance mechanisms, we expect a2>0, a3>0, a4>0, and a5>0.

5.1 Sample selection

The initial sample consisted of firms listed on the S&P/TSX composite index as of 1 September 2002, 2003, 2004 and 2005. Governance scores for these companies, as published in the Globe and Mail survey, are obtained for the years 2002 to 2005.
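A plausible form of equation (4), reconstructed from the coefficient references in the text (a1 on earnings, a2 to a5 on the earnings-governance interactions), is shown below. Whether the controls enter as levels or as interactions with earnings is not recoverable from the text, so they are shown as levels:

```latex
% Equation (4): earnings informativeness conditional on governance
R_{it} = a_0 + a_1 E_{it} + a_2 E_{it} \times Board_{it}
       + a_3 E_{it} \times Comp_{it} + a_4 E_{it} \times Share_{it}
       + a_5 E_{it} \times Disc_{it}
       + a_6 SIZE_{it} + a_7 LEVE_{it} + a_8 GWTH_{it} + \epsilon_{it}
```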
Of these initial 888 firm-year observations, firms that are missing financial variables or that have insufficient data to estimate performance-matched abnormal accruals are eliminated. Financial institutions and insurance companies (SIC 60-69) are also eliminated, since their special accounting methods make the estimation of discretionary accruals problematic. Firm-level financial information is obtained from Compustat and is supplemented by firm annual reports. Since the governance surveys are based on an assessment of the most recent proxy statement, firm-level accounting data for the most recent fiscal year prior to the survey are used in the analysis. To be included in the sample, a firm must have stock return data from the Canadian Financial Markets Research Center database. The final sample consists of 519 firm-year observations for the accrual model and 528 firm-year observations for the returns-earnings association analysis, covering the years 2001 to 2004.

5.2 Results from accruals model

Panel A of Table I reports the descriptive statistics for selected variables in equation (3). The mean and median of the absolute value of performance-matched discretionary accruals, PMDA, are 9 percent and 7 percent of the prior year's total assets, respectively. The mean score for overall governance quality is 64.7 (out of 100), with a standard deviation of 14.4. There is also variation within each governance category across firms. For example, the quality of disclosures of governance practice has a mean of 9.6 with a standard deviation of 3.3. Other characteristics of the sample firms include an average leverage ratio (LEV) of 47 percent, a median auditor-client relationship of eight years, and the observation that more than half of the sample firms were audited by an industry-specialized auditor. Correlations between the dependent and explanatory variables are shown in Panel B of Table I.
As predicted, firms that are ranked higher in terms of board independence, management compensation, shareholder rights and disclosures of governance practice have smaller abnormal accruals. Also, there are strong correlations among the attributes of corporate governance at the 1 percent level, indicating that firms that perform well in one governance category tend to perform well in other categories. SIZE is positively associated with leverage (LEV), suggesting that larger firms face higher leverage constraints. Finally, larger firms are more likely to have an independent board and powerful shareholders, to make more governance practice disclosures, and to have effective compensation policies for management and directors.

Regression results based on equation (3) are reported in Table II, which shows White-adjusted t-statistics for all the coefficients. The association between abnormal accruals and overall governance quality in columns 3 and 4 of Table II is significantly negative at the 1 percent level (with or without control variables), suggesting that overall governance quality is negatively associated with the magnitude of discretionary accruals. In addition, consistent with H1, the results in column 6 indicate a negative association between the proxy for board composition and abnormal accruals (-0.003, t=-3.474) after controlling for other factors, suggesting that as the independence of the board increases, the sample firms engage in less income-increasing or income-decreasing discretionary accruals (Klein, 2002). As predicted in H2, there is a significant negative association between the attributes of board or management shareholding and compensation and the measure of earnings quality (-0.005, t=-3.577).
In addition, as predicted by H3, the strong negative association between the quality of shareholder rights and abnormal accruals is significant at the 1 percent level (-0.003, t=-3.366), suggesting that incentives to manage earnings diminish as increased shareholder rights limit managers' accrual discretion. However, inconsistent with the prediction in H4, there is no significant association between the quality of governance disclosures and the magnitude of discretionary accruals. With regard to the control variables, the results show a significant negative coefficient on Audit_S, suggesting that firms audited by an industry-specialized auditor are likely to have a smaller absolute value of abnormal accruals, a finding consistent with prior research.

In summary, linking the quality of corporate governance and abnormal accruals, we find that firms ranked highly in terms of governance quality have smaller abnormal accruals. In addition, firms with powerful shareholders, more independent or functional boards and more effective compensation policies for management or directors are more likely to have smaller discretionary accruals, suggesting that these factors may be effective in monitoring managerial opportunism. Finally, we do not find that disclosure of governance practices is associated with the magnitude of earnings management.

5.3 Results from the return-earnings model

Panel A of Table III reports the descriptive statistics for the dependent and independent variables in equation (4). Median raw returns for sample firms over the sample period are 0.11; firms, on average, have earnings per share deflated by the prior year-end price of 0.33. The average Tobin's Q for the sample firms is 2.53, with a median of 2.03. Consistent with prior studies, the correlation table in Panel B of Table III shows a very strong positive correlation between raw stock returns and the deflated earnings measure at the 1 percent level.
Consistent with predictions, there are also strong univariate correlations between returns and the interactions of earnings with the attributes of corporate governance, indicating that firms with strong corporate governance mechanisms in place have more informative earnings. Again, there are very strong positive correlations among the attributes of governance. Finally, larger firms are more leveraged and have lower Tobin's Q than smaller firms.

Regression results for the relationship between returns and earnings and the earnings-governance interactions are reported in Table IV. Again, we see a strong positive association between returns and the earnings measure, which is consistent with prior studies. Column 4 also indicates a significant association between returns and the interaction of earnings with overall governance quality at the 10 percent level. To examine whether the market perceives the attributes of governance quality differently, we regress returns on earnings and the interaction of earnings with each individual measure of governance attributes. As shown in column 5, there are positive and significant coefficients on the interactions between earnings and the proxies for quality of board composition, management compensation and shareholder rights at the 5 percent or better level. The results are insensitive to adding the control variables to the model, as shown in column 6. This provides support for H1-H3, suggesting that board independence, efficient management and director compensation, as well as effective monitoring by shareholders, improve earnings informativeness.
Inconsistent with H4, however, a higher level of governance practice disclosures does not incrementally explain the return-earnings relation. Overall, linking the quality of corporate governance and the return-earnings association, an alternative measure of earnings quality, we find results largely consistent with those obtained using abnormal accruals as a proxy for earnings quality. Namely, firms with a more independent or functional board and stronger shareholder rights are more likely to have a stronger returns-earnings association, suggesting that their earnings have a higher level of information content. In addition, earnings are more informative for firms with mandatory shareholding by the board or management.

5.4 Additional analysis

To provide further evidence that governance matters for a firm's financial reporting quality, we examine whether firms with an improved governance structure over the sample period are found to have a higher quality of reported earnings. Canadian companies made significant changes in governance practices during the sample period, pushed by stronger governance rules, higher stakes and shareholder pressure[7]. If the new governance initiatives and the ensuing governance debate have helped improve the effectiveness with which boards discharge their financial reporting responsibilities, one might expect the association between governance effectiveness and both the magnitude of earnings management and the return-earnings association to have become more pronounced over time. Furthermore, a change analysis can better address endogeneity concerns. We use a balanced sample design to assess whether the association between abnormal accruals and governance attributes differs over time. This design allows each sample firm to serve as its own control, thereby eliminating any differences that might result from temporal variation in sample composition.
We estimate the accruals model in equation (5) using a sample of 93 panel firms for the years 2001 and 2004[8], where Time is an indicator coded as 1 for the year 2004 (to represent the post-regulation period) and 0 for the year 2001 (to proxy for the pre-regulation period). To control for factors other than governance regulations, such as a stronger litigation environment and shareholder pressure for stronger practices, which may drive the improvement of governance, we also include Time as a separate independent variable in the equation. The primary variables of interest are a5 to a8. We predict that enhanced corporate governance is associated with a lower degree of earnings management; thus we expect a5<0, a6<0, a7<0, and a8<0.

Unreported descriptive statistics for selected variables used in equation (5) suggest that, unsurprisingly, total governance scores improved significantly from 2001 to 2004, with the mean score climbing from 61.5 out of 100 in 2001 to 74.9 in 2004. Governance along other dimensions significantly improved as well. For example, the mean scores for board composition and shareholder rights increased from 25.2 in 2001 to 31.2 in 2004, and from 17 in 2001 to 19.1 in 2004, respectively. Also, the magnitude of PMDA was reduced from 2001 to 2004, with the median falling from 0.09 in 2001 to 0.071 in 2004.

The regression results for equation (5) are provided in Table V[9]. As shown in column 4 of Table V, there is a negative association between abnormal accruals and the interaction of board composition and the time dummy (-0.0004, t=-2.005) at the 5 percent level, suggesting that firms that increased their board independence experienced a decrease in abnormal accruals.
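Equation (5) also survives only as a placeholder. Given that the text identifies a5 to a8 as the governance-time interactions and states that Time also enters separately, it plausibly takes the following form (the control variables from equation (3) are abbreviated as Controls; the layout is our reconstruction):

```latex
% Equation (5): change in the accruals-governance association over time
|PMDA_{it}| = a_0 + a_1 Board_{it} + a_2 Comp_{it} + a_3 Share_{it}
            + a_4 Disc_{it}
            + a_5 Board_{it} \times Time + a_6 Comp_{it} \times Time
            + a_7 Share_{it} \times Time + a_8 Disc_{it} \times Time
            + a_9 Time + Controls_{it} + \epsilon_{it}
```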
Similarly, in columns 6 and 8 of Table V, the statistically significant negative associations between the accrual measure and the interactions of governance with the time dummy indicate that firms with improved shareholder rights incurred smaller abnormal accruals. However, changes in the shareholding of managers and directors and in the disclosure of governance practice are not associated with any change in abnormal accruals.

Next, we examine whether changes in governance are also associated with changes in the return-earnings association. If the effectiveness of governance in enhancing financial reporting quality is greater than the negative impact of stricter litigation regulations on investment behavior, then we expect that the market's perception of earnings quality has improved. We estimate the return-earnings regression in equation (6) using the balanced sample. All the variables in equation (6) are as defined previously. If enhanced corporate governance is associated with a higher quality of reported earnings, and consequently a higher return-earnings association, we expect a6>0, a7>0, a8>0 and a9>0.

Table VI reports the regression results. Inspection of the results in Table VI shows that firms with more independent boards and stronger shareholder rights have higher return-earnings associations at the 5 percent or 10 percent level, after controlling for firm size, growth and the time period. The coefficient on the interaction between shareholding and the time dummy is positive, as expected, but not significant.
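Similarly, with a6 to a9 identified in the text as the earnings-governance-time interactions, equation (6) can plausibly be reconstructed as (controls abbreviated as before; the layout is ours):

```latex
% Equation (6): change in earnings informativeness over time
R_{it} = a_0 + a_1 E_{it}
       + a_2 E_{it} \times Board_{it} + a_3 E_{it} \times Comp_{it}
       + a_4 E_{it} \times Share_{it} + a_5 E_{it} \times Disc_{it}
       + a_6 E_{it} \times Board_{it} \times Time
       + a_7 E_{it} \times Comp_{it} \times Time
       + a_8 E_{it} \times Share_{it} \times Time
       + a_9 E_{it} \times Disc_{it} \times Time
       + a_{10} Time + Controls_{it} + \epsilon_{it}
```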
The corporate governance disclosure variable does not affect the informativeness of earnings differently across periods. Overall, the additional tests indicate that improvements in governance practice are generally associated with smaller discretionary accruals and more informative reported earnings; in addition, board independence and shareholder rights seem to be the most important factors driving the corresponding improvement in earnings quality as governance effectiveness increases. One caveat of these analyses, however, is that, due to data limitations, we are unable to explicitly control for all other factors during the sample period that may contribute to the correlation between improved governance and the proxies for accounting earnings. Thus, the inference must be interpreted with caution.

5.5 Sensitivity analyses

The Globe and Mail surveys conducted in 2003-2005 are based on slightly modified but tougher marking standards than the methods used in 2002; thus, scores may not be strictly comparable over time. To mitigate this comparability concern, instead of using continuous variables, following Gompers et al. (2003), we classify the sample into three groups based on each category of governance: strong governance for a score above the 67th percentile, weak governance for a score below the 33rd percentile, and the remainder as neutral governance. The results are not sensitive to this alternative classification[10]. In addition to the determinants of the variation in the accruals model considered in this study, managers may have incentives to manipulate earnings in order to avoid earnings losses and earnings declines (Burgstahler and Dichev, 1997). Accordingly, sample firms are partitioned according to whether their unmanaged earnings per share (before the performance-matched abnormal accruals) are negative or below last year's reported earnings per share[11].
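The Gompers et al. (2003)-style classification described above can be sketched as follows. This is an illustrative implementation; only the percentile cutoffs come from the text, and the percentile interpolation method is an assumption.

```python
import numpy as np

def governance_terciles(scores):
    # Strong: above the 67th percentile; weak: below the 33rd percentile;
    # neutral otherwise (following the text's classification rule)
    scores = np.asarray(scores, dtype=float)
    lo, hi = np.percentile(scores, [33, 67])
    return np.where(scores > hi, "strong",
                    np.where(scores < lo, "weak", "neutral"))

labels = governance_terciles([10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
# the lowest scores are classified "weak", the highest "strong"
```

Using within-sample percentile groups rather than raw scores sidesteps the change in the survey's marking standards, since only a firm's rank relative to other firms in the same year matters.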
We expect the incentives for income-increasing earnings management to be particularly strong when the unmanaged earnings numbers fall below target. The repeated analysis does not support this conjecture. The unreported results are qualitatively the same as those observed in Table II[12]. Several sensitivity tests are also conducted in the returns-earnings analyses. First, we re-estimate the return-earnings model by partitioning the sample into positive and negative earnings. Hayn (1995) suggests that the earnings-response coefficient is low and unstable when earnings are negative. While we find that all the results are qualitatively the same when earnings are positive, the findings are inconsistent when earnings are negative. In addition, we also consider alternative measures of return (a 12-month window ending at the fiscal year-end) and earnings, including both the level and the change of earnings. These sensitivity checks do not affect the main results, and the variable for the change of earnings is generally not significant across all model specifications. Finally, prior studies (Subramanyam and Wild, 1996) indicate that the persistence and variability of earnings may explain earnings informativeness[13]. Earnings persistence, earnings variability and their respective interaction terms with earnings are then included in the returns-earnings model. The models are re-estimated using both the pooled sample and the panel sample. Neither interaction term is statistically significant in explaining returns, while the main results of governance on the return-earnings association remain unchanged.

Recent governance initiatives in Canada have underlined the need for more evidence on corporate governance and the quality of reporting.
The purpose of this study is to provide early evidence to assess the merit of calls for stringent governance regulations in Canada by examining the association between the quality of overall and specific governance features and the quality of accounting earnings. We use the absolute value of performance-matched abnormal accruals and the return-earnings association as proxies for the quality of earnings. Using recently published data on corporate governance for a sample of Canadian firms, we find that overall governance quality is inversely related to the level of abnormal accruals and positively associated with the return-earnings association, suggesting that good corporate governance mechanisms provide greater monitoring of the financial accounting process and ensure more informative accounting earnings. We also find that governance attributes do not affect the quality of earnings equally. Specifically, the magnitude of abnormal accruals is negatively associated with the level of board independence, the extent of alignment of management compensation with shareholder interests, and the strength of shareholder rights. The results from the returns and earnings analysis are consistent with these findings.

Overall, this study provides early evidence, consistent with Canadian regulators' initiatives, that stronger corporate governance mechanisms may be important factors in improving the integrity of financial reporting for Canadian firms. Since Canadian regulators adopted a set of corporate governance rules that are similar to some provisions of the Sarbanes-Oxley Act, the SEC rules and various stock exchange listing standards around the world, the evidence in the paper suggests that future policy initiatives in the USA and other countries should reinforce the need for independent boards, effective management compensation and stronger shareholder rights, which is likely to result in better earnings quality.

This study has several limitations.
First, like many empirical studies that rely on disclosed proxy data, the proxy disclosures may not represent all aspects of corporate governance practices. It is possible that some companies have strong practices in some areas, but received lower scores because the details are not disclosed in their proxies. Second, the sampling process may suffer from survivorship bias[14]. Third, the tests in this study are association tests, and thus do not directly distinguish whether the structural change in the association between the proxies for earnings quality and governance characteristics is due to the enhancement of governance or to the associated pressure for increased managerial accountability. Future research may need to adopt qualitative research approaches to provide corroborating evidence on the link between the quality of financial reporting and governance effectiveness.

Table I Descriptive statistics of variables in the abnormal accruals regression (Panel A: dependent variable and selected independent variables; Panel B: Pearson correlation matrix for dependent variable and selected independent variables)
Table II Regression results for the association between the absolute value of abnormal accruals and governance attributes
Table III Descriptive statistics of variables in the return-earnings regression (Panel A: descriptive statistics for dependent and independent variables; Panel B: Pearson correlation matrix for dependent and independent variables)
Table IV Regression results for the association between returns and earnings, earnings-governance interactions and other determinants of the return-earnings association
Table V Regression results for the association between abnormal accruals and governance attributes over time
Table VI Regression results for the association between returns and earnings, and earnings-governance interactions over time
[SECTION: Method] The considerable amount of regulatory attention given to corporate governance issues in recent years suggests that stronger governance mechanisms would reduce opportunistic management behavior, thus improving the quality and reliability of financial reporting. Regulators believe that this, in turn, will help to maintain and enhance investors' confidence in the integrity of capital markets. In contrast, some critics argue that the enhanced governance and litigation environment may change the balance of business and information risk for many firms, with the predictable and undesirable result that many firms will become more cautious and forgo promising opportunities. Thus, shareholder wealth may ultimately be reduced. Although studies in the literature have examined the association between the attributes of governance mechanisms and firm performance, as well as the information content of the financial reporting process, much less is known about the impact of the recent changes in corporate governance codes on earnings quality internationally (Beekes et al., 2004)[1]. The purpose of this study is to provide insight into the ongoing debate in the regulatory and academic communities on the effectiveness of the new governance regulations in Canada. Since the late 1990s, publicly traded firms in Canada have been subject to stricter corporate governance rules and guidelines. These changes in expectations regarding corporate governance were motivated, to a large extent, by some large corporate scandals in the USA and Canada. Because many Canadian companies also rely on the US capital market, the dramatic changes in US corporate governance regulations and practices (e.g. 
the Sarbanes-Oxley Act and the new SEC regulations) have also had a significant impact in Canada. For several reasons, Canada is a unique and interesting setting in which to assess the sensitivity of the relation between governance and the integrity of the financial reporting process to new governance initiatives. First, although Canadian securities laws are substantially similar to those in the USA, unlike the USA, Canada does not have a centralized securities commission. Securities regulation is enforced at the provincial and territorial level (Rosen, 1995). Therefore, any nationwide governance agreement must obtain strong support from large provinces, such as Ontario and Quebec. Second, Canada uses a flexible approach to matters of corporate governance, which is distinct from the mandatory approach adopted by the USA. Moreover, a much higher percentage of Canadian public companies have a controlling shareholder, as compared to US public companies (La Porta et al., 1999). These controlling shareholders have a natural incentive to be represented on the board of directors, which raises issues about the appropriate definition of independence. Finally, many Canadian public corporations are relatively small firms with a limited capacity to attract large numbers of completely independent directors; therefore, for these companies, complying with a strict set of corporate governance rules would be a significant financial and administrative burden. These institutional features raise questions about the effectiveness of governance control practice in improving the financial reporting process in Canada.
This paper examines the effect of governance on the quality of the financial reporting process by linking governance attributes to the quality of accounting earnings. The focus on earnings is appropriate since it is a summary performance measure that is frequently quoted, analyzed and discussed in the literature and in the financial community. 
In this paper, the quality of earnings is measured in two ways:
1. an accounting-based measure of earnings management (the magnitude of abnormal accruals); and
2. a market-based measure of earnings informativeness (the return-earnings association)[2].
Employing both measures can provide corroborating evidence, since enhanced regulations may give managers fewer incentives to manage earnings, and thus the magnitude of abnormal accruals may be lower; on the other hand, highly significant legislative changes to financial practice and corporate governance may encourage firms to undertake less optimal yet safer investment opportunities. Financial information may therefore become a less clear representation of the firm's economic resources and the changes in those resources. Accordingly, all else being equal, earnings informativeness may be reduced.
Using data on corporate governance practice for Canadian firms comprising the S&P/TSX composite index for 2002-2005, the paper finds that overall governance quality is negatively related to the level of abnormal accruals and positively related to the return-earnings association. This suggests that good corporate governance mechanisms provide greater monitoring of the financial accounting process and are associated with reported earnings that are more informative. The study also finds that governance attributes do not affect the quality of earnings equally. Specifically, the magnitude of abnormal accruals is negatively associated with the level of independence of board composition, the extent of alignment of management compensation with the interests of shareholders, and the strength of shareholder rights. The results from the returns and earnings analysis further support these inferences.
Studies in the literature typically focus on the effect of particular aspects of governance, such as board composition, shareholder activism, executive compensation or insider ownership, on firms' market value or performance (e.g. 
Morck et al., 1988; Warfield et al., 1995). For example, using a sample of Canadian companies during the years 1991-1997, Park and Shin (2004) examine the relationship between the proportion of outside board members and the level of accrual management, and find that only outside directors from financial intermediaries or institutional shareholders reduce earnings management. They conjecture, but do not test, that the insignificant results may be due to Canadian directors' lack of ownership interest in the firms they monitor and the presence of dominant shareholders. By examining the relation between corporate governance (including board composition, management shareholding, shareholders' rights and the extent of disclosure of governance practices) and the quality of earnings (measured by both accrual management and earnings informativeness), this study can contribute to a more comprehensive understanding of the significance of governance. The evidence suggests that enhanced governance initiatives are accompanied by an improved quality of earnings. The paper also differs from Park and Shin (2004) in another important aspect, since the current study investigates the years 2001-2004, a period during which significant governance initiatives were imposed after the accounting scandals; thus, the evidence can provide more relevant and useful insights for the current policy debate regarding governance effectiveness.
The results from this study support the notion that enhanced governance practices, especially independent boards and committees, effective management compensation, and powerful shareholders, are important in constraining management from managing earnings and in ensuring a higher quality of earnings. Given the increasing interest in corporate governance, the evidence provides additional support for continuing regulatory initiatives throughout much of the world concerning board independence and managerial ownership. 
It also calls for more actively involved shareholders to play a greater role in firms' accounting reporting processes.
The rest of the paper proceeds as follows. The next section describes recent corporate governance initiatives in Canada. Section 3 develops hypotheses, while Section 4 describes the research design and variable measurements. Sample selection and empirical results are presented in Section 5. Section 6 provides additional analyses and conducts sensitivity tests. Section 7 provides concluding remarks.
Canada has placed an emphasis on corporate governance for a number of years. The first significant governance initiative dates back to 1995, when the Toronto Stock Exchange (TSX) adopted 14 voluntary corporate governance best practices and required Canadian-incorporated listed companies to disclose their corporate governance practices annually and compare them to the 14 best practices (Labelle, 2003; Park and Shin, 2004)[3]. In late 2001, the Joint Committee on Corporate Governance, established by the TSX, the Canadian Venture Exchange and the Canadian Institute of Chartered Accountants, issued a report, which led to new TSX proposals. In response to the passing of the Sarbanes-Oxley Act in the USA, Canadian regulators adopted a set of corporate governance rules which are similar to some provisions of the Sarbanes-Oxley Act, the SEC rules and various US stock exchange listing standards. Canadian securities regulators believe it is essential that Canadian public firms be subject to corporate governance rules that are as strict as those in the USA, but tailored to Canadian markets. These rules can be classified into several categories. The first set of rules relates to CEO/CFO certifications of annual and quarterly reports. 
Canadian companies also have to adopt disclosure controls and procedures that provide reasonable assurance that material information required to be disclosed by the company is made known to the CEO/CFO and is disclosed within the periods required by Canadian securities laws. Like the amended SEC rule, issuers need to design internal controls that provide reasonable assurance that their financial statements are fairly presented in accordance with GAAP.
The second set of rules deals with audit committee independence, financial literacy and expertise. Major Canadian public companies must have fully independent and financially literate audit committees; thus, the education and experience of all committee members should be disclosed, so that investors can judge the committee's expertise. The third set of rules relates to the auditing process. To oversee the auditing profession, Canada established the Canadian Public Accountability Board (CPAB). Under the proposed auditing regulations, independent auditors are prohibited from performing various non-audit services for their audit clients.
In addition, Canadian provincial securities regulators have proposed regulating other aspects of governance that are enforced through stock exchange listing standards in the USA. For example, the Ontario Securities Commission (OSC), together with the majority of the other provincial and territorial securities commissions, proposed 18 recommended best practices and accompanying disclosure rules. These guidelines address such topics as the composition of a company's board and job descriptions for directors and officers. 
Although complying with these guidelines is again voluntary, companies that issue securities in Ontario are required to disclose whether or not they have adopted the guidelines and, if not, to explain why in the annual reports filed with the OSC.
The governance initiatives in Canada have underlined the need for more evidence on corporate governance and its impact on the quality of reporting. The governance data used in this paper are obtained from a survey on the governance practices of Canadian firms in the S&P/TSX index; the survey data are derived from each company's most recent proxy circulars. The survey has been conducted independently by the Report on Business in the Globe and Mail (a leading newspaper in Canada) on an annual basis since 2002, and the results have been published since then. The corporate governance scores are based on a set of practices identified by regulators and investor groups as critical to corporate governance effectiveness, and can be classified broadly into the following categories:
* board composition;
* shareholding and compensation of directors and management;
* shareholder rights; and
* disclosures of corporate governance practice.
Each category contains several criteria, with corresponding weights for each criterion. This measurement of governance is relevant for assessing the degree of independence, objectivity and attentiveness the board exercises in overseeing management performance, and the degree to which it holds management accountable to stakeholders for its actions[4]. Details about the criteria used by the survey are provided in the Appendix.
The importance of corporate governance has been a question of substantial interest to regulators, financial institutions, investors and the media. Governance problems arise from divergent incentives and asymmetric information between shareholders and managers. 
These conflicts of interest, coupled with the impossibility of writing explicit contracts on all future contingencies, lead to unresolved agency problems that affect firm valuation (Hart, 1995). Corporate governance mechanisms are intended to mitigate agency costs by increasing the monitoring of management's actions and limiting managers' opportunistic behavior (Ashbaugh et al., 2004). In this section, several hypotheses are developed that identify and link specific elements of governance to accounting earnings.
3.1 Board composition and the quality of earnings
One of the most important factors influencing the integrity of the financial accounting process involves the board of directors, whose responsibility is to provide independent oversight of management performance and to hold management accountable to shareholders for its actions (DeFond and Jiambalvo, 1994; Dichev and Skinner, 2002). Prior research examining the association between corporate governance mechanisms concerning the board of directors (e.g. board independence, board size, expertise of directors, and stock ownership of board members) and the extent of earnings manipulation finds inconclusive results. While the empirical results concerning board attributes are mixed due to different research designs and empirical settings, a general belief is that boards are more effective in their monitoring of management when there is a strong base of independent directors on the board (e.g. Beasley, 1996; Peasnell et al., 2000; Klein, 2002; Xie et al., 2003). For example, Beasley (1996) finds that the presence of outside directors reduced the probability of fraud in the presentation of financial statements during the period 1980-1991. Similarly, Klein (2002) provides evidence concerning board independence and earnings manipulation and finds that companies with independent boards are less likely to report abnormal accruals. Xie et al. 
(2003) find similar results with respect to the relationship between earnings management and the independence of boards, as well as the financial sophistication of board members.
On the other hand, there are some counter-arguments proposing that completely independent boards may not be effective in monitoring management, since management is more likely to cooperate with board members with whom they are better acquainted. Indeed, Agrawal and Knoeber (1996) find a significant negative relationship between outside membership on the board and firm performance, leading them to conclude that boards with too many outsiders lose the expertise associated with officers serving on the board.
The reliability of financial reporting is also due, in part, to the independence and integrity of the audit process. Audit committees are responsible for recommending the selection of external auditors to the board, ensuring the soundness and quality of internal accounting and control practices, and monitoring external auditor independence from management. Empirical evidence generally supports the positive effect of independent audit committees. For example, Carcello and Neal (2000) document a relation between greater audit committee independence and the quality of financial reporting. Similarly, Xie et al. (2003) report a negative association between earnings management and the independence of audit committees.
Finally, the presence of an independent nomination committee is also important for board effectiveness and monitoring ability, since it removes the manager's power to nominate new members to the board. 
Overall, to the extent that independent boards and committees are superior monitors of management, likely to limit managers' earnings management discretion and to reduce managerial incentives to adopt aggressive earnings management strategies in the financial reporting process, we expect that the quality of earnings increases with the independence and functionality of the board and its key committees. Hence, the first hypothesis is as follows (in alternate form):
H1. Firms with more independent boards and subcommittees have smaller abnormal accruals and more informative earnings.
3.2 Shareholding by managers or directors and the quality of earnings
Another element of governance that affects the incentives for directors to actively monitor management, and for managers to perform in the best interests of shareholders, is the compensation of directors and managers. There are two opposing views in the literature regarding the relationship between board or management shareholding and the quality of financial reporting. Morck et al. (1988) show that high stockholding may cause a moral hazard and an information-asymmetry problem between the insiders (management and directors) and outside investors. Under this managerial entrenchment hypothesis, managers may have more incentives to exercise discretion in accounting reporting, and monitoring and disciplining will be more difficult for directors with an equity stake in the firm. As a result, the quality of the financial reporting process may be compromised when stockholding by directors is high.
On the other hand, agency theory (Jensen and Meckling, 1976) predicts that managers with lower firm ownership have greater incentives to manipulate accounting numbers in order to relieve the constraints imposed by accounting-based compensation contracts. In addition, Jensen (1989) argues that outside directors with little equity stake in the firm cannot effectively monitor and discipline the managers. 
Indeed, many firms require their directors to increase their shareholding in the firm (Hambrick and Jackson, 2000). Consistent with this theory, Warfield et al. (1995) find a negative relation between managerial stockholdings and the absolute value of abnormal accruals. They interpret their results as consistent with the belief that managerial shareholdings act as a disciplining mechanism. Under this alignment-of-interest hypothesis, mandatory shareholding by the board and management can effectively motivate managers' performance and create incentives for independent directors to monitor management more closely, a scenario under which a positive association between mandatory shareholding and the quality of accounting earnings is expected. This discussion leads to the following hypothesis:
H2. Firms with a higher level of board (management) share ownership have smaller abnormal accruals and more informative earnings.
3.3 Shareholder rights and the quality of earnings
An important aspect of best practices in corporate governance deals with shareholder rights, which reflect shareholders' ability to exercise control over firm assets, remove ineffective or opportunistic management, monitor the conduct of the board of directors, or initiate ownership changes that increase firm valuation (Ashbaugh et al., 2004). One of the most effective means of controlling management's behavior is to grant shareholders the right to vote on major issues, such as electing directors and the chairperson, approving senior executive appointments, and important changes affecting the firm such as mergers or liquidation. Normally these rights are proportionate to the shareholder's equity ownership. 
However, these rights are often severely limited under a governance system that allows dual-class share structures, which are very common in Canada[5]. Recent research also indicates that the existence of stronger shareholders may improve internal control, and thus may be an effective monitoring device for improving financial reporting quality. To the extent that an appropriate power-sharing relationship between shareholders and managers reduces the moral hazard problems that lower overall firm value and allows shareholders to effectively monitor financial reporting practice, we predict a positive association between shareholder rights and the quality of earnings. Hence:
H3. Firms with stronger shareholder rights have smaller abnormal accruals and more informative earnings.
3.4 Disclosure of corporate governance practice and the quality of earnings
Prior research indicates that corporate disclosure reduces information asymmetry between investors and managers (e.g. Lang and Lundholm, 1996; Welker, 1995). For instance, Lang and Lundholm (1996) provide evidence that firms with more informative disclosure policies have a larger analyst following, more accurate analyst earnings forecasts, less dispersion among individual analyst forecasts, and less volatility in forecast revisions. Similarly, Welker (1995) finds that information asymmetry, measured as the bid-ask spread, is reduced and market liquidity increased as the level of disclosure increases. Prior research also demonstrates a relationship between information asymmetry and earnings quality (e.g. Dye, 1988; Trueman and Titman, 1988). 
For example, Dye (1988) and Trueman and Titman (1988) show analytically that the existence of information asymmetry between management and shareholders is a necessary condition for earnings management.
The above two lines of research suggest that enhanced corporate disclosures may benefit a firm in many ways; however, managers wishing to retain the flexibility to engage in earnings management may have incentives to limit disclosure. To the extent that disclosure of governance practice reduces information asymmetry and enables the board and investors to effectively monitor management decisions and performance, we predict that a better quality of disclosures on governance practice is associated with a higher quality of earnings. Hence:
H4. Firms with a higher quality of corporate governance disclosures have smaller abnormal accruals and more informative earnings.
4.1 Measuring abnormal accruals
Since managers may have incentives to manage earnings either upward or downward, we use the absolute value of abnormal accruals as a proxy for earnings quality (DeFond and Park, 1997; Bartov et al., 2000). To the extent that better monitoring of the financial reporting process leads to greater financial transparency, the firm is expected to exhibit a lesser degree of earnings management, and thus smaller abnormal accruals. 
Accordingly, a negative relationship between governance quality and the absolute value of abnormal accruals is predicted.
Abnormal accruals are calculated using the modified Jones model (Dechow et al., 1995):
TA_it = a0 + a1 (DREV_it - DREC_it) + a2 PPE_it + a3 BM_it + a4 OCF_it + e_it (1)
where TA is total accruals, defined as net income before extraordinary items (Compustat #123) minus cash flow from operations (Compustat #308), scaled by beginning-of-fiscal-year total assets (Compustat #6); DREV is the change in sales (Compustat #12) from year t-1 to year t, scaled by beginning-of-fiscal-year total assets; DREC is the change in accounts receivable (Compustat #302) from year t-1 to year t, scaled by beginning-of-fiscal-year total assets; PPE is gross property, plant and equipment (Compustat #7), scaled by beginning-of-fiscal-year total assets; BM is the ratio of book value (Compustat #60) to market value of common equity (Compustat #25 x #199) for the year; and OCF is current operating cash flows (Compustat #308), scaled by beginning-of-fiscal-year total assets.
The model assumes that normal accruals are positively related to the change in revenues, less the change in accounts receivable, and negatively related to the capital intensity of the firm. Following Larcker and Richardson (2004), the book-to-market ratio (BM) is used as a proxy for growth, and we expect it to be positively related to total accruals. We also include current operating cash flows (OCF) as an additional variable to control for extreme performance (Dechow et al., 1995), and expect OCF to be negatively associated with total accruals.
The model is estimated separately for each two-digit SIC group, using all Compustat Canadian firms with available data and at least eight firms in each group. To reduce the impact of influential observations, the independent variables in the model are all winsorized to be no greater than 1 in absolute value, and the book-to-market ratio is winsorized at the 2 percent extremes. 
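The industry-by-industry estimation just described can be sketched as follows. This is a minimal illustration, not the authors' code: the function names and the assumption that all inputs arrive as pre-deflated arrays are mine, and the winsorization of the book-to-market ratio at its 2 percent extremes is omitted for brevity.

```python
import numpy as np

def winsorize(x, limit=1.0):
    """Clip a regressor to [-limit, +limit], mirroring the paper's rule that
    independent variables be no greater than 1 in absolute value."""
    return np.clip(x, -limit, limit)

def discretionary_accruals(TA, DREV, DREC, PPE, BM, OCF):
    """Fit the modified Jones model (with BM and OCF added) by OLS for one
    two-digit SIC industry group and return the residuals, i.e. the
    discretionary accruals DA = TA - fitted TA. All inputs are 1-D arrays
    already deflated by beginning-of-year total assets."""
    X = np.column_stack([
        np.ones_like(TA),        # intercept
        winsorize(DREV - DREC),  # change in revenues net of receivables
        winsorize(PPE),          # capital intensity
        BM,                      # growth proxy (Larcker and Richardson, 2004)
        winsorize(OCF),          # control for extreme performance
    ])
    coefs, *_ = np.linalg.lstsq(X, TA, rcond=None)
    return TA - X @ coefs        # residuals = discretionary accruals
```

In the paper this fit is repeated for every two-digit SIC group with at least eight firms; the residuals then feed the performance-matching step.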
Discretionary accruals (DA) for each firm i in each industry are defined as the difference between total accruals (TA) and the fitted value from equation (1):
DA_it = TA_it - TA*_it (2)
where TA*_it denotes the fitted value from equation (1). Prior research documents that discretionary accrual estimates are correlated with firm performance (Kothari et al., 2005). To mitigate this misspecification problem, we control for firm performance by using the performance-matched discretionary accruals model. Specifically, following Kothari et al. (2005), each firm-year observation is matched with another firm from the same two-digit SIC code and year with the closest ROA in the current year. The performance-matched discretionary accrual (PMDA) is the discretionary accrual (DA) calculated from equation (2), minus the matched firm's DA for the year.
To test the hypotheses, the following pooled cross-sectional and time-series regression model is then estimated:
|PMDA_it| = a0 + a1 Board_it + a2 Comp_it + a3 Share_it + a4 Disc_it + a5 Audit_S_it + a6 SIZE_it + a7 LEV_it + e_it (3)
where Board is the governance quality score on board composition for firm i at year t; Comp is the governance quality score on shareholding and compensation of the board or management for firm i at year t; Share is the governance quality score on shareholder rights for firm i at year t; Disc is the governance quality score on corporate governance disclosures for firm i at year t; Audit_S is an indicator (1 if the auditor audits at least 20 percent of the industry revenue, and 0 otherwise); SIZE is the logarithm of total assets for firm i at year t; and LEV is the ratio of total liabilities to total assets for firm i at year t.
Each firm's score for an individual governance category is used as a proxy for the quality of that governance feature, and a higher score implies a lesser extent of earnings management. 
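The performance-matching step can be sketched as below. This is a hedged illustration using pandas; the column names `sic2`, `year`, `roa` and `da` are placeholders I chose for the paper's two-digit SIC code, fiscal year, return on assets and discretionary accruals, and each (industry, year) cell is assumed to contain at least two firms.

```python
import pandas as pd

def performance_matched_da(df):
    """Kothari et al. (2005)-style matching: within each (sic2, year) cell,
    subtract from each firm's discretionary accruals (da) the da of the
    other firm in the cell with the closest ROA in the current year."""
    pmda = pd.Series(index=df.index, dtype=float)
    for _, group in df.groupby(["sic2", "year"]):
        for i, row in group.iterrows():
            others = group.drop(index=i)                      # exclude the firm itself
            match_idx = (others["roa"] - row["roa"]).abs().idxmin()
            pmda.loc[i] = row["da"] - group.loc[match_idx, "da"]
    return pmda
```

The absolute value of the resulting PMDA series is then the dependent variable in equation (3).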
Thus, consistent with H1 to H4, we expect a1<0, a2<0, a3<0, and a4<0.
Given that corporate governance is not the sole factor affecting discretionary accruals, several control variables are introduced to isolate other contracting incentives that have been found to influence managers' accounting choices. For example, we control for auditor specialization (Audit_S), since prior research (Dunn and Mayhew, 2004; Myers et al., 2003) finds that industry-specialist audit firms assist clients in enhancing disclosures and in reducing earnings management. Firm size (SIZE) and financial leverage (LEV) are also controlled for, since Press and Weintrop (1990) indicate that these factors may affect managers' discretionary accounting choices. Although there is no prediction for SIZE, due to the ambiguity of the link between firm size and discretionary accruals, LEV is expected to have a positive coefficient[6].
4.2 Measuring return-earnings association
To provide corroborating evidence on the market's perception of the impact of governance on financial reports, the return-earnings association is used as an additional measure of earnings quality. To the extent that better governance systems can effectively align managers' interests with those of shareholders, and actively monitor and control firm management, the transparency and reliability of a firm's financial reporting process, and consequently the informativeness of earnings, would increase. Therefore, a positive relation between governance quality and the return-earnings association is expected. Following Warfield et al. 
(1995), we estimate the following pooled time-series and cross-sectional regression model:
R_it = a0 + a1 E_it + a2 (E_it x Board_it) + a3 (E_it x Comp_it) + a4 (E_it x Share_it) + a5 (E_it x Disc_it) + a6 SIZE_it + a7 LEVE_it + a8 GWTH_it + e_it (4)
where R_it is the stock return of firm i for the 12 months from nine months before to three months after the fiscal year-end, calculated as (P_it - P_it-1 + D_it)/P_it-1 (P_it is the stock price of firm i at time t, and D_it is the dividend of firm i at year t); E_it is earnings per share (before extraordinary items) of firm i for year t; SIZE is a proxy for size, measured as the logarithm of sales revenue; LEVE is a proxy for leverage, measured as total debt divided by total assets; and GWTH is a proxy for the growth prospects of the firm (Tobin's Q), defined as the market value of equity divided by the book value of equity.
To control for other factors which may affect the return-earnings association, we include a proxy for firm size (SIZE) and a proxy for financial leverage (LEVE), since highly levered firms are associated with higher risk, and hence their earnings-return relation is weakened (Watts and Zimmerman, 1990). Following prior studies (e.g. Collins and Kothari, 1989), we also include a proxy for growth (GWTH), which is expected to be positively associated with returns.
The coefficient a1 measures the traditional return-earnings relation. The coefficients a2 to a5 measure differential earnings informativeness according to the effectiveness of governance controls. Consistent with H1 to H4, which predict that earnings informativeness increases as a firm's quality of governance mechanisms increases, we expect a2>0, a3>0, a4>0, and a5>0.
5.1 Sample selection
The initial sample consists of firms listed on the S&P/TSX composite index as of 1 September 2002, 2003, 2004 and 2005. Governance scores for these companies, as published in the Globe and Mail survey, are obtained for the years 2002 to 2005. 
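Both regressions are pooled OLS models, and the paper reports White-adjusted t-statistics. A minimal sketch of such an estimator, with the design matrix for the return-earnings specification built from illustrative arrays, might look like this (the function names, variable names and argument ordering are my assumptions, not the authors' code):

```python
import numpy as np

def ols_white(y, X):
    """Pooled OLS with White (1980) heteroskedasticity-consistent (HC0)
    t-statistics. X must already contain a constant column."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * (resid ** 2)[:, None])   # sum_i e_i^2 * x_i x_i'
    cov = XtX_inv @ meat @ XtX_inv             # HC0 sandwich estimator
    t_stats = beta / np.sqrt(np.diag(cov))
    return beta, t_stats

def design_eq4(E, board, comp, share, disc, size, leve, gwth):
    """Design matrix for the return-earnings model: a constant, earnings,
    the four earnings-governance interactions, and the controls."""
    return np.column_stack([
        np.ones_like(E), E,
        E * board, E * comp, E * share, E * disc,
        size, leve, gwth,
    ])
```

Under H1 to H4, the coefficients on the four interaction columns (positions 2 to 5 here) are predicted to be positive.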
Of these initial 888 firm-year observations, firms that are missing financial variables or that have insufficient data to estimate performance-matched abnormal accruals are eliminated. Financial institutions and insurance companies (SIC 60-69) are also eliminated, since their special accounting methods make the estimation of discretionary accruals problematic.
Firm-level financial information is obtained from Compustat and is supplemented by firm annual reports. Since the governance surveys are based on an assessment of the most recent proxy statement, firm-level accounting data from the most recent fiscal year prior to the survey are used in the analysis. To be included in the sample, a firm must have stock return data in the Canadian Financial Markets Research Center database. The final sample consists of 519 firm-year observations for the accruals model and 528 firm-year observations for the return-earnings association analysis, covering the years 2001 to 2004.
5.2 Results from the accruals model
Panel A of Table I reports the descriptive statistics for selected variables in equation (3). The mean and median of the absolute value of performance-matched discretionary accruals, PMDA, are 9 percent and 7 percent of the prior year's total assets, respectively. The mean score for overall governance quality is 64.7 (out of 100), with a standard deviation of 14.4. There is also variation within each governance category across firms. For example, the quality of disclosures of governance practice has a mean of 9.6 with a standard deviation of 3.3. Other characteristics of the sample firms include an average leverage ratio (LEV) of 47 percent, a median auditor-client relationship of eight years, and the observation that more than half of the sample firms were audited by an industry-specialized auditor.
Correlations between the dependent and the explanatory variables are shown in Panel B of Table I. 
As predicted, firms that are ranked higher in terms of board independence, management compensation, shareholder rights and disclosures of governance practice have smaller abnormal accruals. There are also strong correlations among the attributes of corporate governance at the 1 percent level, indicating that firms that perform well in one governance category tend to perform well in the others. SIZE is positively associated with leverage (LEV), suggesting that larger firms carry higher leverage. Finally, larger firms are more likely to have an independent board and powerful shareholders, to make more governance practice disclosures and to have effective compensation policies for management and directors. Regression results based on equation (3) are reported in Table II, which shows White-adjusted t-statistics for all coefficients. The association between abnormal accruals and overall governance quality in columns 3 and 4 of Table II is significantly negative at the 1 percent level (with or without control variables), suggesting that overall governance quality is negatively associated with the magnitude of discretionary accruals. In addition, consistent with H1, the results in column 6 indicate a negative association between the proxy for board composition and abnormal accruals (-0.003, t=-3.474) after controlling for other factors, suggesting that as the independence of the board increases, the sample firms engage in less income-increasing or income-decreasing discretionary accruals (Klein, 2002). As predicted in H2, there is a significant negative association between the attribute for board or management shareholding and compensation and the measure of earnings quality (-0.005, t=-3.577).
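The White-adjusted t-statistics reported in Table II guard against heteroskedastic errors. For a single-regressor model the heteroskedasticity-robust (HC0) slope variance can be sketched in a few lines; this is a generic illustration of the technique, not the paper's actual estimation code:

```python
def ols_white_t(x, y):
    """OLS of y on x (with intercept); returns the slope and its
    White (HC0) heteroskedasticity-robust t-statistic."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    xd = [xi - mx for xi in x]                       # demeaned regressor
    sxx = sum(v * v for v in xd)
    beta = sum(xd[i] * (y[i] - my) for i in range(n)) / sxx
    alpha = my - beta * mx
    resid = [y[i] - alpha - beta * x[i] for i in range(n)]
    # HC0: Var(beta) = sum(xd_i^2 * e_i^2) / sxx^2, allowing each
    # observation its own error variance
    var_beta = sum(xd[i] ** 2 * resid[i] ** 2 for i in range(n)) / sxx ** 2
    return beta, beta / var_beta ** 0.5
```

Unlike the classical formula, the robust variance weights each squared residual by its own squared (demeaned) regressor value, so the t-statistic remains valid when error variance differs across firms.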
In addition, as predicted by H3, the negative association between the quality of shareholder rights and abnormal accruals is significant at the 1 percent level (-0.003, t=-3.366), suggesting that the incentives to manage earnings diminish as increased shareholder rights limit managers' accruals discretion. However, inconsistent with the prediction in H4, there is no significant association between the quality of governance disclosures and the magnitude of discretionary accruals. With regard to the control variables, the results show a significant negative coefficient on Audit_S, suggesting that firms audited by an industry-specialized auditor tend to have a smaller absolute value of abnormal accruals, a finding consistent with prior research. In summary, linking the quality of corporate governance and abnormal accruals, we find that firms that are ranked highly in terms of governance quality have smaller abnormal accruals. In addition, firms with powerful shareholders, more independent or functional boards and more effective compensation policies for management or directors are more likely to have smaller discretionary accruals, suggesting that these factors may be effective in monitoring managerial opportunism. Finally, we do not find that disclosure of governance practices is associated with the magnitude of earnings management.

5.3 Results from the return-earnings model Panel A of Table III reports the descriptive statistics of the dependent and independent variables in equation (4). Median raw returns for sample firms over the sample period are 0.11; firms, on average, have earnings per share deflated by the prior year-end price of 0.33. The average Tobin's Q for the sample firms is 2.53, with a median of 2.03. Consistent with prior studies, the correlation table in Panel B of Table III shows a very strong positive correlation between raw stock returns and the deflated earnings measure at the 1 percent level.
Consistent with predictions, there are also strong correlations between returns and the interactions of earnings with the attributes of corporate governance on a univariate basis, indicating that firms with strong corporate governance mechanisms in place have more informative earnings. Again, there are very strong positive correlations among the attributes of governance. Finally, larger firms are more leveraged and have lower Tobin's Q than smaller firms. Regression results for the relationship between returns and earnings and the earnings-governance interactions are reported in Table IV. Again, we see a strong positive association between returns and the earnings measure, which is consistent with prior studies. Column 4 also indicates a significant association between returns and the interaction of earnings with overall governance quality at the 10 percent level. To examine whether the market perceives the attributes of governance quality differently, we regress returns on earnings and the interaction of earnings with each individual governance attribute. As shown in column 5, there are positive and significant coefficients on the interactions between earnings and the proxies for quality of board composition, management compensation and shareholder rights at the 5 percent or better level. The results are insensitive to adding the control variables to the model, as shown in column 6. This provides support for H1-H3, suggesting that board independence, efficient management and director compensation, and effective monitoring by shareholders improve earnings informativeness.
Inconsistent with H4, however, a higher level of governance practice disclosures does not incrementally explain the return-earnings relation. Overall, linking the quality of corporate governance to the return-earnings association, an alternative measure of earnings quality, we find results that are largely consistent with those obtained using abnormal accruals as a proxy for earnings quality. Namely, firms with a more independent or functional board and stronger shareholder rights are more likely to have a stronger return-earnings association, suggesting that their earnings have a higher level of information content. In addition, earnings are more informative for firms with mandatory shareholding by the board or management.

5.4 Additional analysis To provide further evidence that governance matters for a firm's financial reporting quality, we examine whether firms with an improved governance structure over the sample period have a higher quality of reported earnings. Canadian companies made significant changes to governance practices during the sample period, pushed by stronger governance rules, higher stakes and shareholder pressure[7]. If the new governance initiatives and the ensuing governance debate have helped improve the effectiveness with which boards discharge their financial reporting responsibilities, one might expect the association between governance effectiveness and both the magnitude of earnings management and the return-earnings association to have become more pronounced over time. Furthermore, a change analysis can better address endogeneity concerns. We use a balanced sample design to assess whether the association between abnormal accruals and governance attributes differs over time. This design allows each sample firm to serve as its own control, thereby eliminating any differences that might result from temporal variation in sample composition.
We estimate the following accruals model using a sample of 93 panel firms for the years 2001 and 2004[8]: Equation 5 where Time is an indicator variable, coded 1 for the year 2004 (to represent the post-regulation period) and 0 for the year 2001 (to proxy for the pre-regulation period). To control for factors other than governance regulations, such as a stronger litigation environment and shareholder pressure for stronger practices, which may drive the improvement of governance, we also include Time as a separate independent variable in the equation. The primary variables of interest are a5 to a8. We predict that enhanced corporate governance is associated with a lower degree of earnings management; thus, we expect that a5<0, a6<0, a7<0 and a8<0. Unreported descriptive statistics of selected variables used in equation (5) suggest that, unsurprisingly, the total governance scores improved significantly from 2001 to 2004, with the mean score climbing from 61.5 out of 100 in 2001 to 74.9 in 2004. Governance along other dimensions improved significantly as well. For example, the mean scores for board composition and shareholder rights increased from 25.2 in 2001 to 31.2 in 2004, and from 17 in 2001 to 19.1 in 2004, respectively. Also, the magnitude of PMDA fell from a median of 0.09 in 2001 to 0.071 in 2004. The regression results for equation (5) are provided in Table V[9]. As shown in column 4 of Table V, there is a negative association between abnormal accruals and the interaction of board composition and the time dummy (-0.0004, t=-2.005) at the 5 percent level, suggesting that firms that increased their board independence experienced a decrease in abnormal accruals.
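The change analysis hinges on interacting each governance score with the post-period time dummy, so that the interaction coefficients capture the differential post-regulation effect. A sketch of how one design-matrix row for this kind of specification could be assembled (names and structure are illustrative, not the paper's code):

```python
def change_model_row(gov_scores, year, post_year=2004):
    """Build the time dummy and governance-by-time interaction terms used
    in a change analysis. `gov_scores` maps attribute name -> score."""
    time = 1 if year == post_year else 0      # 1 = post-regulation period
    row = {"time": time}
    for name, score in gov_scores.items():
        row[name] = score
        row[name + "_x_time"] = score * time  # differential post-period effect
    return row
```

In the pre-period every interaction term is zero, so the interaction coefficients measure only how the governance-accruals relation changed after the regulations took effect.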
Similarly, in columns 6 and 8 of Table V, the statistically significant negative associations between the accrual measure and the interactions of governance with the time dummy indicate that firms with improved shareholder rights incurred smaller abnormal accruals. However, changes in the shareholding of managers and directors and in disclosure of governance practice are not associated with any change in abnormal accruals. Next, we examine whether changes in governance are also associated with changes in the return-earnings association. If the effectiveness of governance in enhancing financial reporting quality outweighs the negative impact of stricter litigation regulations on investment behavior, then we expect the market's perception of earnings quality to have improved. We estimate the following return-earnings regression using the balanced sample: Equation 6 All the variables in equation (6) are as defined previously. If enhanced corporate governance is associated with a higher quality of reported earnings, and consequently a stronger return-earnings association, we expect that a6>0, a7>0, a8>0 and a9>0. Table VI reports the regression results. Inspection of the results in Table VI shows that firms with more independent boards and stronger shareholder rights have stronger return-earnings associations at the 5 percent or 10 percent level, after controlling for firm size, growth and the time period. The coefficient on the interaction between shareholding and the time dummy is positive, as expected, but not significant.
The corporate governance disclosure variable does not affect the informativeness of earnings differently across periods. Overall, the additional tests indicate that improvements in governance practice are generally associated with smaller discretionary accruals and more informative reported earnings; in addition, board independence and shareholder rights seem to be the most important factors driving the corresponding improvement in earnings quality as governance effectiveness increases. However, one caveat of these analyses is that, due to data limitations, we are unable to explicitly control for all other factors during the sample period that may contribute to the correlation between improved governance and the proxies for accounting earnings. Thus, the inferences must be interpreted with caution.

5.5 Sensitivity analyses The Globe and Mail surveys conducted in 2003-2005 are based on slightly modified but tougher marking standards than the methods used in 2002; thus, scores may not be strictly comparable over time. To mitigate this comparability concern, instead of using continuous variables, following Gompers et al. (2003), we classify the sample into three groups based on each category of governance: strong governance for a score above the 67th percentile, weak governance for a score below the 33rd percentile, and the remainder as neutral governance. The results are not sensitive to this alternative classification[10]. In addition to the determinants of variation in the accruals model considered in this study, managers may have incentives to manipulate earnings in order to avoid earnings losses and earnings declines (Burgstahler and Dichev, 1997). Accordingly, sample firms are partitioned according to whether their unmanaged earnings per share (before the performance-matched abnormal accruals) are negative or below last year's reported earnings per share[11].
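The strong/neutral/weak grouping used in the sensitivity analysis can be sketched as follows; the paper does not spell out its percentile convention, so the cut-off computation here is one plausible choice:

```python
def classify_governance(scores):
    """Label each score 'strong' (above the 67th percentile),
    'weak' (below the 33rd percentile) or 'neutral' otherwise."""
    ranked = sorted(scores)
    n = len(ranked)
    p33 = ranked[int(0.33 * (n - 1))]   # approximate 33rd-percentile cut-off
    p67 = ranked[int(0.67 * (n - 1))]   # approximate 67th-percentile cut-off
    return ["strong" if s > p67 else "weak" if s < p33 else "neutral"
            for s in scores]
```

Replacing continuous scores with ordered groups like this trades some statistical power for robustness to the cross-year changes in the Globe and Mail marking standards.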
We expect the incentives for income-increasing earnings management to be particularly strong when the unmanaged earnings numbers fall below target. The repeated analysis does not support this conjecture; the unreported results are qualitatively the same as those observed in Table II[12]. Several sensitivity tests are also conducted for the return-earnings analyses. First, we re-estimate the return-earnings model by partitioning the sample into positive and negative earnings. Hayn (1995) suggests that the earnings-response coefficient is low and unstable when earnings are negative. While we find that all the results are qualitatively the same when earnings are positive, the findings are inconsistent when earnings are negative. In addition, we consider alternative measures of return (a 12-month window ending at the fiscal year-end) and of earnings, including both the level and the change of earnings. These sensitivity checks do not affect the main results, and the variable for the change of earnings is generally not significant across all model specifications. Finally, prior studies (Subramanyam and Wild, 1996) indicate that the persistence and variability of earnings may explain earnings informativeness[13]. Earnings persistence, earnings variability and their respective interactions with earnings are therefore included in the return-earnings model. The models are re-estimated using both the pooled sample and the panel sample. Neither interactive term is statistically significant in explaining returns, while the main results on the effect of governance on the return-earnings association remain unchanged. Recent governance initiatives in Canada have underlined the need for more evidence on corporate governance and the quality of reporting.
The purpose of this study is to provide early evidence to assess the merit of calls for stringent governance regulations in Canada by examining the association between the quality of overall and specific governance features and the quality of accounting earnings. We use the absolute value of performance-matched abnormal accruals and the return-earnings association as proxies for the quality of earnings. Using recently published data on corporate governance for a sample of Canadian firms, we find that overall governance quality is inversely related to the level of abnormal accruals and positively associated with the return-earnings association, suggesting that good corporate governance mechanisms provide greater monitoring of the financial accounting process and ensure more informative accounting earnings. We also find that governance attributes do not affect the quality of earnings equally. Specifically, the magnitude of abnormal accruals is negatively associated with the level of independence of the board, the extent of alignment of management compensation with the interests of shareholders and the strength of shareholder rights. The results from the return-earnings analysis are consistent with these findings. Overall, this study provides early evidence, consistent with Canadian regulators' initiatives, that stronger corporate governance mechanisms may be important factors in improving the integrity of financial reporting for Canadian firms. Since Canadian regulators adopted a set of corporate governance rules similar to some provisions of the Sarbanes-Oxley Act, the SEC rules and various stock exchange listing standards around the world, the evidence in this paper suggests that future policy initiatives in the USA and other countries should reinforce the need for independent boards, effective management compensation and stronger shareholder rights, which is likely to result in better earnings quality. This study has several limitations.
First, like many empirical studies that rely on disclosed proxy data, proxy disclosures may not represent all aspects of corporate governance practices. It is possible that some companies have strong practices in some areas but received lower scores because the details are not disclosed in their proxies. Second, the sampling process may suffer from survivorship bias[14]. Third, the tests in this study are association tests, and thus do not directly distinguish whether the structural change in the association between the proxies for earnings quality and governance characteristics is due to the enhancement of governance or to the associated pressure for increased managerial accountability. Future research may need to adopt qualitative research approaches to provide corroborating evidence on the link between the quality of financial reporting and governance effectiveness.

Equations 1-6.

Table I Descriptive statistics of variables in abnormal accruals regression (Panel A: dependent variable and selected independent variables; Panel B: Pearson correlation matrix for dependent variable and selected independent variables)

Table II Regression results for the association between absolute value of abnormal accruals and governance attributes

Table III Descriptive statistics of variables in return-earnings regression (Panel A: descriptive statistics for dependent and independent variables; Panel B: Pearson correlation matrix for dependent and independent variables)

Table IV Regression results for the association between returns and earnings, earnings-governance interactions and other determinants of return-earnings association

Table V Regression results for the association between abnormal accruals and governance attributes over time

Table VI Regression results for the association between returns and earnings, and earnings-governance interactions over time
- Quality of earnings is measured in two ways: the accounting-based measure of earnings management and the market-based measure of earnings informativeness. Using firm-level corporate governance data for a sample of Canadian firms in the years 2001-2004, regression analysis explores the relation between corporate governance (including board composition, management shareholding, shareholders' rights and the extent of disclosure of governance practices), and the quality of earnings.
[SECTION: Findings] The considerable amount of regulatory attention given to corporate governance issues in recent years suggests that stronger governance mechanisms would reduce opportunistic management behavior, thus improving the quality and reliability of financial reporting. Regulators believe that this, in turn, will help maintain and enhance investors' confidence in the integrity of capital markets. In contrast, some critics argue that the enhanced governance and litigation environment may change the balance of business and information risk for many firms, with the predictable and undesirable result that many firms will become more cautious and forgo promising opportunities. Thus, shareholder wealth may ultimately be reduced. Although studies in the literature have examined the association between the attributes of governance mechanisms and firm performance, as well as the information content of the financial reporting process, much less is known about the impact of the recent changes in corporate governance codes on earnings quality internationally (Beekes et al., 2004)[1]. The purpose of this study is to provide insight into the ongoing debate in the regulatory and academic communities on the effectiveness of the new governance regulations in Canada. Since the late 1990s, publicly traded firms in Canada have been subject to stricter corporate governance rules and guidelines. These changes in expectations regarding corporate governance were motivated, to a large extent, by some large corporate scandals in the USA and Canada. Because many Canadian companies also rely on the US capital market, the dramatic changes in US corporate governance regulations and practices (e.g.
the Sarbanes-Oxley Act and the new SEC regulations) have also had a significant impact in Canada. For several reasons, Canada is a unique and interesting setting in which to assess the sensitivity of the relation between governance and the integrity of the financial reporting process to new governance initiatives. First, although Canadian securities laws are substantially similar to those in the USA, unlike the USA, Canada does not have a centralized securities commission; securities regulation is enforced at the provincial and territorial level (Rosen, 1995). Therefore, any nationwide governance agreement must obtain strong support from large provinces, such as Ontario and Quebec. Second, Canada uses a flexible approach to matters of corporate governance, which is distinct from the mandatory approach adopted by the USA. Moreover, a much higher percentage of Canadian public companies have a controlling shareholder compared to US public companies (La Porta et al., 1999). These controlling shareholders have a natural incentive to be represented on the board of directors, which raises issues about the appropriate definition of independence. Finally, many Canadian public corporations are relatively small firms with a limited capacity to attract large numbers of completely independent directors; for these companies, complying with a strict set of corporate governance rules would impose a significant financial and administrative burden. These institutional features raise questions about the effectiveness of governance control practices in improving the financial reporting process in Canada. This paper examines the effect of governance on the quality of the financial reporting process by linking governance attributes to the quality of accounting earnings. The focus on earnings is appropriate since it is a summary performance measure that is frequently quoted, analyzed and discussed in the literature and in the financial community.
In this paper, the quality of earnings is measured in two ways:

1. an accounting-based measure of earnings management (the magnitude of abnormal accruals); and

2. a market-based measure of earnings informativeness (the return-earnings association)[2].

Employing both measures can provide corroborating evidence, since enhanced regulations may give managers fewer incentives to manage earnings, so that the magnitude of abnormal accruals is lower; on the other hand, highly significant legislative changes to financial practice and corporate governance may encourage firms to undertake safer but less optimal investment opportunities. Financial information may therefore become a less clear representation of the firm's economic resources and of changes in those resources; accordingly, all else being equal, earnings informativeness may be reduced. Using data on corporate governance practice for the Canadian firms comprising the S&P/TSX composite index for 2002-2005, the paper finds that overall governance quality is negatively related to the level of abnormal accruals and positively influences the return-earnings association. This suggests that good corporate governance mechanisms provide greater monitoring of the financial accounting process and are associated with reported earnings that are more informative. The study also finds that governance attributes do not affect the quality of earnings equally. Specifically, the magnitude of abnormal accruals is negatively associated with the level of independence of the board, the extent of alignment of management compensation with the interests of shareholders and the strength of shareholder rights. The results from the return-earnings analysis further support these inferences. Studies in the literature typically focus on the effect of particular aspects of governance, such as board composition, shareholder activism, executive compensation or insider ownership, on firms' market value or performance (e.g.
Morck et al., 1988; Warfield et al., 1995). For example, using a sample of Canadian companies during the years 1991-1997, Park and Shin (2004) examine the relationship between the proportion of outside board members and the level of accrual management, and find that only outside directors from financial intermediaries or institutional shareholders reduce earnings management. They conjecture, but do not test, that the insignificant results may be due to Canadian directors' lack of ownership interest in the firms they monitor and the presence of dominant shareholders. By examining the relation between corporate governance (including board composition, management shareholding, shareholders' rights and the extent of disclosure of governance practices) and the quality of earnings (measured by both accrual management and earnings informativeness), this study contributes to a more comprehensive understanding of the significance of governance. The evidence suggests that enhanced governance initiatives are accompanied by an improved quality of earnings. The paper also differs from Park and Shin (2004) in another important respect: the current study investigates the years 2001-2004, a period in which significant governance initiatives were imposed after the accounting scandals; thus, the evidence can provide more relevant and useful insights for the current policy debate regarding governance effectiveness. The results from this study support the notion that enhanced governance practices, especially independent boards and committees, effective management compensation and powerful shareholders, are important in constraining management from managing earnings and in ensuring a higher quality of earnings. Given the increasing interest in corporate governance, the evidence provides additional support for continuing regulatory initiatives on corporate governance throughout much of the world concerning board independence and managerial ownership.
It also calls for more actively involved shareholders to play a greater role in firms' accounting reporting processes. The rest of the paper proceeds as follows. The next section describes recent corporate governance initiatives in Canada. Section 3 develops hypotheses, while Section 4 describes the research design and variable measurements. Sample selection and empirical results are presented in Section 5. Section 6 provides additional analyses and sensitivity tests. Section 7 provides concluding remarks. Canada has placed an emphasis on corporate governance for a number of years. Significant governance initiatives date back to 1995, when the Toronto Stock Exchange (TSX) adopted 14 voluntary corporate governance best practices and required Canadian-incorporated listed companies to disclose their corporate governance practices annually and compare them to the 14 best practices (Labelle, 2003; Park and Shin, 2004)[3]. In late 2001, the Joint Committee on Corporate Governance, established by the TSX, the Canadian Venture Exchange and the Canadian Institute of Chartered Accountants, issued a report which led to new TSX proposals. In response to the passing of the Sarbanes-Oxley Act in the USA, Canadian regulators adopted a set of corporate governance rules similar to some provisions of the Sarbanes-Oxley Act, the SEC rules and various US stock exchange listing standards. Canadian securities regulators believe it is essential that Canadian public firms be subject to corporate governance rules that are as strict as those in the USA, but tailored to Canadian markets. These rules can be classified into several categories. The first set of rules relates to CEO/CFO certifications of annual and quarterly reports.
Canadian companies also have to adopt disclosure controls and procedures that provide reasonable assurance that material information required to be disclosed by the company is made known to the CEO/CFO and is disclosed within the periods required by Canadian securities laws. Like the amended SEC rule, issuers need to design internal controls that provide reasonable assurance that their financial statements are fairly presented in accordance with GAAP. The second set of rules deals with audit committee independence, financial literacy and expertise. Major Canadian public companies must have fully independent and financially literate audit committees; accordingly, the education and experience of all committee members must be disclosed so that investors can judge the committee's expertise. The third set of rules relates to the auditing process. To oversee the auditing profession, Canada established the Canadian Public Accountability Board (CPAB). Under the proposed auditing regulations, independent auditors are prohibited from performing various non-audit services for their audit clients. In addition, Canadian provincial securities regulators have proposed regulating other aspects of governance that are enforced through stock exchange listing standards in the USA. For example, the Ontario Securities Commission (OSC), together with the majority of the other provincial and territorial securities commissions, proposed 18 recommended best practices and accompanying disclosure rules. These guidelines address such topics as the composition of a company's board and job descriptions for directors and officers.
Although complying with these guidelines is again voluntary, companies that issue securities in Ontario are required to disclose whether or not they have adopted the guidelines and, if not, to explain why in the annual reports filed with the OSC. The governance initiatives in Canada have underlined the need for more evidence on corporate governance and its impact on the quality of reporting. The governance data used in this paper are obtained from a survey on the governance practices of Canadian firms in the S&P/TSX index; the survey data are derived from each company's most recent proxy circulars. The survey has been conducted independently by the Report on Business in the Globe and Mail (a leading newspaper in Canada) on an annual basis since 2002, and the results have been published since then. The corporate governance scores are based on a set of practices, identified by regulators and investor groups, that is considered critical to corporate governance effectiveness, and can be classified broadly into the following categories:

* board composition;

* shareholding and compensation of directors and management;

* shareholder rights; and

* disclosures of corporate governance practice.

Each category contains several criteria, with corresponding weights for each criterion. This measurement of governance is relevant for assessing the degree of independence, objectivity and attentiveness the board exercises in overseeing management performance, and the degree to which it holds management accountable to stakeholders for its actions[4]. Details about the criteria used by the survey are provided in the Appendix. The importance of corporate governance has been a question of substantial interest to regulators, financial institutions, investors and the media. Governance problems arise from divergent incentives and asymmetric information between shareholders and managers.
These conflicts of interest, coupled with the impossibility of writing explicit contracts covering all future contingencies, lead to unresolved agency problems that affect firm valuation (Hart, 1995). Corporate governance mechanisms are intended to mitigate agency costs by increasing the monitoring of management's actions and limiting managers' opportunistic behavior (Ashbaugh et al., 2004). In this section, several hypotheses are developed that identify and link specific elements of governance to accounting earnings.

3.1 Board composition and the quality of earnings One of the most important factors influencing the integrity of the financial accounting process is the board of directors, whose responsibility is to provide independent oversight of management performance and to hold management accountable to shareholders for its actions (DeFond and Jiambalvo, 1994; Dichev and Skinner, 2002). Prior research examining the association between corporate governance mechanisms concerning the board of directors (e.g. board independence, board size, expertise of directors or board members, and stock ownership of board members) and the extent of earnings manipulation finds inconclusive results. While the empirical results concerning board attributes are mixed due to different research designs and empirical settings, a general belief is that boards are more effective in monitoring management when there is a strong base of independent directors on the board (e.g. Beasley, 1996; Peasnell et al., 2000; Klein, 2002; Xie et al., 2003). For example, Beasley (1996) finds that the presence of outside directors reduced the probability of fraud in the presentation of financial statements during the period 1980-1991. Similarly, Klein (2002) provides evidence concerning board independence and earnings manipulation and finds that companies with independent boards are less likely to report abnormal accruals. Xie et al.
(2003) find similar results with respect to the relationship between earnings management and the independence of boards, as well as the financial sophistication of board members.

On the other hand, there are counter-arguments proposing that completely independent boards may not be effective in monitoring management, since management is more likely to cooperate with board members with whom it is better acquainted. Indeed, Agrawal and Knoeber (1996) find a significant negative relationship between outside membership on the board and firm performance, leading them to conclude that boards with too many outsiders lose the expertise associated with officers serving on the board.

The reliability of financial reporting is also due, in part, to the independence and integrity of the audit process. Audit committees are responsible for recommending the selection of external auditors to the board, ensuring the soundness and quality of internal accounting and control practices, and monitoring the external auditor's independence from management. Empirical evidence generally supports the positive effect of independent audit committees. For example, Carcello and Neal (2000) document a relation between greater audit committee independence and the quality of financial reporting. Similarly, Xie et al. (2003) report a negative association between earnings management and the independence of audit committees.

Finally, the presence of an independent nomination committee is also important for board effectiveness and monitoring ability, since it removes the manager's power to nominate new members to the board.
Overall, to the extent that independent boards and committees are superior monitors of management, limiting managers' earnings management discretion and reducing managerial incentives to adopt aggressive earnings management strategies in the financial reporting process, we expect the quality of earnings to increase with the independence and functionality of the board and its key committees. Hence, the first hypothesis is as follows (in alternate form):

H1. Firms with more independent boards and subcommittees have smaller abnormal accruals and more informative earnings.

3.2 Shareholding by managers or directors and the quality of earnings

Another element of governance that affects the incentives for directors to actively monitor management, and for managers to perform in the best interests of shareholders, is the compensation of directors and managers. There are two opposing views in the literature regarding the relationship between board or management shareholding and the quality of financial reporting. Morck et al. (1988) show that high stockholding may cause a moral hazard and an information-asymmetry problem between insiders (management and directors) and outside investors. Under this managerial entrenchment hypothesis, managers may have more incentives to exercise discretion in accounting reporting, and monitoring and disciplining will be more difficult for directors with an equity stake in the firm. As a result, the quality of the financial reporting process may be compromised when stockholding by directors is high.

On the other hand, agency theory (Jensen and Meckling, 1976) predicts that managers with lower firm ownership have greater incentives to manipulate accounting numbers in order to relieve the constraints imposed by accounting-based compensation contracts. In addition, Jensen (1989) argues that outside directors with little equity stake in the firm cannot effectively monitor and discipline the managers.
Indeed, many firms require their directors to increase their shareholdings in the firm (Hambrick and Jackson, 2000). Consistent with this theory, Warfield et al. (1995) find a negative relation between managerial stockholdings and the absolute value of abnormal accruals. They interpret their results as consistent with the belief that managerial shareholdings act as a disciplining mechanism. Under this alignment-of-interests hypothesis, mandatory shareholding by the board and management can effectively motivate managers' performance and create incentives for independent directors to monitor management more closely, a scenario under which a positive association between mandatory shareholding and the quality of accounting earnings is expected. This discussion leads to the following hypothesis:

H2. Firms with a higher level of board (management) share ownership have smaller abnormal accruals and more informative earnings.

3.3 Shareholder rights and the quality of earnings

An important aspect of best practices in corporate governance deals with shareholder rights, which reflect shareholders' ability to exercise control over firm assets, remove ineffective or opportunistic management, monitor the conduct of the board of directors, or initiate ownership changes that increase firm valuation (Ashbaugh et al., 2004). One of the most effective means of controlling management's behavior is to grant shareholders the right to vote on major issues, such as electing directors and the chairperson, approving senior executive appointments, and important changes affecting the firm such as mergers or liquidation. Normally these rights are proportionate to the shareholder's equity ownership.
However, these rights are often severely limited under a governance system that allows dual-class share structures, which are very common in Canada[5].

Recent research also indicates that the existence of stronger shareholders may improve internal control, and thus may be an effective monitoring device for improving financial reporting quality. To the extent that an appropriate power-sharing relationship between shareholders and managers reduces the moral hazard problems that lower overall firm value, and allows shareholders to effectively monitor financial reporting practice, we predict a positive association between shareholder rights and the quality of earnings. Hence:

H3. Firms with stronger shareholder rights have smaller abnormal accruals and more informative earnings.

3.4 Disclosure of corporate governance practice and the quality of earnings

Prior research indicates that corporate disclosure reduces information asymmetry between investors and managers (e.g. Lang and Lundholm, 1996; Welker, 1995). For instance, Lang and Lundholm (1996) provide evidence that firms with more informative disclosure policies have a larger analyst following, more accurate analyst earnings forecasts, less dispersion among individual analyst forecasts, and less volatility in forecast revisions. Similarly, Welker (1995) finds that information asymmetry, measured as the bid-ask spread, is reduced and market liquidity increased as the level of disclosure increases. Prior research also demonstrates a relationship between information asymmetry and earnings quality (e.g. Dye, 1988; Trueman and Titman, 1988).
For example, Dye (1988) and Trueman and Titman (1988) show analytically that the existence of information asymmetry between management and shareholders is a necessary condition for earnings management.

The above two lines of research suggest that enhanced corporate disclosures may benefit a firm in many ways; however, managers wishing to retain the flexibility to engage in earnings management may have incentives to limit disclosure. To the extent that disclosure of governance practice reduces information asymmetry and enables the board and investors to effectively monitor management decisions and performance, we predict that a higher quality of disclosures on governance practice is associated with a higher quality of earnings. Hence:

H4. Firms with higher quality corporate governance disclosures have smaller abnormal accruals and more informative earnings.

4.1 Measuring abnormal accruals

Since managers may have incentives to manage earnings either upward or downward, we use the absolute value of the abnormal accruals as a proxy for earnings quality (DeFond and Park, 1997; Bartov et al., 2000). To the extent that better monitoring of the financial reporting process leads to greater financial transparency, the firm is expected to have a lesser degree of earnings management, and thus smaller abnormal accruals.
Accordingly, a negative relationship between governance quality and the absolute value of abnormal accruals is predicted.

Abnormal accruals are calculated using the modified Jones model (Dechow et al., 1995):

TA_it = a0 + a1(DREV_it - DREC_it) + a2 PPE_it + a3 BM_it + a4 OCF_it + e_it   (1)

where TA is total accruals, defined as net income before extraordinary items (Compustat #123) minus cash flow from operations (Compustat #308), scaled by beginning-of-fiscal-year total assets (Compustat #6); DREV is the change in sales (Compustat #12) from year t-1 to year t, scaled by beginning-of-fiscal-year total assets; DREC is the change in accounts receivable (Compustat #302) from year t-1 to year t, scaled by beginning-of-fiscal-year total assets; PPE is gross property, plant and equipment (Compustat #7), scaled by beginning-of-fiscal-year total assets; BM is the book value (Compustat #60) to market value of common equity (Compustat #25 x #199) for the year; and OCF is current operating cash flows (Compustat #308), scaled by beginning-of-fiscal-year total assets.

The model assumes that normal accruals are positively related to the change in revenues, less the change in accounts receivable, and negatively related to the capital intensity of the firm. Following Larcker and Richardson (2004), the book-to-market ratio (BM) is used as a proxy for growth, and we expect it to be positively related to total accruals. We also include current operating cash flows (OCF) as an additional variable to control for extreme levels of performance (Dechow et al., 1995), and expect OCF to be negatively associated with total accruals.

The model is estimated separately for each two-digit SIC group, using all Compustat Canadian firms with available data and requiring at least eight firms in each group. To reduce the impact of influential observations, the independent variables in the model are all winsorized to be no greater than 1 in absolute value, and the book-to-market ratio is winsorized at the extreme 2 percent.
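The industry-by-industry estimation and winsorization just described can be sketched as follows. This is a simplified illustration with synthetic data, not the authors' code: the function names and the use of NumPy's least-squares routine are assumptions standing in for whatever estimation software was actually used.

```python
import numpy as np

def winsorize_abs(x, cap=1.0):
    """Clip a regressor to be no greater than `cap` in absolute value."""
    return np.clip(x, -cap, cap)

def jones_abnormal_accruals(ta, drev, drec, ppe, bm, ocf):
    """Estimate the modified Jones regression by OLS within one industry
    group and return the residuals (the discretionary accruals).
    All inputs are arrays already scaled by lagged total assets."""
    X = np.column_stack([
        np.ones_like(ta),
        winsorize_abs(drev - drec),   # revenue change net of receivables
        winsorize_abs(ppe),           # capital intensity
        bm,                           # growth proxy (Larcker-Richardson)
        winsorize_abs(ocf),           # extreme-performance control
    ])
    coef, *_ = np.linalg.lstsq(X, ta, rcond=None)
    return ta - X @ coef              # DA = TA minus fitted TA

# Synthetic industry group (the paper requires at least eight firms
# per two-digit SIC group; here we use 40 made-up observations).
rng = np.random.default_rng(0)
n = 40
drev, drec, ppe, bm, ocf = (rng.normal(size=n) for _ in range(5))
ta = 0.1 * (drev - drec) - 0.05 * ppe + rng.normal(scale=0.01, size=n)
da = jones_abnormal_accruals(ta, drev, drec, ppe, bm, ocf)
print(round(float(np.abs(da).mean()), 4))
```

Because the regression includes an intercept, the residuals (discretionary accruals) average to zero within each industry group by construction; it is their absolute magnitude per firm that carries the information.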
Discretionary accruals (DA) for each firm i in each industry are defined as the difference between total accruals (TA) and the fitted value from equation (1):

DA_it = TA_it - TAhat_it   (2)

where TAhat_it is the fitted value of TA_it from equation (1). Prior research documents that discretionary accrual estimates are correlated with firm performance (Kothari et al., 2005). To mitigate this misspecification problem, we control for firm performance by using the performance-matched discretionary accruals model. Specifically, following Kothari et al. (2005), each firm-year observation is matched with another firm from the same two-digit SIC code and year with the closest ROA in the current year. The performance-matched discretionary accrual (PMDA) is the discretionary accrual (DA) calculated from equation (2), minus the matched firm's DA for the year.

To test the hypotheses, the following pooled cross-sectional and time-series regression model is then estimated:

|PMDA_it| = a0 + a1 Board_it + a2 Comp_it + a3 Share_it + a4 Disc_it + a5 Audit_S_it + a6 SIZE_it + a7 LEV_it + e_it   (3)

where Board is the governance quality score on board composition for firm i at year t; Comp is the governance quality score on shareholding and compensation of board or management for firm i at year t; Share is the governance quality score on shareholder rights for firm i at year t; Disc is the governance quality score on corporate governance disclosures for firm i at year t; Audit_S is an indicator (1 if the auditor audits at least 20 percent of the industry's revenue, and 0 otherwise); SIZE is the logarithm of total assets for firm i at year t; and LEV is total liabilities to total assets for firm i at year t.

Each firm's score for an individual governance category is used as a proxy for the quality of that governance feature, with a higher score implying a lesser extent of earnings management.
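The performance-matching step can be sketched in a few lines of plain Python. The records below are invented for illustration; in the paper the match is drawn from Compustat firms in the same two-digit SIC code and year.

```python
# Sketch of the Kothari et al. (2005) performance-matching step: each
# firm-year is paired with the same-industry, same-year firm whose
# current ROA is closest, and that peer's discretionary accruals are
# subtracted. The sample records are hypothetical.

def performance_matched_da(records):
    """records: list of dicts with keys firm, sic2, year, roa, da.
    Returns {firm: PMDA} where PMDA = own DA minus matched peer's DA."""
    pmda = {}
    for obs in records:
        peers = [r for r in records
                 if r["sic2"] == obs["sic2"] and r["year"] == obs["year"]
                 and r["firm"] != obs["firm"]]
        if not peers:
            continue  # unmatched observations drop out of the sample
        match = min(peers, key=lambda r: abs(r["roa"] - obs["roa"]))
        pmda[obs["firm"]] = obs["da"] - match["da"]
    return pmda

sample = [
    {"firm": "A", "sic2": 28, "year": 2004, "roa": 0.10, "da": 0.05},
    {"firm": "B", "sic2": 28, "year": 2004, "roa": 0.09, "da": 0.02},
    {"firm": "C", "sic2": 28, "year": 2004, "roa": 0.30, "da": -0.01},
]
print(performance_matched_da(sample))
```

Here firm A is matched to B (closest ROA), so its PMDA is 0.05 - 0.02, and it is the absolute value of this difference that enters equation (3).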
Thus, consistent with H1 to H4, we expect a1<0, a2<0, a3<0, and a4<0.

Given that corporate governance is not the sole factor affecting discretionary accruals, several control variables are introduced to isolate other contracting incentives that have been found to influence managers' accounting choices. For example, we control for auditor specialization (Audit_S), since prior research (Dunn and Mayhew, 2004; Myers et al., 2003) finds that industry-specialist audit firms assist clients in enhancing disclosures and in reducing earnings management. Firm size (SIZE) and financial leverage (LEV) are also controlled for, since Press and Weintrop (1990) indicate that these factors may affect managers' discretionary accounting choices. Although there is no prediction for SIZE, due to the ambiguity of the link between firm size and discretionary accruals, LEV is expected to have a positive coefficient[6].

4.2 Measuring the return-earnings association

To provide corroborating evidence on the market's perception of the impact of governance on financial reports, the return-earnings association is used as an additional measure of earnings quality. To the extent that better governance systems can effectively align managers' interests with those of shareholders, and actively monitor and control firm management, the transparency and reliability of a firm's financial reporting process, and consequently the informativeness of earnings, would increase. Therefore, a positive relation between governance quality and the return-earnings association is expected. Following Warfield et al.
(1995), we estimate the following pooled time-series and cross-sectional regression model:

R_it = a0 + a1 E_it + a2 (E_it x Board_it) + a3 (E_it x Comp_it) + a4 (E_it x Share_it) + a5 (E_it x Disc_it) + a6 SIZE_it + a7 LEVE_it + a8 GWTH_it + e_it   (4)

where R_it is the stock return of firm i for the 12 months from nine months before to three months after the fiscal year-end, calculated as (P_it - P_it-1 + D_it)/P_it-1, where P_it is the stock price of firm i at time t and D_it is the dividend of firm i in year t; E_it is earnings per share (before extraordinary items) of firm i for year t; SIZE is a proxy for size, measured as the logarithm of sales revenue; LEVE is a proxy for leverage, measured as total debt divided by total assets; and GWTH is a proxy for the growth prospects of the firm (Tobin's Q), defined as the market value of equity divided by the book value of equity.

To control for other factors that may affect return-earnings associations, we include a proxy for firm size (SIZE) and a proxy for financial leverage (LEVE), since highly levered firms are associated with higher risk, and hence their earnings-return relation is weakened (Watts and Zimmerman, 1990). Following prior studies (e.g. Collins and Kothari, 1989), we also include a proxy for growth (GWTH), which is expected to be positively associated with returns.

The coefficient a1 measures the traditional return-earnings relation. The coefficients a2 to a5 measure differential earnings informativeness according to the effectiveness of governance controls. Consistent with H1 to H4, which predict that earnings informativeness increases as the quality of a firm's governance mechanisms increases, we expect a2>0, a3>0, a4>0, and a5>0.

5.1 Sample selection

The initial sample consisted of firms listed on the S&P/TSX composite index as of 1 September 2002, 2003, 2004 and 2005. Governance scores for these companies, as published in the Globe and Mail survey, are obtained for the years 2002 to 2005.
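The return and growth variables in the return-earnings model are simple ratios; a minimal sketch, with made-up prices, is:

```python
def annual_return(p_prev, p_curr, dividend):
    """Raw return (P_t - P_{t-1} + D_t) / P_{t-1}; in the paper the
    12-month window runs from nine months before to three months after
    the fiscal year-end."""
    return (p_curr - p_prev + dividend) / p_prev

def tobins_q(market_equity, book_equity):
    """Growth proxy GWTH: market value of equity over book value of equity."""
    return market_equity / book_equity

# Hypothetical firm: price moves from 20.00 to 21.00 and pays a 1.00 dividend.
print(annual_return(20.0, 21.0, 1.0))  # 0.1
print(tobins_q(500.0, 250.0))          # 2.0
```

The deflation of returns by the beginning price, rather than by assets, is what makes them directly comparable to the price-deflated earnings-per-share measure on the right-hand side.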
Of these initial 888 firm-year observations, firms that are either missing financial variables or that have insufficient data to estimate performance-matched abnormal accruals are eliminated. Financial institutions and insurance companies (SIC 60-69) are also eliminated, since their special accounting methods make the estimation of discretionary accruals problematic.

Firm-level financial information is obtained from Compustat and is supplemented by firm annual reports. Since the governance surveys are based on an assessment of the most recent proxy statement, firm-level accounting data from the most recent fiscal year prior to the survey are used in the analysis. To be included in the sample, a firm must have stock return data in the Canadian Financial Markets Research Center database. The final sample consists of 519 firm-year observations for the accruals model and 528 firm-year observations for the return-earnings association analysis, covering the years 2001 to 2004.

5.2 Results from the accruals model

Panel A of Table I reports the descriptive statistics for selected variables in equation (3). The mean and median of the absolute value of performance-matched discretionary accruals, PMDA, are 9 percent and 7 percent of the prior year's total assets, respectively. The mean score for overall governance quality is 64.7 (out of 100), with a standard deviation of 14.4. There is also variation within each governance category across firms. For example, the score for disclosure of governance practice has a mean of 9.6 with a standard deviation of 3.3. Other characteristics of the sample firms include an average leverage ratio (LEV) of 47 percent, a median auditor-client relationship of eight years, and the observation that more than half of the sample firms were audited by an industry-specialized auditor.

Correlations between the dependent and the explanatory variables are shown in Panel B of Table I.
As predicted, firms that are ranked higher in terms of board independence, management compensation, shareholder rights and disclosure of governance practice have smaller abnormal accruals. Also, there are strong correlations among the attributes of corporate governance at the 1 percent level, indicating that firms that perform well in one governance category tend to perform well in the others. SIZE is positively associated with leverage (LEV), suggesting that larger firms have higher leverage constraint levels. Finally, larger firms are more likely to have an independent board and powerful shareholders, to make more governance practice disclosures, and to have effective compensation policies for management and directors.

Regression results based on equation (3) are reported in Table II, which shows White-adjusted t-statistics for all the coefficients. The association between abnormal accruals and overall governance quality in columns 3 and 4 of Table II is significantly negative at the 1 percent level (with or without control variables), suggesting that overall governance quality is negatively associated with the magnitude of discretionary accruals. In addition, consistent with H1, the results in column 6 indicate a negative association between the proxy for board composition and abnormal accruals (-0.003, t=-3.474) after controlling for other factors, suggesting that as the independence of the board increases, the sample firms engage in less income-increasing or income-decreasing discretionary accruals (Klein, 2002).

As predicted in H2, there is a significant negative association between board or management shareholding and compensation and the measure of earnings quality (-0.005, t=-3.577).
In addition, as predicted by H3, the negative association between the quality of shareholder rights and abnormal accruals is significant at the 1 percent level (-0.003, t=-3.366), suggesting that incentives to manage earnings diminish as increased shareholder rights limit managers' accruals discretion. However, inconsistent with the prediction in H4, there is no significant association between the quality of governance disclosures and the magnitude of discretionary accruals. With regard to the control variables, the results show a significant negative coefficient on Audit_S, suggesting that firms audited by an industry-specialized auditor are likely to have a smaller absolute value of abnormal accruals, a finding consistent with prior research.

In summary, linking the quality of corporate governance and abnormal accruals, we find that firms ranked highly in terms of governance quality have smaller abnormal accruals. In addition, firms with powerful shareholders, more independent or functional boards, and more effective compensation policies for management or directors are more likely to have smaller discretionary accruals, suggesting that these factors may be effective in monitoring managerial opportunism. Finally, we do not find that disclosure of governance practices is associated with the magnitude of earnings management.

5.3 Results from the return-earnings model

Panel A of Table III reports the descriptive statistics of the dependent and independent variables in equation (4). Median raw returns for sample firms over the sample period are 0.11; firms, on average, have earnings per share deflated by the prior year-end price of 0.33. The average Tobin's Q for the sample firms is 2.53, with a median of 2.03. Consistent with prior studies, the correlation table in Panel B of Table III shows a very strong positive correlation between raw stock returns and the deflated earnings measure at the 1 percent level.
Consistent with predictions, there are also strong correlations, on a univariate basis, between returns and the interactions of earnings with attributes of corporate governance, indicating that firms with strong corporate governance mechanisms in place have more informative earnings. Again, there are very strong positive correlations among the attributes of governance. Finally, larger firms are more leveraged and have lower Tobin's Q than smaller firms.

Regression results for the relationship between returns and earnings and the earnings-governance interactions are reported in Table IV. Again, we see a strong positive association between returns and the earnings measure, which is consistent with prior studies. Column 4 also indicates a significant association between returns and the interaction of earnings with overall governance quality at the 10 percent level.

To examine whether the market perceives the attributes of governance quality differently, we regress returns on earnings and the interaction of earnings with each individual governance attribute. As shown in column 5, there are positive and significant coefficients on the interactions between earnings and the proxies for quality of board composition, management compensation and shareholder rights at the 5 percent level or better. The results are insensitive to adding the control variables to the model, as shown in column 6. This provides support for H1-H3, suggesting that board independence, efficient management and director compensation, and effective monitoring by shareholders improve earnings informativeness.
Inconsistent with H4, however, a higher level of governance practice disclosures does not incrementally explain the return-earnings relation.

Overall, linking the quality of corporate governance to the return-earnings association, an alternative measure of earnings quality, we find results largely consistent with those obtained by using abnormal accruals as a proxy for earnings quality. Namely, firms with a more independent or functional board, and stronger shareholder rights, are more likely to have a stronger return-earnings association, suggesting that their earnings have a higher level of information content. In addition, earnings are more informative for firms with mandatory shareholding by the board or management.

5.4 Additional analysis

To provide further evidence that governance matters for a firm's financial reporting quality, we examine whether firms that improved their governance structure over the sample period have a higher quality of reported earnings. Canadian companies made significant changes in governance practices during the sample period, pushed by stronger governance rules, higher stakes and shareholder pressure[7]. If the new governance initiatives and the ensuing governance debate have helped improve the effectiveness with which boards discharge their financial reporting responsibilities, one might expect the association between governance effectiveness and both the magnitude of earnings management and the return-earnings association to have become more pronounced over time. Furthermore, a change analysis can better address endogeneity concerns.

We use a balanced sample design to assess whether the association between abnormal accruals and governance attributes differs over time. This design allows each sample firm to serve as its own control, thereby eliminating any differences that might result from temporal variation in sample composition.
We estimate the following accruals model using a sample of 93 panel firms for the years 2001 and 2004[8]:

|PMDA_it| = a0 + a1 Board_it + a2 Comp_it + a3 Share_it + a4 Disc_it + a5 (Board_it x Time) + a6 (Comp_it x Time) + a7 (Share_it x Time) + a8 (Disc_it x Time) + a9 Time + a10 Audit_S_it + a11 SIZE_it + a12 LEV_it + e_it   (5)

where Time is an indicator, coded as 1 for the year 2004 (to represent the post-regulation period) and 0 for the year 2001 (to proxy for the pre-regulation period). To control for factors other than governance regulations that may drive the improvement of governance, such as a stronger litigation environment and shareholder pressure for stronger practices, we also include Time as a separate independent variable in the equation. The primary variables of interest are a5 to a8. We predict that enhanced corporate governance is associated with a lower degree of earnings management; thus we expect a5<0, a6<0, a7<0, and a8<0.

Unreported descriptive statistics for selected variables used in equation (5) suggest that, unsurprisingly, total governance scores improved significantly from 2001 to 2004, with the mean score climbing from 61.5 out of 100 in 2001 to 74.9 in 2004. Governance along other dimensions improved significantly as well. For example, the mean scores for board composition and shareholder rights increased from 25.2 in 2001 to 31.2 in 2004, and from 17 in 2001 to 19.1 in 2004, respectively. Also, the magnitude of PMDA declined from 2001 to 2004, with a median of 0.09 in 2001 versus 0.071 in 2004.

The regression results for equation (5) are provided in Table V[9]. As shown in column 4 of Table V, there is a negative association between abnormal accruals and the interaction of board composition and the time dummy (-0.0004, t=-2.005) at the 5 percent level, suggesting that firms that increased their board independence experienced a decrease in abnormal accruals.
Similarly, in columns 6 and 8 of Table V, the statistically negative associations between the accruals measure and the interactions of governance with the time dummy indicate that firms with improved shareholder rights incurred smaller abnormal accruals. However, changes in the shareholding of managers and directors and in the disclosure of governance practice are not associated with any change in abnormal accruals.

Next, we examine whether changes in governance are also associated with changes in the return-earnings association. If the effectiveness of governance in enhancing financial reporting quality is greater than the negative impact of stricter litigation regulations on investment behavior, then we expect the market's perception of earnings quality to have improved. We estimate the following return-earnings regression using the balanced sample:

R_it = a0 + a1 E_it + a2 (E_it x Board_it) + a3 (E_it x Comp_it) + a4 (E_it x Share_it) + a5 (E_it x Disc_it) + a6 (E_it x Board_it x Time) + a7 (E_it x Comp_it x Time) + a8 (E_it x Share_it x Time) + a9 (E_it x Disc_it x Time) + a10 Time + a11 SIZE_it + a12 LEVE_it + a13 GWTH_it + e_it   (6)

All the variables in equation (6) are as defined previously. If enhanced corporate governance is associated with a higher quality of reported earnings, and consequently a higher return-earnings association, we expect a6>0, a7>0, a8>0 and a9>0.

Table VI reports the regression results. Inspection of the results in Table VI shows that firms with more independent boards and stronger shareholder rights have higher return-earnings associations at the 5 percent or 10 percent level, after controlling for firm size, growth and the time period. The coefficient on the interaction between shareholding and the time dummy is positive, as expected, but not significant.
The corporate governance disclosure variable does not affect the informativeness of earnings differently across periods.

Overall, the additional tests indicate that improvements in governance practice are generally associated with smaller discretionary accruals and more informative reported earnings; in addition, board independence and shareholder rights appear to be the most important factors driving the corresponding improvement in earnings quality as governance effectiveness increases. One caveat of the analyses, however, is that, due to data limitations, we are unable to explicitly control for all other factors during the sample period that may contribute to the correlation between improved governance and the proxies for accounting earnings. Thus, the inferences must be interpreted with caution.

5.5 Sensitivity analyses

The Globe and Mail surveys conducted in 2003-2005 are based on slightly modified and tougher marking standards than the methods used in 2002; thus, scores may not be strictly comparable over time. To mitigate this comparability concern, instead of using continuous variables, and following Gompers et al. (2003), we classify the sample into three groups based on each governance category: strong governance for a score above the 67th percentile, weak governance for a score below the 33rd percentile, and the remainder as neutral governance. The results are not sensitive to this alternative classification[10].

In addition to the determinants of variation in the accruals model considered in this study, managers may have incentives to manipulate earnings in order to avoid earnings losses and earnings declines (Burgstahler and Dichev, 1997). Accordingly, the sample firms are partitioned according to whether their unmanaged earnings per share (before the performance-matched abnormal accruals) is negative or below last year's reported earnings per share[11].
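The tercile split used in the sensitivity test can be sketched as follows. The cutoff convention (linear interpolation between order statistics) is an assumption, since the paper does not state how the 33rd and 67th percentiles are computed, and the scores below are invented.

```python
# Sketch of the Gompers et al. (2003)-style classification: scores above
# the 67th percentile are "strong", below the 33rd "weak", the rest
# "neutral". The percentile convention here is an assumed one.

def classify_governance(scores):
    ranked = sorted(scores)
    def pct(p):
        # Linear interpolation between adjacent order statistics.
        idx = p * (len(ranked) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(ranked) - 1)
        return ranked[lo] + (idx - lo) * (ranked[hi] - ranked[lo])
    p33, p67 = pct(0.33), pct(0.67)
    return ["strong" if s > p67 else "weak" if s < p33 else "neutral"
            for s in scores]

scores = [45, 52, 58, 61, 64, 67, 70, 74, 81, 90]
print(classify_governance(scores))
```

Replacing the continuous scores with these three ordered groups is a standard robustness device when the underlying scoring rubric changes across survey years, as it did here between 2002 and 2003.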
We expect the incentives for income-increasing earnings management to be particularly strong when the unmanaged earnings numbers fall below target. The repeated analysis does not support this conjecture; the unreported results are qualitatively the same as those observed in Table II[12].

Several sensitivity tests are also conducted in the return-earnings analyses. First, we re-estimate the return-earnings model by partitioning the sample into positive and negative earnings. Hayn (1995) suggests that the earnings-response coefficient is low and unstable when earnings are negative. While we find that all the results are qualitatively the same when earnings are positive, there are inconsistent findings when earnings are negative. In addition, we also consider alternative measures of returns (a 12-month window ending at the fiscal year-end) and of earnings, including both the level and the change of earnings. These sensitivity checks do not affect the main results, and the variable for the change of earnings is generally not significant across all model specifications. Finally, prior studies (Subramanyam and Wild, 1996) indicate that the persistence and variability of earnings may explain earnings informativeness[13]. Earnings persistence, earnings variability and their respective interaction terms with earnings are therefore included in the return-earnings model. The models are re-estimated using both the pooled sample and the panel sample. Neither interaction term is statistically significant in explaining returns, while the main results of governance on the return-earnings association remain unchanged.

Recent governance initiatives in Canada have underlined the need for more evidence on corporate governance and reporting quality.
The purpose of this study is to provide early evidence to assess the merit of calls for stringent governance regulations in Canada by examining the association between the quality of overall and specific governance features and the quality of accounting earnings. We use the absolute value of performance-matched abnormal accruals and the return-earnings association as proxies for the quality of earnings.

Using recently published data on corporate governance for a sample of Canadian firms, we find that overall governance quality is inversely related to the level of abnormal accruals and positively associated with the return-earnings association, suggesting that good corporate governance mechanisms provide greater monitoring of the financial accounting process and ensure more informative accounting earnings. We also find that governance attributes do not affect the quality of earnings equally. Specifically, the magnitude of abnormal accruals is negatively associated with the level of board independence, the extent of alignment of management compensation with the interests of shareholders, and the strength of shareholder rights. The results from the return-earnings analysis are consistent with these findings.

Overall, this study provides early evidence consistent with Canadian regulators' initiatives, in that stronger corporate governance mechanisms may be important factors in improving the integrity of financial reporting for Canadian firms. Since Canadian regulators adopted a set of corporate governance rules that are similar to some provisions of the Sarbanes-Oxley Act, the SEC rules and various stock exchange listing standards around the world, the evidence in the paper suggests that future policy initiatives in the USA and other countries should reinforce the need for independent boards, effective management compensation and stronger shareholder rights, which is likely to result in better earnings quality.

This study has several limitations.
First, like many empirical studies that rely on disclosed proxy data, the proxy disclosures may not represent all aspects of corporate governance practices. It is possible that some companies have strong practices in some areas, but received lower scores because the details are not disclosed in their proxies. Second, the sampling process may suffer from survivorship bias[14]. Third, the tests in this study are association tests, and thus do not directly distinguish whether the structural change in the association between the proxies for earnings quality and governance characteristics is due to the enhancement of governance or to the associated pressure for increased managerial accountability. Future research may need to adopt qualitative research approaches to provide corroborating evidence on the link between the quality of financial reporting and governance effectiveness.

[Equations 1-6 appear here.]
Table I: Descriptive statistics of variables in the abnormal accruals regression (Panel A: dependent variable and selected independent variables; Panel B: Pearson correlation matrix for dependent variable and selected independent variables)
Table II: Regression results for the association between the absolute value of abnormal accruals and governance attributes
Table III: Descriptive statistics of variables in the return-earnings regression (Panel A: descriptive statistics for dependent and independent variables; Panel B: Pearson correlation matrix for dependent and independent variables)
Table IV: Regression results for the association between returns and earnings, earnings-governance interactions and other determinants of the return-earnings association
Table V: Regression results for the association between abnormal accruals and governance attributes over time
Table VI: Regression results for the association between returns and earnings, and earnings-governance interactions over time
- Empirical tests demonstrate that overall governance quality is negatively related to the level of abnormal accruals and positively influences the return-earnings association. In addition, the magnitude of abnormal accruals is negatively associated with the level of independence of board composition, the extent of alignment of management compensation with interests of shareholders and the strength of shareholder rights. The results from the returns and earnings analysis are consistent with these findings.
[SECTION: Value] The considerable amount of regulatory attention given to corporate governance issues in recent years suggests that stronger governance mechanisms would reduce opportunistic management behavior, thus improving the quality and reliability of financial reporting. Regulators believe that this in turn will help to maintain and enhance investors' confidence in the integrity of capital markets. In contrast, some critics argue that the enhanced governance and litigation environment may change the balance of business and information risk for many firms, with the predictable and undesirable result that many firms will become more cautious and forgo promising opportunities. Thus, shareholder wealth may ultimately be reduced. Although studies in the literature have examined the association between the attributes of governance mechanisms and firm performance, as well as the information content of the financial reporting process, much less is known about the impact of the recent changes in corporate governance codes on earnings quality internationally (Beekes et al., 2004)[1].

The purpose of this study is to provide insight into the ongoing debate in the regulatory and academic communities on the effectiveness of the new governance regulations in Canada. Since the late 1990s, publicly traded firms in Canada have been subject to stricter corporate governance rules and guidelines. These changes in expectations regarding corporate governance were motivated, to a large extent, by some large corporate scandals in the USA and Canada. Because many Canadian companies also rely on the US capital market, the dramatic changes in US corporate governance regulations and practices (e.g.
the Sarbanes-Oxley Act and the new SEC regulations) have also had a significant impact in Canada.

For several reasons, Canada is a unique and interesting setting in which to assess the sensitivity of the relation between governance and the integrity of the financial reporting process to new governance initiatives. First, although Canadian securities laws are substantially similar to those in the USA, unlike the USA, Canada does not have a centralized securities commission. Securities regulation is enforced at the provincial and territorial level (Rosen, 1995). Therefore, any nationwide governance agreement must obtain strong support from large provinces, such as Ontario and Quebec. Second, Canada uses a flexible approach to matters of corporate governance, which is distinct from the mandatory approach adopted in the USA. Moreover, a much higher percentage of Canadian public companies have a controlling shareholder, as compared to US public companies (La Porta et al., 1999). These controlling shareholders have a natural incentive to be represented on the board of directors, which raises issues about the appropriate definition of independence. Finally, many Canadian public corporations are relatively small firms with a limited capacity to attract large numbers of completely independent directors; for these companies, complying with a strict set of corporate governance rules would be a significant financial and administrative burden. These institutional features raise questions about the effectiveness of governance control practices in improving the financial reporting process in Canada.

This paper examines the effect of governance on the quality of the financial reporting process by linking governance attributes to the quality of accounting earnings. The focus on earnings is appropriate since it is a summary performance measure that is frequently quoted, analyzed and discussed in the literature and in the financial community.
In this paper, the quality of earnings is measured in two ways:

1. an accounting-based measure of earnings management (the magnitude of abnormal accruals); and
2. a market-based measure of earnings informativeness (the return-earnings association)[2].

Employing both measures can provide corroborating evidence, since enhanced regulations may provide fewer incentives for managers to manage earnings, and thus the magnitude of abnormal accruals may be lower; on the other hand, highly significant legislative changes to financial practice and corporate governance may encourage firms to undertake less optimal yet safer investment opportunities. Financial information may therefore become a less clear representation of the firm's economic resources and of changes in those resources. Accordingly, all else being equal, earnings informativeness may be reduced.

Using data on corporate governance practice for Canadian firms comprising the S&P/TSX composite index for 2002-2005, the paper finds that overall governance quality is negatively related to the level of abnormal accruals and positively influences the return-earnings association. This suggests that good corporate governance mechanisms provide greater monitoring of the financial accounting process and are associated with reported earnings that are more informative. The study also finds that governance attributes do not affect the quality of earnings equally. Specifically, the magnitude of abnormal accruals is negatively associated with the level of independence of board composition, the extent of alignment of management compensation with the interests of shareholders and the strength of shareholder rights. The results from the returns and earnings analysis further support these inferences.

Studies in the literature typically focus on particular aspects of governance, such as board composition, shareholder activism, executive compensation, or insider ownership, and their effects on firms' market value or performance (e.g.
Morck et al., 1988; Warfield et al., 1995). For example, using a sample of Canadian companies during the years 1991-1997, Park and Shin (2004) examine the relationship between the proportion of outside board members and the level of accrual management, and find that only outside directors from financial intermediaries or institutional shareholders reduce earnings management. They conjecture, but do not test, that the insignificant results may be due to Canadian directors' lack of ownership interest in the firms they monitor and the presence of dominant shareholders. By examining the relation between corporate governance (including board composition, management shareholding, shareholders' rights and the extent of disclosure of governance practices) and the quality of earnings (measured by both accrual management and earnings informativeness), this study contributes to a more comprehensive understanding of the significance of governance. The evidence suggests that enhanced governance initiatives are accompanied by an improved quality of earnings. The paper also differs from Park and Shin (2004) in another important respect, since the current study investigates the years 2001-2004, a period in which significant governance initiatives were imposed after the accounting scandals; thus, the evidence can provide more relevant and useful insights for the current policy debate regarding governance effectiveness.

The results from this study support the notion that enhanced governance practices, especially independent boards and committees, effective management compensation, and powerful shareholders, are important in constraining management from managing earnings and in ensuring a higher quality of earnings. Given the increasing interest in corporate governance, the evidence provides additional support for continuing regulatory initiatives throughout much of the world concerning board independence and managerial ownership.
It also calls for more actively involved shareholders to play a greater role in firms' accounting reporting processes.

The rest of the paper proceeds as follows. The next section describes recent corporate governance initiatives in Canada. Section 3 develops the hypotheses, while Section 4 describes the research design and variable measurements. Sample selection and empirical results are presented in Section 5. Section 6 provides additional analyses and conducts sensitivity tests. Section 7 provides concluding remarks.

Canada has placed an emphasis on corporate governance for a number of years. Significant governance initiatives date back to 1995, when the Toronto Stock Exchange (TSX) adopted 14 voluntary corporate governance best practices and required Canadian-incorporated listed companies to disclose their corporate governance practices annually and to compare their practices to the 14 best practices (Labelle, 2003; Park and Shin, 2004)[3].

In late 2001, the Joint Committee on Corporate Governance, established by the TSX, the Canadian Venture Exchange and the Canadian Institute of Chartered Accountants, issued a report which led to new TSX proposals. In response to the passing of the Sarbanes-Oxley Act in the USA, Canadian regulators adopted a set of corporate governance rules which are similar to some provisions of the Sarbanes-Oxley Act, the SEC rules and various US stock exchange listing standards. Canadian securities regulators believe it is essential that Canadian public firms be subject to corporate governance rules that are as strict as those in the USA, but tailored to Canadian markets. These rules can be classified into several categories. The first set of rules relates to CEO/CFO certifications of annual and quarterly reports.
Canadian companies also have to adopt disclosure controls and procedures that provide reasonable assurance that material information required to be disclosed by the company is made known to the CEO/CFO and is disclosed within the periods required by Canadian securities laws. Like the amended SEC rule, issuers need to design internal controls that provide reasonable assurance that their financial statements are fairly presented in accordance with GAAP.

The second set of rules deals with audit committee independence, financial literacy and expertise. Major Canadian public companies must have fully independent and financially literate audit committees; thus, the education and experience of all committee members should be disclosed, so that investors can judge the committee's expertise. The third set of rules relates to the auditing process. To oversee the auditing profession, Canada established the Canadian Public Accountability Board (CPAB). Under the proposed auditing regulations, independent auditors are prohibited from performing various non-audit services for their audit clients.

In addition, Canadian provincial securities regulators have proposed regulating other aspects of governance that are enforced through stock exchange listing standards in the USA. For example, the Ontario Securities Commission (OSC), together with the majority of the other provincial and territorial securities commissions, proposed 18 recommended best practices and accompanying disclosure rules. These guidelines address such topics as the composition of a company's board and job descriptions for directors and officers.
Although complying with these guidelines is again voluntary, companies that issue securities in Ontario are required to disclose whether or not they have adopted the guidelines and, if not, to explain why in the annual reports filed with the OSC.

The governance initiatives in Canada have underlined the need for more evidence on corporate governance and its impact on the quality of reporting. The governance data used in this paper are obtained from a survey on the governance practices of Canadian firms in the S&P/TSX index, where the survey data are derived from each company's most recent proxy circulars. The survey has been conducted independently by the Report on Business in the Globe and Mail (a leading newspaper in Canada) on an annual basis since 2002, and the results have been published since then. The corporate governance scores are based on a set of practices, identified by regulators and investor groups, considered critical to corporate governance effectiveness, and can be classified broadly into the following categories:

* board composition;
* shareholding and compensation of directors and management;
* shareholder rights; and
* disclosures of corporate governance practice.

Each category contains several criteria, with corresponding weights for each criterion. This measurement of governance is relevant for assessing the degree of independence, objectivity and attentiveness the board exercises in overseeing management performance, and the degree to which it holds management accountable to stakeholders for its actions[4]. Details about the criteria used by the survey are provided in the Appendix.

The importance of corporate governance has been a question of substantial interest to regulators, financial institutions, investors and the media. Governance problems arise from divergent incentives and asymmetric information between shareholders and managers.
These conflicts of interest, coupled with the impossibility of writing explicit contracts covering all future contingencies, lead to unresolved agency problems that affect firm valuation (Hart, 1995). Corporate governance mechanisms are intended to mitigate agency costs by increasing the monitoring of management's actions and limiting managers' opportunistic behavior (Ashbaugh et al., 2004). In this section, several hypotheses are developed that identify and link specific elements of governance to accounting earnings.

3.1 Board composition and the quality of earnings

One of the most important factors influencing the integrity of the financial accounting process involves the board of directors, whose responsibility is to provide independent oversight of management performance and to hold management accountable to shareholders for its actions (DeFond and Jiambalvo, 1994; Dichev and Skinner, 2002). Prior research examining the association between corporate governance mechanisms concerning the board of directors (e.g. independence of the board, board size, expertise of directors or board members, and stock ownership of board members) and the extent of earnings manipulation finds inconclusive results. While the empirical results concerning board attributes are mixed due to different research designs and empirical settings, a general belief is that boards are more effective in their monitoring of management when there is a strong base of independent directors on the board (e.g. Beasley, 1996; Peasnell et al., 2000; Klein, 2002; Xie et al., 2003). For example, Beasley (1996) finds that the presence of outside directors reduced the probability of fraud in the presentation of financial statements during the period 1980-1991. Similarly, Klein (2002) provides evidence concerning board independence and earnings manipulation, finding that companies with independent boards are less likely to report abnormal accruals. Xie et al.
(2003) find similar results with respect to the relationship between earnings management and the independence of boards, as well as the financial sophistication of board members.

On the other hand, there are counter-arguments proposing that completely independent boards may not be effective in monitoring management, since management is more likely to cooperate with board members with whom they are better acquainted. Indeed, Agrawal and Knoeber (1996) find a significant negative relationship between outside membership on the board and firm performance, leading them to conclude that boards with too many outsiders lose the expertise associated with officers serving on the board.

The reliability of financial reporting is also due, in part, to the independence and integrity of the audit process. Audit committees are responsible for recommending the selection of external auditors to the board, ensuring the soundness and quality of internal accounting and control practices, and monitoring external auditor independence from management. Empirical evidence generally supports the positive effect of independent audit committees. For example, Carcello and Neal (2000) document a relation between greater audit committee independence and the quality of financial reporting. Similarly, Xie et al. (2003) report a negative association between earnings management and the independence of audit committees.

Finally, the presence of an independent nomination committee is also important for board effectiveness and monitoring ability, since it removes the manager's power to nominate new members to the board.
Overall, to the extent that independent boards and committees are superior monitors of management, likely limiting managers' earnings management discretion and reducing managerial incentives to adopt aggressive earnings management strategies in the financial reporting process, we expect that the quality of earnings increases with the independence and the functionality of the board and its key committees. Hence, the first hypothesis is as follows (in alternate form):

H1. Firms with more independent boards and subcommittees have smaller abnormal accruals and more informative earnings.

3.2 Shareholding by managers or directors and the quality of earnings

Another element of governance that affects the incentives for directors to actively monitor management, and for managers to perform in the best interests of shareholders, is the compensation of directors and managers. There are two opposing views in the literature regarding the relationship between board or management shareholding and the quality of financial reporting. Morck et al. (1988) show that high stockholding may cause a moral hazard and an information-asymmetry problem between the insiders (management and directors) and outside investors. Under this managerial entrenchment hypothesis, managers may have more incentives to exercise discretion in accounting reporting, and monitoring and disciplining will be more difficult for directors with an equity stake in the firm. As a result, the quality of the financial reporting process may be compromised when stockholding by directors is high.

On the other hand, agency theory (Jensen and Meckling, 1976) predicts that managers with lower firm ownership have greater incentives to manipulate accounting numbers in order to relieve the constraints imposed by accounting-based compensation contracts. In addition, Jensen (1989) argues that outside directors with little equity stake in the firm cannot effectively monitor and discipline the managers.
Indeed, many firms require their directors to increase their shareholding in the firm (Hambrick and Jackson, 2000). Consistent with this theory, Warfield et al. (1995) find a negative relation between managerial stockholdings and the absolute value of abnormal accruals. They interpret their results as being consistent with the belief that managerial shareholdings act as a disciplining mechanism. Under this alignment-of-interest hypothesis, mandatory shareholding by the board and management can effectively motivate managers' performance and create incentives for independent directors to more closely monitor management, a scenario under which a positive association between mandatory shareholding and the quality of accounting earnings is expected. This discussion leads to the following hypothesis:

H2. Firms with a higher level of board (management) share ownership have smaller abnormal accruals and more informative earnings.

3.3 Shareholder rights and the quality of earnings

An important aspect of best practices in corporate governance deals with shareholder rights, which reflect shareholders' ability to exercise control over firm assets, remove ineffective or opportunistic management, monitor the conduct of the board of directors, or initiate ownership changes that increase firm valuation (Ashbaugh et al., 2004). One of the most effective means of controlling management's behavior is to grant shareholders the right to vote on major issues, such as electing directors and the chairperson, approving senior executive appointments, and important changes affecting the firm such as mergers or liquidation. Normally these rights are proportionate to the shareholder's equity ownership.
However, these rights are often severely limited under a governance system that allows dual-class share structures, which are very common in Canada[5].

Recent research also indicates that the existence of stronger shareholders may improve internal control, and thus may be an effective monitoring device for improving financial reporting quality. To the extent that an appropriate power-sharing relationship between shareholders and managers reduces the moral hazard problems that lower overall firm value, and allows shareholders to effectively monitor financial reporting practice, we predict a positive association between shareholder rights and the quality of earnings. Hence:

H3. Firms with stronger shareholder rights have smaller abnormal accruals and more informative earnings.

3.4 Disclosure of corporate governance practice and the quality of earnings

Prior research indicates that corporate disclosure reduces information asymmetry between investors and managers (e.g. Lang and Lundholm, 1996; Welker, 1995). For instance, Lang and Lundholm (1996) provide evidence that firms with more informative disclosure policies have a larger analyst following, more accurate analyst earnings forecasts, less dispersion among individual analyst forecasts, and less volatility in forecast revisions. Similarly, Welker (1995) finds that information asymmetry, measured as the bid-ask spread, is reduced and market liquidity increased as the level of disclosure increases. Prior research also demonstrates a relationship between information asymmetry and earnings quality (e.g. Dye, 1988; Trueman and Titman, 1988).
For example, Dye (1988) and Trueman and Titman (1988) show analytically that the existence of information asymmetry between management and shareholders is a necessary condition for earnings management.

The above two lines of research suggest that enhanced corporate disclosures may benefit a firm in many ways; however, managers wishing to retain the flexibility to engage in earnings management may have incentives to limit disclosure. To the extent that disclosure of governance practice may reduce information asymmetry and enable the board and investors to effectively monitor management decisions and performance, we predict that a better quality of disclosures on governance practice is associated with a higher quality of earnings. Hence:

H4. Firms with a higher quality of corporate governance disclosures have smaller abnormal accruals and more informative earnings.

4.1 Measuring abnormal accruals

Since managers may have incentives to manage earnings either upward or downward, we use the absolute value of abnormal accruals as a proxy for earnings quality (DeFond and Park, 1997; Bartov et al., 2000). To the extent that better monitoring of the financial reporting process leads to greater financial transparency, the firm is expected to have a lesser degree of earnings management, and thus smaller abnormal accruals.
Accordingly, a negative relationship between governance quality and the absolute value of abnormal accruals is predicted.

Abnormal accruals are calculated using the modified Jones model (Dechow et al., 1995):

TA_it = a0 + a1(DREV_it - DREC_it) + a2 PPE_it + a3 BM_it + a4 OCF_it + e_it   (1)

where TA is total accruals, defined as net income before extraordinary items (Compustat #123) minus cash flow from operations (Compustat #308), scaled by beginning-of-fiscal-year total assets (Compustat #6); DREV is the change in sales (Compustat #12) from year t-1 to year t, scaled by beginning-of-fiscal-year total assets; DREC is the change in accounts receivable (Compustat #302) from year t-1 to year t, scaled by beginning-of-fiscal-year total assets; PPE is gross property, plant and equipment (Compustat #7), scaled by beginning-of-fiscal-year total assets; BM is the book value (Compustat #60) to market value of common equity (Compustat #25 x #199) for the year; and OCF is current operating cash flows (Compustat #308), scaled by beginning-of-fiscal-year total assets.

The model assumes that normal accruals are positively related to the change in revenues, less the change in accounts receivable, and negatively related to the capital intensity of the firm. Following Larcker and Richardson (2004), the book-to-market ratio (BM) is used as a proxy for growth, and we expect it to be positively related to total accruals. We also include current operating cash flows (OCF) as an additional variable to control for extreme performance (Dechow et al., 1995), and expect OCF to be negatively associated with total accruals.

The model is estimated separately for each two-digit SIC group, using all Compustat Canadian firms with available data and requiring at least eight firms in each group. To reduce the impact of influential observations, independent variables in the model are winsorized to be no greater than 1 in absolute value, and the book-to-market ratio is winsorized at the extreme 2 percent.
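The per-industry estimation described above can be sketched as follows. This is an illustrative reconstruction on synthetic data, under my own assumptions (DREV-DREC entered as a single regressor, no inverse-assets term, and made-up coefficients); it is not the authors' code.

```python
import numpy as np

def winsorize_abs(x, cap=1.0):
    """Winsorize a regressor to be no greater than `cap` in absolute value."""
    return np.clip(x, -cap, cap)

def normal_accruals(ta, drev, drec, ppe, bm, ocf):
    """Fit the accruals model by OLS for one two-digit SIC group and return
    fitted (normal) accruals. All inputs are scaled by lagged total assets."""
    X = np.column_stack([np.ones_like(ta),
                         winsorize_abs(drev - drec),  # revenue change net of receivables
                         winsorize_abs(ppe),          # capital intensity
                         bm,                          # growth proxy (book-to-market)
                         winsorize_abs(ocf)])         # performance control
    beta, *_ = np.linalg.lstsq(X, ta, rcond=None)
    return X @ beta

# Synthetic industry group (at least eight firms, as the paper requires)
rng = np.random.default_rng(1)
n = 60
drev, drec = rng.normal(0.10, 0.20, n), rng.normal(0.02, 0.05, n)
ppe, bm = rng.uniform(0.2, 0.9, n), rng.uniform(0.2, 1.5, n)
ocf = rng.normal(0.08, 0.10, n)
ta = 0.5 * (drev - drec) - 0.05 * ppe + 0.02 * bm - 0.6 * ocf + rng.normal(0, 0.02, n)

# Discretionary accruals: total accruals minus fitted normal accruals
da = ta - normal_accruals(ta, drev, drec, ppe, bm, ocf)
```

Because the model includes an intercept, the OLS residuals (the discretionary accruals for the group) average to zero by construction; it is their cross-sectional dispersion, firm by firm, that carries the information.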
Discretionary accruals (DA) for each firm i in each industry are defined as the difference between total accruals (TA) and the fitted value from equation (1):

DA_it = TA_it - TAfit_it   (2)

where TAfit_it is the fitted value from equation (1). Prior research documents that discretionary accrual estimates are correlated with firm performance (Kothari et al., 2005). To mitigate this misspecification problem, we control for firm performance by using the performance-matched discretionary accruals model. Specifically, following Kothari et al. (2005), each firm-year observation is matched with another firm from the same two-digit SIC code and year with the closest ROA in the current year. The performance-matched discretionary accrual (PMDA) is the discretionary accrual (DA) calculated from equation (2), minus the matched firm's DA for the year.

To test the hypotheses, the following pooled cross-sectional and time-series regression model is then estimated:

|PMDA_it| = a0 + a1 Board_it + a2 Comp_it + a3 Share_it + a4 Disc_it + a5 Audit_S_it + a6 SIZE_it + a7 LEV_it + e_it   (3)

where Board is the governance quality score on board composition for firm i at year t; Comp is the governance quality score on shareholding and compensation of the board or management for firm i at year t; Share is the governance quality score on shareholder rights for firm i at year t; Disc is the governance quality score on corporate governance disclosures for firm i at year t; Audit_S is an indicator (1 if the auditor audits at least 20 percent of the industry's revenue, and 0 otherwise); SIZE is the logarithm of total assets for firm i at year t; and LEV is total liabilities to total assets for firm i at year t.

Each firm's performance score for individual governance categories is used as a proxy for the quality of that governance feature, and a higher score implies a lesser extent of earnings management.
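The ROA-matching step described above can be sketched as a nearest-neighbor match within an industry-year group. The function below is an illustrative reconstruction; the variable names and the toy numbers are mine, not the paper's.

```python
import numpy as np

def performance_match(da, roa):
    """Kothari et al. (2005)-style matching within one industry-year group:
    subtract the DA of the other firm with the closest current-year ROA."""
    da, roa = np.asarray(da, float), np.asarray(roa, float)
    pmda = np.empty_like(da)
    for i in range(da.size):
        dist = np.abs(roa - roa[i])
        dist[i] = np.inf                  # a firm cannot match itself
        pmda[i] = da[i] - da[np.argmin(dist)]
    return pmda

# Three hypothetical firms sharing a two-digit SIC code and year:
# firms 1 and 3 have similar ROA, so they match each other
pmda = performance_match(da=[0.05, -0.02, 0.04], roa=[0.10, 0.30, 0.11])
```

In this toy group, the first and third firms (ROA 0.10 and 0.11) are each other's matches, while the middle firm's closest neighbor is the third firm, so each PMDA is the firm's own DA less its match's DA.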
Thus, consistent with H1 to H4, we expect a1<0, a2<0, a3<0 and a4<0.

Given that corporate governance is not the sole factor affecting discretionary accruals, several control variables are introduced to isolate other contracting incentives that have been found to influence managers' accounting choices. For example, we control for auditor specialization (Audit_S), since prior research (Dunn and Mayhew, 2004; Myers et al., 2003) finds that industry-specialist audit firms assist clients in enhancing disclosures and in reducing earnings management. Firm size (SIZE) and financial leverage (LEV) are also controlled for, since Press and Weintrop (1990) indicate that these factors may affect managers' discretionary accounting choices. Although there is no prediction for SIZE, due to the ambiguity of the link between firm size and discretionary accruals, LEV is expected to have a positive coefficient[6].

4.2 Measuring the return-earnings association

To provide corroborating evidence on the market's perception of the impact of governance on financial reports, the return-earnings association is used as an additional measure of earnings quality. To the extent that better governance systems can effectively align managers' interests with those of shareholders, and actively monitor and control firm management, the transparency and reliability of a firm's financial reporting process, and consequently the informativeness of its earnings, would increase. Therefore, a positive relation between governance quality and the return-earnings association is expected. Following Warfield et al.
(1995), we estimate the following pooled time-series and cross-sectional regression model:

R_it = a0 + a1 E_it + a2 E_it x Board_it + a3 E_it x Comp_it + a4 E_it x Share_it + a5 E_it x Disc_it + a6 SIZE_it + a7 LEVE_it + a8 GWTH_it + e_it   (4)

where R_it is the stock return of firm i for the 12 months beginning nine months before and ending three months after the fiscal year-end, calculated as (P_it - P_it-1 + D_it)/P_it-1 (P_it is the stock price of firm i at time t, and D_it is the dividend of firm i at year t); E_it is earnings per share (before extraordinary items) of firm i for year t; SIZE is a proxy for size, measured as the logarithm of sales revenue; LEVE is a proxy for leverage, measured as total debt divided by total assets; and GWTH is a proxy for the growth prospects of the firm (Tobin's Q), defined as the market value of equity divided by the book value of equity.

To control for other factors which may affect the return-earnings association, we include a proxy for firm size (SIZE) and a proxy for financial leverage (LEVE), since highly levered firms are associated with higher risk, and hence their earnings-return relation is weakened (Watts and Zimmerman, 1990). Following prior studies (e.g. Collins and Kothari, 1989), we also include a proxy for growth (GWTH), which is expected to be positively associated with returns.

The coefficient a1 measures the traditional return-earnings relation. The coefficients a2 to a5 measure differential earnings informativeness according to the effectiveness of governance controls. Consistent with H1 to H4, which predict that earnings informativeness increases as the quality of a firm's governance mechanisms increases, we expect a2>0, a3>0, a4>0 and a5>0.

5.1 Sample selection

The initial sample consists of firms listed on the S&P/TSX composite index as of 1 September 2002, 2003, 2004 and 2005. Governance scores for these companies, as published in the Globe and Mail survey, are obtained for the years 2002 to 2005.
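An earnings-governance interaction regression of this kind can be sketched numerically. The snippet below is a simplified illustration on simulated data, collapsing the four governance categories into a single composite score (my simplification, not the paper's specification); all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600
e = rng.normal(0.03, 0.08, n)       # hypothetical earnings per share / price
gov = rng.uniform(0.0, 1.0, n)      # composite governance score, scaled to [0, 1]
size = rng.normal(6.0, 1.0, n)      # log sales
leve = rng.uniform(0.1, 0.7, n)     # total debt / total assets
gwth = rng.uniform(0.5, 3.0, n)     # market-to-book (growth proxy)

# Simulated DGP: the earnings-response coefficient rises with governance quality
r = 0.02 + (1.0 + 1.5 * gov) * e + rng.normal(0, 0.05, n)

# Returns regressed on earnings, the earnings-governance interaction, and controls
X = np.column_stack([np.ones(n), e, e * gov, size, leve, gwth])
beta, *_ = np.linalg.lstsq(X, r, rcond=None)
a1, a2 = beta[1], beta[2]           # base ERC and its governance interaction
```

A positive estimate on the interaction term (a2) is the pattern the paper's H1-H4 predict: earnings are more strongly priced for better-governed firms.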
Of these initial 888 firm-year observations, firms that are either missing financial variables or have insufficient data to estimate performance-matched abnormal accruals are eliminated. Financial institutions and insurance companies (SIC=60-69) are also eliminated, since their special accounting methods make the estimation of discretionary accruals problematic. Firm-level financial information is obtained from Compustat and is supplemented by firm annual reports. Since the governance surveys are based on the assessment of the most recent proxy statement, firm-level accounting data from the most recent fiscal year prior to the survey are used in the analysis. To be included in the sample, a firm must have stock return data in the Canadian Financial Markets Research Center database. The final sample consists of 519 firm-year observations for the accrual model and 528 firm-year observations for the return-earnings association analysis, from the years 2001 to 2004.

5.2 Results from the accruals model

Panel A of Table I reports the descriptive statistics for selected variables in equation (3). The mean and median of the absolute value of performance-matched discretionary accruals, PMDA, are 9 percent and 7 percent of the prior year's total assets, respectively. The mean score for overall governance quality is 64.7 (out of 100) with a standard deviation of 14.4. There is also variation within each governance category across firms. For example, the quality of disclosures of governance practice has a mean of 9.6 with a standard deviation of 3.3. Other characteristics of the sample firms include an average leverage ratio (LEV) of 47 percent, a median auditor-client relationship of eight years, and the observation that more than half of the sample firms were audited by an industry-specialized auditor. Correlations between the dependent and explanatory variables are shown in Panel B of Table I.
As predicted, firms that are ranked higher in terms of board independence, management compensation, shareholder rights and disclosures of governance practice have smaller abnormal accruals. There are also strong correlations among the attributes of corporate governance at the 1 percent level, indicating that firms that perform well in one governance category tend to perform well in others. SIZE is positively associated with leverage (LEV), suggesting that larger firms face higher leverage constraints. Finally, larger firms are more likely to have an independent board and powerful shareholders, to make more governance practice disclosures, and to have effective compensation policies for management and directors. Regression results based on equation (3) are reported in Table II, which shows White-adjusted t-statistics for all coefficients. The results in columns 3 and 4 of Table II indicate a significant negative association between the magnitude of abnormal accruals and total governance scores at the 1 percent level (with or without control variables), suggesting that overall governance quality is negatively associated with the magnitude of discretionary accruals. In addition, consistent with H1, the results in column 6 indicate a negative association between the proxy for board composition and abnormal accruals (-0.003, t=-3.474) after controlling for other factors, suggesting that as the independence of the board increases, the sample firms engage in less income-increasing or income-decreasing discretionary accruals (Klein, 2002). As predicted in H2, there is a significant negative association between the attributes of board or management shareholding and compensation and the measure of earnings quality (-0.005, t=-3.577).
In addition, as predicted by H3, there is a strong negative association between the quality of shareholder rights and abnormal accruals, significant at the 1 percent level (-0.003, t=-3.366), suggesting that the incentives to manage earnings diminish as increased shareholder rights limit managers' accruals discretion. However, inconsistent with the prediction in H4, there is no significant association between the quality of governance disclosures and the magnitude of discretionary accruals. With regard to the control variables, the results show a significant negative coefficient on Audit_S, suggesting that firms audited by an industry-specialized auditor are likely to have smaller absolute abnormal accruals, a finding consistent with prior research. In summary, linking the quality of corporate governance to abnormal accruals, we find that firms that are ranked highly in terms of governance quality have smaller abnormal accruals. In addition, firms with powerful shareholders, more independent or functional boards, and more effective compensation policies for management or directors are more likely to have smaller discretionary accruals, suggesting that these factors may be effective in monitoring managerial opportunism. Finally, we do not find that disclosure of governance practices is associated with the magnitude of earnings management.

5.3 Results from the return-earnings model

Panel A of Table III reports the descriptive statistics of the dependent and independent variables in equation (4). Median raw returns for sample firms over the sample period are 0.11; firms, on average, have earnings per share deflated by the prior year-end price of 0.33. The average Tobin's Q for the sample firms is 2.53, with a median of 2.03. Consistent with prior studies, the correlation table in Panel B of Table III shows a very strong positive correlation between raw stock returns and the deflated earnings measure at the 1 percent level.
Consistent with predictions, there are also strong correlations between returns and the interactions of earnings with the attributes of corporate governance on a univariate basis, indicating that firms with strong corporate governance mechanisms in place have earnings that are more informative. Again, there are very strong positive correlations among the attributes of governance. Finally, larger firms are more leveraged and have lower Tobin's Q than smaller firms. Regression results for the relationship between returns and earnings and the earnings-governance interactions are reported in Table IV. Again, we see a strong positive association between returns and the measure of earnings, which is consistent with prior studies. Column 4 also indicates a significant association between returns and the interaction of earnings with overall governance quality at the 10 percent level. To examine whether the market perceives the attributes of governance quality differently, we regress returns on earnings and the interaction of earnings with each individual measure of governance attributes. As shown in column 5, there are positive and significant coefficients on the interactions between earnings and the proxies for quality of board composition, management compensation and shareholder rights at the 5 percent or better level. The results are insensitive to adding the control variables to the model, as shown in column 6. This provides support for H1-H3, suggesting that board independence, efficient management and director compensation, and effective monitoring by shareholders improve earnings informativeness.
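The interaction specification behind these results can be illustrated with a minimal ordinary least squares sketch. The data, coefficient values and variable names below are invented for illustration and are not the paper's; the paper's model additionally includes SIZE, LEVE and GWTH controls and uses published governance scores rather than simulated ones.

```python
import numpy as np

# OLS sketch of a return-earnings regression with an earnings-governance
# interaction: R = a0 + a1*E + a2*(E*gov) + e.  A positive a2 would indicate
# that earnings are more informative for better-governed firms.
def fit_return_earnings(R, E, gov):
    """Return (a0, a1, a2) from an OLS fit of R on E and E*gov."""
    X = np.column_stack([np.ones_like(E), E, E * gov])
    coefs, *_ = np.linalg.lstsq(X, R, rcond=None)
    return coefs

# Simulated illustration (not the paper's data): returns are generated so
# that the earnings slope rises with the governance score.
rng = np.random.default_rng(0)
n = 500
E = rng.normal(0.05, 0.02, n)      # earnings deflated by lagged price
gov = rng.uniform(0.0, 1.0, n)     # governance score rescaled to [0, 1]
R = 0.02 + 1.0 * E + 2.0 * E * gov + rng.normal(0.0, 0.01, n)

a0, a1, a2 = fit_return_earnings(R, E, gov)
print(f"a1 = {a1:.2f}, a2 = {a2:.2f}")  # a2 is estimated close to the true 2.0
```

In the simulation the estimated interaction coefficient recovers the positive value built into the data, which is the pattern the paper reports for the board composition, compensation and shareholder rights interactions.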
Inconsistent with H4, however, a higher level of governance practice disclosures does not incrementally explain the return-earnings relation. Overall, linking the quality of corporate governance to the return-earnings association, an alternative measure of earnings quality, we find results that are largely consistent with those obtained using abnormal accruals as a proxy for earnings quality. Namely, firms with a more independent or functional board, and stronger shareholder rights, are more likely to have a stronger return-earnings association, suggesting that their earnings have a higher level of information content. In addition, earnings are more informative for firms with mandatory shareholding by the board or management.

5.4 Additional analysis

To provide further evidence that governance matters for a firm's financial reporting quality, we examine whether firms with an improved governance structure over the sample period have a higher quality of reported earnings. Canadian companies made significant changes in governance practices during the sample period, pushed by stronger governance rules, higher stakes and shareholder pressure[7]. If the new governance initiatives and the ensuing governance debate have helped improve the effectiveness with which boards discharge their financial reporting responsibilities, one might expect the association between governance effectiveness and both the magnitude of earnings management and the return-earnings association to have become more pronounced over time. Furthermore, a change analysis can better address endogeneity concerns. We use a balanced sample design to assess whether the association between abnormal accruals and governance attributes differs over time. This design allows each sample firm to serve as its own control, thereby eliminating any differences that might result from temporal variation in sample composition.
We estimate the following accruals model using a sample of 93 panel firms for the years 2001 and 2004[8]: Equation 5, where Time is an indicator variable, coded as 1 for the year 2004 (to represent the post-regulation period) and 0 for the year 2001 (to proxy for the pre-regulation period). To control for factors other than governance regulations, such as a stronger litigation environment and shareholder pressure for stronger practices, which may drive the improvement in governance, we also include Time as a separate independent variable in the equation. The primary variables of interest are a5 to a8. We predict that enhanced corporate governance is associated with a lower degree of earnings management; thus, we expect a5<0, a6<0, a7<0, and a8<0. Unreported descriptive statistics of selected variables used in equation (5) suggest that, unsurprisingly, total governance scores improved significantly from 2001 to 2004, with the mean score climbing from 61.5 out of 100 in 2001 to 74.9 in 2004. Governance along other dimensions improved significantly as well. For example, the mean scores for board composition and shareholder rights increased from 25.2 in 2001 to 31.2 in 2004, and from 17 in 2001 to 19.1 in 2004, respectively. Also, the magnitude of PMDA fell from 2001 to 2004, with a median of 0.09 in 2001 and 0.071 in 2004. The regression results for equation (5) are provided in Table V[9]. As shown in column 4 of Table V, there is a negative association between abnormal accruals and the interaction of board composition and the time dummy (-0.0004, t=-2.005) at the 5 percent level, suggesting that firms that increased their board independence experienced a decrease in abnormal accruals.
Similarly, in columns 6 and 8 of Table V, the statistically significant negative associations between the accrual measure and the interactions of governance and the time dummy indicate that firms with improved shareholder rights incurred smaller abnormal accruals. However, changes in the shareholding of managers and directors and in the disclosure of governance practices are not associated with any change in abnormal accruals. Next, we examine whether changes in governance are also associated with changes in the return-earnings association. If the effectiveness of governance in enhancing financial reporting quality outweighs the negative impact of stricter litigation regulations on investment behavior, then we expect the market's perception of earnings quality to have improved. We estimate the following return-earnings regression using the balanced sample: Equation 6, where all the variables are as defined previously. If enhanced corporate governance is associated with a higher quality of reported earnings, and consequently a higher return-earnings association, we expect a6>0, a7>0, a8>0 and a9>0. Table VI reports the regression results. Inspection of the results in Table VI shows that firms with more independent boards and stronger shareholder rights have higher return-earnings associations at the 5 percent or 10 percent level, after controlling for firm size, growth and the time period. The coefficient on the interaction between shareholding and the time dummy is positive, as expected, but not significant.
The corporate governance disclosure variable does not affect the informativeness of earnings differently across periods. Overall, the additional tests indicate that improvements in governance practice are generally associated with smaller discretionary accruals and more informative reported earnings; in addition, board independence and shareholder rights appear to be the most important factors driving the corresponding improvement in earnings quality as governance effectiveness increases. However, one caveat of these analyses is that, due to data limitations, we are unable to explicitly control for all other factors during the sample period that may contribute to the correlation between improved governance and the proxies for accounting earnings. Thus, the inferences must be interpreted with caution.

5.5 Sensitivity analyses

The Globe and Mail surveys conducted in 2003-2005 are based on slightly modified but tougher marking standards than the methods used in 2002; thus, scores may not be strictly comparable over time. To mitigate this comparability concern, instead of using continuous variables, and following Gompers et al. (2003), we classify the sample into three groups based on each category of governance: strong governance (a score above the 67th percentile), weak governance (a score below the 33rd percentile), and the remainder as neutral governance. The results are not sensitive to this alternative classification[10]. In addition to the determinants of variation in the accruals model considered in this study, managers may have incentives to manipulate earnings in order to avoid earnings losses and earnings declines (Burgstahler and Dichev, 1997). Accordingly, sample firms are partitioned according to whether their unmanaged earnings per share (before the performance-matched abnormal accruals) is negative or below last year's reported earnings per share[11].
We expect the incentives for income-increasing earnings management to be particularly strong when the unmanaged earnings numbers fall below target. The repeated analysis does not support this conjecture; the unreported results are qualitatively the same as those observed in Table II[12]. Several sensitivity tests are also conducted for the return-earnings analyses. First, we re-estimate the return-earnings model by partitioning the sample into positive and negative earnings. Hayn (1995) suggests that the earnings-response coefficient is low and unstable when earnings are negative. While we find that all the results are qualitatively the same when earnings are positive, there are inconsistent findings when earnings are negative. In addition, we also consider alternative measures of returns (a 12-month window ending at the fiscal year-end) and earnings, including both the level and the change of earnings. These sensitivity checks do not affect the main results, and the variable for the change in earnings is generally not significant across all model specifications. Finally, prior studies (Subramanyam and Wild, 1996) indicate that the persistence and variability of earnings may explain earnings informativeness[13]. Earnings persistence, earnings variability and their respective interaction terms with earnings are therefore included in the return-earnings model. The models are re-estimated using both the pooled sample and the panel sample. Neither interaction term is statistically significant in explaining returns, while the main results on the effect of governance on the return-earnings association remain unchanged. Recent governance initiatives in Canada have underlined the need for more evidence on corporate governance and the quality of reporting.
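The percentile-based grouping used in the sensitivity analyses above, following Gompers et al. (2003), can be sketched as follows. The scores are invented for illustration; the paper applies this classification separately to each governance category.

```python
import numpy as np

# Gompers et al. (2003)-style grouping: scores above the 67th percentile are
# classified "strong", below the 33rd percentile "weak", the rest "neutral".
def classify_governance(scores):
    lo, hi = np.percentile(scores, [33, 67])
    return ["strong" if s > hi else "weak" if s < lo else "neutral"
            for s in scores]

# Illustrative governance scores (out of 100), not the paper's data.
scores = [42, 55, 61, 64, 67, 70, 74, 78, 83]
print(classify_governance(scores))
# → ['weak', 'weak', 'weak', 'neutral', 'neutral', 'neutral',
#    'strong', 'strong', 'strong']
```

Replacing the continuous scores with these three indicator groups is the robustness check the paper describes for the comparability concern across survey years.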
The purpose of this study is to provide early evidence to assess the merit of calls for stringent governance regulations in Canada by examining the association between the quality of overall and specific governance features and the quality of accounting earnings. We use the absolute value of performance-matched abnormal accruals and the return-earnings association as proxies for earnings quality. Using recently published data on corporate governance for a sample of Canadian firms, we find that overall governance quality is inversely related to the level of abnormal accruals and positively associated with the return-earnings association, suggesting that good corporate governance mechanisms provide greater monitoring of the financial accounting process and ensure more informative accounting earnings. We also find that governance attributes do not affect the quality of earnings equally. Specifically, the magnitude of abnormal accruals is negatively associated with the level of independence of the board, the extent of alignment of management compensation with the interests of shareholders, and the strength of shareholder rights. The results from the return-earnings analysis are consistent with these findings. Overall, this study provides early evidence consistent with Canadian regulators' initiatives that stronger corporate governance mechanisms may be important factors in improving the integrity of financial reporting for Canadian firms. Since Canadian regulators adopted a set of corporate governance rules which are similar to some provisions of the Sarbanes-Oxley Act, SEC regulations and various stock exchange listing standards around the world, the evidence in this paper suggests that future policy initiatives in the USA and other countries should reinforce the need for independent boards, effective management compensation and stronger shareholder rights, which are likely to result in better earnings quality. This study has several limitations.
First, like many empirical studies that rely on disclosed proxy data, the proxy disclosures may not represent all aspects of corporate governance practices. It is possible that some companies have strong practices in some areas but received lower scores because the details are not disclosed in their proxies. Second, the sampling process may suffer from survivorship bias[14]. Third, the tests in this study are association tests, and thus do not directly distinguish whether the structural change in the association between the proxies for earnings quality and governance characteristics is due to the enhancement of governance or to the associated pressure for increased managerial accountability. Future research may need to adopt qualitative research approaches to provide corroborative evidence on the link between quality of financial reporting and governance effectiveness.

Equation 1. Equation 2. Equation 3. Equation 4. Equation 5. Equation 6.
Table I. Descriptive statistics of variables in the abnormal accruals regression (Panel A: dependent variable and selected independent variables; Panel B: Pearson correlation matrix for dependent variable and selected independent variables)
Table II. Regression results for the association between the absolute value of abnormal accruals and governance attributes
Table III. Descriptive statistics of variables in the return-earnings regression (Panel A: descriptive statistics for dependent and independent variables; Panel B: Pearson correlation matrix for dependent and independent variables)
Table IV. Regression results for the association between returns and earnings, earnings-governance interactions and other determinants of the return-earnings association
Table V. Regression results for the association between abnormal accruals and governance attributes over time
Table VI. Regression results for the association between returns and earnings, and earnings-governance interactions over time
[SECTION: Purpose] Concerns about climate change are widely shared among global communities. However, according to the Australian Bureau of Statistics (2012), Australians' concerns about the environment are dynamic and have continued to shift in the past few years. Concern about climate change in Australia decreased from 73 per cent in 2007-2008 to 57 per cent in 2011-2012, with climate change ranked low in importance when compared to other concerns (Leviston et al., 2014). A 2018 public opinion survey exploring the concerns of Australians revealed that only 10 per cent of 650 respondents consider environmental issues to be the most important problem facing Australia. Currently, economic issues such as unemployment are at the forefront of Australians' minds, yet climate change is the single biggest issue facing our world (Roy Morgan survey, 2018). Concerns about climate change are related to one's environmental paradigm, which is an effective predictor of green conscious behavior (Dunlap and Van Liere, 1978; Shephard et al., 2015). An environmental paradigm represents a commonly accepted belief system regarding how individuals relate to the natural environment. For example, some individuals assume themselves to be controllers (i.e. masters) of the natural environment, establishing a high utility value for humans. On the other hand, some individuals assume themselves to be a part of the environment, possessing no control over it, but appreciating the inherent value of their surroundings regardless of its utility value for humans. The former group's beliefs fall into the anthropocentric environmental paradigm, while the latter group's beliefs fall into the eco-centric environmental paradigm. Anthropocentric individuals may perceive climate change as a controllable environmental problem; eco-centric individuals may perceive climate change as a natural phenomenon to whose adverse effects humans are subject. Willmott (2014, p. 22) argues that shifting environmental views from the anthropocentric paradigm to the eco-centric paradigm is an essential condition for effectively addressing climate change problems, as anthropocentric interference with the natural environment might not necessarily resolve them. Ojala (2012, p. 630) finds a positive relationship between "constructive" hopes and young adults' responses to climate change problems, as opposed to hopes that are based on the denial of climate change. The notion of "constructive" hope is generally related to favorable judgments of how effectively anthropocentric individuals can take action to resolve climate change problems. On the contrary, eco-centric individuals, who have a strong sense of inclusion toward nature, are found to be more sensitive to climate change problems and to understand climate change as a natural process (Asplund, 2016; Cheung et al., 2014). Considering this coexisting debate, it is intriguing to further investigate how individuals' environmental paradigms are associated with climate change concerns and the resultant green conscious behavior. Moreover, the existing literature is contradictory: a shift in environmental views is found to be either necessary to ameliorate environmental degradation (Kilbourne and Carlson, 2008) or not essential, since the prevailing environmental views can be complementary (Gollnhofer and Schouten, 2017) in addressing environmental problems through the effective use of technology (Humphreys, 2014). According to meta-reviews of green conscious behavior studies (Bamberg and Moser, 2007; Leonidou et al., 2010), attitude-behavioral intention models have been widely used to examine green conscious behavior during the last few decades.
These studies, however, reveal an issue pertaining to an attitude-behavior gap in green conscious behavior; that is, consumers do not necessarily convert green attitudes into green conscious behavior (Gupta and Ogden, 2006; Kollmuss and Agyeman, 2002; Mainieri et al., 1997; Wright and Klyn, 1998). Thus, it is recommended to seek alternative research approaches, as attitudes alone cannot predict green conscious behavior, with some studies even urging caution when interpreting results obtained using attitude-behavioral intention models (Auger and Devinney, 2007; Casey, 1992; Peattie, 2001; Royne et al., 2011; Shen, 2012; Wong et al., 1996). With a high level of green consciousness recorded in global surveys (ACNielsen, 2015), young adults appear to be making an effort to ensure environmental wellbeing at both a personal and a collective level. Thus, this cohort can be a rich information source for gaining a deeper understanding of subjective experiences of engaging in green conscious behavior (Connolly and Prothero, 2008), as opposed to behavioral intentions. This study, therefore, explores the subjective experiences of green conscious young adults (aged 19-25 years). Given the prevalence of contradictory findings, this study also explores whether environmental paradigms can be associated with climate change risk perceptions and green conscious behavior. With their purchasing power estimated at around $170 billion every year (comScore, 2012), young adults are gradually becoming more aware of how their choices, lifestyles and behaviors play a vital role in developing a sustainable society (Lim, 2017), and 84 per cent of those surveyed believe that it is their generation's responsibility to change the world (Keeble, 2013). If this is the case, it is high time that green marketers tap into this largely ignored green market (Unruh and Ettenson, 2010).
Climate change risk perceptions

Climate change risk perception is defined as the expectation of the occurrence of a climate change problem, an understanding of its adverse impacts on oneself and others, and knowledge of its causes (Leiserowitz, 2006). Research shows that climate change risk perceptions are informed by factors such as awareness, confidence in scientists, personal efficacy (Kellstedt et al., 2008), the outdoor temperature (Joireman et al., 2010), feelings of guilt about carbon emissions (Ferguson and Branscombe, 2010), personal experience (Leiserowitz, 2006) and exposure to sensitive information regarding climate change effects, such as dangers to wildlife (Otieno et al., 2013). Interestingly, while some studies confirm a relationship between climate change risk perceptions and individuals' support for climate change remedial actions (Leiserowitz, 2006; Whitmarsh, 2007), other studies disconfirm this relationship, claiming that individuals find it difficult to clearly understand the adverse effects of this complex environmental problem and are not interested in climate change remedial actions (Kronlid and Ohman, 2013). Nevertheless, climate change problems are increasingly recognized as not only a significant environmental problem but also an economic one, with some controversial reports claiming that the impact of climate change will reduce the global economy by at least 5 per cent each year (Stern, 2008). Given the severity of the problem, Shepardson et al. (2012) stress the importance of using a systematic framework to build climate change awareness. It is further recommended that target individuals' understanding be incorporated into such systematic frameworks, especially when cognitive dissonance and trust in science and government policy inhibit individuals' engagement in climate change remedial actions (Brownlee et al., 2013).
Anthropocentric and eco-centric paradigms

The notion of a paradigm is often described as a worldview which governs how individuals in a society collectively see, interpret and understand the world around them (Kilbourne and Carlson, 2008). In relation to green conscious behavior, each person views the relationship between humans and nature differently. These views are referred to as "environmental paradigms" (Dunlap and Van Liere, 1978) or human-nature connectedness (Burgh-Woodman and King, 2012). Broadly classified into two perspectives (the anthropocentric and eco-centric paradigms), previous research tends to debate the duality of these two paradigms (Barter and Bebbington, 2012), which is not the focus of this study. The term anthropocentric paradigm is often used to denote a set of beliefs that the environment should be preserved and protected because of its utility value for humans (Domanska, 2011; Dunlap, 2008). In contrast, the eco-centric paradigm refers to a set of beliefs that the environment should be preserved and protected because of its inherent value, regardless of its utility for humans (Bailey and Wilson, 2009). Individuals who subscribe to the anthropocentric paradigm are commonly referred to as "anthropocentric individuals" (Surmeli and Saka, 2013), "instrumentalists" (Grankvist, 2015), "utilitarians" (Cembalo et al., 2016), "egoistic" and "social-altruistic individuals" (Schultz et al., 2005) and "individuals who manage the environment" (Purser et al., 1995). Individuals who subscribe to the eco-centric paradigm are commonly referred to as "eco-centric individuals" (Afsar et al., 2016), "egalitarians" (Price et al., 2014) and individuals with strong biospheric values whose environmental concerns are based on a value for all living things (Schultz et al., 2005, p. 392). Previous research provides many other classifications of environmental paradigms (Kronlid and Ohman, 2013; Price et al., 2014).
Informed by these studies, a general distinction between the anthropocentric and eco-centric paradigms is considered a useful conceptual starting point for this in-depth, qualitative investigation. Because anthropocentric individuals' support for environmental protection is governed by human-centered values and utilitarian purposes, these individuals are less likely to act to protect the environment if other human-centered values (e.g. material quality of life, accumulation of wealth) interfere. In contrast, eco-centric individuals are inclined to protect the environment even if their actions involve discomfort, inconvenience and expenses that may reduce their material quality of life (Mayer and Frantz, 2004). Thus, while both eco-centric and anthropocentric individuals may be concerned about environmental wellbeing, the motives behind their concerns differ. However, as shown in more recent research, green conscious individuals usually tend to project motives that resonate with eco-centric individuals (Barter and Bebbington, 2012), who are more likely than anthropocentric individuals to convert environmental attitudes into actual behavior (Cheung et al., 2014). Regarding climate change, eco-centric individuals, who have a strong sense of inclusion toward nature, also tend to be more sensitive to climate change problems than anthropocentric individuals (Cheung et al., 2014). As such, the eco-centric paradigm (positively) and the anthropocentric paradigm (negatively) are associated with a general concern for environmental wellbeing and climate change related problems (Cheung et al., 2014; Grankvist, 2015). Kronlid and Ohman (2013) suggest an environmental ethical framework with two parts, "value-oriented environmental ethics" and "relation-oriented environmental ethics" (see Kronlid and Ohman, 2013, pp. 24-33), which generally resemble the distinction between anthropocentric and eco-centric environmental values.
The importance of developing an environmental awareness framework that takes climate change into consideration is stressed in previous literature because environmental paradigms have more influence over informing climate change risk perceptions than scientific information does (Price et al., 2014; Stevenson et al., 2014; Kronlid and Ohman, 2013). An Australian study finds that socio-demographic variables are positively associated with higher levels of eco-centrism (Casey and Scott, 2006), which, in turn, is found to be positively associated with sensitivity toward climate change related problems (Stevenson et al., 2014). On the other hand, more recent research finds a negative correlation between age and eco-centrism (Gangaas et al., 2015). Given the mixed evidence emerging from the existing literature, investigating whether environmental paradigms inform young adults' climate change risk perceptions and subsequent green conscious behavior remains a vital question. This investigation is also significant in helping to clarify the contradictory evidence that exists regarding young adults' actual engagement in green conscious behavior (Hume, 2010), as opposed to their willingness to purchase green commodities (ACNielsen, 2015). This study uses an interpretive approach with a view to exploring individuals' subjective experiences relating to green conscious behavior as opposed to intentional behavior (Thompson et al., 1989). This approach also responds to the criticism that quantitative studies based largely on simple linear models may not adequately capture the vital elements of contemporary issues in environmental debates (e.g. climate change issues; Kronlid and Ohman, 2013). Respondents' sense of the environment is considered important in exploring how they think and feel about climate change risk (Connolly and Prothero, 2008, p. 123; Royne et al., 2011). Therefore, the study recruited young adults who engage in green conscious behavior at varying degrees (e.g. 
green commodity purchases, recycling or reusing, online support for environmental causes, active engagement in aggressive environmental actions, dumpster-diving, used apparel swapping, passionate engagement in organic gardening and choosing to work as "green-collar" employees at renewable energy companies) over a reasonable period (e.g. approximately 1 to 6 1/2 years). Appendix 1 contains the profiles of the respondents recruited, with pseudonyms being used to conceal their identity. Initially, pre-identified respondents were approached based on the researchers' observations and informal conversations with them. These respondents then either volunteered or were asked to recommend other individuals to participate in the study. This snowball sampling method (also known as chain-referral sampling) ensured having access to information-rich cases (Patton, 2002) and recruiting respondents who genuinely engage in green conscious behavior. Participant diversity was ensured through preliminary screening conversations with potential participants before commencing in-depth interviews of 1 1/2 to 3 hours' duration with 20 young adults aged between 19 and 25 years. An interview protocol consisting of ten open-ended questions (see Appendix 2) aided the interviews. Consistent with the interpretive research tradition, the interview protocol was revised and redefined throughout the interview process. Ten member check interviews conducted via telephone conversations complemented the primary interviews. Member check interviews are a mechanism for checking the validity of the findings of interpretive research, either confirming or, at times, setting the boundary conditions of research findings (Wallendorf and Belk, 1989). All interviews were audio recorded and transcribed. Excluding the member check interviews, approximately 400 single-spaced pages of interview transcripts were analyzed. 
The Straussian school of thought (Strauss and Corbin, 1998), commonly known as the constant comparative method (Lincoln and Guba, 1985, pp. 334-341), informed the data analysis of this qualitative study. A line-by-line analysis (microanalysis) of the interview transcripts was manually carried out through open, axial and selective coding, confirming and disconfirming the themes that occurred at each of the three stages (generative, emergent and confirmatory) of data interpretation (Spiggle, 1994). The first, generative stage involved open coding to identify different concepts and their constituent meaning in the interview transcripts. At this stage, no respondents showed clear evidence of aligning toward one particular environmental paradigm. This generative stage led to further exploration of the environmental views. A constant comparison method (Lincoln and Guba, 1985) incorporated data from each additional interview transcript and generated open codes. The specific incidents for comparison were also derived from consulting previous literature (Strauss and Corbin, 1998). Next, the emergent stage involved axial coding, which found themes shared between respondents. The final confirmatory stage involved selective coding to draw out the central phenomenon and systematically relate it to other categories, validating the relationships between the thematic categories. Overall, three themes emerged from analysis of the data: "non-local" climate change risk, oscillation between environmental paradigms and anthropocentric environmentalism. In this section, each thematic category is elaborated.
"Non-local" climate change risk
The respondents perceive climate change risk as "non-local" owing to several experiential as well as non-experiential factors. Confirming previous research (Connolly and Prothero, 2008; Whitmarsh, 2008), the respondents also perceived climate change as a complex phenomenon that is difficult to comprehend and/or imagine. 
Except for two respondents who studied climate change science, the rest found it difficult to explain what climate change means to them or its adverse consequences on their own. For example, Julian tries to explain climate change as, "I guess we don't know what's exactly going on. Pretty hard to imagine". Chris, another respondent, appears to believe that Australia would not face similar consequences of climate change to other countries: Look, we had Katrina in the US and some of those kinds of very extreme events but we very rarely see that happen in Australia. Bush fires are probably the most recent one for Australia [...][it] only could kill a couple of hundred people in comparison to thousands in other countries. Words such as "pretty hard to imagine" (Julian), "we rarely see that happen in Australia" (Chris) and "it (climate change) is a big unknown" (Jess) suggest that climate change risk is beyond their comprehension. Whitmarsh (2008) finds that the indirect relationship between the experience of facing environmental disasters (e.g. flood) and climate change risk perceptions is mediated by environmental values. That is, when facing environmental disasters, individuals with higher pro-environmental values are likely to have a higher level of climate change risk perceptions. This study, however, finds that regardless of some respondents' personal experiences with environmental disasters and their environmental values, they do not necessarily relate those experiences to climate change problems. Aron, a respondent, easily recalls the incidents and victims of climate-change-related disasters in other countries (e.g. Bangladesh) but not in Australia. It is evident that the respondents distance themselves and their surroundings from climate change risk, either attributing the effects to other countries (Aron), saying "Australia will be alright, we are a big country" (Chris) or reflecting on natural disasters reluctantly in relation to climate change (Elly). 
Therefore, it can be concluded that climate change is "non-local" to them in the sense of "non-personal". Consequently, as also found by Ojala (2012), there is also a sort of denial of climate change that seems to inhibit the respondents' engagement in climate change related environmental actions. The theme of "non-local" also indicates the respondents' reluctance to be associated with discourses of presumably unsuccessful climate change remedial actions (Humphreys, 2014). The respondents express unfavorable attitudes toward some of the climate change related actions taken by the state governments of Australia (e.g. political debates and policies) and firms (e.g. labelled as "green wash"). They also express unfavorable attitudes toward the media in distributing information about climate change, assuming the media are driven by political agendas. For example, Chris finds the debate on the occurrence of climate change (e.g. climate change believers versus sceptics) frustrating. Ellen also echoes this frustration: I think it's shit [lack of governmental actions on climate change]. I think that most of the time when the government puts forward policies that maybe initially they have, you know, they really want to make a change [...] they end up not being useful. Consequently, the respondents disassociate themselves from the mainstream discourse of climate change problems that they assume to be unfavorable. Instead, they reframe their engagement in green conscious behavior as a positive experience (e.g. gardening, creating new items from recycled materials and engaging in environmental actions in groups). The higher the number of cognitive barriers faced by the respondents in understanding the risks of climate change, the greater is their tendency to disassociate the phenomenon from their personal green conscious behavior. Only a few respondents who are more aware of the adverse effects of climate change have less ambiguous climate change risk perceptions. 
However, they also take precautions not to bring unfavorable discourses of climate change (e.g. adverse effects of climate change and unsuccessful climate change remedial actions) into their everyday conversations. As Paige, a respondent, says, "personal relationships are not the best place to store my ideology". It is also observed that a reluctance to impose views - sometimes attributed to "tall poppy syndrome", a perceived tendency to discredit those who have prominence in public life - and a tendency not to bring discourses of complex issues into their valued relationships are common among the respondents. Surprisingly, the study finds no evidence of any association between environmental paradigms and climate change risk perceptions. Regardless of the paradigm they believe in, the respondents tend to disassociate themselves from the complexities of the climate change phenomenon.
Oscillation between environmental paradigms
The study finds that the environmental views shared among the respondents oscillate between the two environmental paradigms. Table I depicts a summary of findings for this thematic category, oscillation between environmental paradigms. The first column contains examples of power quotes, the most compelling evidence gathered from the qualitative data collection (Pratt, 2009). The second column shows the findings highlighted at the generative and emergent stages of open and axial coding, respectively. The column also shows previous literature that supports these findings. Kilbourne and Polonsky (2005) report a negative relationship between one's beliefs in the anthropocentric paradigm and attitudes toward green conscious behavior. In contrast, as shown in Table I, the study finds that the respondents involved in various environmental actions appear to be, at least partially, motivated by both paradigms. For example, Kathy (Interview Excerpt no. 
2 in Table I) does not totally disagree with the anthropocentric view when she says, "We have a right to use natural resources for economic gain", but she also believes that the perception of humans as being preeminent is misguided. This is close to an eco-centric view. Overall, this confirms that drastic changes in how individuals perceive the natural environment may not be essential, as prevailing environmental views can be complementary in addressing environmental problems (Gollnhofer and Schouten, 2017). This study also finds that only a few respondents' environmental views strongly align toward a single paradigm. For example, among the respondents who strongly hold anthropocentric environmental views, Amy perceives the natural environment as a tool, a hammer. She believes that humans have the capacity to rebuild the natural environment whenever they find it necessary. Therefore, she sees "nothing inherently amazing about the natural environment". Words such as "tool" or "hammer" can be related to her anthropocentric views. Previous research shows that anthropocentric individuals tend to protect the environment owing to the benefits they derive from it (Elliott, 2014). Similarly, these respondents perceive humans as preeminent and the natural environment as a resource base available for human existence which, hence, needs to be protected. Previous studies find that green conscious individuals tend to hold eco-centric views more than their counterparts (Dunlap et al., 2000). Contrary to this, however, the current study finds that eco-centric views are strongly held by only three respondents. Table II provides some examples of power quotes (Pratt, 2009) pertaining to eco-centric views together with evidence from previous literature. It should be noted that the final confirmatory stage of the study did not reveal compelling evidence to draw a thematic category relating to respondents' eco-centric views. 
As shown in the table above, three respondents reject the distinction between humans and the environment and the idea of perceiving the natural environment as an external phenomenon separate from humans. Instead, they hold eco-centric views that see humans as an intrinsic part of nature. This is referred to as "connectedness to nature" or a "naturalistic view", in which individuals feel an emotionally connected relationship with the natural environment (Mayer and Frantz, 2004).
Anthropocentric environmentalism
This study finds an association between environmental paradigms and how the respondents engage in green conscious behavior. When explaining some environmental actions with positive appeals at a collective level (e.g. having fun together, unity among friends, etc.), the respondents' views are predominantly aligned with the anthropocentric paradigm. The respondents who share eco-centric views appear to be more passionate about engaging in green conscious behavior at an individual level. This unexpected finding has not been discussed in previous literature and, thus, represents a key finding of the study, which will be elaborated in the discussion section. Table III shows the power quotes pertaining to an emerging association between environmental paradigms and how the respondents engage in green conscious behavior at collective and individual levels. As shown in Table III, Sean presents himself as a committed activist who engages in dumpster-diving. This involves procuring necessities from dumpsters. Recalling a recent dumpster-diving experience, which he enjoyed with a group of his friends, he also describes how he uses dumpster-diving to encourage youth involvement in environmental actions with a fun appeal as opposed to "preaching" (e.g. lecturing about the adverse effects of climate change), as the former is perceived to be more appealing to young individuals. Tim, who has been engaging in dumpster-diving for three years, also shares a similar view. 
He also (proudly) shows a photograph of his collection from dumpsters at the interview. Amy's dumpster-food is used to make free breakfasts for her environmental group members during environmental activities (e.g. a bicycling event). She explains: Sometimes, we run regular free dinners, and if you have a lot of dumpster food, you bring them along and cook that up. You know, community, free food for people. Several emotional affiliations such as fun, adventure and undertaking challenges can be seen in relation to these collective activities. The emotional benefits appear to drive these individuals' engagement in environmental actions, with this enthusiasm stronger among the committed activists who share predominantly anthropocentric views (e.g. Sean, Tim, Shasha and Amy) rather than eco-centric views. Respondents who share eco-centric views, such as Bob, appear to be more passionate about engaging in green conscious behavior at an individual level. He describes how gardening symbolizes his preference for making necessities himself rather than buying green commodities as "kind of cool". Elly, who is another eco-centric respondent, also explains her passion toward organic agriculture: I am a bit of an optimist here, but [I am] looking into biodynamics and an organic agriculture [...] which is organic [...], just mimicking what nature does [...] replicating ... So, I think the relationship between nature and humans needs to be respectable of what nature does well. The member check interviews, however, reveal a boundary condition to this thematic category: disappointment toward aggressive activism may have triggered individuals to disassociate themselves from collective actions. At the member check interview, Aron, who is an eco-centric respondent, clearly distinguishes himself from mainstream activism owing to his disappointment with certain collective actions. 
He explains: When they rallied, people [environmental activists] were purposefully trying to get arrested and I never understood that. When it was put in to the media when people heard about the event it wasn't "Oh! We successfully managed to show the government that we want this to be closed down". [But] it was about "Ah! Did you hear about the arrest?" So, it made causes seem less legitimate because it sounded like people were trying to be over the top rather than actually caring about the cause. Consistent with his peripheral engagement in group environmentalist activities, Bob, while predominantly holding eco-centric views, is not interested in collective actions. He believes that individuals who are affiliated with environmental groups "tend to be passionate to the extent that they almost push beyond reason". Subject to the boundary condition, it can be concluded that the respondents who predominantly engage in environmental actions with positive appeals are influenced by their stronger alignment toward the anthropocentric paradigm. On the other hand, the respondents who predominantly engage in green conscious behavior at an individual level (e.g. organic gardening) are influenced by their stronger alignment toward the eco-centric paradigm. This research investigated whether environmental paradigms inform climate change risk perceptions and green conscious behavior among young adults. Overall, the study found that young adults perceive climate change to be a "non-local" problem owing to various reasons. They have no local (personal) experience of climate change and are either reluctant or unable to relate local environmental problems, if any, to climate change. They also hesitate to be associated with discourses pertaining to climate change programs run by various governmental and other organizations as they perceive the programs as largely unsuccessful. 
This study found no association between environmental paradigms and climate change risk perceptions. However, environmental paradigms and "non-local" climate change perceptions can be used to understand green conscious behavior. This section discusses three theoretical propositions generated by this study. It should be noted that this interpretive study explored the phenomenon with a view to making a theoretical contribution and, hence, used snowball sampling methods to recruit as respondents 20 young adults who engage in green conscious behavior. The expectation was to gain a deeper understanding of their subjective experiences in engaging in green conscious behavior. The study findings should, therefore, be complemented by additional research, using a larger sample, to draw more generalizable findings. This study found that regardless of the eco-centric or anthropocentric views they hold, young individuals demonstrate a low level of climate change risk perception. Willmott (2014) claims that shifting from the anthropocentric paradigm to the eco-centric paradigm is essential in addressing climate change problems at a societal level. However, this research stresses that the "non-local" climate change risk perceptions shared among young individuals should be resolved using programs aimed at enhancing knowledge about the adverse effects of climate change on personal lives. It appears pointless to push for anthropocentric and eco-centric paradigm changes as this seems to have no bearing on the issue (Gollnhofer and Schouten, 2017). Thus, it can be postulated that environmental paradigms do not inform climate change risk perceptions (1st proposition). This study also found that predominantly held environmental views can explain how young adults engage in green conscious behavior. Young adults appear to be engaged in green conscious behavior in two ways: collective level engagement and individual level engagement. 
The former largely involves environmental actions that mostly consist of collective actions (e.g. dumpster diving, rallies, etc.). The latter encompasses certain other green conscious behavior that mostly consists of individual actions (e.g. permaculture, gardening, etc.). Confirming previous research (Pentina and Amos, 2011), this research shows that environmental reasons as well as other emotional affiliations are associated with green conscious behavior. This is common among the individuals who predominantly hold anthropocentric views. This also confirms the young adults' effort to use positive connotations of their green conscious behavior to reconcile it with other individuals who may not identify with green discourse owing to stereotyping or social labelling (Barnhart and Mish, 2017). However, this study also found that eco-centric individuals tend to keep their engagement in green conscious behavior outside the public sphere. Thus, it partly confirmed previous research that eco-centric individuals tend to engage in particular green conscious behavior that signals human connection with the natural environment within a private sphere (Davison, 2008). Most of the green conscious behaviors performed collectively and enthusiastically are informed by an anthropocentric paradigm and, hence, this study partly disconfirms previous claims that individuals with eco-centric views consider themselves to be embedded elements of the natural environment and, as such, are highly engaged in green conscious behavior (Frantz and Mayer, 2014). Overall, this study postulates that environmental paradigms can be used to understand how young individuals engage in green conscious behavior (2nd proposition). Previous research (Whitmarsh, 2008) finds an association between the perception of physical vulnerability (e.g. rising sea level, floods, etc.) and climate change risk perceptions. 
Despite a few of the young adults having personal experiences of natural disasters, they were reluctant to associate those experiences with climate change. Their risk perceptions are no different from those of the other respondents, who find it difficult to imagine the potential effects of climate change in Australia. As such, their climate change risk perceptions are largely based on the adverse effects of climate change in other countries. This study, therefore, confirms previous research (Leiserowitz, 2006) reporting that individuals face several barriers in perceiving climate change risks, with the adverse effects of climate change perceived as a psychologically and geographically distant matter (Lorenzoni et al., 2007). Further, discourses of climate change and green conscious behavior have shifted from environmental wellbeing to efficient use of resources (Humphreys, 2014), with young individuals leading the pack (Prothero et al., 2010). Fielding and Head (2012) find that young adults with a greater sense of collectivism have more positive environmental attitudes than respondents who perceive environmental protection as a governmental responsibility. This is partly confirmed by the study: among other factors, skepticism toward existing climate change remedial actions resulted in "non-local" climate change risk perceptions among the young individuals, in turn encouraging them to dissociate from the prevailing climate change discourses and reframe their green conscious behavior with positive experiences. Thus, it can be postulated that climate change risk perceptions inform green conscious behavior (3rd proposition). The three key theoretical propositions discussed in this section of this inductive research can be depicted using a skeletal theoretical framework, as shown in Figure 1. According to Morse et al. (2008), a skeletal framework can serve to sensitize future research, providing an internal structure to a research program. 
As shown in Figure 1, environmental paradigms, belief systems that guide how individuals relate to the natural environment, do not inform climate change risk perceptions (1st proposition). Environmental paradigms can, however, be used to understand how young individuals engage in green conscious behavior that is manifested at collective and individual levels (2nd proposition). Climate change risk perceptions also inform green conscious behaviors (3rd proposition). As shown in this study, young adults' environmental concerns - especially those about climate change - are shadowed by several experiential and non-experiential factors, and young adults seek emotional benefits through engaging in green conscious behavior. They either reject or ignore negative connotations associated with climate change, such as its possible local impact and distrust of governmental actions. This presents special challenges for marketers of green commodities. Young consumers can more easily be convinced by advertising messages than older generations (Len-Rios et al., 2016). Therefore, promotional campaigns for green commodities can be effectively used to convince young consumers to engage in green conscious behavior. However, such campaigns should consist of positive appeal, enthusiasm and opportunities to engage with the campaigns collectively (e.g. used-apparel swapping events, collective cycling and car-pooling). Generally, appeals appearing in advertising of green conscious behavior focus on restrictions and controls of existing behavior and display negative connotations. Such appeals might not be effective when targeting young consumers. Developing climate change awareness building programs is highly recommended. However, it is essential not to present climate change as an uncontrollable and unactionable problem, as that will discourage young people's interest in taking part in the programs or in being associated with the discourse. Ojala (2012, p. 
630) found a positive relationship between "constructive" hopes and young adults' response to climate change-related problems. Otieno et al. (2013) also found that sensational styles of presenting climate change information significantly influence young adults' climate change risk perceptions. This study concludes that although a sensational or sensitive approach may be appealing to eco-centric individuals (Asplund, 2016), who largely engage in green conscious behaviors at an individual level, a positive approach is essential both in addressing the perception of climate change as a "non-local" issue and in promoting green commodities aimed at anthropocentric individuals. Emphasis on empowered connotations, positive emotional affiliations and constructive hopes (Ojala, 2012) can provide a market potential for green commodities (Prothero and Fitchett, 2000). This study postulated three key theoretical propositions pertaining to how young adults relate to climate change, their environmental paradigms and green conscious behavior, and recommended that environmental programs revisit the idea that the anthropocentric paradigm inhibits engagement in green conscious behavior. Further, promotional campaigns for green commodities can benefit greatly from incorporating messages with positive appeals that provide young individuals with opportunities to engage with the campaigns collectively. Several limitations of the findings should be noted in the future use of the theoretical propositions postulated by this research. The study used snowball sampling methods in recruiting 20 respondents and in-depth interviews with the purpose of investigating the unique experiences of young adults engaging in green conscious behavior. Future research can consider running a comparative study using a larger sample of individuals who can be pretested to determine their tendency to align with either anthropocentric or eco-centric environmental paradigms.
|
This study aims to explore how young adults understand the climate change problem. It also explores whether environmental paradigms explain how young adults perceive climate change risks in their everyday green conscious behavior.
|
[SECTION: Method] Concerns about climate change are widely shared among global communities. However, according to the Australian Bureau of Statistics (2012), Australians' concerns about the environment are dynamic and have continued to shift in the past few years. Concern about climate change decreased in Australia from 73 per cent in 2007-2008 to 57 per cent in 2011-2012, with climate change ranked low in importance when compared to other concerns (Leviston et al., 2014). A 2018 public opinion survey exploring the concerns of Australians revealed that only 10 per cent of 650 respondents consider environmental issues as being the most important problem facing Australia. Currently, economic issues such as unemployment are at the forefront of Australians' minds, yet climate change is the single biggest issue facing our world (Roy Morgan survey, 2018). Concerns about climate change are related to one's environmental paradigm, which is an effective predictor of green conscious behavior (Dunlap and Van Liere, 1978; Shephard et al., 2015). An environmental paradigm represents commonly accepted belief systems regarding how individuals relate to the natural environment. For example, some individuals assume themselves to be controllers (i.e. masters) of the natural environment, establishing a high utility value for humans. On the other hand, some individuals assume themselves to be a part of the environment, possessing no control over it, but appreciating the inherent value of their surroundings regardless of its utility value for humans. The former group's beliefs fall into the anthropocentric environmental paradigm, while the latter group's beliefs fall into the eco-centric environmental paradigm. Anthropocentric individuals can perceive climate change as a controllable environmental problem. Eco-centric individuals can perceive climate change as a natural phenomenon to whose adverse effects humans are subject. Willmott (2014, p. 
22) argues that shifting environmental views from the anthropocentric paradigm to the eco-centric paradigm is an essential condition for effectively addressing climate change problems, as anthropocentric interference with the natural environment might not necessarily resolve climate change problems. Ojala (2012, p. 630) finds a positive relationship between "constructive" hopes and young adults' response to climate change problems, as opposed to hopes that are based on the denial of climate change. The notion of "constructive" hope is generally related to favorable judgments of how effectively anthropocentric individuals can take actions to resolve climate change problems. On the contrary, eco-centric individuals who have a strong sense of inclusion toward nature are found to be more sensitive to climate change problems and to understand them as a natural process (Asplund, 2016; Cheung et al., 2014). Considering this ongoing debate, it is intriguing to further investigate how individuals' environmental paradigms are associated with climate change concerns and the resultant green conscious behavior. Moreover, the existing literature is contradictory: a shift in environmental views is found either to be necessary to ameliorate environmental degradation (Kilbourne and Carlson, 2008) or to be less essential, as the prevailing environmental views can be complementary (Gollnhofer and Schouten, 2017) in addressing environmental problems through effective use of technology (Humphreys, 2014). According to meta reviews of green conscious behavior studies (Bamberg and Moser, 2007; Leonidou et al., 2010), attitude-behavioral intention models have been widely used to examine green conscious behavior during the last few decades. 
The studies, however, reveal an issue pertaining to an attitude-behavior gap in green conscious behavior, that is, consumers do not necessarily convert green attitudes into green conscious behavior (Gupta and Ogden, 2006; Kollmuss and Agyeman, 2002; Mainieri et al., 1997; Wright and Klyn, 1998). Thus, it is recommended to seek alternative research approaches as attitudes alone cannot predict green conscious behavior, with some studies even urging caution when interpreting the results obtained using attitude-behavioral intention models (Auger and Devinney, 2007; Casey, 1992; Peattie, 2001; Royne et al., 2011; Shen, 2012; Wong et al., 1996). With a high level of green consciousness recorded in global surveys (ACNielsen, 2015), young adults appear to be making an effort to ensure environmental wellbeing at both a personal and collective level. Thus, this cohort can be a rich information source in gaining a deeper understanding of subjective experiences in engaging in green conscious behavior (Connolly and Prothero, 2008), as opposed to behavioral intentions. This study, therefore, explores the subjective experiences of green conscious young adults (aged between 19 and 25 years). Given the prevalence of contradictory findings, this study also explores whether environmental paradigms can be associated with climate change risk perceptions and green conscious behavior. With their purchasing power estimated to be around $170 billion every year (comScore, 2012), young adults are gradually becoming more aware of how their choices, lifestyles and behaviors play a vital role in developing a sustainable society (Lim, 2017), and 84 per cent of those surveyed believe that it is their generation's responsibility to change the world (Keeble, 2013). If this is the case, it is high time that green marketers tap into this largely ignored green market (Unruh and Ettenson, 2010).
Climate change risk perceptions

Climate change risk perception is defined as the expectation of the occurrence of a climate change problem, understanding its adverse impacts on oneself and others and knowledge of the causes (Leiserowitz, 2006). Research shows that climate change risk perceptions are informed by factors such as awareness, confidence in scientists, personal efficacy (Kellstedt et al., 2008), the outdoor temperature (Joireman et al., 2010), feelings of guilt about carbon emissions (Ferguson and Branscombe, 2010), personal experience (Leiserowitz, 2006) and exposure to sensitive information regarding climate change effects (such as dangers to wildlife; Otieno et al., 2013). Interestingly, while some studies confirm a relationship between climate change risk perceptions and individuals' support for climate change remedial actions (Leiserowitz, 2006; Whitmarsh, 2007), other studies disconfirm this relationship, claiming that individuals find it difficult to clearly understand the adverse effects of this complex environmental problem and are not interested in climate change remedial actions (Kronlid and Ohman, 2013). Nevertheless, climate change problems are increasingly recognized as not only a significant environmental problem, but also an economic problem, with some controversial reports claiming that the impact of climate change will reduce the global economy by at least 5 per cent each year (Stern, 2008). Given the severity of the problem, Shepardson et al. (2012) stress the importance of using a systematic framework to build climate change awareness. It is further recommended to incorporate target individuals' understanding into such systematic frameworks, especially when cognitive dissonance and a lack of trust in science and government policy inhibit individuals' engagement in climate change remedial actions (Brownlee et al., 2013).
Anthropocentric and eco-centric paradigms

The notion of a paradigm is often described as a worldview which governs how individuals in a society collectively see, interpret and understand the world around them (Kilbourne and Carlson, 2008). In relation to green conscious behavior, each person views the relationship between humans and nature differently. These views are referred to as "environmental paradigms" (Dunlap and Van Liere, 1978) or human-nature connectedness (Burgh-Woodman and King, 2012). Broadly classified into two perspectives (the anthropocentric and eco-centric paradigms), previous research tends to debate the duality of these two paradigms (Barter and Bebbington, 2012), which is not the focus of this study. The term anthropocentric paradigm is often used to denote a set of beliefs that the environment should be preserved and protected because of its utility value for humans (Domanska, 2011; Dunlap, 2008). In contrast, the eco-centric paradigm is referred to as a set of beliefs that the environment should be preserved and protected because of its inherent value regardless of its utility for humans (Bailey and Wilson, 2009). Individuals who subscribe to the anthropocentric paradigm are commonly referred to as "anthropocentric individuals" (Surmeli and Saka, 2013), "instrumentalists" (Grankvist, 2015), "utilitarians" (Cembalo et al., 2016), "egoistic" and "social-altruistic individuals" (Schultz et al., 2005) and "individuals who manage the environment" (Purser et al., 1995). Individuals who subscribe to the eco-centric paradigm are commonly referred to as "eco-centric individuals" (Afsar et al., 2016), egalitarians (Price et al., 2014) and individuals with strong biospheric values whose environmental concerns are based on a value for all living things (Schultz et al., 2005, p. 392). Previous research provides many other classifications of environmental paradigms (Kronlid and Ohman, 2013; Price et al., 2014).
Informed by these studies, a general distinction between the anthropocentric and eco-centric paradigms is considered a useful conceptual starting point for this in-depth, qualitative investigation. Because anthropocentric individuals' support of environmental protection is governed by human-centered values and utilitarian purposes, these individuals are less likely to act to protect the environment if other human-centered values (e.g. material quality of life, accumulation of wealth, etc.) interfere. In contrast, eco-centric individuals are inclined to protect the environment even if their actions involve discomfort, inconvenience and expenses that may reduce their material quality of life (Mayer and Frantz, 2004). Thus, while it can be seen that both eco-centric and anthropocentric individuals may be concerned about environmental wellbeing, the motives behind their concerns are different. However, as shown in more recent research, green conscious individuals usually tend to project motives that resonate with eco-centric individuals (Barter and Bebbington, 2012), who are more likely to convert environmental attitudes into actual behavior than anthropocentric individuals (Cheung et al., 2014). Regarding climate change, eco-centric individuals who have a strong sense of inclusion toward nature also tend to be more sensitive to climate change problems than anthropocentric individuals (Cheung et al., 2014). As such, the eco-centric paradigm (positively) and the anthropocentric paradigm (negatively) are associated with a general concern for environmental wellbeing and climate change related problems (Cheung et al., 2014; Grankvist, 2015). Kronlid and Ohman (2013) suggest an environmental ethical framework with two parts: "value-oriented environmental ethics" and "relation-oriented environmental ethics" (see Kronlid and Ohman, 2013, pp. 24-33). These generally resemble the distinction between anthropocentric and eco-centric environmental values.
The importance of developing an environmental awareness framework that takes climate change into consideration is stressed in previous literature because environmental paradigms have more influence over informing climate change risk perceptions than scientific information does (Price et al., 2014; Stevenson et al., 2014; Kronlid and Ohman, 2013). An Australian study finds that socio-demographic variables are positively associated with higher levels of eco-centrism (Casey and Scott, 2006), which, in turn, is found to be positively associated with sensitivity toward climate change related problems (Stevenson et al., 2014). On the other hand, more recent research finds a negative correlation between age and eco-centrism (Gangaas et al., 2015). Given the mixed evidence emerging from the existing literature, investigating whether environmental paradigms inform young adults' climate change risk perceptions and subsequent green conscious behavior remains a vital question. This investigation is also significant in helping to clarify the contradictory evidence that exists regarding young adults' actual engagement in green conscious behavior (Hume, 2010), as opposed to their willingness to purchase green commodities (ACNielsen, 2015). This study uses an interpretive approach with a view to exploring individuals' subjective experiences relating to green conscious behavior as opposed to intentional behavior (Thompson et al., 1989). This approach also responds to the criticism that quantitative studies based largely on simple linear models may not adequately capture the vital elements of contemporary issues in environmental debates (e.g. climate change issues; Kronlid and Ohman, 2013). Respondents' sense of the environment is considered important in exploring how they think and feel about climate change risk (Connolly and Prothero, 2008, p. 123; Royne et al., 2011). Therefore, the study recruited young adults who engage in green conscious behavior at varying degrees (e.g.
green commodity purchases, recycling or reusing, online support for environmental causes, active engagement in aggressive environmental actions, dumpster-diving, used apparel swapping, passionate engagement in organic gardening and choosing to work as "green-collar" employees at renewable energy companies) over a reasonable period (e.g. approximately 1 to 6 1/2 years). Appendix 1 contains the profiles of the respondents recruited, with pseudonyms being used to conceal their identity. Initially, pre-identified respondents were approached based on the researchers' observations and informal conversations with them. These respondents then either volunteered or were asked to recommend other individuals to participate in the study. This snowball sampling method (also known as chain-referral sampling) ensured access to information-rich cases (Patton, 2002) and the recruitment of respondents who genuinely engage in green conscious behavior. Participant diversity was ensured through preliminary screening conversations with potential participants before commencing 20 in-depth interviews, each of 1 1/2 to 3 hours' duration, with young adults aged between 19 and 25 years. An interview protocol consisting of ten open-ended questions (see Appendix 2) aided the interviews. Consistent with the interpretive research tradition, the interview protocol was revised and refined throughout the interview process. Ten member check interviews conducted via telephone conversations complemented the primary interviews. Member check interviews are a mechanism for checking the validity of the findings of interpretive research, either confirming or, at times, setting the boundary conditions of research findings (Wallendorf and Belk, 1989). All interviews were audio recorded and transcribed. Excluding the member check interviews, approximately 400 single-spaced pages of interview transcripts were analyzed.
The Straussian School of thought (Strauss and Corbin, 1998), commonly known as the constant comparative method (Lincoln and Guba, 1985, pp. 334-341), informed the data analysis of this qualitative study. A line-by-line analysis (microanalysis) of the interview transcripts was manually carried out through open, axial and selective coding, confirming and disconfirming the themes that occurred at each of the three stages (generative, emergent and confirmatory) of data interpretation (Spiggle, 1994). The first, generative stage involved open coding to identify different concepts and their constituent meanings in the interview transcripts. At this stage, no respondents showed clear evidence of aligning toward one particular environmental paradigm. This generative stage led to further exploration of the environmental views. A constant comparison method (Lincoln and Guba, 1985) incorporated data from each additional interview transcript and generated open codes. The specific incidents for comparison were also derived from consulting previous literature (Strauss and Corbin, 1998). Next, the emergent stage involved axial coding, which found themes shared between respondents. The final confirmatory stage involved selective coding to draw out the central phenomenon and systematically relate it to other categories, validating the relationships between the thematic categories. Overall, three themes emerged from analysis of the data: "non-local" climate change risk, oscillation between environmental paradigms and anthropocentric environmentalism. In this section, each thematic category is elaborated.

"Non-local" climate change risk

The respondents perceive climate change risk as "non-local" owing to several experiential as well as non-experiential factors. Confirming previous research (Connolly and Prothero, 2008; Whitmarsh, 2008), the respondents also perceived climate change as a complex phenomenon that is difficult to comprehend and/or imagine.
Except for two respondents who studied climate change science, the rest found it difficult to explain what climate change means to them or its adverse consequences on their own. For example, Julian tries to explain climate change as, "I guess we don't know what's exactly going on. Pretty hard to imagine". Chris, another respondent, appears to believe that Australia would not face similar consequences of climate change to other countries: Look, we had Katrina in the US and some of those kinds of very extreme events but we very rarely see that happen in Australia. Bush fires are probably the most recent one for Australia [...][it] only could kill a couple of hundred people in comparison to thousands in other countries. Words such as "pretty hard to imagine" (Julian), "we rarely see that happen in Australia" (Chris) and "it (climate change) is a big unknown" (Jess) suggest that climate change risk is beyond their comprehension. Whitmarsh (2008) finds that the indirect relationship between the experience of facing environmental disasters (e.g. flood) and climate change risk perceptions is mediated by environmental values. That is, when facing environmental disasters, individuals with higher pro-environmental values are likely to have a higher level of climate change risk perceptions. This study, however, finds that regardless of some respondents' personal experiences with environmental disasters and their environmental values, they do not necessarily relate those experiences to climate change problems. Aron, a respondent, easily recalls the incidents and victims of climate-change-related disasters in other countries (e.g. Bangladesh) but not in Australia. It is evident that the respondents distance themselves and their surroundings from climate change risk, either attributing the effects to other countries (Aron), saying "Australia will be alright, we are a big country" (Chris) or reflecting on the natural disasters reluctantly in relation to climate change (Elly).
Therefore, it can be concluded that climate change is "non-local" to them in the sense of "non-personal". Consequently, as also found by Ojala (2012), there is a sort of denial of climate change that seems to inhibit the respondents' engagement in climate change related environmental actions. The theme of "non-local" also indicates the respondents' reluctance to be associated with discourses of presumably unsuccessful climate change remedial actions (Humphreys, 2014). The respondents express unfavorable attitudes toward some of the climate change related actions taken by the state governments of Australia (e.g. political debates and policies) and firms (e.g. labelled as "green wash"). They also express unfavorable attitudes toward the media in distributing information about climate change, assuming the media are driven by political agendas. For example, Chris finds the debate on the occurrence of climate change (e.g. climate change believers versus sceptics) frustrating. Ellen also echoes this frustration: I think it's shit [lack of governmental actions on climate change]. I think that most of the time when the government puts forward policies that maybe initially they have, you know, they really want to make a change [...] they end up not being useful. Consequently, the respondents disassociate themselves from the mainstream discourse of climate change problems that they assume to be unfavorable. Instead, they reframe their engagement in green conscious behavior as a positive experience (e.g. gardening, creating new items from recycled materials and engaging in environmental actions in groups). The higher the number of cognitive barriers faced by the respondents in understanding the risks of climate change, the greater is their tendency to disassociate the phenomenon from their personal green conscious behavior. Only a few respondents who are more aware of the adverse effects of climate change have less ambiguous climate change risk perceptions.
However, they also take care not to bring unfavorable discourses of climate change (e.g. adverse effects of climate change and unsuccessful climate change remedial actions) into their everyday conversations. As Paige, a respondent, says, "personal relationships are not the best place to store my ideology". It is also observed that a wariness of imposing views, sometimes related to "tall poppy syndrome" (a perceived tendency to discredit those who have prominence in public life), and a tendency not to bring discourses of complex issues into their valued relationships are common among the respondents. Surprisingly, the study finds no evidence of any association between environmental paradigms and climate change risk perceptions. Regardless of the paradigm they believe in, the respondents tend to disassociate themselves from the complexities of the climate change phenomenon.

Oscillation between environmental paradigms

The study finds that the environmental views shared among the respondents oscillate between the two environmental paradigms. Table I depicts a summary of the findings of this thematic category, oscillation between environmental paradigms. The first column contains examples of power quotes, the most compelling evidence gathered from the qualitative data collection (Pratt, 2009). The second column shows the findings highlighted at the generative and emergent stages of open and axial coding respectively. The column also shows previous literature that supports these findings. Kilbourne and Polonsky (2005) report a negative relationship between one's beliefs in the anthropocentric paradigm and attitudes toward green conscious behavior. In contrast, as shown in Table I, the study finds that the respondents involved in various environmental actions appear to, at least partially, be motivated by both paradigms. For example, Kathy (Interview Excerpt no.
2 in Table I) does not totally disagree with the anthropocentric view when she says, "We have a right to use natural resources for economic gain", but she also believes that the perception of humans as being preeminent is misguided. This is close to an eco-centric view. Overall, this confirms that drastic changes in how individuals perceive the natural environment may not be essential, as prevailing environmental views can be complementary in addressing environmental problems (Gollnhofer and Schouten, 2017). This study also finds that only a few respondents' environmental paradigms strongly align toward a single paradigm. For example, among the respondents who strongly hold anthropocentric environmental views, Amy perceives the natural environment as a tool, a hammer. She believes that humans have the capacity to rebuild the natural environment whenever they find it necessary. Therefore, she sees "nothing inherently amazing about the natural environment". Words such as "tool" or "hammer" can be related to her anthropocentric views. Previous research shows that anthropocentric individuals tend to protect the environment owing to the benefits they derive from it (Elliott, 2014). Similarly, the respondents perceive humans as preeminent and the natural environment as a resource base available for human existence which, hence, needs to be protected. Previous studies find that green conscious individuals tend to hold eco-centric views more than their counterparts (Dunlap et al., 2000). Contrary to this, however, the current study finds that eco-centric views are strongly held by only three respondents. Table II provides some examples of power quotes (Pratt, 2009) pertaining to eco-centric views together with evidence from previous literature. It should be noted that the final confirmatory stage of the study did not reveal compelling evidence to draw a thematic category relating to respondents' eco-centric views.
As shown in Table II, three respondents reject the distinction between humans and the environment and the idea of perceiving the natural environment as an external phenomenon separate from humans. Instead, they hold eco-centric views that see humans as an intrinsic part of nature. This is referred to as "connectedness to nature" or a "naturalistic view", in which individuals feel an emotionally connected relationship with the natural environment (Mayer and Frantz, 2004).

Anthropocentric environmentalism

This study finds an association between environmental paradigms and how the respondents engage in green conscious behavior. When explaining some environmental actions with positive appeals at a collective level (e.g. having fun together, unity among friends, etc.), the respondents' views are predominantly aligned with the anthropocentric paradigm. The respondents who share eco-centric views appear to be more passionate about engaging in green conscious behavior at an individual level. This unexpected finding has not been discussed in previous literature and, thus, represents a key finding of the study, which will be elaborated in the discussion section. Table III shows the power quotes pertaining to an emerging association between environmental paradigms and how the respondents engage in green conscious behavior at collective and individual levels. As shown in Table III, Sean presents himself as a committed activist who engages in dumpster-diving. This involves procuring necessities from dumpsters. Recalling a recent dumpster-diving experience, which he enjoyed with a group of his friends, he also describes how he uses dumpster-diving to encourage youth involvement in environmental actions with a fun appeal as opposed to "preaching" (e.g. lecturing about the adverse effects of climate change), as the former is perceived to be more appealing to young individuals. Tim, who has been engaging in dumpster-diving for three years, also shares a similar view.
He also (proudly) shows a photograph of his collection from dumpsters at the interview. Amy's dumpster-food is used to make free breakfasts for her environmental group members during environmental activities (e.g. a bicycling event). She explains: Sometimes, we run regular free dinners, and if you have a lot of dumpster food, you bring them along and cook that up. You know, community, free food for people. Several emotional affiliations such as fun, adventure and undertaking challenges can be seen in relation to these collective activities. The emotional benefits appear to drive these individuals' engagement in environmental actions, with this enthusiasm stronger among the committed activists who share predominantly anthropocentric views (e.g. Sean, Tim, Shasha and Amy) rather than eco-centric views. Respondents who share eco-centric views, such as Bob, appear to be more passionate about engaging in green conscious behavior at an individual level. He describes gardening, which symbolizes his preference for making necessities himself rather than buying green commodities, as "kind of cool". Elly, who is another eco-centric respondent, also explains her passion for organic agriculture: I am a bit of an optimist here, but [I am] looking into biodynamics and an organic agriculture [...] which is organic [...], just mimicking what nature does [...] replicating ... So, I think the relationship between nature and humans needs to be respectable of what nature does well. The member check interviews, however, reveal a boundary condition to this thematic category: disappointment with aggressive activism may have triggered individuals to disassociate themselves from collective actions. At the member check interview, Aron, who is an eco-centric respondent, clearly distinguishes himself from mainstream activism owing to his disappointment with certain collective actions.
He explains: When they rallied, people [environmental activists] were purposefully trying to get arrested and I never understood that. When it was put in to the media when people heard about the event it wasn't "Oh! We successfully managed to show the government that we want this to be closed down". [But] it was about "Ah! Did you hear about the arrest?" So, it made causes seem less legitimate because it sounded like people were trying to be over the top rather than actually caring about the cause. Consistent with his peripheral engagement in group environmentalist activities, Bob, while predominantly holding eco-centric views, is not interested in collective actions. He believes that individuals who are affiliated with environmental groups "tend to be passionate to the extent that they almost push beyond reason". Subject to the boundary condition, it can be concluded that the respondents who predominantly engage in environmental actions with positive appeals are influenced by their stronger alignment toward the anthropocentric paradigm. On the other hand, the respondents who predominantly engage in green conscious behavior at an individual level (e.g. organic gardening) are influenced by their stronger alignment toward the eco-centric paradigm. This research investigated whether environmental paradigms inform climate change risk perceptions and green conscious behavior among young adults. Overall, the study found that young adults perceive climate change to be a "non-local" problem owing to various reasons. They have no local (personal) experience of climate change, and are reluctant to engage or are unable to relate local environmental problems, if any, to climate change. They also hesitate to be associated with discourses pertaining to climate change programs run by various governmental and other organizations as they perceive the programs as largely unsuccessful.
This study found no association between environmental paradigms and climate change risk perceptions. However, environmental paradigms and "non-local" climate change perceptions can be used to understand green conscious behavior. This section discusses three theoretical propositions generated by this study. It should be noted that this interpretive study explored the phenomenon with a view to making a theoretical contribution and hence the study used snowball sampling methods to recruit 20 young adults who engage in green conscious behavior as respondents. The expectation was to gain a deeper understanding of their subjective experiences in engaging in green conscious behavior. The study findings should, therefore, be complemented by additional research, using a larger sample, to draw more generalizable findings. This study found that regardless of the eco-centric or anthropocentric views they hold, young individuals demonstrate a low level of climate change risk perception. Willmott (2014) claims that shifting from the anthropocentric paradigm to the eco-centric paradigm is essential in addressing climate change problems at a societal level. However, this research stresses that the "non-local" climate change risks shared among young individuals should be resolved using programs aimed at enhancing knowledge about the adverse effects of climate change to personal lives. It appears pointless to push for anthropocentric and eco-centric paradigm changes as this seems to have no bearing on the issue (Gollnhofer and Schouten, 2017). Thus, it can be postulated that environmental paradigms do not inform climate change risk perceptions (1st proposition). This study also found that predominantly held environmental views can explain how young adults engage in green conscious behavior. Young adults appear to be engaged in green conscious behavior in two ways: collective level engagement; and individual level engagement. 
The former largely involves environmental actions performed collectively (e.g. dumpster diving, rallies, etc.), while the latter encompasses green conscious behavior that mostly consists of individual actions (e.g. permaculture, gardening, etc.). Confirming previous research (Pentina and Amos, 2011), this research shows that environmental reasons as well as other emotional affiliations are associated with green conscious behavior. This is common among the individuals who predominantly hold anthropocentric views. This also confirms the young adults' effort to use positive connotations of their green conscious behavior to reconcile themselves with other individuals who may not identify with green discourse owing to stereotyping or social labelling (Barnhart and Mish, 2017). However, this study also found that eco-centric individuals tend to keep their engagement in green conscious behavior outside the public sphere. Thus, it partly confirmed previous research that eco-centric individuals tend to engage in particular green conscious behavior that signals human connection with the natural environment within a private sphere (Davison, 2008). Most of the green conscious behaviors performed collectively and enthusiastically are informed by an anthropocentric paradigm, and hence, this study partly disconfirms previous claims that individuals with eco-centric views consider themselves to be embedded elements of the natural environment and, as such, are highly engaged in green conscious behavior (Frantz and Mayer, 2014). Overall, this study postulates that environmental paradigms can be used to understand how young individuals engage in green conscious behavior (2nd proposition). Previous research (Whitmarsh, 2008) finds an association between the perception of physical vulnerability (e.g. rising sea level, floods, etc.) and climate change risk perceptions.
Despite a few of the young adults having personal experiences of natural disasters, they were reluctant to associate these with climate change. Their risk perceptions are not different from those of the other respondents, who find it difficult to imagine the potential effects of climate change in Australia. As such, their climate change risk perceptions are largely based on the adverse effects of climate change in other countries. This study, therefore, confirms previous research (Leiserowitz, 2006) reporting that individuals face several barriers in perceiving climate change risks, with the adverse effects of climate change perceived as a psychologically and geographically distant matter (Lorenzoni et al., 2007). Further, discourses of climate change and green conscious behavior have shifted from environmental wellbeing to efficient use of resources (Humphreys, 2014), and young individuals lead the pack (Prothero et al., 2010). Fielding and Head (2012) find that young adults with a greater sense of collectivism have more positive environmental attitudes than respondents who perceive environmental protection as a governmental responsibility. This is partly confirmed by the study as, among other factors, skepticism toward existing climate change remedial actions resulted in "non-local" climate change risk perceptions among the young individuals, in turn encouraging them to dissociate from the prevailing climate change discourses and reframe their green conscious behavior with positive experiences. Thus, it can be postulated that climate change risk perceptions inform green conscious behavior (3rd proposition). The three key theoretical propositions discussed in this section of this inductive research can be depicted using a skeletal theoretical framework, as shown in Figure 1. According to Morse et al. (2008), a skeletal framework can serve to sensitize future research, providing an internal structure to a research program.
As shown in Figure 1, environmental paradigms, belief systems that guide how individuals relate to the natural environment, do not inform climate change risk perceptions (1st proposition). The environmental paradigms can be used to understand how young individuals engage in green conscious behavior that is manifested at collective and individual levels (2nd proposition). Climate change risk perceptions also inform green conscious behaviors (3rd proposition). As shown in this study, young adults' environmental concerns - especially those about climate change - are shadowed by several experiential and non-experiential factors, and young adults seek emotional benefits through engaging in green conscious behavior. They either reject or ignore negative connotations associated with climate change, such as its possible local impact and distrust of governmental actions. This presents special challenges for marketers of green commodities. Young consumers can be convinced by advertising messages more easily than older generations (Len-Rios et al., 2016). Therefore, promotional campaigns for green commodities can be effectively used to convince young consumers to engage in green conscious behavior. However, such campaigns should consist of positive appeals, enthusiasm and opportunities to engage with the campaigns collectively (e.g. used-apparel swapping events, collective cycling and car-pooling). Generally, appeals appearing in advertising of green conscious behavior focus on restrictions and controls of existing behavior and display negative connotations. Such appeals might not be effective when targeting young consumers. Developing climate change awareness-building programs is highly recommended. However, it is essential not to present climate change as an uncontrollable and unactionable problem, as that will discourage the youth's interest in taking part in the programs or in being associated with the discourse. Ojala (2012, p. 
630) found a positive relationship between "constructive" hopes and young adults' response to climate change-related problems. Otieno et al. (2013) also found that sensational styles of presenting climate change information significantly influence young adults' climate change risk perceptions. This study concludes that although a sensational or sensitive approach may be appealing to eco-centric individuals (Asplund, 2016), who largely engage in green conscious behaviors at an individual level, a positive approach is essential both in managing the perception of climate change as a "non-local" issue and in promoting green commodities aimed at anthropocentric individuals. Emphasis on empowered connotations, positive emotional affiliations and constructive hopes (Ojala, 2012) can provide a market potential for green commodities (Prothero and Fitchett, 2000). This study postulated three key theoretical propositions pertaining to how young adults relate to climate change, their environmental paradigms and green conscious behavior, and recommended that environmental programs should revisit the idea that the anthropocentric paradigm inhibits engagement in green conscious behavior. Further, promotional campaigns for green commodities can benefit greatly from incorporating messages with positive appeals that provide young individuals with opportunities to engage with the campaigns collectively. Several limitations of the findings should be noted when applying the theoretical propositions postulated by this research in future work. The study used snowball sampling methods to recruit 20 respondents and in-depth interviews to investigate the unique experiences of young adults engaging in green conscious behavior. Future research can consider running a comparative study using a larger sample of individuals who can be pretested to determine their tendency to align with either the anthropocentric or the eco-centric environmental paradigm.
This interpretive research draws on in-depth interviews with 20 young Australians (aged between 19 and 25 years) who engage in green conscious behavior.
[SECTION: Findings] Concerns about climate change are widely shared among global communities. However, according to the Australian Bureau of Statistics (2012), Australians' concerns about the environment are dynamic and have continued to shift in the past few years. Concern about climate change decreased in Australia from 73 per cent in 2007-2008 to 57 per cent in 2011-2012, with climate change ranked low in importance when compared to other concerns (Leviston et al., 2014). A 2018 public opinion survey exploring the concerns of Australians revealed that only 10 per cent of 650 respondents consider environmental issues to be the most important problem facing Australia. Currently, economic issues such as unemployment are at the forefront of Australians' minds, yet climate change is the single biggest issue facing our world (Roy Morgan survey, 2018). Concerns about climate change are related to one's environmental paradigm, which is an effective predictor of green conscious behavior (Dunlap and Van Liere, 1978; Shephard et al., 2015). An environmental paradigm represents commonly accepted belief systems regarding how individuals relate to the natural environment. For example, some individuals assume themselves to be controllers (i.e. masters) of the natural environment, establishing a high utility value for humans. On the other hand, some individuals assume themselves to be a part of the environment, possessing no control over it, but appreciating the inherent value of their surroundings regardless of its utility value for humans. The former group's beliefs fall into the anthropocentric environmental paradigm, while the latter group's beliefs fall into the eco-centric environmental paradigm. Anthropocentric individuals can perceive climate change as a controllable environmental problem. Eco-centric individuals can perceive climate change as a natural phenomenon to whose adverse effects humans are subject. Willmott (2014, p. 
22) argues that shifting environmental views from the anthropocentric paradigm to the eco-centric paradigm is an essential condition for effectively addressing climate change problems, as anthropocentric interference with the natural environment might not necessarily resolve climate change problems. Ojala (2012, p. 630) finds a positive relationship between "constructive" hopes and young adults' response to climate change problems, as opposed to hopes that are based on the denial of climate change. The notion of "constructive" hope is generally related to favorable judgments of how effectively anthropocentric individuals can take actions to resolve climate change problems. On the contrary, eco-centric individuals who have a strong sense of inclusion toward nature are found to be more sensitive to climate change problems and understand it as a natural process (Asplund, 2016; Cheung et al., 2014). Considering this coexisting debate, it is intriguing to further investigate how individuals' environmental paradigms are associated with climate change concerns and the resultant green conscious behavior. Moreover, the existing literature is contradictory: a shift in environmental views is found either to be necessary to ameliorate environmental degradation (Kilbourne and Carlson, 2008) or to be less essential, with prevailing environmental views potentially being complementary (Gollnhofer and Schouten, 2017) in addressing environmental problems through effective use of technology (Humphreys, 2014). According to meta-reviews of green conscious behavior studies (Bamberg and Moser, 2007; Leonidou et al., 2010), attitude-behavioral intention models have been widely used to examine green conscious behavior during the last few decades. 
The studies, however, reveal an issue pertaining to an attitude-behavior gap in green conscious behavior; that is, consumers do not necessarily convert green attitudes into green conscious behavior (Gupta and Ogden, 2006; Kollmuss and Agyeman, 2002; Mainieri et al., 1997; Wright and Klyn, 1998). Thus, it is recommended to seek alternative research approaches, as attitudes alone cannot predict green conscious behavior, with some studies even urging caution when interpreting the results obtained using attitude-behavioral intention models (Auger and Devinney, 2007; Casey, 1992; Peattie, 2001; Royne et al., 2011; Shen, 2012; Wong et al., 1996). With a high level of green consciousness recorded in global surveys (ACNielsen, 2015), young adults appear to be making an effort to ensure environmental wellbeing at both a personal and a collective level. Thus, this cohort can be a rich information source for gaining a deeper understanding of subjective experiences in engaging in green conscious behavior (Connolly and Prothero, 2008), as opposed to behavioral intentions. This study, therefore, explores the subjective experiences of green conscious young adults (aged between 19 and 25 years). Given the prevalence of contradictory findings, this study also explores whether environmental paradigms can be associated with climate change risk perceptions and green conscious behavior. With their purchasing power estimated to be around $170 billion every year (comScore, 2012), young adults are gradually becoming more aware of how their choices, lifestyles and behaviors play a vital role in developing a sustainable society (Lim, 2017), and 84 per cent of those surveyed believe that it is their generation's responsibility to change the world (Keeble, 2013). If this is the case, it is high time that green marketers tap into this largely ignored green market (Unruh and Ettenson, 2010). 
Climate change risk perceptions
Climate change risk perception is defined as the expectation of the occurrence of a climate change problem, understanding of its adverse impacts on oneself and others and knowledge of its causes (Leiserowitz, 2006). Research shows that climate change risk perceptions are informed by factors such as awareness, confidence in scientists, personal efficacy (Kellstedt et al., 2008), the outdoor temperature (Joireman et al., 2010), feelings of guilt about carbon emissions (Ferguson and Branscombe, 2010), personal experience (Leiserowitz, 2006) and exposure to sensitive information regarding climate change effects (such as dangers to wildlife; Otieno et al., 2013). Interestingly, while some studies confirm a relationship between climate change risk perceptions and individuals' support for climate change remedial actions (Leiserowitz, 2006; Whitmarsh, 2007), other studies disconfirm this relationship, claiming that individuals find it difficult to clearly understand the adverse effects of this complex environmental problem and are not interested in climate change remedial actions (Kronlid and Ohman, 2013). Nevertheless, climate change problems are increasingly recognized as not only a significant environmental problem, but also an economic problem, with some controversial reports claiming that the impact of climate change will reduce the global economy by at least 5 per cent each year (Stern, 2008). Given the severity of the problem, Shepardson et al. (2012) stress the importance of using a systematic framework to build climate change awareness. It is further recommended to incorporate target individuals' understanding into such systematic frameworks, especially when cognitive dissonance and a lack of trust in science and government policy inhibit individuals' engagement in climate change remedial actions (Brownlee et al., 2013). 
Anthropocentric and eco-centric paradigms
The notion of a paradigm is often described as a worldview which governs how individuals in a society collectively see, interpret and understand the world around them (Kilbourne and Carlson, 2008). In relation to green conscious behavior, each person views the relationship between humans and nature differently. These views are referred to as "environmental paradigms" (Dunlap and Van Liere, 1978) or human-nature connectedness (Burgh-Woodman and King, 2012). These views are broadly classified into two perspectives (the anthropocentric and eco-centric paradigms); previous research tends to debate the duality of these two paradigms (Barter and Bebbington, 2012), which is not the focus of this study. The term anthropocentric paradigm is often used to denote a set of beliefs that the environment should be preserved and protected because of its utility value for humans (Domanska, 2011; Dunlap, 2008). In contrast, the eco-centric paradigm is referred to as a set of beliefs that the environment should be preserved and protected because of its inherent value regardless of its utility for humans (Bailey and Wilson, 2009). Individuals who subscribe to the anthropocentric paradigm are commonly referred to as "anthropocentric individuals" (Surmeli and Saka, 2013), "instrumentalists" (Grankvist, 2015), "utilitarians" (Cembalo et al., 2016), "egoistic" and "social-altruistic individuals" (Schultz et al., 2005) and "individuals who manage the environment" (Purser et al., 1995). Individuals who subscribe to the eco-centric paradigm are commonly referred to as "eco-centric individuals" (Afsar et al., 2016), egalitarians (Price et al., 2014) and individuals with strong biospheric values whose environmental concerns are based on a value for all living things (Schultz et al., 2005, p. 392). Previous research provides many other classifications of environmental paradigms (Kronlid and Ohman, 2013; Price et al., 2014). 
Informed by these studies, a general distinction between the anthropocentric and eco-centric paradigms is considered a useful conceptual starting point for this in-depth, qualitative investigation. It is found that as anthropocentric individuals' support of environmental protection is governed by human-centered values and utilitarian purposes, these individuals are less likely to act to protect the environment if other human-centered values (e.g. material quality of life, accumulation of wealth, etc.) interfere. In contrast, eco-centric individuals are inclined to protect the environment even if their actions involve discomfort, inconvenience and expenses that may reduce their material quality of life (Mayer and Frantz, 2004). Thus, while it can be seen that both eco-centric and anthropocentric individuals may be concerned about environmental wellbeing, the motives behind their concerns are different. However, as shown in more recent research, green conscious individuals usually tend to project motives that resonate with eco-centric individuals (Barter and Bebbington, 2012), who are more likely to convert environmental attitudes into actual behavior than anthropocentric individuals (Cheung et al., 2014). Regarding climate change, eco-centric individuals who have a strong sense of inclusion toward nature also tend to be more sensitive to climate change problems than anthropocentric individuals (Cheung et al., 2014). As such, the eco-centric paradigm (positively) and the anthropocentric paradigm (negatively) are associated with a general concern for environmental wellbeing and climate change related problems (Cheung et al., 2014; Grankvist, 2015). Kronlid and Ohman (2013) suggest an environmental ethical framework with two parts: "value-oriented environmental ethics" and "relation-oriented environmental ethics" (see Kronlid and Ohman, 2013, pp. 24-33). These generally resemble the distinction between anthropocentric and eco-centric environmental values. 
The importance of developing an environmental awareness framework that takes climate change into consideration is stressed in previous literature because environmental paradigms have more influence over informing climate change risk perceptions than scientific information does (Price et al., 2014; Stevenson et al., 2014; Kronlid and Ohman, 2013). An Australian study finds that socio-demographic variables are positively associated with higher levels of eco-centrism (Casey and Scott, 2006), which, in turn, is found to be positively associated with sensitivity toward climate change related problems (Stevenson et al., 2014). On the other hand, more recent research finds a negative correlation between age and eco-centrism (Gangaas et al., 2015). Given the mixed evidence emerging from the existing literature, investigating whether environmental paradigms inform young adults' climate change risk perceptions and subsequent green conscious behavior remains a vital question. This investigation is also significant in helping to clarify the contradictory evidence that exists regarding young adults' actual engagement in green conscious behavior (Hume, 2010), as opposed to their willingness to purchase green commodities (ACNielsen, 2015). This study uses an interpretive approach with a view to exploring individuals' subjective experiences relating to green conscious behavior as opposed to intentional behavior (Thompson et al., 1989). This approach also responds to the criticism that quantitative studies based largely on simple linear models may not adequately capture the vital elements of contemporary issues in environmental debates (e.g. climate change issues; Kronlid and Ohman, 2013). Respondents' sense of the environment is considered important in exploring how they think and feel about climate change risk (Connolly and Prothero, 2008, p. 123; Royne et al., 2011). Therefore, the study recruited young adults who engage in green conscious behavior to varying degrees (e.g. 
green commodity purchases, recycling or reusing, online support for environmental causes, active engagement in aggressive environmental actions, dumpster-diving, used apparel swapping, passionate engagement in organic gardening and choosing to work as "green-collar" employees at renewable energy companies) over a reasonable period (e.g. approximately 1 to 6 1/2 years). Appendix 1 contains the profiles of the respondents recruited, with pseudonyms being used to conceal their identities. Initially, predefined respondents were approached based on the researchers' observations and informal conversations with them. These respondents then either volunteered or were asked to recommend other individuals to participate in the study. This snowball sampling method (also known as chain-referral sampling) ensured access to information-rich cases (Patton, 2002) and the recruitment of respondents who genuinely engage in green conscious behavior. Participant diversity was ensured through preliminary screening conversations with potential participants before commencing 20 in-depth interviews, each of 1 1/2 to 3 hours' duration, with young adults aged between 19 and 25 years. An interview protocol consisting of ten open-ended questions (see Appendix 2) aided the interviews. Consistent with the interpretive research tradition, the interview protocol was revised and refined throughout the interview process. Ten member check interviews conducted via telephone conversations complemented the primary interviews. Member check interviews are a mechanism for checking the validity of the findings of interpretive research, either confirming or, at times, setting the boundary conditions of research findings (Wallendorf and Belk, 1989). All interviews were audio recorded and transcribed. Excluding the member check interviews, approximately 400 single-spaced pages of interview transcripts were analyzed. 
The Straussian School of thought (Strauss and Corbin, 1998), commonly known as the constant comparative method (Lincoln and Guba, 1985, pp. 334-341), informed the data analysis of this qualitative study. A line-by-line analysis (microanalysis) of the interview transcripts was manually carried out through open, axial and selective coding, confirming and disconfirming the themes that occurred at each of the three stages (generative, emergent and confirmatory) of data interpretation (Spiggle, 1994). The first, generative stage involved open coding to identify different concepts and their constituent meanings in the interview transcripts. At this stage, no respondents showed clear evidence of aligning toward one particular environmental paradigm. This generative stage led to further exploration of the environmental views. A constant comparison method (Lincoln and Guba, 1985) incorporated data from each additional interview transcript and generated open codes. The specific incidents for comparison were also derived from consulting previous literature (Strauss and Corbin, 1998). Next, the emergent stage involved axial coding, which found themes shared between respondents. The final confirmatory stage involved selective coding to draw out the central phenomenon and systematically relate it to other categories, validating the relationships between the thematic categories. Overall, three themes emerged from the analysis of the data: "non-local" climate change risk, oscillation between environmental paradigms and anthropocentric environmentalism. In this section, each thematic category is elaborated.
"Non-local" climate change risk
Climate change risk is perceived by the respondents as "non-local" owing to several experiential as well as non-experiential factors. Confirming previous research (Connolly and Prothero, 2008; Whitmarsh, 2008), the respondents also perceived climate change as a complex phenomenon that is difficult to comprehend and/or imagine. 
Except for two respondents who studied climate change science, the rest found it difficult to explain what climate change means to them or its adverse consequences on their own. For example, Julian tries to explain climate change as, "I guess we don't know what's exactly going on. Pretty hard to imagine". Chris, another respondent, appears to believe that Australia would not face consequences of climate change similar to those in other countries: Look, we had Katrina in the US and some of those kinds of very extreme events but we very rarely see that happen in Australia. Bush fires are probably the most recent one for Australia [...][it] only could kill a couple of hundred people in comparison to thousands in other countries. Words such as "pretty hard to imagine" (Julian), "we rarely see that happen in Australia" (Chris) and "it (climate change) is a big unknown" (Jess) suggest that climate change risk is beyond their comprehension. Whitmarsh (2008) finds that the indirect relationship between experience of environmental disasters (e.g. floods) and climate change risk perceptions is mediated by environmental values. That is, when facing environmental disasters, individuals with higher pro-environmental values are likely to have a higher level of climate change risk perceptions. This study, however, finds that regardless of some respondents' personal experiences with environmental disasters and their environmental values, they do not necessarily relate those experiences to climate change problems. Aron, a respondent, easily recalls the incidents and victims of climate-change-related disasters in other countries (e.g. Bangladesh) but not in Australia. It is evident that the respondents distance themselves and their surroundings from climate change risk, either attributing the effects to other countries (Aron), saying "Australia will be alright, we are a big country" (Chris) or reflecting on natural disasters only reluctantly in relation to climate change (Elly). 
Therefore, it can be concluded that climate change is "non-local" to them in the sense of "non-personal". Consequently, as also found by Ojala (2012), there is also a sort of denial of climate change that seems to inhibit the respondents' engagement in climate change related environmental actions. The theme of "non-local" also indicates the respondents' reluctance to be associated with discourses of presumably unsuccessful climate change remedial actions (Humphreys, 2014). The respondents express unfavorable attitudes toward some of the climate change related actions taken by the state governments of Australia (e.g. political debates and policies) and firms (e.g. labelled as "green wash"). They also express unfavorable attitudes toward the media's distribution of information about climate change, assuming the media are driven by political agendas. For example, Chris finds the debate on the occurrence of climate change (e.g. climate change believers versus sceptics) frustrating. Ellen also echoes this frustration: I think it's shit [lack of governmental actions on climate change]. I think that most of the time when the government puts forward policies that maybe initially they have, you know, they really want to make a change [...] they end up not being useful. Consequently, the respondents disassociate themselves from the mainstream discourse of climate change problems that they assume to be unfavorable. Instead, they reframe their engagement in green conscious behavior as a positive experience (e.g. gardening, creating new items from recycled materials and engaging in environmental actions in groups). The higher the number of cognitive barriers faced by the respondents in understanding the risks of climate change, the greater is their tendency to disassociate the phenomenon from their personal green conscious behavior. Only a few respondents who are more aware of the adverse effects of climate change have less ambiguous climate change risk perceptions. 
However, they also take precautions not to bring unfavorable discourses of climate change (e.g. adverse effects of climate change and unsuccessful climate change remedial actions) into their everyday conversations. As Paige, a respondent, says, "personal relationships are not the best place to store my ideology". It is also observed that an aversion to imposing views, sometimes linked to "tall poppy syndrome" (a perceived tendency to discredit those who have prominence in public life), and a tendency not to bring discourses of complex issues into their valued relationships are common among the respondents. Surprisingly, the study finds no evidence of any association between environmental paradigms and climate change risk perceptions. Regardless of the paradigm they believe in, the respondents tend to disassociate themselves from the complexities of the climate change phenomenon.
Oscillation between environmental paradigms
The study finds that the environmental views shared among the respondents oscillate between the two environmental paradigms. Table I depicts a summary of the findings of this thematic category, oscillation between environmental paradigms. The first column contains examples of power quotes, the most compelling evidence gathered from the qualitative data collection (Pratt, 2009). The second column shows the findings highlighted at the generative and emergent stages of open and axial coding, respectively. The column also shows previous literature that supports these findings. Kilbourne and Polonsky (2005) report a negative relationship between one's beliefs in the anthropocentric paradigm and attitudes toward green conscious behavior. In contrast, as shown in Table I, the study finds that the respondents involved in various environmental actions appear to, at least partially, be motivated by both paradigms. For example, Kathy (Interview Excerpt no. 
2 in Table I) does not totally disagree with the anthropocentric view when she says, "We have a right to use natural resources for economic gain", but she also believes that the perception of humans as being preeminent is misguided. This is close to an eco-centric view. Overall, this confirms that drastic changes in how individuals perceive the natural environment may not be essential, as prevailing environmental views can be complementary in addressing environmental problems (Gollnhofer and Schouten, 2017). This study also finds that only a few respondents' environmental views strongly align toward a single paradigm. For example, among the respondents who strongly hold anthropocentric environmental views, Amy perceives the natural environment as a tool, a hammer. She believes that humans have the capacity to rebuild the natural environment whenever they find it necessary. Therefore, she sees "nothing inherently amazing about the natural environment". Words such as "tool" or "hammer" can be related to her anthropocentric views. Previous research shows that anthropocentric individuals tend to protect the environment owing to the benefits they derive from it (Elliott, 2014). Similarly, the respondents perceive humans as preeminent and the natural environment as a resource base available for human existence that, hence, needs to be protected. Previous studies find that green conscious individuals tend to hold eco-centric views more than their counterparts (Dunlap et al., 2000). Contrary to this, however, the current study finds that eco-centric views are strongly held by only three respondents. Table II provides some examples of power quotes (Pratt, 2009) pertaining to eco-centric views together with evidence from previous literature. It should be noted that the final confirmatory stage of the study did not reveal compelling evidence to draw a thematic category relating to respondents' eco-centric views. 
As shown in the table above, three respondents reject the distinction between humans and the environment and the idea of perceiving the natural environment as an external phenomenon separate from humans. Instead, they hold eco-centric views that see humans as an intrinsic part of nature. This is referred to as "connectedness to nature" or a "naturalistic view", in which individuals feel an emotionally connected relationship with the natural environment (Mayer and Frantz, 2004).
Anthropocentric environmentalism
This study finds an association between environmental paradigms and how the respondents engage in green conscious behavior. When explaining environmental actions with positive appeals at a collective level (e.g. having fun together, unity among friends, etc.), the respondents predominantly align with anthropocentric views. The respondents who share eco-centric views appear to be more passionate about engaging in green conscious behavior at an individual level. This unexpected finding has not been discussed in previous literature and, thus, represents a key finding of the study, which is elaborated in the discussion section. Table III shows the power quotes pertaining to an emerging association between environmental paradigms and how the respondents engage in green conscious behavior at collective and individual levels. As shown in Table III, Sean presents himself as a committed activist who engages in dumpster-diving. This involves procuring necessities from dumpsters. Recalling a recent dumpster-diving experience, which he enjoyed with a group of his friends, he also describes how he uses dumpster-diving to encourage youth involvement in environmental actions with a fun appeal as opposed to "preaching" (e.g. lecturing about the adverse effects of climate change), as the former is perceived to be more appealing to young individuals. Tim, who has been engaging in dumpster-diving for three years, also shares a similar view. 
He also (proudly) shows a photograph of his collection from dumpsters at the interview. Amy's dumpster-food is used to make free breakfasts for her environmental group members during environmental activities (e.g. a bicycling event). She explains: Sometimes, we run regular free dinners, and if you have a lot of dumpster food, you bring them along and cook that up. You know, community, free food for people. Several emotional affiliations such as fun, adventure and undertaking challenges can be seen in relation to these collective activities. The emotional benefits appear to drive these individuals' engagement in environmental actions, with this enthusiasm stronger among the committed activists who share predominantly anthropocentric views (e.g. Sean, Tim, Shasha and Amy) rather than eco-centric views. Respondents who share eco-centric views, such as Bob, appear to be more passionate about engaging in green conscious behavior at an individual level. He describes how gardening symbolizes his preference for making necessities himself, which he finds "kind of cool", rather than buying green commodities. Elly, another eco-centric respondent, also explains her passion for organic agriculture: I am a bit of an optimist here, but [I am] looking into biodynamics and an organic agriculture [...] which is organic [...], just mimicking what nature does [...] replicating ... So, I think the relationship between nature and humans needs to be respectable of what nature does well. The member check interviews, however, reveal a boundary condition to this thematic category: disappointment with aggressive activism may have triggered individuals to disassociate themselves from collective actions. At the member check interview, Aron, who is an eco-centric respondent, clearly distinguishes himself from mainstream activism owing to his disappointment with certain collective actions. 
He explains: When they rallied, people [environmental activists] were purposefully trying to get arrested and I never understood that. When it was put into the media when people heard about the event it wasn't "Oh! We successfully managed to show the government that we want this to be closed down". [But] it was about "Ah! Did you hear about the arrest?" So, it made causes seem less legitimate because it sounded like people were trying to be over the top rather than actually caring about the cause. Consistent with his peripheral engagement in group environmentalist activities, Bob, who predominantly holds eco-centric views, is not interested in collective actions. He believes that individuals who are affiliated with environmental groups "tend to be passionate to the extent that they almost push beyond reason". Subject to the boundary condition, it can be concluded that the respondents who predominantly engage in environmental actions with positive appeals are influenced by their stronger alignment toward the anthropocentric paradigm. On the other hand, the respondents who predominantly engage in green conscious behavior at an individual level (e.g. organic gardening) are influenced by their stronger alignment toward the eco-centric paradigm.

This research investigated whether environmental paradigms inform climate change risk perceptions and green conscious behavior among young adults. Overall, the study found that young adults perceive climate change to be a "non-local" problem owing to various reasons. They have no local (personal) experience of climate change, are reluctant to engage or are unable to relate local environmental problems, if any, to climate change. They also hesitate to be associated with discourses pertaining to climate change programs run by various governmental and other organizations as they perceive the programs as largely unsuccessful.
This study found no association between environmental paradigms and climate change risk perceptions. However, environmental paradigms and "non-local" climate change perceptions can be used to understand green conscious behavior. This section discusses three theoretical propositions generated by this study. It should be noted that this interpretive study explored the phenomenon with a view to making a theoretical contribution and hence the study used snowball sampling methods to recruit 20 young adults who engage in green conscious behavior as respondents. The expectation was to gain a deeper understanding of their subjective experiences in engaging in green conscious behavior. The study findings should, therefore, be complemented by additional research, using a larger sample, to draw more generalizable findings. This study found that regardless of the eco-centric or anthropocentric views they hold, young individuals demonstrate a low level of climate change risk perception. Willmott (2014) claims that shifting from the anthropocentric paradigm to the eco-centric paradigm is essential in addressing climate change problems at a societal level. However, this research stresses that the "non-local" climate change risks shared among young individuals should be resolved using programs aimed at enhancing knowledge about the adverse effects of climate change to personal lives. It appears pointless to push for anthropocentric and eco-centric paradigm changes as this seems to have no bearing on the issue (Gollnhofer and Schouten, 2017). Thus, it can be postulated that environmental paradigms do not inform climate change risk perceptions (1st proposition). This study also found that predominantly held environmental views can explain how young adults engage in green conscious behavior. Young adults appear to be engaged in green conscious behavior in two ways: collective level engagement; and individual level engagement. 
The former largely involves environmental actions, which mostly consist of collective actions (e.g. dumpster diving, rallies, etc.). The latter encompasses certain other green conscious behavior that mostly consists of individual actions (e.g. permaculture, gardening, etc.). Confirming previous research (Pentina and Amos, 2011), this research shows that environmental reasons as well as other emotional affiliations are associated with green conscious behavior. This is common among the individuals who predominantly hold anthropocentric views. This also confirms the young adults' effort to use positive connotations of their green conscious behavior to reconcile them with other individuals who may not be identified with green discourse owing to stereotyping or social labelling (Barnhart and Mish, 2017). However, this study also found that the eco-centric individuals tend to keep their engagement in green conscious behavior outside the public sphere. Thus, it partly confirmed previous research that eco-centric individuals tend to be engaged in particular green conscious behavior that signals human connection with the natural environment within a private sphere (Davison, 2008). Most of the green conscious behaviors performed collectively and enthusiastically are informed by an anthropocentric paradigm, and hence, this study partly disconfirms previous claims that individuals with eco-centric views consider themselves to be embedded elements of the natural environment and, as such, are highly engaged in green conscious behavior (Frantz and Mayer, 2014). Overall, this study postulates that environmental paradigms can be used to understand how young individuals engage in green conscious behavior (2nd proposition). Previous research (Whitmarsh, 2008) finds an association between the perception of physical vulnerability (e.g. rising sea level, floods, etc.) and climate change risk perceptions. 
Despite a few of the young adults having personal experiences of natural disasters, they were reluctant to associate these experiences with climate change. Their risk perceptions are not different from those of the other respondents, who find it difficult to imagine the potential effects of climate change in Australia. As such, their climate change risk perceptions are largely based on the adverse effects of climate change in other countries. This study, therefore, confirms previous research (Leiserowitz, 2006) reporting that individuals face several barriers in perceiving climate change risks, with the adverse effects of climate change perceived as a psychologically and geographically distant matter (Lorenzoni et al., 2007). Further, discourses of climate change and green conscious behavior have shifted from environmental wellbeing to efficient use of resources (Humphreys, 2014), and young individuals lead the pack (Prothero et al., 2010). Fielding and Head (2012) find that young adults with a greater sense of collectivism have more positive environmental attitudes than respondents who perceive environmental protection as a governmental responsibility. This is partly confirmed by the study: among other factors, skepticism toward existing climate change remedial actions resulted in "non-local" climate change risk perceptions among the young individuals, in turn encouraging them to dissociate from the prevailing climate change discourses and reframe their green conscious behavior with positive experiences. Thus, it can be postulated that climate change risk perceptions inform green conscious behavior (3rd proposition). The three key theoretical propositions of this inductive research can be depicted using a skeletal theoretical framework, as shown in Figure 1. According to Morse et al. (2008), a skeletal framework can serve to sensitize future research, providing an internal structure to a research program.
As shown in Figure 1, environmental paradigms, belief systems that guide how individuals relate to the natural environment, do not inform climate change risk perceptions (1st proposition). The environmental paradigms can be used to understand how young individuals engage in green conscious behavior that is manifested at collective and individual levels (2nd proposition). Climate change risk perceptions also inform the green conscious behaviors (3rd proposition). As shown in this study, young adults' environmental concerns - especially those of climate change - are overshadowed by several experiential and non-experiential factors and young adults seek emotional benefits through engaging in green conscious behavior. They either reject or ignore negative connotations associated with climate change, such as its possible local impact and distrust of governmental actions. This presents special challenges for marketers of green commodities. Young consumers can more easily be convinced by advertising messages than older generations (Len-Rios et al., 2016). Therefore, promotional campaigns for green commodities can be effectively used to convince young consumers to engage in green conscious behavior. However, such campaigns should consist of positive appeals, enthusiasm and opportunities to engage with the campaigns collectively (e.g. used-apparel sharing events, collective cycling and car-pooling). Generally, appeals appearing in advertising of green conscious behavior focus on restrictions and controls of existing behavior and display negative connotations. Such appeals might not be effective when targeting young consumers. Developing climate change awareness building programs is highly recommended. However, it is essential not to present climate change as an uncontrollable and unactionable problem as that will discourage the youth's interest in taking part in the programs or in being associated with the discourse. Ojala (2012, p.
630) found a positive relationship between "constructive" hopes and young adults' response to climate change-related problems. Otieno et al. (2013) also found that sensational styles of presenting climate change information significantly influence young adults' climate change risk perceptions. This study concludes that although a sensational or sensitive approach may be appealing to eco-centric individuals (Asplund, 2016), who largely engage in green conscious behaviors at an individual level, a positive approach is essential in managing the perception of climate change as a "non-local" issue as well as in promoting green commodities aimed at anthropocentric individuals. Emphasis on empowered connotations, positive emotional affiliations and constructive hopes (Ojala, 2012) can provide a market potential for green commodities (Prothero and Fitchett, 2000). This study postulated three key theoretical propositions pertaining to how young adults relate to climate change, their environmental paradigms and green conscious behavior and recommended that environmental programs should revisit the idea that the anthropocentric paradigm inhibits engagement in green conscious behavior. Further, promotional campaigns for green commodities can benefit greatly from incorporating messages with positive appeals that provide young individuals with opportunities to engage with the campaigns collectively. Several limitations of the findings should be noted when making future use of the theoretical propositions postulated by this research. The study used snowball sampling methods in recruiting 20 respondents and in-depth interviews with the purpose of investigating the unique experiences of young adults engaging in green conscious behavior. Future research can consider running a comparative study using a larger sample of individuals who can be pretested to determine their tendency to align with either anthropocentric or eco-centric environmental paradigms.
[SECTION: Findings]
Three thematic categories ("non-local" climate change risk, oscillation between environmental paradigms and anthropocentric environmentalism) emerged from the data. The study finds that "non-local" climate change risk perceptions and environmental paradigms inform green conscious behavior. However, no association between environmental paradigms and climate change risk perceptions is found. The study postulates a skeletal theoretical framework for understanding the green conscious behavior of young adults.
[SECTION: Value] Concerns about climate change are widely shared among global communities. However, according to the Australian Bureau of Statistics (2012), Australians' concerns about the environment are dynamic and have continued to shift in the past few years. Concern about climate change decreased in Australia from 73 per cent in 2007-2008 to 57 per cent in 2011-2012, with climate change ranked low in importance when compared to other concerns (Leviston et al., 2014). A 2018 public opinion survey exploring the concerns of Australians revealed that only 10 per cent of 650 respondents consider environmental issues as being the most important problem facing Australia. Currently, economic issues such as unemployment are at the forefront of Australians' minds, yet climate change is the single biggest issue facing our world (Roy Morgan survey, 2018). Concerns about climate change are related to one's environmental paradigm, which is an effective predictor of green conscious behavior (Dunlap and Van Liere, 1978; Shephard et al., 2015). An environmental paradigm represents commonly accepted belief systems regarding how individuals relate to the natural environment. For example, some individuals assume themselves to be controllers (i.e. masters) of the natural environment, establishing a high utility value for humans. On the other hand, some individuals assume themselves to be a part of the environment, possessing no control over it, but appreciating the inherent value of their surroundings regardless of its utility value for humans. The former group's beliefs fall into the anthropocentric environmental paradigm, while the latter group's beliefs fall into the eco-centric environmental paradigm. Anthropocentric individuals can perceive climate change as a controllable environmental problem. Eco-centric individuals can perceive climate change as a natural phenomenon whose adverse effects humans are subject to. Willmott (2014, p.
22) argues that shifting environmental views from the anthropocentric paradigm to the eco-centric paradigm is an essential condition to effectively address climate change problems as anthropocentric interferences on the natural environment might not necessarily resolve climate change problems. Ojala (2012, p. 630) finds a positive relationship between "constructive" hopes and young adults' response to climate change problems, as opposed to hopes that are based on the denial of climate change. The notion of "constructive" hope is generally related to favorable judgments of how effectively anthropocentric individuals can take actions to resolve climate change problems. On the contrary, eco-centric individuals who have a strong sense of inclusion toward nature are found to be more sensitive to climate change problems and understand climate change as a natural process (Asplund, 2016; Cheung et al., 2014). Given this ongoing debate, it is intriguing to further investigate how individuals' environmental paradigms are associated with climate change concerns and the resultant green conscious behavior. Moreover, the existing literature is contradictory: a shift in environmental views is found to be either necessary to ameliorate environmental degradation (Kilbourne and Carlson, 2008) or not essential, as the prevailing environmental views can be complementary (Gollnhofer and Schouten, 2017) in addressing environmental problems through effective use of technology (Humphreys, 2014). According to meta-reviews of green conscious behavior studies (Bamberg and Moser, 2007; Leonidou et al., 2010), attitude-behavioral intention models have been widely used to examine green conscious behavior during the last few decades.
The studies, however, reveal an issue pertaining to an attitude-behavior gap in green conscious behavior, that is, consumers do not necessarily convert green attitudes into green conscious behavior (Gupta and Ogden, 2006; Kollmuss and Agyeman, 2002; Mainieri et al., 1997; Wright and Klyn, 1998). Thus, it is recommended to seek alternative research approaches as attitudes alone cannot predict green conscious behavior, with some studies even urging caution when interpreting the results obtained using attitude-behavioral intention models (Auger and Devinney, 2007; Casey, 1992; Peattie, 2001; Royne et al., 2011; Shen, 2012; Wong et al., 1996). With a high level of green consciousness recorded in global surveys (ACNielsen, 2015), young adults appear to be making an effort to ensure environmental wellbeing at both a personal and collective level. Thus, this cohort can be a rich information source in gaining a deeper understanding about subjective experiences in engaging in green conscious behavior (Connolly and Prothero, 2008), as opposed to behavioral intentions. This study, therefore, explores subjective experiences of green conscious young adults (aged between 19 and 25 years). Given the prevalence of contradictory findings, this study also explores whether environmental paradigms can be associated with climate change risk perceptions and green conscious behavior. With their purchasing power estimated to be around $170 billion every year (comScore, 2012), young adults are gradually becoming more aware of how their choices, lifestyles and behaviors play a vital role in developing a sustainable society (Lim, 2017), and 84 per cent of those surveyed believe that it is their generation's responsibility to change the world (Keeble, 2013). If this is the case, it is high time that green marketers tap into this largely ignored green market (Unruh and Ettenson, 2010).
Climate change risk perceptions

Climate change risk perception is defined as the expectation of the occurrence of a climate change problem, understanding its adverse impacts on oneself and others and knowledge of the causes (Leiserowitz, 2006). Research shows that climate change risk perceptions are informed by factors such as awareness, confidence in scientists, personal efficacy (Kellstedt et al., 2008), the outdoor temperature (Joireman et al., 2010), feelings of guilt about carbon emissions (Ferguson and Branscombe, 2010), personal experience (Leiserowitz, 2006) and exposure to sensitive information regarding climate change effects (such as dangers to wildlife; Otieno et al., 2013). Interestingly, while some studies confirm a relationship between climate change risk perceptions and individuals' support for climate change remedial actions (Leiserowitz, 2006; Whitmarsh, 2007), other studies disconfirm this relationship, claiming that individuals find it difficult to clearly understand the adverse effects of this complex environmental problem and are not interested in climate change remedial actions (Kronlid and Ohman, 2013). Nevertheless, climate change problems are increasingly recognized as not only a significant environmental problem, but also an economic problem, with some controversial reports claiming that the impact of climate change will reduce the global economy by at least 5 per cent each year (Stern, 2008). Given the severity of the problem, Shepardson et al. (2012) stress the importance of using a systematic framework to build climate change awareness. It is further recommended to incorporate target individuals' understanding into such systematic frameworks, especially when cognitive dissonance and a lack of trust in science and government policy inhibit individuals' engagement in climate change remedial actions (Brownlee et al., 2013).
Anthropocentric and eco-centric paradigms

The notion of a paradigm is often described as a worldview which governs how individuals in a society collectively see, interpret and understand the world around them (Kilbourne and Carlson, 2008). In relation to green conscious behavior, each person views the relationship between humans and nature differently. These views are referred to as "environmental paradigms" (Dunlap and Van Liere, 1978) or human-nature connectedness (Burgh-Woodman and King, 2012). Broadly classified into two perspectives (the anthropocentric and eco-centric paradigms), previous research tends to debate the duality of these two paradigms (Barter and Bebbington, 2012), which is not the focus of this study. The term anthropocentric paradigm is often used to denote a set of beliefs that the environment should be preserved and protected because of its utility value for humans (Domanska, 2011; Dunlap, 2008). In contrast, the eco-centric paradigm is referred to as a set of beliefs that the environment should be preserved and protected because of its inherent value regardless of its utility for humans (Bailey and Wilson, 2009). Individuals who subscribe to the anthropocentric paradigm are commonly referred to as "anthropocentric individuals" (Surmeli and Saka, 2013), "instrumentalists" (Grankvist, 2015), "utilitarians" (Cembalo et al., 2016), "egoistic" and "social-altruistic individuals" (Schultz et al., 2005) and "individuals who manage the environment" (Purser et al., 1995). Individuals who subscribe to the eco-centric paradigm are commonly referred to as "eco-centric individuals" (Afsar et al., 2016), egalitarians (Price et al., 2014) and individuals with strong biospheric values whose environmental concerns are based on a value for all living things (Schultz et al., 2005, p. 392). Previous research provides many other classifications of environmental paradigms (Kronlid and Ohman, 2013; Price et al., 2014).
Informed by these studies, a general distinction between the anthropocentric and eco-centric paradigms is considered a useful conceptual starting point for this in-depth, qualitative investigation. It is found that as anthropocentric individuals' support of environmental protection is governed by human-centered values and utilitarian purposes, these individuals are less likely to act to protect the environment if other human-centered values (e.g. material quality of life, accumulation of wealth, etc.) interfere. In contrast, eco-centric individuals are inclined to protect the environment even if their actions involve discomfort, inconvenience and expenses that may reduce their material quality of life (Mayer and Frantz, 2004). Thus, while it can be seen that both eco-centric and anthropocentric individuals may be concerned about environmental wellbeing, the motives behind their concerns are different. However, as shown in more recent research, green conscious individuals usually tend to project motives that resonate with eco-centric individuals (Barter and Bebbington, 2012), who are more likely to convert environmental attitudes into actual behavior than anthropocentric individuals (Cheung et al., 2014). Regarding climate change, eco-centric individuals who have a strong sense of inclusion toward nature also tend to be more sensitive to climate change problems than anthropocentric individuals (Cheung et al., 2014). As such, the eco-centric paradigm (positively) and the anthropocentric paradigm (negatively) are associated with a general concern for environmental wellbeing and climate change related problems (Cheung et al., 2014; Grankvist, 2015). Kronlid and Ohman (2013) suggest an environmental ethical framework with two parts: "value-oriented environmental ethics" and "relation-oriented environmental ethics" (see Kronlid and Ohman, 2013, pp. 24-33). These generally resemble the distinction between anthropocentric and eco-centric environmental values.
The importance of developing an environmental awareness framework that takes climate change into consideration is stressed in previous literature because environmental paradigms have more influence over informing climate change risk perceptions than scientific information does (Price et al., 2014; Stevenson et al., 2014; Kronlid and Ohman, 2013). An Australian study finds that socio-demographic variables are positively associated with higher levels of eco-centrism (Casey and Scott, 2006), which, in turn, is found to be positively associated with sensitivity toward climate change related problems (Stevenson et al., 2014). On the other hand, more recent research finds a negative correlation between age and eco-centrism (Gangaas et al., 2015). Given the mixed evidence emerging from the existing literature, investigating whether environmental paradigms inform young adults' climate change risk perceptions and subsequent green conscious behavior remains a vital question. This investigation is also significant in helping to clarify the contradictory evidence that exists regarding young adults' actual engagement in green conscious behavior (Hume, 2010), as opposed to their willingness to purchase green commodities (ACNielsen, 2015). This study uses an interpretive approach with a view to exploring individuals' subjective experiences relating to green conscious behavior as opposed to intentional behavior (Thompson et al., 1989). This approach also responds to the criticism that quantitative studies based largely on simple linear models may not adequately capture the vital elements of contemporary issues in environmental debates (e.g. climate change issues; Kronlid and Ohman, 2013). Respondents' sense of the environment is considered important in exploring how they think and feel about climate change risk (Connolly and Prothero, 2008, p. 123; Royne et al., 2011). Therefore, the study recruited young adults who engage in green conscious behavior at varying degrees (e.g.
green commodity purchases, recycling or reusing, online support for environmental causes, active engagement in aggressive environmental actions, dumpster-diving, used apparel swapping, passionate engagement in organic gardening and choosing to work as "green-collar" employees at renewable energy companies) over a reasonable period (e.g. approximately 1 to 6 1/2 years). Appendix 1 contains the profiles of the respondents recruited, with pseudonyms being used to conceal their identity. Initially, pre-identified respondents were approached based on the researchers' observations and informal conversations with them. These respondents then either volunteered or were asked to recommend other individuals to participate in the study. This snowball sampling method (also known as chain-referral sampling) ensured having access to information-rich cases (Patton, 2002) and recruiting respondents who genuinely engage in green conscious behavior. Participant diversity was ensured through preliminary screening conversations with potential participants before commencing 20 in-depth interviews, each of 1 1/2 to 3 hours' duration, with young adults aged between 19 and 25 years. An interview protocol consisting of ten open-ended questions (see Appendix 2) aided the interviews. Consistent with the interpretive research tradition, the interview protocol was revised and redefined throughout the interview process. Ten member check interviews conducted via telephone conversations complemented the primary interviews. Member check interviews are a mechanism of checking the validity of the findings of interpretive research, either confirming or, at times, setting the boundary conditions of research findings (Wallendorf and Belk, 1989). All interviews were audio recorded and transcribed. Excluding the member check interviews, approximately 400 single-spaced pages of interview transcripts were analyzed.
The Straussian School of thought (Strauss and Corbin, 1998), commonly known as the constant comparative method (Lincoln and Guba, 1985, pp. 334-341), informed the data analysis of this qualitative study. A line-by-line analysis (microanalysis) of the interview transcripts was manually carried out through open, axial and selective coding, confirming and disconfirming the themes that occurred at each of the three stages (generative, emergent and confirmatory) of data interpretation (Spiggle, 1994). The first, generative stage involved open-coding to identify different concepts and their constituent meaning in the interview transcripts. At this stage, no respondents showed clear evidence of aligning toward one particular environmental paradigm. This generative stage led to further exploration of the environmental views. A constant comparison method (Lincoln and Guba, 1985) incorporated data from each additional interview transcript and generated open codes. The specific incidents for comparison were also derived from consulting previous literature (Strauss and Corbin, 1998). Next, the emergent stage involved axial-coding which found themes shared between respondents. The final confirmatory stage involved selective coding to draw the central phenomenon and systematically relate it to other categories, validating the relationships between the thematic categories. Overall, three themes emerged from analysis of the data: "non-local" climate change risk, oscillation between environmental paradigms and anthropocentric environmentalism. In this section, each thematic category is elaborated.

"Non-local" climate change risk

The respondents perceive climate change risk as "non-local" owing to several experiential as well as non-experiential factors. Confirming previous research (Connolly and Prothero, 2008; Whitmarsh, 2008), the respondents also perceived climate change as a complex phenomenon that is difficult to comprehend and/or imagine.
Except for two respondents who studied climate change science, the rest found it difficult to explain what climate change means to them or its adverse consequences on their own. For example, Julian tries to explain climate change as, "I guess we don't know what's exactly going on. Pretty hard to imagine". Chris, another respondent, appears to believe that Australia would not face similar consequences of climate change to other countries: Look, we had Katrina in the US and some of those kinds of very extreme events but we very rarely see that happen in Australia. Bush fires are probably the most recent one for Australia [...][it] only could kill a couple of hundred people in comparison to thousands in other countries. Words such as "pretty hard to imagine" (Julian), "we rarely see that happen in Australia" (Chris) and "it (climate change) is a big unknown" (Jess) suggest that climate change risk is beyond their comprehension. Whitmarsh (2008) finds that the indirect relationship between the experience of facing environmental disasters (e.g. floods) and climate change risk perceptions is mediated by environmental values. That is, when facing environmental disasters, individuals with higher pro-environmental values are likely to have a higher level of climate change risk perceptions. This study, however, finds that regardless of some respondents' personal experiences with environmental disasters and their environmental values, they do not necessarily relate those experiences to climate change problems. Aron, a respondent, easily recalls the incidents and victims of climate-change-related disasters in other countries (e.g. Bangladesh) but not in Australia. It is evident that the respondents distance themselves and their surroundings from climate change risk, either attributing the effects to other countries (Aron), saying "Australia will be alright, we are a big country" (Chris) or reflecting on the natural disasters reluctantly in relation to climate change (Elly).
Therefore, it can be concluded that climate change is "non-local" to them in the sense of "non-personal". Consequently, as also found by Ojala (2012), there is also a sort of denial of climate change that seems to inhibit the respondents' engagement in climate change related environmental actions. The theme of "non-local" also indicates the respondents' reluctance to be associated with discourses that are presumably unsuccessful climate change remedial actions (Humphreys, 2014). The respondents express unfavorable attitudes toward some of the climate change related actions taken by the state governments of Australia (e.g. political debates and policies) and firms (e.g. labelled as "green wash"). They also express unfavorable attitudes toward the media in distributing information about climate change, assuming the media are driven by political agendas. For example, Chris finds that the debate on the occurrence of climate change (e.g. climate change believers versus sceptics) is frustrating. Ellen also echoes this frustration: I think it's shit [lack of governmental actions on climate change]. I think that most of the time when the government puts forward policies that maybe initially they have, you know, they really want to make a change [...] they end up not being useful. Consequently, the respondents disassociate themselves from the mainstream discourse of climate change problems that they assume to be unfavorable. Instead, they reframe their engagement in green conscious behavior as a positive experience (e.g. gardening, creating new items from recycled materials and engaging in environmental actions in groups). The higher the number of cognitive barriers faced by the respondents in understanding the risks of climate change, the greater their tendency to disassociate the phenomenon from their personal green conscious behavior. Only a few respondents who are more aware of the adverse effects of climate change have less ambiguous climate change risk perceptions.
However, they also take care not to bring unfavorable discourses of climate change (e.g. adverse effects of climate change and unsuccessful climate change remedial actions) into their everyday conversations. As Paige, a respondent, says, "personal relationships are not the best place to store my ideology". It is also observed that a reluctance to impose views - linked to "tall poppy syndrome", a perceived tendency to discredit those who have prominence in public life - and a tendency not to bring discourses of complex issues into valued relationships are common among the respondents. Surprisingly, the study finds no evidence of any association between environmental paradigms and climate change risk perceptions. Regardless of the paradigm they believe in, the respondents tend to disassociate themselves from the complexities of the climate change phenomenon.
Oscillation between environmental paradigms
The study finds that the environmental views shared among the respondents oscillate between the two environmental paradigms. Table I depicts a summary of the findings of this thematic category, oscillation between environmental paradigms. The first column contains examples of power quotes, the most compelling evidence gathered from the qualitative data collection (Pratt, 2009). The second column shows the findings highlighted at the generative and emergent stages of open and axial coding respectively. The column also shows previous literature that supports these findings. Kilbourne and Polonsky (2005) report a negative relationship between one's beliefs in the anthropocentric paradigm and attitudes toward green conscious behavior. In contrast, as shown in Table I, this study finds that the respondents involved in various environmental actions appear to, at least partially, be motivated by both paradigms. For example, Kathy (Interview Excerpt no. 
2 in Table I) does not totally disagree with the anthropocentric view when she says, "We have a right to use natural resources for economic gain", but she also believes that the perception of humans as being preeminent is misguided. This is close to an eco-centric view. Overall, this confirms that drastic changes in how individuals perceive the natural environment may not be essential, as prevailing environmental views can be complementary in addressing environmental problems (Gollnhofer and Schouten, 2017). This study also finds that only a few respondents' environmental paradigms strongly align with a single paradigm. For example, among the respondents who strongly hold anthropocentric environmental views, Amy perceives the natural environment as a tool, a hammer. She believes that humans have the capacity to rebuild the natural environment whenever they find it necessary. Therefore, she sees "nothing inherently amazing about the natural environment". Words such as "tool" or "hammer" can be related to her anthropocentric views. Previous research shows that anthropocentric individuals tend to protect the environment owing to the benefits they derive from it (Elliott, 2014). Similarly, the respondents perceive humans as preeminent and the natural environment as a resource base available for human existence that, hence, needs to be protected. Previous studies find that green conscious individuals tend to hold eco-centric views more than their counterparts (Dunlap et al., 2000). Contrary to this, however, the current study finds that eco-centric views are strongly held by only three respondents. Table II provides some examples of power quotes (Pratt, 2009) pertaining to eco-centric views together with evidence from previous literature. It should be noted that the final confirmatory stage of the study did not reveal compelling evidence to draw a thematic category relating to respondents' eco-centric views. 
As shown in the table above, three respondents reject the distinction between humans and the environment and the idea of perceiving the natural environment as an external phenomenon separate from humans. Instead, they hold eco-centric views that see humans as an intrinsic part of nature. This is referred to as "connectedness to nature" or a "naturalistic view", in which individuals feel an emotionally connected relationship with the natural environment (Mayer and Frantz, 2004).
Anthropocentric environmentalism
This study finds an association between environmental paradigms and how the respondents engage in green conscious behavior. When explaining some environmental actions with positive appeals at a collective level (e.g. having fun together, unity among friends, etc.), the respondents' views predominantly are aligned toward anthropocentric views. The respondents who share eco-centric views appear to be more passionate about engaging in green conscious behavior at an individual level. This unexpected finding has not been discussed in previous literature and, thus, represents a key finding of the study which will be elaborated in the discussion section. Table III shows the power quotes pertaining to an emerging association between environmental paradigms and how the respondents engage in green conscious behavior at collective and individual levels. As shown in Table III, Sean presents himself as a committed activist who engages in dumpster-diving. This involves procuring necessities from dumpsters. Recalling a recent dumpster-diving experience, which he enjoyed with a group of his friends, he also describes how he uses dumpster-diving to encourage youth involvement in environmental actions with a fun appeal as opposed to "preaching" (e.g. lecturing about the adverse effects of climate change), as the former is perceived to be more appealing to young individuals. Tim, who has been engaging in dumpster-diving for three years, also shares a similar view. 
He also (proudly) shows a photograph of his collection from dumpsters at the interview. Amy's dumpster-food is used to make free breakfasts for her environmental group members during environmental activities (e.g. a bicycling event). She explains: Sometimes, we run regular free dinners, and if you have a lot of dumpster food, you bring them along and cook that up. You know, community, free food for people. Several emotional affiliations such as fun, adventure and undertaking challenges can be seen in relation to these collective activities. The emotional benefits appear to drive these individuals' engagement in environmental actions, with this enthusiasm stronger among the committed activists who share predominantly anthropocentric views (e.g. Sean, Tim, Shasha and Amy) rather than eco-centric views. Respondents who share eco-centric views, such as Bob, appear to be more passionate about engaging in green conscious behavior at an individual level. He describes how gardening symbolizes his preference for making necessities himself rather than buying green commodities, something he finds "kind of cool". Elly, who is another eco-centric respondent, also explains her passion toward organic agriculture: I am a bit of an optimist here, but [I am] looking into biodynamics and an organic agriculture [...] which is organic [...], just mimicking what nature does [...] replicating ... So, I think the relationship between nature and humans needs to be respectable of what nature does well. The member check interviews, however, reveal a boundary condition to this thematic category: disappointment toward aggressive activism may have triggered individuals to disassociate themselves from collective actions. At the member check interview, Aron, who is an eco-centric respondent, clearly distinguishes himself from mainstream activism owing to his disappointment with certain collective actions. 
He explains: When they rallied, people [environmental activists] were purposefully trying to get arrested and I never understood that. When it was put in to the media when people heard about the event it wasn't "Oh! We successfully managed to show the government that we want this to be closed down". [But] it was about "Ah! Did you hear about the arrest?" So, it made causes seem less legitimate because it sounded like people were trying to be over the top rather than actually caring about the cause. Consistent with his peripheral engagement in group environmentalist activities, Bob, while not predominantly holding eco-centric views, is not interested in collective actions. He believes that individuals who are affiliated with environmental groups "tend to be passionate to the extent that they almost push beyond reason". Subject to the boundary condition, it can be concluded that the respondents who predominantly engage in environmental actions with positive appeals are influenced by their stronger alignment toward the anthropocentric paradigm. On the other hand, the respondents who predominantly engage in green conscious behavior at an individual level (e.g. organic gardening) are influenced by their stronger alignment toward the eco-centric paradigm. This research investigated whether environmental paradigms inform climate change risk perceptions and green conscious behavior among young adults. Overall, the study found that young adults perceive climate change to be a "non-local" problem owing to various reasons. They have no local (personal) experience of climate change, are reluctant to engage or are unable to relate local environmental problems, if any, to climate change. They also hesitate to be associated with discourses pertaining to climate change programs run by various governmental and other organizations as they perceive the programs as largely unsuccessful. 
This study found no association between environmental paradigms and climate change risk perceptions. However, environmental paradigms and "non-local" climate change perceptions can be used to understand green conscious behavior. This section discusses three theoretical propositions generated by this study. It should be noted that this interpretive study explored the phenomenon with a view to making a theoretical contribution and hence the study used snowball sampling methods to recruit 20 young adults who engage in green conscious behavior as respondents. The expectation was to gain a deeper understanding of their subjective experiences in engaging in green conscious behavior. The study findings should, therefore, be complemented by additional research, using a larger sample, to draw more generalizable findings. This study found that regardless of the eco-centric or anthropocentric views they hold, young individuals demonstrate a low level of climate change risk perception. Willmott (2014) claims that shifting from the anthropocentric paradigm to the eco-centric paradigm is essential in addressing climate change problems at a societal level. However, this research stresses that the "non-local" climate change risks shared among young individuals should be resolved using programs aimed at enhancing knowledge about the adverse effects of climate change to personal lives. It appears pointless to push for anthropocentric and eco-centric paradigm changes as this seems to have no bearing on the issue (Gollnhofer and Schouten, 2017). Thus, it can be postulated that environmental paradigms do not inform climate change risk perceptions (1st proposition). This study also found that predominantly held environmental views can explain how young adults engage in green conscious behavior. Young adults appear to be engaged in green conscious behavior in two ways: collective level engagement; and individual level engagement. 
The former largely involves environmental actions, which mostly consist of collective actions (e.g. dumpster diving, rallies, etc.). The latter encompasses certain other green conscious behavior that mostly consists of individual actions (e.g. permaculture, gardening, etc.). Confirming previous research (Pentina and Amos, 2011), this research shows that environmental reasons as well as other emotional affiliations are associated with green conscious behavior. This is common among the individuals who predominantly hold anthropocentric views. This also confirms the young adults' effort to use positive connotations of their green conscious behavior to reconcile them with other individuals who may not be identified with green discourse owing to stereotyping or social labelling (Barnhart and Mish, 2017). However, this study also found that the eco-centric individuals tend to keep their engagement in green conscious behavior outside the public sphere. Thus, it partly confirmed previous research that eco-centric individuals tend to be engaged in particular green conscious behavior that signals human connection with the natural environment within a private sphere (Davison, 2008). Most of the green conscious behaviors performed collectively and enthusiastically are informed by an anthropocentric paradigm, and hence, this study partly disconfirms previous claims that individuals with eco-centric views consider themselves to be embedded elements of the natural environment and, as such, are highly engaged in green conscious behavior (Frantz and Mayer, 2014). Overall, this study postulates that environmental paradigms can be used to understand how young individuals engage in green conscious behavior (2nd proposition). Previous research (Whitmarsh, 2008) finds an association between the perception of physical vulnerability (e.g. rising sea level, floods, etc.) and climate change risk perceptions. 
Despite a few of the young adults having personal experiences of natural disasters, they were reluctant to associate those experiences with climate change. Their risk perceptions are no different from those of the other respondents, who find it difficult to imagine the potential effects of climate change in Australia. As such, their climate change risk perceptions are largely based on the adverse effects of climate change in other countries. This study, therefore, confirms previous research (Leiserowitz, 2006) reporting that individuals face several barriers in perceiving climate change risks, with the adverse effects of climate change perceived as a psychologically and geographically distant matter (Lorenzoni et al., 2007). Further, discourses of climate change and green conscious behavior have shifted from environmental wellbeing to the efficient use of resources (Humphreys, 2014), and young individuals lead the pack (Prothero et al., 2010). Fielding and Head (2012) find that young adults with a greater sense of collectivism have more positive environmental attitudes than respondents who perceive environmental protection as a governmental responsibility. This is partly confirmed by the current study: among other factors, skepticism toward existing climate change remedial actions resulted in "non-local" climate change risk perceptions among the young individuals, in turn encouraging them to dissociate from the prevailing climate change discourses and reframe their green conscious behavior with positive experiences. Thus, it can be postulated that climate change risk perceptions inform green conscious behavior (3rd proposition). The three key theoretical propositions discussed in this section of this inductive research can be depicted using a skeletal theoretical framework, as shown in Figure 1. According to Morse et al. (2008), a skeletal framework can serve to sensitize future research, providing an internal structure to a research program. 
As shown in Figure 1, environmental paradigms, the belief systems that guide how individuals relate to the natural environment, do not inform climate change risk perceptions (1st proposition). The environmental paradigms can be used to understand how young individuals engage in green conscious behavior that is manifested at collective and individual levels (2nd proposition). Climate change risk perceptions also inform green conscious behaviors (3rd proposition). As shown in this study, young adults' environmental concerns - especially those of climate change - are shadowed by several experiential and non-experiential factors, and young adults seek emotional benefits through engaging in green conscious behavior. They either reject or ignore negative connotations associated with climate change, such as its possible local impact and distrust of governmental actions. This presents special challenges for marketers of green commodities. Young consumers can be convinced by advertising messages more easily than older generations (Len-Rios et al., 2016). Therefore, promotional campaigns for green commodities can be effectively used to convince young consumers to engage in green conscious behavior. However, such campaigns should consist of positive appeals, enthusiasm and opportunities to engage with the campaigns collectively (e.g. used-apparel sharing events, collective cycling and car-pooling). Generally, appeals appearing in advertising of green conscious behavior focus on restrictions and controls of existing behavior and display negative connotations. Such appeals might not be effective when targeting young consumers. Developing climate change awareness building programs is highly recommended. However, it is essential not to present climate change as an uncontrollable and unactionable problem, as that will discourage the youth's interest in taking part in the programs or in being associated with the discourse. Ojala (2012, p. 
630) found a positive relationship between "constructive" hopes and young adults' responses to climate change-related problems. Otieno et al. (2013) also found that sensational styles of presenting climate change information significantly influence young adults' climate change risk perceptions. This study concludes that although a sensational or sensitive approach may be appealing to eco-centric individuals (Asplund, 2016), who largely engage in green conscious behaviors at an individual level, a positive approach is highly essential in managing the perception of climate change as a "non-local" issue as well as in promoting green commodities aimed at anthropocentric individuals. Emphasis on empowered connotations, positive emotional affiliations and constructive hopes (Ojala, 2012) can provide a market potential for green commodities (Prothero and Fitchett, 2000). This study postulated three key theoretical propositions pertaining to how young adults relate to climate change, their environmental paradigms and green conscious behavior, and recommended that environmental programs revisit the idea that the anthropocentric paradigm inhibits engagement in green conscious behavior. Further, promotional campaigns for green commodities can benefit greatly from incorporating messages with positive appeals that provide young individuals with opportunities to engage with the campaigns collectively. Several limitations of the findings should be noted in the future use of the theoretical propositions postulated by this research. The study used snowball sampling methods in recruiting 20 respondents and in-depth interviews with the purpose of investigating the unique experiences of young adults engaging in green conscious behavior. Future research can consider running a comparative study using a larger sample of individuals who can be pretested to determine their tendency to align with either the anthropocentric or the eco-centric environmental paradigm.
|
Recommendations are provided on how to sustain young adults' interest in environmental wellbeing and how to promote green commodities in young consumer markets. Suggestions include creating clear awareness of climate change with a constructive or positive appeal to resolve "non-local" climate change risk perceptions, and positioning green commodities as "pro-actions" or "solutions", as opposed to "reactions", when reaching young consumer markets.
|
[SECTION: Purpose] The emergence of social networking sites (SNSs) has resulted in the rapid evolution of online community platforms into popular forums for communication and entertainment, while users' word-of-mouth behavior has become an increasingly decisive influence. With the growing maturity of technologies related to SNSs, business managers have learned to use them to increase commercial profits (Trusov et al., 2009). In other words, firms are striving to increase interactivity among brands' SNSs, website users, and non-website users to generate positive outcomes through internet-enabled dissemination. For example, Global Web Index (2014), a market research institution, found that social media had continued to grow and develop over the previous year, based on global social media usage data. In 2013, the number of new registered users on popular social websites increased by 135 million. In 2014, the total number of Facebook users reached 1.393 billion, and the site generated total revenue of USD$3.85 billion in the fourth quarter of the year, an increase of 3.18 percent compared with the third quarter (Facebook, 2015). That same year, the total revenue of Facebook reached USD$12.47 billion, an increase of 58 percent compared with the previous year. Daily users increased by 18 percent, compared with 13 percent total growth of non-daily users (Business Next, 2015). It has become popular for companies to use Facebook as a customer service channel to communicate their brands to their customers. More companies build up relationships with their customers, as well as respond to and solve problems from customers, through Facebook rather than through other social media (Social Time, 2015). In addition, the development model of online marketing has gradually transformed from business-to-consumer (B2C) to consumer-to-consumer (C2C), a revolutionary and well-received model that enables interactive e-commerce (Chu and Liao, 2007). 
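The growth percentages cited above imply the prior-period revenue figures. A minimal back-of-envelope sketch; the derived values are my own arithmetic from the quoted percentages, not figures reported by the source:

```python
# Implied prior-period revenue from the growth rates cited in the text.
# Input figures (USD billions) come from the passage; the outputs are
# back-of-envelope derivations, not reported numbers.

def prior_value(current: float, growth_pct: float) -> float:
    """Value of the period before a given percentage increase."""
    return current / (1 + growth_pct / 100)

q4_2014 = 3.85       # Q4 2014 revenue, stated as +3.18% over Q3 2014
fy_2014 = 12.47      # FY2014 revenue, stated as +58% over FY2013

print(f"Implied Q3 2014 revenue: ~${prior_value(q4_2014, 3.18):.2f}B")
print(f"Implied FY2013 revenue: ~${prior_value(fy_2014, 58):.2f}B")
```

The same helper reverses any of the percentage-growth claims in the paragraph, which makes the reported quarter-over-quarter and year-over-year figures easy to cross-check.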
Because of the advances in internet technology, online WOM, which differs from traditional, offline WOM, allows internet users to transmit messages to hundreds or thousands of people with just a few clicks (Mangold and Faulds, 2009). Regarding media value, Vitrue, a firm specializing in social media, calculates that the impressions generated by one million fans are equivalent to a media value of USD$300,000 per month. For example, the Starbucks Facebook page has a fan base of approximately 6.5 million, translating into an annual media value of USD$23.4 million. On average, one fan generates USD$3.60 in media value per year, and one million fans are worth USD$3.6 million (Moorman et al., 1993). Therefore, the quicker a fan base expands, the greater the media value generated. In practice, firms use Facebook brand fan pages to create interaction and rapport with fans. The companies then combine these pages with other online marketing activities to transfer advertising from cyberspace to offline environments (Electronic Commerce Times, 2010). Because of Facebook's high reach rates, numerous firms have created pages to garner popularity. In addition, Facebook brand fan pages benefit firms by serving as a channel for managers to inform fans of new product information and to announce relevant activities (Social Media Marketing Co., 2011). In addition to maintaining positive trust relationships between brand manufacturers and consumers, online community platforms allow brands to communicate product information to consumers (thereby establishing information exchange and interactions with similar communities) and assist community members in their future purchase decisions. Previous studies investigating virtual brand communities have discussed cognitive trust and affective trust and whether these two factors are keys to the successful management of virtual communities (Lin, 2008; Yeh and Choi, 2011). 
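The media-value figures above all follow from Vitrue's single estimate of USD$300,000 per month per million fans. A minimal consistency-check sketch (the function name is mine; the constants are taken from the passage):

```python
# Consistency check of the Vitrue media-value figures quoted in the text:
# one million fans ~ USD 300,000 in media value per month.

MONTHLY_VALUE_PER_MILLION_FANS = 300_000  # USD per month, per 1M fans (Vitrue)

def annual_media_value(fans: float) -> float:
    """Annual media value in USD for a fan base of the given size."""
    return fans / 1_000_000 * MONTHLY_VALUE_PER_MILLION_FANS * 12

# Starbucks: ~6.5M fans -> matches the USD$23.4M annual figure in the text.
print(f"Starbucks: ${annual_media_value(6_500_000) / 1e6:.1f}M per year")

# Single fan -> matches the USD$3.60 per-fan figure in the text.
print(f"Per fan: ${annual_media_value(1):.2f} per year")

# One million fans -> matches the USD$3.6M figure in the text.
print(f"Per million fans: ${annual_media_value(1_000_000) / 1e6:.1f}M per year")
```

All three cited numbers (USD$23.4M, USD$3.60, USD$3.6M) fall out of the one monthly rate, confirming that the per-fan figure is an annual, not monthly, value.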
However, these studies have typically studied the parallel relationship between cognitive and affective trust rather than the non-parallel relationship between them. Lewis and Weigert (1985) pointed out that the difference between cognitive and affective attitudes forms a long-standing debate. In the organizational research literature, the relationship between cognitive trust and affective trust has been clarified as non-parallel. However, the cause-and-effect relationship between cognitive trust and affective trust has not been explored in the context of virtual communities (e.g. Venetis and Ghauri, 2004; Vesel and Zabkar, 2009; Yeh and Choi, 2011). Based on previous organizational research, trust is not a single holistic construct; it also includes a cause-and-effect relationship between cognitive and affective factors. Previous scholars have advocated that cognitive trust is an antecedent of affective trust (Johnson and Grayson, 2005; McAllister, 1995). The current study advocates that, in a virtual context, the attitude of community members is formed by the same phenomenon. It is therefore necessary to clarify the relationship between cognitive trust and affective trust. This study examines the mutual influence of cognitive and affective trust to assess whether cognitive trust in a virtual brand community affects the establishment of users' affective trust. In addition, SNSs are characterized by frequent interpersonal interactions. During such interactions, virtual community users are susceptible to the influence of other users (Bearden et al., 1989). Thus, this study also explores whether social influence among virtual community users is affected by cognitive trust, affective trust, and the sense of virtual community among members. Social influence can be divided into normative influence and informative influence according to user motivation for participating in virtual communities. 
In building up our theory and illustrating a research gap, this study considers three dimensions as exogenous variables in the model. These dimensions may affect and transform their relationships to each other. Previous literature has explored these dimensions, originally put forth by Wasko and Faraj (2005), who investigated the factors that influence voluntary knowledge sharing on SNSs at the individual, relational (social), and group levels. The research of Tsai et al. (2012) was a follow-up study to Wasko and Faraj (2005); it proposed extending the theory to different community environments and put forward the idea that community participation involves complex interpersonal exchange processes. Hsu (2012) continued to follow the studies of Wasko and Faraj (2005) and Tsai et al. (2012) in distinguishing the individual, social, and group levels, and combined them with the tricomponent attitude model proposed by Rosenberg and Hovland (1960) as a theoretical basis. The tricomponent attitude model comprises three psychological components: cognition, affection, and conation (action). For the purposes of this study, the personal level is divided into cognitive trust and affective trust in the cognition phase, with cognitive factors as important antecedents of affective factors. Hsu (2012) indicated that affective trust is an important antecedent of affection in developing an important community relationship. Previous research has typically targeted the group level through the sense of a virtual community, which involves individual influences on virtual communities, but has rarely investigated the influence exerted by virtual community members on their interpersonal relationships (Tonteri et al., 2011) and the finely formed mechanism of social norms (Hsu et al., 2016). Therefore, this study develops a theoretical framework to explore the relationships among individual, group, and social factors. 
Based on the research motivation previously mentioned, this study primarily aims to achieve the following purposes: to explore the factors of trust among virtual community members from an individual perspective, the sense of virtual community from a group perspective, and the normative/informative influence in interpersonal interactions among members from a social influence perspective, and to examine the interaction process among these three perspectives; and to propose two mediating factors and conduct relevant tests. First, affective trust is a mediator between cognitive trust and the sense of virtual community. Second, the sense of virtual community is a mediator between trust (cognitive trust and affective trust) and social influence (normative influence and informative influence).
2.1 Community website and Facebook fan pages
SNSs are a form of virtual community. SNS users can create a public profile, interacting and sharing common interests with friends as well as with strangers from real life (Kuss and Griffiths, 2011). Enabled by the internet, SNSs mainly provide users with the following three functions: to create a public or semi-public profile of personal information; to customize lists of users for sharing information; and to view and track information provided by other users (Boyd and Ellison, 2008). SNSs are defined as virtual communities that provide members with interactions based on the Web 2.0 concept. For example, social networking and multimedia content sharing not only preserve users' existing social networks, but also allow connections among strangers who share common interests. According to this definition, the first SNS, SixDegrees.com, originated in 1997. This website provided a platform for users to create profiles and friend lists. Although SNSs emerged in cyberspace in 1997, their rapid growth and popularity began in 2003 when websites such as MySpace, LinkedIn, Flickr, Facebook, and YouTube were launched. 
In addition to the original social networking functions, certain SNSs included options that allowed users to share multimedia, such as uploading photos and videos. Subsequently, SNSs began to garner global attention and the number of SNS users grew exponentially. The launch of SNSs such as Facebook and MySpace changed the manner in which internet users communicated and interacted around the world. Facebook has since become the largest SNS in the world (Boyd and Ellison, 2008; Chiu et al., 2008; Kuss and Griffiths, 2011; Nadkarni and Hofman, 2012). Facebook pages, introduced in 2007, are public profiles that enable firms and users to share company news and product updates. Facebook members update their linking status with their page(s) to share with their friends through real-time feeds. Subsequently, Facebook continues to disseminate real-time updates to broader networks through online WOM when friends of these fans interact with their pages. In addition to Facebook's stunning growth in membership, pages are another feature that distinguishes these sites. Sysomos Inc. (2009) conducted the first large-scale survey regarding Facebook pages, which at the time exceeded 630,000. The results indicate that each page has 4,596 fans on average and that page owners post on the page wall every 15.7 days on average, demonstrating rapid fan-base growth. Business Next (2015) held the second Facebook page poll and identified the strongest fan bases (i.e. Facebook pages) based on popularity, page content, and long-term operating outcomes. Pages can be used for business promotion, commercial marketing, or sharing professional knowledge. Members of a business, organization, or club share various social networking or marketing activities on their associated pages and announce upcoming activities. 
These pages update and inform fans' Facebook friends, or users viewing the pages, of information relevant to specific activities, which then attracts additional users with common interests, thereby achieving brand promotion (Pempek et al., 2009).
2.2 Trust theory
Trust verifies evidence and serves to generate a feeling of affirmation, which is a key factor influencing the formation of relationships and partnerships (Giffin, 1967; McKnight and Chervany, 2002). Trust is vital for establishing interpersonal relationships and virtual communities, especially in uncertain or high-risk environments, such as electronic markets (Ba and Pavlou, 2002; Moorman et al., 1993). Lewis and Weigert (1985) and McAllister (1995) asserted that interpersonal trust stems from cognitive and affective bases, and that networking on SNSs results from social interactions. Cognitive trust arises from calculations and rational assessments that originate from accumulated knowledge. Such knowledge enables people to predict with some degree of confidence that their partner in the relationship will conform to their expectations. This knowledge is amassed from previous observations of the partner's external behavior and reputation. In other words, cognitive trust in the SNS context refers to internet users' assessment of the reliability of information based on users' existing capabilities and knowledge. Conversely, affective trust is formed by affections and social interactions, and is built on people's care and concern for each other; that is, affective trust arises from mutual affection and results in emotional connections in interpersonal relationships (Johnson and Grayson, 2005; Yeh and Choi, 2011).
2.3 Sense of virtual community
A sense of virtual community was originally defined as the sense of belonging that members have toward their community, allowing them to convey beliefs and reach a mutual understanding, thereby demonstrating their commitment to the community. 
A sense of community can be divided into four elements: membership, influence, integration and fulfillment of needs, and shared emotional connection. These elements have been used in theories regarding the sense of virtual community (Blanchard, 2007; Tonteri et al., 2011), and each is elucidated as follows: membership: the sense of belonging that members perceive regarding their community, which serves as a common symbol within the community that members self-reinforce to meet community needs and obtain approval; influence: the influence exerted on members by the community or other members, or members' belief that they are capable of influencing others in the community; integration and fulfillment of needs: members' belief that the community, or the resources and support provided by other members, can satisfy their needs (e.g. joining a community provides specific advantages or rewards); and shared emotional connection: members of a community share a common experience, history, time, and space; that is, they experience events together and engage in positive interactions that lead to enhanced relationships (Abfalter et al., 2011; Koh and Kim, 2003).

2.4 Social influence

Generally, during a decision-making process, individuals consider not only the matter at hand but also the surrounding social group or environment. This phenomenon is called social influence. Although social influence entails numerous dimensions, in this study it is considered the susceptibility to interpersonal influence, in order to facilitate the discussion of SNSs and interactions among members. Bearden et al. (1989) indicated that when people interact in a group, the interaction induces changes in perception or behavior; this transformative process is social influence. Scholars have assessed personality attributes that predispose a person to others' influence, such as low self-esteem.
Dual-process theory in psychology postulates that messages or information received by a person may exert influence through persuasion, and that such influence is divided into two types: normative influence and informative influence (Deutsch and Gerard, 1955). Normative influence refers to a person's conforming to social norms or others' expectations to obtain the approval of a group, thus adopting cognitive or behavioral patterns congruent with the group (Cheung et al., 2009). Informative influence arises from acknowledging information obtained as evidence of reality, and is primarily based on the recipient's assessment of the information received, including its content, source, and other recipients (Hovland et al., 1953).

This research proposes an integrated model of the relationships among influencing factors at different levels. Based on the research of Wasko and Faraj (2005), this study investigates factors at the individual, group, and social levels as the theoretical foundation for developing a research model of community relationships. Previous studies have confirmed that social capital theory provides the most accurate explanation of interpersonal Facebook relationships on SNSs (e.g. Burke et al., 2010, 2011; Ellison et al., 2011; Zhao et al., 2016). Facebook is particularly well suited for bridging "social capital". Social capital describes the capacity of individuals or groups embedded in social networks to obtain resources (Bourdieu, 1986; Coleman, 1988). The establishment of a social network relationship involves viewpoints at different levels, individual, group, and social influence, wherein trust is seen as a key construct of social capital at the individual level (Zhao et al., 2016). Connections between different clusters or groups within a network are often called "bridging" ties (Burt, 1992), and they are conducive to building strong relationships.
On the other hand, bridging ties are also characterized by repeated interactions within trustworthy, highly supportive, and intimate relationships, which typically enable the acquisition of capital and become a more substantive form of social relationship (Ellison et al., 2014). People build relationships through social interactions and, with social capital, build their expectations for future social resources. A key issue in changing people's attitudes or behavior is that such a change can transform the attitude or behavior of a group or community (Latkin et al., 2009). Conceptually, groups and social levels change through social diffusion: presumed social diffusion alone can be enough to cause others to change their behavior, and social behaviors spread through a community by way of social groups. The conceptual operation of social diffusion lies in determining social norms, which are an important part of social cognitive theory (Bandura, 1986). The most successful examples involve altering social norms related to behavior change from the perspective of a group or a society (Latkin et al., 2009). Therefore, the framework of this research is divided into three perspectives: the individual, the group, and the social influence perspectives. The individual perspective is used to explore trust among SNS users toward a specific virtual community. The group perspective is employed to observe the sense of virtual community. The social influence perspective, consisting of both normative and informative influence, is used to discuss the joint influence of the individual and group perspectives on the social influence perspective (Figure 1).

3.1 Individual perspective

Previous studies have typically classified trust as both cognitive and affective trust. However, the relationship between cognitive and affective trust is seldom discussed.
In a study of organizational contexts, Johnson and Grayson (2005) indicated that cognitive trust is an antecedent of affective trust. Scholars studying attitude theory have long disputed the relationship between cognitive and affective trust in relation to attitude. Previous theoretical and empirical research has shown that cognitive trust positively and significantly influences affective trust (Johnson and Grayson, 2005; McAllister, 1995). Related studies on service relationships and e-commerce have also indicated that cognitive trust influences the formation of affective trust (Dabholkar et al., 2009; Johnson and Grayson, 2005). Chih et al. (2015) investigated online shoppers' buying behavior from positive- and negative-cognition perspectives and distinguished between cognitive trust and affective trust. They determined that cognitive trust is an antecedent of affective trust, and their empirical results show that cognitive trust must be established in order to gain consumer trust and build relationships: for example, when a virtual community provides accurate and credible shopping information, consumers become willing to build an affective linkage with that community. Therefore, this study proposes the following hypothesis:

H1. In a virtual community, members' cognitive trust has a significant and positive effect on affective trust.

3.2 Connection between individual and group

In highly uncertain environments, trust helps people build interactive relationship networks. Because activities in virtual communities lack face-to-face contact, online communication requires trust, and trust facilitates the successful implementation of virtual communities. For example, in an environment without norms, partners must trust each other to execute socially acceptable interactions (Lin, 2008).
Blanchard and Markus (2004) asserted that in virtual communities, identification methods can enhance trust, thereby increasing members' sense of virtual community. This reflects contemporary SNSs' requirement that members provide their real names when registering to join a site. When members demonstrate trust toward a virtual community, they form a committed relationship with that community, which facilitates the formation of a sense of virtual community (Tsai et al., 2011; Wang and Tai, 2011). Ellonen et al. (2007) indicated that a deep sense of trust between community members allows them to assist one another because of the benefits of sharing a common social network and expectations, thus fostering a sense of virtual community. According to the findings of Blanchard and Markus (2004), trust between members develops after mutual support is demonstrated, which results in a multifaceted sense of virtual community. In addition, Lin (2008) asserted that successful virtual communities must generate trust between members, thereby producing a sense of virtual community. McMillan and Chavis (1986) found that trust can alleviate anxiety and insecurity for community members; they also found that relationships between members become closer, and that members feel a sense of belonging, when these relationships enhance trust between members and provide assistance during the online interactive process. Zhu et al. (2012) indicated that trust is an important antecedent of shaping members' sense of community. Zhao et al. (2012) also confirmed that trust has a significant and positive effect on the sense of belonging to a virtual community. In other words, trust is likely to prompt the trustor to become more attached to the relationship with the virtual community. Thus, this study proposes the following hypothesis: H2.
In a virtual community, members' (a) cognitive trust and (b) affective trust have significant and positive effects on their sense of virtual community.

3.3 Connection between individual and society

In virtual communities, establishing trust relationships with influential members is considered the foundation of interpersonal relationships, because members frequently make decisions that conform to the opinions and suggestions provided by other members, who are strangers to them (Park and Feinberg, 2010). Casalo et al. (2011) indicated that trust toward online travel communities influences whether consumers accept the advice offered by a virtual community and subsequently purchase travel packages. Lascu and Zinkhan (1999) pointed out that when a group or community exhibits reliability, user conformity increases within the community. Because most online information is free, users' trust toward virtual communities affects the interpersonal relationships they form within a community (Boush et al., 1993; Park and Feinberg, 2010). This suggests that members' assessment of the reliability of the information provided, using their existing skills and knowledge, as well as the care and concern developed through emotional connections and social interactions, serve as factors that prompt members to conform to community norms and act upon suggestions provided by other members. These factors also act as references for members in purchase decisions. An individual who has built up a sense of trust with the community will seek acceptance by others and change her/his attitude or behavior to meet the expectations of community members. Chin et al. (2009) found that online shoppers' trust has a significant and positive effect on social influence. Consumers tend to observe and gain information from others to understand products and services based on trust. Hsu et al.
(2011), studying bloggers' interactive networks, indicated that blog community members develop a sense of trust toward the community and comply with its common understandings and regulations, thereby establishing standard behaviors within the community. Therefore, this study proposes the following hypotheses:

H3. In a virtual community, members' cognitive trust has significant and positive effects on (a) normative influence and (b) informative influence.

H4. In a virtual community, members' affective trust has significant and positive effects on (a) normative influence and (b) informative influence.

3.4 Connection between group and society

A sense of virtual community is a sentiment generated by experiences in a virtual community that induces a sense of belonging and deep attachment toward that community (Blanchard and Markus, 2004; Koh and Kim, 2003; Tonteri et al., 2011). A number of scholars have asserted that for virtual community users, a sense of belonging to such communities enhances their normative and informative influences (Lee and Park, 2008). A sense of belonging toward a virtual community is considered an antecedent to the formation of social influence, namely normative and informative influence (Park and Feinberg, 2010). In a virtual community, a sense of community is generated when members recognize similarities and develop an intention to continue interacting, thus increasing their normative and informative influences within the community (Lascu and Zinkhan, 1999; Shen et al., 2010). Hsu et al. (2016) advocated that members' sense of virtual community and their community social influence both increase when Facebook fan page members treat the community as part of their daily lives and acquire others' affirmation and praise. Hsu et al. (2016) also confirmed that the sense of virtual community has significant and positive effects on normative and informative influence, respectively. Thus, this study proposes the following hypothesis: H5.
In a virtual community, members' sense of virtual community has significant and positive effects on (a) normative influence and (b) informative influence.

4.1 Research design and data collection

Our target population is Facebook fan page members, because Facebook is the largest virtual community in Taiwan. This research applies the Google Docs (https://drive.google.com) online service, which has no time or geographical limits, to create an online questionnaire, which was released on a Facebook fan page and a PTT BBS station. Fan page users can connect to the questionnaire through a link. This study collected 422 responses, of which 312 were usable, yielding a response rate of 73.93 percent. Table I shows the respondents' demographic information, with males comprising 51.92 percent. Among the respondents, 57.37 percent were between 18 and 24 years old, and the largest proportion held a bachelor/associate degree, accounting for 67.95 percent. About 36.22 percent of the respondents had used Facebook for one to two years, and the majority had not been members for more than three years. In addition, 44.23 percent of users surfed Facebook three to five hours each day.

4.2 Measure

This study aims to explore internet users' behavior in a virtual community. To ensure the validity of the instrument, the items used to measure the constructs are drawn from scales developed in previous research. The items in the questionnaire are divided into six parts, comprising five scale dimensions and demographic variables. All items are measured using a seven-point Likert scale ranging from "strongly disagree" (1) to "strongly agree" (7). Demographic variables are categorical data with single-item measures and include gender, age, education, length of Facebook membership, and daily Facebook usage (see Table I). In total, the questionnaire contains 36 items across five scales plus five demographic variables.
4.3 Sample validity

To reflect the Facebook user population structure precisely, this study used the gender ratio of the sample structure to determine whether it matches the population structure. In addition to using statistics from CheckFacebook (2013) as the baseline reference, we also follow Hsu et al. (2016) in taking gender as the test variable. The population gender proportion is 50.6 percent male users and 49.4 percent female users. A χ² goodness-of-fit test yields a χ² value of 2.697 with a p-value of 0.101, which is larger than 0.050. Thus, the null hypothesis cannot be rejected, and no significant difference exists between the sample structure and the population structure of Taiwanese Facebook users' gender ratio provided by CheckFacebook. To avoid or reduce problems generated by common method variance (CMV), a two-stage prevention procedure is conducted. First, we designed the survey questionnaire with the following precautions: the constructs are arranged randomly, the research objective is not shown on the questionnaire, and the survey is conducted anonymously in order to discourage uniformly consistent answers from respondents. Second, Harman's one-factor test is applied to examine whether a CMV problem exists in the sample data, using both an exploratory factor analysis (EFA) (Harman, 1967; Podsakoff and Organ, 1986) and a confirmatory factor analysis (CFA). The EFA includes all items, and the results show that six factors are extracted and that the first factor explains 40.573 percent of the variance, which is lower than 50 percent (Wang et al., 2014). In the CFA, all items are subsumed into one factor, and the results show that not all item factor loadings are higher than 0.5.
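The gender goodness-of-fit check described above can be sketched with SciPy. The observed counts below are reconstructed from the reported sample size (312) and male share (51.92 percent), and the expected counts from the CheckFacebook split (50.6/49.4 percent); these are illustrative inputs, so the resulting statistic need not equal the article's reported χ² of 2.697, which depends on its exact population data.

```python
from scipy.stats import chisquare

n = 312                            # usable sample size reported in the article
observed = [162, 150]              # approx. 51.92% male, 48.08% female
expected = [n * 0.506, n * 0.494]  # CheckFacebook (2013) gender split

stat, p_value = chisquare(observed, f_exp=expected)

# One degree of freedom; if p_value > 0.05, the null hypothesis
# (sample gender structure matches the population structure) is retained.
print(f"chi-square = {stat:.3f}, p = {p_value:.3f}")
```

The same retain-the-null logic applies to the article's reported statistic: 2.697 falls below the 3.841 critical value for one degree of freedom at the 0.05 level.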
Furthermore, as the model fit of Harman's one-factor test (χ² = 3,306.19, df = 594, χ²/df = 5.57, GFI = 0.53, AGFI = 0.47, RMSEA = 0.12, CFI = 0.63, NFI = 0.59) is worse than that of the hypothesized model (χ² = 799.65, df = 535, χ²/df = 1.50, GFI = 0.88, AGFI = 0.85, RMSEA = 0.04, CFI = 0.96, NFI = 0.90), no significant CMV problem exists in the data.

4.4 Analysis of measurement model

As essential prerequisites for achieving valid results, the reliability, convergent validity, and discriminant validity of the measurement model are assessed. Item reliability is assessed by applying the factor loadings and squared multiple correlations (SMC), and construct reliability is assessed by applying Cronbach's α. As shown in Table II, all factor loadings are above 0.5 and all SMCs are above 0.2, indicating good item reliability (Bentler and Wu, 1993; Hair et al., 2010). Also, the Cronbach's α values are above 0.7, indicating good scale reliability (Nunnally, 1978). To test the ability of the measurement to reflect actual circumstances, this study tests for convergent validity and discriminant validity. Convergent validity can be assessed in terms of the average variance extracted (AVE) from the latent variables and the composite reliability (CR); discriminant validity refers to the correlations between constructs. Table II shows that the CRs are above 0.6 and that the AVEs of the latent variables are above the acceptable value of 0.5, with the exception of normative influence and informative influence. However, according to Fornell and Larcker (1981), a scale still indicates convergent validity if its CR is higher than 0.6. Thus, convergent validity can be confirmed for all scales (Bagozzi and Yi, 1988; Hulland, 1999).
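The CR and AVE criteria used above follow Fornell and Larcker (1981) and can be computed directly from standardized factor loadings. A minimal sketch with hypothetical loadings (the article's item-level loadings appear in its Table II and are not reproduced here):

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # error variance = 1 - loading^2
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a four-item construct
loadings = [0.82, 0.78, 0.75, 0.70]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)

# Rules of thumb used in the article: CR above 0.6 and AVE above 0.5
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")
```

A scale whose AVE falls short of 0.5 but whose CR exceeds 0.6, as with the normative and informative influence scales above, is still taken to show adequate convergent validity under Fornell and Larcker's criterion.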
Furthermore, Table III indicates that the correlations between constructs are less than 1 and that the square root of the AVE of any particular construct is greater than the correlations of that construct with the other constructs (Fornell and Larcker, 1981; Gaski and Nevin, 1985).

4.5 Structural model

This study examines the structural model and considers the model-fit indices suggested by Hair et al. (2010): absolute fit measures, incremental fit measures, and parsimonious fit measures. All of the model-fit indices exceed the suggested criteria. A structural model is applied to examine the proposed hypotheses regarding the relationships among constructs; the result is shown in Figure 2. The model explains 43.6 percent of the variance in affective trust, 54.2 percent in sense of virtual community, 44.4 percent in normative influence, and 40.3 percent in informative influence. H1 posits that cognitive trust influences affective trust. Figure 2 indicates that the path coefficient is 0.661 (p < 0.001), thus supporting H1. H2, which states that (a) cognitive trust and (b) affective trust affect the sense of virtual community, respectively, is also confirmed (γ21 = 0.381, p < 0.001; β21 = 0.427, p < 0.001). The positive effects of cognitive trust on (a) normative influence and (b) informative influence are also supported (γ31 = 0.287, p < 0.001; γ41 = 0.182, p < 0.05), thereby confirming H3. Furthermore, the results show positive influences of affective trust on (a) normative influence and (b) informative influence (β31 = 0.279, p < 0.001; β41 = 0.334, p < 0.001), thus confirming H4. Finally, the effects of sense of virtual community on (a) normative influence and (b) informative influence are significant (β32 = 0.189, p < 0.05; β42 = 0.199, p < 0.05), thereby supporting H5.
4.6 Mediating effects

Based on the demonstrated effects of trust on sense of virtual community and social influence, this study tests the mediating effect of affective trust between cognitive trust and sense of virtual community, and of sense of virtual community between cognitive/affective trust and normative/informative influence. According to Table IV, the bootstrapped 95 percent confidence intervals from both the percentile and the bias-corrected methods do not contain zero. These results support the finding that affective trust mediates the relationship between cognitive trust and sense of virtual community, and that sense of virtual community mediates between cognitive/affective trust and normative/informative influence. Furthermore, following prior research, three steps are required to test a mediation effect (Baron and Kenny, 1986; Komiak and Benbasat, 2006). In step 1, this study treats cognitive trust as the independent variable and sense of virtual community as the dependent variable, and finds a significant relationship between them (β = 0.584, p < 0.001). In step 2, this study builds a model with cognitive trust as the independent variable and affective trust as the dependent variable, indicating a significant effect (β = 0.622, p < 0.001). In step 3, this study builds a model with both cognitive trust and affective trust as independent variables and sense of virtual community as the dependent variable, and the effects of both cognitive and affective trust on sense of virtual community are significant. Thus, affective trust partially mediates between cognitive trust and sense of virtual community. Finally, this study also conducts the Sobel (1982) test to assess the significance of the mediation effect (Wood et al., 2008). The results demonstrate that affective trust significantly mediates between cognitive trust and sense of virtual community (Sobel = 7.114, p < 0.001).
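The Sobel (1982) statistic referenced above is z = ab / sqrt(b²·SE_a² + a²·SE_b²), where a and b are the two paths making up the indirect effect. A sketch using the article's path estimates for cognitive trust on affective trust (0.622) and affective trust on sense of virtual community (0.427), with hypothetical standard errors, since the article reports only the resulting z value:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel (1982) z-statistic for the indirect effect a * b."""
    se_ab = math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    return (a * b) / se_ab

# Path estimates from the article; the standard errors (0.05) are hypothetical
z = sobel_z(a=0.622, se_a=0.05, b=0.427, se_b=0.05)

# |z| > 1.96 indicates a significant indirect effect at the 0.05 level
print(f"Sobel z = {z:.3f}")
```

Bootstrapped confidence intervals, as reported in Table IV, are generally preferred to the Sobel test because the sampling distribution of the product ab is not normal in small samples; the article reports both.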
The mediating effects of sense of virtual community on the relationships between cognitive/affective trust and normative/informative influence are also examined. Both a bootstrapping analysis and the Sobel test are conducted, and the results in Table V show that sense of virtual community partially mediates the relationships between cognitive/affective trust and normative/informative influence.

5.1 Research implications

This study contributes to understanding how trust at the individual level transforms into a sense of virtual community at the group level and into normative/informative influence at the social level. The findings indicate that the research model has good explanatory power and provides a more complete explanation of members' interactive relationships at different levels in the context of Facebook brand fan pages. Regarding the individual perspective, this study references the research of Lewis and Weigert (1985) and divides trust into cognitive and affective trust to assess and clarify the non-parallel relationship between these two constructs in a virtual community. The results indicate that the formation of affective trust is partially dependent on cognitive trust. As in Chih et al.'s (2015) study, a virtual community must establish an accurate and reliable message source in order to build relationships with members. Virtual community members form affective trust, with its emotional feeling and attachment, when the product information they obtain satisfies their rational demands. Second, regarding the individual-group perspective, this study identifies two salient factors, cognitive trust and affective trust, as important antecedents of the sense of virtual community. As in Zhao et al.'s (2012) study, trust plays a key role in increasing brand community members' sense of belonging. The basis of interaction at the individual level is cognitive trust and affective trust.
If a virtual community manager devotes more effort to managing the community, consumers gain confidence in the relevant information disclosed within it and then interact frequently with other members; in this scenario, virtual community members feel a stronger sense of belonging. Third, regarding the individual-social perspective, this study investigates the individual factors involved in the normative/informative influence that page fans have on other fans (i.e. members). This suggests that because of trust in other members' abilities, or trust stemming from emotional connections, fans of a brand page change their perspectives or behaviors to meet the expectations of other fans. The results for H3 and H4 are similar to those of Hsu et al. (2011). Although cognitive trust has significant and positive effects on both normative and informative influence, its effect on informative influence is comparatively weak. Therefore, a virtual community must build cognitive trust from a rational perspective in order to create normative influence within the community and obtain members' acceptance. Affective trust has significant and positive influences on both normative and informative influence, so a virtual community must build affective trust in order to create informative influence among community members. Virtual community members seek others' suggestions and adopt correct and reliable information to reduce uncertainty and fear in the community. Finally, from the group-social perspective, this study finds significant and positive effects of the sense of virtual community on normative/informative influence. The result for H5 matches Hsu et al.'s (2016) findings. In addition, the sense of virtual community is a core concept in the research model: such a sense not only strengthens the sense of belonging among virtual community members, but also establishes a bridge between trust and social influence.
Behavioral norms among virtual community members mean that members perceive a common sense during interaction and comply with a common understanding when establishing their behaviors from a community-life point of view.

5.2 Managerial implications

This study advocates that brand fan pages establish a platform that virtual community members trust, and in whose discussions they are willing to participate over the long term, from the perspectives of both rational cognitive trust and emotional affective trust. More engaged customers establish a closer trust relationship with the virtual community, so managers should provide brand and product-related knowledge, offer interactive discussions, reply to service-related questions on fan pages to increase members' cognitive trust in the brand, and organize fan page activities to increase opportunities for emotional exchange and enhance affective trust. Cognitive/affective trust and the sense of virtual community are key factors in establishing virtual community cohesion, building a reliable brand and belief, and developing the emotional relationships that underpin a sense of belonging. For example, Toyota cleverly used social media to resolve a vehicle recall crisis by building up customer trust. The company set up a team to track negative rumors on Facebook and elsewhere, responded with the facts, and opened a dedicated Twitter account to communicate with consumers. Toyota sought online fans to spread the discussion through the company's media channels, taking advantage of the company's decade-long performance reputation, that is, a sense of reliable trust and durable brand commitment. The company effectively used social media and other new media to offset the most negative messages, successfully met the brand challenge, and averted a brand disaster, providing several important community management strategies for other companies.
A brand fan page must establish an ideal community environment in which virtual community members are more likely to share and deliver brand/product-related messages through cognitive/affective trust and social influence (Brown et al., 2007). The biggest benefit for consumers engaging in a brand community is thus the formation of strong relationships among brands, products, other customers, and companies. Customers increase their trust relationship with the brand, and the community forms an invisible norm, after customers create a close relationship with the brand (Habibi et al., 2014). Virtual community members choose to gain product messages and knowledge from other members to understand their personal experiences of product use. Regarding the sense of virtual community and social influence, community managers must consider how to provide enough useful information and interactive activities to build users' sense of belonging and cohesion through the altruistic behavior of social relationships. Virtual community members can communicate through the online platform to gain psychological support and attract other participating individuals who are willing to maintain a relationship with the community. Many people publish their own messages in a virtual community. Firms must further promote commercial behavior within the virtual community so that, by participating, people can meet particular needs, such as community, business transactions, knowledge, and entertainment (Hagel, 1999). This means that interpersonal relationships between virtual community members are the basis for community development and generate emotional exchanges for community members (Lee and Chang, 2011). Managers must build a successful virtual community to attract the attention of internet users and provide sufficient incentives for virtual community fans to share information (Ho and Dempsey, 2010).
For example, managers should not only encourage members to provide relevant electronic word-of-mouth (e-WOM) information, but also promote recently launched brands through promotional activities on social media that provide relevant information to virtual community members. Virtual community fans are more willing to use e-WOM to promote a brand and create brand value when they are involved in the brand platform. Brand fan pages not only link firms with fans, but also help virtual community fans to increase discussion activity and maintain the popularity of the fan page. For example, when new products are introduced, firms can disclose ideas or comments on styles and features on Facebook, making it possible to investigate preferences and dissimilarities between fans and other consumers by collecting messages, comments, votes, and so forth. A virtual community creates membership, identification, and links with fan brands by meeting consumer demands (Fournier and Lee, 2009). The results indicate that social influence among members or Facebook page fans is enhanced by cognitive trust, affective trust, and the sense of virtual community. Thus, a long-term sense of virtual community toward Facebook brand fan pages can be induced if online platform developers establish an environment in which users develop trust and participation. Methods of achieving this include providing knowledge related to brands or products, assigning staff to answer questions posed by users, and hosting activities that encourage fans to interact, thereby enhancing cognitive and affective trust between fans. After users develop cognitive trust, affective trust, and the sense of virtual community, the social influence among members is likely to increase. The thriving development of the internet and SNSs has transformed common communication methods and lifestyles, and SNSs have become popular platforms for interaction and communication. For firms, crowds are business opportunities.
Consequently, various firms aim to build platforms that facilitate positive interactions and communication with consumers on SNSs, thereby increasing product sales or brand value. Understanding consumer behavior in the context of virtual brand communities is required for firms to obtain tangible benefits and value from their interactive platforms. The results provide insights for brand companies in the creation of platforms on Facebook or other SNSs to enable B2C or C2C interactions. 5.3 Research contributions This study is based on trust theory and follows the concept of social exchange theory (Blau, 1964; Thibaut and Kelley, 1959). Blau (1964) advocated that trust within relationships is an important idea, especially in the process of exchange among individuals: cultivating good rapport provides people with a reason not to run away from social obligations. In the virtual environment, trust affects users' willingness to exchange messages with other members and is an important factor in continuing participation in the community (Blanchard et al., 2011; Ridings et al., 2002; Yeh and Choi, 2011). Trust is a psychological state (Rousseau et al., 1998) and a multi-faceted concept (Lewis and Weigert, 1985; McAllister, 1995; Riegelsberger et al., 2003). Cognitive trust and affective trust are considered to have a non-parallel relationship in organization research (Lewis and Weigert, 1985). However, this study successfully divides trust into cognitive trust and affective trust to clarify the non-parallel relationship between these two constructs in a virtual community. Past scholars have only investigated the group level of interpersonal relationships for virtual community members, but not other levels (Tonteri et al., 2011). 
This study concerns different levels, distinguishes between the individual, group, and social levels, and draws on trust theory to separate trust into cognitive trust and affective trust as the antecedents of a sense of virtual community. In addition, social influence, regarded as susceptibility to interpersonal influence (Bearden et al., 1989), is treated as a result of a sense of virtual community. 5.4 Limitations and future research Some limitations exist in this study. First, this study adopts a cross-sectional survey, which means it does not explore the subsequent internal changes and actual usage behavior of Facebook brand fan page users. Subsequent researchers could use a longitudinal survey. Second, this study only surveys Taiwanese Facebook users and does not present a comprehensive picture of Facebook users in different countries. Subsequent researchers could investigate international brand fan page users. Third, this study only surveys Facebook fan page users; different virtual community platforms may yield different findings and inferences. This study recommends that researchers follow up this study on different virtual community platforms, such as Twitter and Plurk. Fourth, this study only designs the multi-level constructs using structural equation modeling. Future researchers can redesign the questionnaire to collect data from multiple groups of respondents. Finally, this study does not consider other constructs at the individual level, whereas a gap exists in considering users' personality characteristics, such as extraversion, introversion, narcissism, self-esteem, self-worth, and neuroticism (Nadkarni and Hofmann, 2012). At the group level, factors such as the sense of virtual community can be divided into three facets (membership, influence, and immersion; Koh and Kim, 2003) to verify the relationship.
The purpose of this paper is to explore a model of how people are influenced from the perspectives of individual (cognitive trust and affective trust), group (sense of virtual community), and social influence (normative influence and informative influence) factors.
[SECTION: Method] The emergence of social networking sites (SNSs) has resulted in the rapid evolution of online community platforms into popular forums for communication and entertainment, while users' word-of-mouth behavior has become an increasingly decisive influence. With the growing maturity of technologies related to SNSs, business managers have learned to use them to increase commercial profits (Trusov et al., 2009). In other words, firms are striving to increase interactivity among brands' SNSs, website users, and non-website users to generate positive outcomes through internet-enabled dissemination. For example, Global Web Index (2014), a market research institution, found that social media had continued to grow and develop in the previous year based on global social media data. In 2013, the number of new registered users on popular social websites increased by 135 million. In 2014, the total number of Facebook users reached 1.393 billion, and Facebook generated total revenue of USD$3.85 billion in the fourth quarter of the year, an increase of 3.18 percent compared with the third quarter (Facebook, 2015). That same year, the total revenue of Facebook reached USD$12.47 billion, an increase of 58 percent compared with the previous year. Daily users increased 18 percent, compared with 13 percent total growth of non-daily users (Business Next, 2015). It has become popular for companies to use Facebook as a customer service channel to communicate their brands to customers. More companies build up relationships with their customers, as well as respond to and solve customers' problems, through Facebook rather than other social media (Social Time, 2015). In addition, the development model of online marketing has gradually transformed from business-to-consumer (B2C) to consumer-to-consumer (C2C), a revolutionary and well-received model that enables interactive e-commerce (Chu and Liao, 2007). 
Because of the advances in internet technology, online WOM, which differs from traditional, offline WOM, allows internet users to transmit messages to hundreds or thousands of people with just a few clicks (Mangold and Faulds, 2009). Regarding media value, Vitrue, a firm specializing in social media, calculates that the impressions generated by one million fans are equivalent to a media value of USD$300,000 per month. For example, the Starbucks Facebook page has a fan base of approximately 6.5 million, translating into an annual media value of USD$23.4 million. On average, one fan generates USD$3.60 in media value per year, and one million fans are worth USD$3.6 million (Moorman et al., 1993). Therefore, the quicker a fan base expands, the greater the media value generated. In practice, firms use Facebook brand fan pages to create interaction and rapport with fans. The companies then combine these pages with other online marketing activities to transfer advertising from cyberspace to offline environments (Electronic Commerce Times, 2010). Because of Facebook's high reach rates, numerous firms have created pages to garner popularity. In addition, Facebook brand fan pages benefit firms by serving as a channel for managers to inform fans of new product information and to announce relevant activities (Social Media Marketing Co., 2011). In addition to maintaining positive trust relationships between brand manufacturers and consumers, online community platforms allow brands to communicate product information to consumers (thereby establishing information exchange and interactions with similar communities) and assist community members in their future purchase decisions. Previous studies investigating virtual brand communities have discussed cognitive trust and affective trust and whether these two factors are keys to the successful management of virtual communities (Lin, 2008; Yeh and Choi, 2011). 
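The Vitrue media-value arithmetic cited above can be reproduced with a short sketch; the constant and function name are illustrative, not taken from the source.

```python
# Reproduces the Vitrue media-value arithmetic discussed above.
# The constant and function name are illustrative, not from the source.
MONTHLY_VALUE_PER_MILLION_FANS = 300_000  # USD per month, per Vitrue's estimate

def annual_media_value(fans: int) -> float:
    """Annual media value in USD for a given fan count."""
    return fans / 1_000_000 * MONTHLY_VALUE_PER_MILLION_FANS * 12

starbucks_fans = 6_500_000
total = annual_media_value(starbucks_fans)   # USD$23.4 million per year
per_fan = total / starbucks_fans             # USD$3.60 per fan per year
print(f"annual value: ${total:,.0f}; per fan: ${per_fan:.2f}")
```

The per-fan figure confirms the USD$3.60 quoted in the text: USD$23.4 million spread over 6.5 million fans.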
However, these studies have typically examined the parallel relationship between cognitive and affective trust rather than the non-parallel aspect between them. Lewis and Weigert (1985) pointed out that the difference between cognitive and emotional attitudes forms a long-standing debate. The concept of cognitive trust and affective trust has been clarified as a non-parallel relationship in the organizational research literature. However, the cause-and-effect relationship between cognitive trust and affective trust has not been explored in the context of virtual community research (e.g. Venetis and Ghauri, 2004; Vesel and Zabkar, 2009; Yeh and Choi, 2011). Based on previous organizational research, trust is not a single holistic construct; it also includes a cause-and-effect relationship between cognitive and affective factors. Previous scholars have advocated that cognitive trust is an antecedent of affective trust (Johnson and Grayson, 2005; McAllister, 1995). The current study advocates that, in a virtual context, the attitudes of community members are formed by the same phenomenon. It is therefore necessary to clarify the relationship between cognitive trust and affective trust. This study examines the mutual influence of cognitive and affective trust to assess whether cognitive trust in a virtual brand community affects the establishment of users' affective trust. In addition, SNSs are characterized by frequent interpersonal interactions. During such interactions, virtual community users are susceptible to the influence of other users (Bearden et al., 1989). Thus, this study also explores whether social influence among virtual community users is affected by cognitive trust, affective trust, and the sense of virtual community among members. Social influence can be divided into normative influence and informative influence according to user motivation for participating in virtual communities. 
In building up our theory and illustrating a research gap, this study considers three dimensions as exogenous variables in the model. These dimensions may affect and transform their relationships to one another. Previous literature has explored these dimensions, originally put forth by Wasko and Faraj (2005), who investigated the factors that influence voluntary knowledge sharing on SNSs at the individual, relational (social), and group levels. The research of Tsai et al. (2012) was a follow-up study to Wasko and Faraj (2005); it proposed extending the theory to different community environments and put forward the idea that community participation involves complex interpersonal exchange processes. Hsu (2012) continued to follow the studies of Wasko and Faraj (2005) and Tsai et al. (2012) in distinguishing the individual, social, and group levels and combined them with the tricomponent attitude model proposed by Rosenberg and Hovland (1960) as a theoretical basis. The tricomponent attitude model is divided into three psychological processes: cognition, affection, and conation (the action component). For the purposes of this study, the personal level is divided into cognitive trust and affective trust in the cognition phase, and cognitive factors are important antecedents of affective factors. Hsu (2012) indicated that affective trust is an important antecedent of affection in developing an important community relationship. Previous research has typically targeted the group level through the sense of virtual community, which involves individual influences on virtual communities, but has rarely investigated the influence exerted by virtual community members on their interpersonal relationships (Tonteri et al., 2011) or the finely formed social norms mechanism (Hsu et al., 2016). Therefore, this study develops a theoretical framework to explore the relationships among individual, group, and social factors. 
Based on the research motivation previously mentioned, this study primarily aims to achieve the following purposes: to explore factors of trust among virtual community members from an individual perspective, the sense of virtual community from the group perspective, and the normative/informative influence in interpersonal interactions among members from a social influence perspective, and to examine the interaction process among these three perspectives; and to propose two mediating factors and conduct relevant tests. First, affective trust is a mediator between cognitive trust and the sense of virtual community. Second, the sense of virtual community is a mediator between trust (cognitive trust and affective trust) and social influence (normative influence and informative influence). 2.1 Community website and Facebook fan pages SNSs are a form of virtual community. SNS users can create a public profile and interact and share common interests with friends as well as with strangers from real life (Kuss and Griffiths, 2011). Enabled by the internet, SNSs mainly provide users with the following three functions: to create a public or semi-public profile of personal information; to customize lists of users for sharing information; and to view and track information provided by other users (Boyd and Ellison, 2008). SNSs are defined as virtual communities that provide members with interactions based on the Web 2.0 concept. For example, social networking and multimedia content sharing not only preserve users' existing social networks, but also allow connections among strangers who share common interests. According to this definition, the first SNS, SixDegrees.com, originated in 1997. This website provided a platform for users to create profiles and friend lists. Although SNSs emerged in cyberspace in 1997, their rapid growth and popularity began in 2003 when websites such as MySpace, LinkedIn, Flickr, Facebook, and YouTube were launched. 
In addition to the original social networking functions, certain SNSs included options that allowed users to share multimedia, such as uploading photos and videos. Subsequently, SNSs began to garner global attention and the number of SNS users grew exponentially. The launch of SNSs such as Facebook and MySpace changed the manner in which internet users communicated and interacted around the world. Facebook has since become the largest SNS in the world (Boyd and Ellison, 2008; Chiu et al., 2008; Kuss and Griffiths, 2011; Nadkarni and Hofmann, 2012). Facebook pages, introduced in 2007, are public profiles that enable firms and users to share company news and product updates. Facebook members update their linking status with their page(s) to share with their friends through real-time feeds. Facebook then continues to disseminate real-time updates to broader networks through online WOM when friends of these fans interact with their pages. In addition to Facebook's stunning growth in membership, pages are another feature that distinguishes these sites. Sysomos Inc. (2009) conducted the first large-scale survey regarding Facebook pages, which at that time exceeded 630,000. The results indicate that each page had 4,596 fans on average and that page owners posted on the page wall every 15.7 days on average, demonstrating rapid fan-base growth. Business Next (2015) held the second Facebook page poll and identified the strongest fan base (i.e. Facebook page) based on popularity, page content, and long-term operating outcomes. Pages can be used for business promotion, commercial marketing, or sharing professional knowledge. Members of a business, organization, or club share various social networking or marketing activities on their associated pages and announce upcoming activities. 
These pages update and inform fans' Facebook friends, and users viewing the pages, of information relevant to specific activities, which then attracts additional users with common interests, thereby achieving brand promotion (Pempek et al., 2009). 2.2 Trust theory Trust verifies evidence and serves to generate a feeling of affirmation, which is a key factor influencing the formation of relationships and partnerships (Giffin, 1967; McKnight and Chervany, 2002). Trust is vital for establishing interpersonal relationships and virtual communities, especially in uncertain or high-risk environments, such as electronic markets (Ba and Pavlou, 2002; Moorman et al., 1993). Lewis and Weigert (1985) and McAllister (1995) asserted that interpersonal trust stems from cognitive and affective bases, and that networking on SNSs results from social interactions. Cognitive trust arises from calculations and rational assessments that originate from accumulated knowledge. Such knowledge enables people to predict with some degree of confidence that their partner in the relationship will conform to their expectations. This knowledge is amassed from previous observations of the partner's external behavior and reputation. In other words, cognitive trust in the SNS context refers to internet users' assessment of the reliability of information based on users' existing capabilities and knowledge. Conversely, affective trust is formed by affections and social interactions, and is built on people's care and concern for each other; that is, affective trust arises from mutual affection and results in emotional connections in interpersonal relationships (Johnson and Grayson, 2005; Yeh and Choi, 2011). 2.3 Sense of virtual community A sense of virtual community was originally defined as the sense of belonging that members have toward their community, allowing them to convey beliefs and reach a mutual understanding, thereby demonstrating their commitment to the community. 
A sense of community can be divided into four elements: membership, influence, integration and fulfillment of needs, and shared emotional connection. These elements have been used in theories regarding the sense of virtual community (Blanchard, 2007; Tonteri et al., 2011) and each is elucidated as follows: membership: a sense of belonging that members perceive regarding their community, which serves as a common symbol within the community that members self-reinforce to meet community needs and obtain approval; influence: the influence exerted on members by the community or other members, or the belief that members hold that they are capable of influencing others in the community; integration and fulfillment of needs: members believe that the community, or the resources and support provided by other members, can satisfy their needs (e.g. joining a community provides specific advantages or rewards); and shared emotional connection: members of a community share a common experience, history, time, and space; that is, they experience events together and engage in positive interactions that lead to enhanced relationships (Abfalter et al., 2011; Koh and Kim, 2003). 2.4 Social influence Generally, during a decision-making process, individuals consider not only the matter at hand but also the surrounding social group or environment. This phenomenon is called social influence. Although social influence entails numerous dimensions, in this study it is considered as susceptibility to inter-individual influence to facilitate the discussion of SNSs and interactions among members. Bearden et al. (1989) indicated that when people interact in a group, it induces changes in perception or behavior; this transformative process is social influence. Scholars have assessed personality attributes that predispose a person to others' influence, such as low self-esteem. 
Dual-process theory in psychology postulates that messages or information received by a person may exert influence through persuasion, and that such influence is divided into two types: normative influence and informative influence (Deutsch and Gerard, 1955). Normative influence refers to a person's conforming to social norms or others' expectations to obtain the approval of a group, thus adopting cognitive or behavioral patterns congruent with the group (Cheung et al., 2009). Informative influence arises from acknowledging obtained information as evidence of reality, and is primarily based on the recipient's assessment of the information received, including its content, source, and other recipients (Hovland et al., 1953). This research proposes an integrated model of the relationship between different levels of influencing factors. Based on the research of Wasko and Faraj (2005), this study investigates factors at the individual, group, and social levels as the theoretical basis for developing a research model of community relationships. Previous studies have confirmed that social capital theory is the most accurate explanation of interpersonal Facebook relationships on SNSs (e.g. Burke et al., 2010, 2011; Ellison et al., 2011; Zhao et al., 2016). Facebook is particularly well suited for bridging social capital. Social capital describes the capacity of individuals or groups to obtain resources embedded in their social networks (Bourdieu, 1986; Coleman, 1988). The establishment of a social network relationship includes different levels of viewpoints for individuals, groups, and social influence, wherein trust is seen as a key construct of social capital at the individual level (Zhao et al., 2016). Connections between different clusters or groups within a network are often called "bridging" ties (Burt, 1992), which are conducive to building strong relationships. 
Bonding ties, on the other hand, are characterized by repeated interactions within trustworthy, highly supportive, and intimate relationships, which typically transform acquired capital into a more substantive form of social relationship (Ellison et al., 2014). People build relationships through social interactions and, with social capital, build their expectations for future social resources. A key issue in changing people's attitudes or behavior is that such a change can transform the attitude or behavior of a group or community (Latkin et al., 2009). A conceptual point of view can change groups and social levels through social diffusion: a presumed social diffusion is enough to cause others to change their behavior, and social behavior spreads through the community by way of social groups. The conceptual operation of social diffusion is determining the social norms that are an important part of a common theory (Bandura, 1986). The most successful examples involve altering social norms related to behavior change from the perspective of a group or a society (Latkin et al., 2009). Therefore, the framework of this research is divided into three perspectives: the individual, the group, and the social influence perspectives. The individual perspective is used to explore trust among SNS users toward a specific virtual community. The group perspective is employed to observe the sense of virtual community. The social influence perspective, consisting of both normative and informative influence, is used to discuss the joint influence of the individual and group perspectives on the social influence perspective (Figure 1). 3.1 Individual perspective Previous studies have typically classified trust as both cognitive and affective trust. However, the relationship between cognitive and affective trust is seldom discussed. 
In a study in an organizational context, Johnson and Grayson (2005) indicated that cognitive trust is an antecedent of affective trust. Scholars studying attitude theory have long disputed the relationship between cognitive and affective trust in relation to attitude. Previous theoretical and empirical research has shown that cognitive trust positively and significantly influences affective trust (Johnson and Grayson, 2005; McAllister, 1995). Related studies regarding service relationships and e-commerce have also indicated that cognitive trust influences the formation of affective trust (Dabholkar et al., 2009; Johnson and Grayson, 2005). Chih et al. (2015) investigated online shoppers' buying behavior from a positive- and negative-cognition perspective and distinguished trust into cognitive trust and affective trust. They determined that cognitive trust is an antecedent of affective trust, and their empirical results show that cognitive trust must be established first in order to gain consumer trust and build relationships. For example, when a virtual community provides accurate and credible shopping information, consumers are willing to build an affective linkage with this virtual community. Therefore, this study proposes the following hypothesis: H1. In a virtual community, members' cognitive trust has a significant and positive effect on affective trust. 3.2 Connection between individual and group In highly uncertain environments, trust helps people build interactive relationship networks. Because activities in virtual communities lack face-to-face contact, online communication requires trust. Trust facilitates the successful implementation of virtual communities. For example, in an environment without norms, partners must have trust to execute socially acceptable interactions (Lin, 2008). 
Blanchard and Markus (2004) asserted that in virtual communities, identification methods can enhance trust, thereby increasing members' sense of virtual community. This reflects contemporary SNSs' requirement that members provide their real names when registering to join a site. When members demonstrate trust toward a virtual community, they form a committed relationship with that community, which facilitates the formation of a sense of virtual community (Tsai et al., 2011; Wang and Tai, 2011). Ellonen et al. (2007) indicated that a deep sense of trust between community members allows them to assist one another because of the benefits of sharing a common social network and expectations, thus fostering a sense of virtual community. According to the findings of Blanchard and Markus (2004), trust between members develops after mutual support is demonstrated, which results in a multifaceted sense of virtual community. In addition, Lin (2008) asserted that successful virtual communities must generate trust between members, thereby producing a sense of virtual community. McMillan and Chavis (1986) found that trust can alleviate anxiety and insecurity for virtual community members. They also found that relationships between virtual community members become closer, and that members feel a sense of belonging, when these relationships enhance trust between members and provide assistance during the online interactive process. Zhu et al. (2012) indicated that trust is an important antecedent of shaping members' sense of community. Zhao et al. (2012) also confirmed that trust has a significant and positive effect on the sense of belonging to a virtual community. In other words, trust is likely to prompt the trustor to become more attached to the relationship with the virtual community. Thus, this study proposes the following hypothesis: H2. 
In a virtual community, members' (a) cognitive trust and (b) affective trust have significant and positive effects on their sense of virtual community. 3.3 Connection between individual and society In virtual communities, establishing trust relationships with influential members is considered the foundation of interpersonal relationships because members frequently make decisions that conform to the opinions and suggestions provided by other members, who are strangers to them (Park and Feinberg, 2010). Casalo et al. (2011) indicated that trust toward online travel communities influences whether consumers accept the advice offered by a virtual community and subsequently purchase travel packages. Lascu and Zinkhan (1999) pointed out that when a group or community exhibits reliability, user conformity increases within the community. Because most online information is free, users' trust toward virtual communities affects the interpersonal relationships that they form within a community (Boush et al., 1993; Park and Feinberg, 2010). This suggests that members' assessment of the reliability of the information provided, based on their existing skills and knowledge, as well as the care and concern developed through emotional connections and social interactions, serve as factors that prompt members to conform to community norms and act upon suggestions provided by other members. These factors also act as references for members in purchase decisions. When an individual builds up a sense of trust with the community, she/he will seek acceptance by others and change her/his attitude or behavior in order to meet the expectations of community members. Chin et al. (2009) found that online shoppers' trust has a significant and positive effect on social influence. Consumers tend to observe and gain information from others to understand products and services based on trust. Hsu et al. 
(2011), by studying bloggers' interactive networks, indicated that blog community members have a sense of trust toward the community and comply with common understandings and regulations to establish standard behavior within the community. Therefore, this study proposes the following hypotheses: H3. In a virtual community, members' cognitive trust has significant and positive effects on (a) normative influence and (b) informative influence. H4. In a virtual community, members' affective trust has significant and positive effects on (a) normative influence and (b) informative influence. 3.4 Connection between group and society A sense of virtual community is a sentiment generated by experiences in a virtual community that induces a sense of belonging and deep attachment toward that community (Blanchard and Markus, 2004; Koh and Kim, 2003; Tonteri et al., 2011). A number of scholars have asserted that for virtual community users, a sense of belonging with respect to such communities enhances their normative and informative influences (Lee and Park, 2008). A sense of belonging toward a virtual community is considered an antecedent to the formation of social influence, namely, normative and informative influence (Park and Feinberg, 2010). In a virtual community, a sense of community is generated when members recognize similarities and develop an intention to continue interacting, thus increasing their normative and informative influences within the community (Lascu and Zinkhan, 1999; Shen et al., 2010). Hsu et al. (2016) advocated the idea that members' sense of virtual community and their community social influence both increase if Facebook fan page members treat the community as part of their daily lives and acquire others' affirmation and praise. Hsu et al. (2016) also confirmed that the sense of virtual community has a significant and positive effect on normative and informative influence, respectively. Thus, this study proposes the following hypothesis: H5. 
In a virtual community, members' sense of virtual community has significant and positive effects on (a) normative influence and (b) informative influence. 4.1 Research design and data collection Our target population is Facebook fan page members because Facebook is the largest virtual community in Taiwan. This research applies the Google Docs (https://drive.google.com) online service, which has no time or geographical limits, to create an online questionnaire and release it on a Facebook fan page and a PTT BBS station. Fan page users can connect to the questionnaire through a link. This study collected 422 samples. From these, 312 usable samples yielded a response rate of 73.93 percent. Table I shows the respondents' demographic information, with males comprising 51.92 percent. Among the respondents, 57.37 percent were between 18 and 24 years old, and the largest proportion of educational background was a bachelor/associate degree, which accounted for 67.95 percent. About 36.22 percent of the respondents had a Facebook history of one to two years, and the majority of respondents had not been members for more than three years. In addition, 44.23 percent of users surfed Facebook three to five hours each day. 4.2 Measure This study aims to explore internet users' behavior in a virtual community. In order to assure the validity of the instrument, items used to measure the constructs are taken from scales developed in previous research. The items in the questionnaire are divided into six parts, comprising five dimensions of scales and demographic variables. All items are measured using a seven-point Likert scale ranging from "strongly disagree" (1) to "strongly agree" (7). Demographic variables are categorical data with single-item measures and include gender, age, education, contact with Facebook, and time spent surfing Facebook each day (see Table I). In this study, there is a total of 36 items for five scales and five demographic variables in the questionnaire. 
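The usable-response-rate figure reported above follows from trivial arithmetic; the variable names in this sketch are ours.

```python
# Verifies the usable-response-rate arithmetic reported above
# (422 questionnaires collected, 312 usable).
collected, usable = 422, 312
response_rate = usable / collected * 100
print(f"usable response rate: {response_rate:.2f}%")  # 73.93%
```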
4.3 Sample validity For a precise reflection of the Facebook user population structure, this study used the gender ratio of the sample structure to determine whether it matches the population structure. In addition to statistics from CheckFacebook (2013) as the reference basis, this study follows Hsu et al. (2016) in taking gender as the test variable. The population gender proportion is 50.6 percent male users and 49.4 percent female users. A χ² goodness-of-fit test yields a χ² statistic of 2.697 with a p-value of 0.101 (greater than 0.050). Thus, the null hypothesis cannot be rejected, and no significant difference exists between the sample structure and the population structure of Taiwanese Facebook users' gender ratio as provided by CheckFacebook. To avoid or reduce problems generated by common method variance (CMV), a two-stage prevention procedure is conducted. First, the survey questionnaire is designed following these steps: the constructs are arranged randomly, and the research objective is not shown on the questionnaire; the survey is conducted anonymously in order to decrease consistent answering by respondents. Second, Harman's one-factor test is applied to examine whether a CMV problem exists in the sample data, including an exploratory factor analysis (EFA) (Harman, 1967; Podsakoff and Organ, 1986) and a confirmatory factor analysis (CFA). The EFA includes all items, and the results show that six factors are extracted and the first factor explains 40.573 percent of the variance, which is lower than 50 percent (Wang et al., 2014). In the CFA, all items are subsumed into one factor, and the results show that not all item factor loadings are higher than 0.5.
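The gender goodness-of-fit check above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the observed counts are derived from the reported sample percentages (51.92 percent of 312 ≈ 162 males), and the expected counts use the CheckFacebook population proportions (50.6/49.4), so the resulting statistic will not exactly reproduce the paper's reported χ² of 2.697.

```python
import math

n = 312                             # usable sample size reported in the paper
observed = [162, 150]               # male/female counts implied by 51.92% male
expected = [0.506 * n, 0.494 * n]   # CheckFacebook population proportions

# Pearson chi-square goodness-of-fit statistic
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# for df = 1, the chi-square survival function reduces to erfc(sqrt(stat / 2))
p = math.erfc(math.sqrt(stat / 2))
print(f"chi-square = {stat:.3f}, p = {p:.3f}")
```

A p-value above 0.05 means the null hypothesis (sample gender ratio matches the population ratio) cannot be rejected, which is the paper's conclusion.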
Furthermore, as the model-fit of Harman's one-factor test (χ²=3306.19, df=594, χ²/df=5.57, GFI=0.53, AGFI=0.47, RMSEA=0.12, CFI=0.63, NFI=0.59) is worse than the model-fit of the hypothesized model (χ²=799.65, df=535, χ²/df=1.50, GFI=0.88, AGFI=0.85, RMSEA=0.04, CFI=0.96, NFI=0.90), no significant CMV problem exists in the data. 4.4 Analysis of measurement model As essential prerequisites for achieving valid results, the reliability, convergent validity, and discriminant validity of the measurement model are assessed. Item reliability is assessed by applying the factor loadings and squared multiple correlations (SMC), and construct reliability is assessed by applying Cronbach's α. As shown in Table II, all factor loadings are above 0.5 and all SMCs are above 0.2, indicating good reliability of the items (Bentler and Wu, 1993; Hair et al., 2010). Also, the Cronbach's α values are above 0.7, indicating good reliability of the scales (Nunnally, 1978). To test the ability of the measures to reflect actual circumstances, this study conducts tests for convergent validity and discriminant validity. Convergent validity can be assessed in terms of the average variance extracted (AVE) from the latent variables and the composite reliability (CR). Discriminant validity refers to the correlations between constructs. Table II shows that the CRs are above 0.6 and the AVEs of the latent variables are above the acceptable value of 0.5, with the exception of normative influence and informative influence. However, according to the findings of Fornell and Larcker (1981), a scale still indicates convergent validity if its CR is higher than 0.6. Thus, convergent validity can be confirmed for all scales (Bagozzi and Yi, 1988; Hulland, 1999).
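The CR and AVE thresholds discussed above follow the standard formulas CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / n over a construct's standardized loadings λ. A minimal sketch, using hypothetical loadings rather than the paper's Table II values:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # error variance per indicator
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# hypothetical standardized loadings for one construct (not taken from Table II)
loadings = [0.70, 0.80, 0.75, 0.65]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(round(cr, 3))   # above the 0.6 criterion
print(round(ave, 3))  # above the 0.5 criterion
```

Under these formulas a construct can clear the CR cutoff of 0.6 while its AVE sits near 0.5, which is the situation the paper reports for normative and informative influence.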
Furthermore, Table III indicates that the correlations between constructs are all less than 1 and that the square root of the AVE of any particular construct is greater than the correlations of that construct with the other constructs (Fornell and Larcker, 1981; Gaski and Nevin, 1985), supporting discriminant validity. 4.5 Structural model This study examines the structural model and considers the model-fit indices suggested by Hair et al. (2010), namely absolute fit measures, incremental fit measures, and parsimonious fit measures. All of the model-fit indices exceed the suggested criteria. A structural model is applied to examine the proposed hypotheses on the relationships among the constructs. The results of the structural model are shown in Figure 2. The model explains 43.6 percent of the variance in affective trust, 54.2 percent in sense of virtual community, 44.4 percent in normative influence, and 40.3 percent in informative influence. H1 posits that cognitive trust influences affective trust. Figure 2 indicates that the path coefficient is 0.661 (p<0.001), thus supporting H1. H2, which states that (a) cognitive trust and (b) affective trust affect the sense of virtual community, respectively, is also confirmed (γ21=0.381, p<0.001; β21=0.427, p<0.001). The positive effects of cognitive trust on (a) normative influence and (b) informative influence are also supported (γ31=0.287, p<0.001; γ41=0.182, p<0.05), thereby confirming H3. Furthermore, the results show positive influences of affective trust on (a) normative influence and (b) informative influence (β31=0.279, p<0.001; β41=0.334, p<0.001), thus confirming H4. Finally, the effects of sense of virtual community on (a) normative influence and (b) informative influence are significant (β32=0.189, p<0.05; β42=0.199, p<0.05), thereby supporting H5.
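The Fornell-Larcker criterion applied above reduces to a simple check: the square root of each construct's AVE must exceed every correlation involving that construct. A sketch with hypothetical AVEs and a hypothetical correlation matrix (not the paper's Table III values):

```python
import math

def fornell_larcker_ok(aves, corr):
    """True if sqrt(AVE) of every construct exceeds all of its inter-construct correlations."""
    n = len(aves)
    for i in range(n):
        root_ave = math.sqrt(aves[i])
        for j in range(n):
            if i != j and abs(corr[i][j]) >= root_ave:
                return False
    return True

# hypothetical values for three constructs
aves = [0.55, 0.60, 0.52]
corr = [
    [1.00, 0.45, 0.38],
    [0.45, 1.00, 0.50],
    [0.38, 0.50, 1.00],
]
print(fornell_larcker_ok(aves, corr))  # discriminant validity supported
```

If any off-diagonal correlation rose above the smallest √AVE (about 0.72 here), the check would fail and discriminant validity would be in doubt.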
4.6 Mediating effects Based on the previously demonstrated effects of trust on the sense of virtual community and social influence, this study tests the mediating effect of affective trust between cognitive trust and sense of virtual community, and of sense of virtual community between cognitive/affective trust and normative/informative influence. As Table IV shows, the bootstrapping 95 percent confidence intervals of both the percentile and the bias-corrected methods do not contain zero. These results support the findings that affective trust mediates the relationship between cognitive trust and sense of virtual community, and that sense of virtual community mediates between cognitive/affective trust and normative/informative influence. Furthermore, following prior research, three steps are required to test a mediation effect (Baron and Kenny, 1986; Komiak and Benbasat, 2006). In step 1, this study treats cognitive trust as an independent variable and sense of virtual community as a dependent variable, and finds a significant relationship between them (β=0.584, p<0.001). In step 2, this study builds a model that includes cognitive trust as an independent variable and affective trust as the dependent variable, indicating a significant effect (β=0.622, p<0.001). In step 3, this study builds a model with both cognitive trust and affective trust as independent variables and sense of virtual community as the dependent variable, and both effects of cognitive/affective trust on sense of virtual community are significant. Thus, affective trust partially mediates between cognitive trust and sense of virtual community. Finally, this study also conducts the Sobel (1982) test to assess the significance of the mediation effect (Wood et al., 2008). The results demonstrate that affective trust significantly mediates between cognitive trust and sense of virtual community (Sobel=7.114, p<0.001).
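The Sobel statistic used above is z = ab / √(b²·SEa² + a²·SEb²), where a is the path from the independent variable to the mediator and b the path from the mediator to the outcome. A sketch using the reported path coefficients but hypothetical standard errors, since the paper does not report them (so this z will not match the reported Sobel=7.114):

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for the indirect effect a*b."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

a, b = 0.622, 0.427      # cognitive->affective and affective->SoVC path coefficients
se_a, se_b = 0.05, 0.06  # hypothetical standard errors, for illustration only
z = sobel_z(a, se_a, b, se_b)
print(f"Sobel z = {z:.3f}")  # |z| > 1.96 indicates a significant indirect effect
```

Bootstrapping, the paper's other mediation check, avoids the Sobel test's normality assumption by resampling the data and reading off the percentile (or bias-corrected) confidence interval of the indirect effect a·b.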
The mediating effects of sense of virtual community on the relationships between cognitive/affective trust and normative/informative influence are also examined. Both a bootstrapping analysis and the Sobel test are conducted, and the results in Table V show that sense of virtual community partially mediates the relationships between cognitive/affective trust and normative/informative influence. 5.1 Research implications This study contributes to a better understanding of the sense of virtual community at the group level, the normative/informative influence at the social level, and the trust perspective at the individual level. The findings indicate that the research model has good explanatory power and provides a more complete explanation of members' interactive relationships at different levels in the context of Facebook brand fan pages. Regarding the individual perspective, this study draws on the research of Lewis and Weigert (1985) and divides trust into cognitive and affective trust to assess and clarify the non-parallel relationship between these two constructs in a virtual community. The results indicate that the formation of affective trust is partially dependent on cognitive trust. Similar to Chih et al.'s (2015) study, a virtual community must establish an accurate and reliable message source in order to build relationships with members. Virtual community members form affective trust, with its emotional feeling and attachment, once their rational demands for product information are satisfied. Second, regarding the individual-group perspective, this study identifies two salient factors, cognitive trust and affective trust, as important antecedents of the sense of virtual community. Similar to Zhao et al.'s (2012) study, trust plays a key role in increasing brand community members' sense of belonging. Cognitive trust and affective trust form the basis of interaction at the individual level.
If a virtual community manager devotes more effort to managing the community, consumers have confidence in the relevant information disclosed within the community and then interact frequently with other members. In this scenario, virtual community members feel a stronger sense of belonging. Third, regarding the individual-social perspective, this study investigates individual factors involved in the normative/informative influence that page fans have on other fans (i.e. members). This suggests that, because of trust in other members' abilities or trust stemming from emotional connections, fans of a brand page change their perspectives or behaviors to meet the expectations of other fans. H3 and H4 of this study are similar to the results of Hsu et al. (2011). Although cognitive trust has significant and positive effects on both normative influence and informative influence, its effect on informative influence is comparatively weak. Therefore, a virtual community must build cognitive trust from a rational perspective in order to create a normative influence within the community and obtain acceptance by members. Affective trust has significant and positive influences on both normative influence and informative influence. A virtual community must build affective trust in order to create an informative influence among community members. Virtual community members seek others' suggestions and adopt correct and reliable information to reduce uncertainty and fear in the community. Finally, this study finds a significant and positive effect of the sense of virtual community on normative/informative influence from the group-social perspective. H5 of this study is consistent with Hsu et al.'s (2016) findings. In addition, the sense of virtual community is an important core concept in the research model. Such a sense not only strengthens the sense of belonging among virtual community members, but also establishes a bridge between trust and social influence.
Behavioral norms of virtual community members mean that community members perceive a common sense during interaction and comply with a common understanding in establishing their behaviors from a community-life point of view. 5.2 Managerial implications This study advocates that brand fan pages must establish a platform that virtual community members trust and whose discussion environment they are willing to participate in over the long term, from the perspectives of both rational cognitive trust and emotional affective trust. More engaged customers establish a closer trust relationship in the virtual community. Managers can foster this by providing brand and product-related knowledge, offering interactive discussions, replying to service-related questions on fan pages to increase members' cognitive trust in the brand, and organizing fan page activities to increase opportunities for exchanging emotion and enhancing affective trust. Cognitive/affective trust and the sense of virtual community are key factors in establishing virtual community cohesion, building a reliable brand and belief, and developing emotional relationships that foster a sense of belonging. For example, Toyota cleverly used social media to resolve a vehicle recall crisis by building up customer trust. The company set up a team to track negative rumors on Facebook and elsewhere, responded with the facts, and opened a dedicated Twitter account to communicate with consumers. Toyota sought online fans to spread the discussion through the company's media channels, taking advantage of the company's decade-long performance reputation, that is, a sense of reliable trust and durable brand commitment. The company effectively used social media and other new media to offset the most negative messages. Toyota successfully met the brand challenge and averted a brand disaster, providing several important community management strategies for other companies.
A brand fan page must establish an ideal community environment in which virtual community members are more likely to share and deliver brand/product-related messages with cognitive/affective trust and social influence (Brown et al., 2007). Therefore, the biggest benefit for consumers engaging in a brand community is the formation of strong relationships among brands, products, other customers, and companies. After creating a close relationship with the brand, customers increase their trust relationship with the brand and form an invisible norm that acts as a community force (Habibi et al., 2014). Virtual community members choose to gain product messages and knowledge from other members to understand their personal experiences of product use. Regarding the sense of virtual community and social influence, community managers must consider how to provide enough useful information and interactive activities to build up users' sense of belonging and cohesion through the altruistic behavior of social relationships. Virtual community members can communicate on the online platform to gain psychological support and attract other participating individuals who are willing to maintain a relationship with the community. Many people publish their own messages in a virtual community. Firms must further promote commercial behavior within the virtual community so that, by participating, people can meet their particular needs, such as community, business transactions, knowledge, and entertainment (Hagel, 1999). This means that interpersonal relationships between virtual community members are the basis for community development and generate emotional exchanges for community members (Lee and Chang, 2011). Managers must build a successful virtual community to attract the attention of internet users as well as to provide sufficient incentives for virtual community fans to share information (Ho and Dempsey, 2010).
For example, managers can not only encourage members to provide relevant electronic word-of-mouth (e-WOM) information, but also promote recently launched brands through promotional activities on social media that provide relevant information to virtual community members. Virtual community fans are more willing to use e-WOM to promote a brand and create brand value when they are involved in the brand platform. Brand fan pages not only link with fans, but also help virtual community fans increase discussion activity and maintain the popularity of the fan page. For example, firms disclose ideas or comments on styles and features on Facebook when new products are introduced. It is possible to investigate preferences and dissimilarities between fans and other consumers by collecting messages, comments, votes, and so forth. A virtual community creates membership, identification, and links with fan brands by meeting consumer demands (Fournier and Lee, 2009). The results indicate that social influence among members or Facebook page fans is enhanced by cognitive trust, affective trust, and the sense of virtual community. Thus, a long-term sense of virtual community toward Facebook brand fan pages can be induced if online platform developers establish an environment in which users develop trust and participation. Methods of achieving this include providing knowledge related to brands or products, assigning staff to answer questions posed by users, and hosting activities that encourage fans to interact, thereby enhancing cognitive and affective trust between fans. After users develop cognitive trust, affective trust, and the sense of virtual community, the social influence among members is likely to increase. The thriving development of the internet and SNSs has transformed common communication methods and lifestyles, and SNSs have become popular platforms for interaction and communication. For firms, crowds are business opportunities.
Consequently, various firms aim to build platforms that facilitate positive interactions and communication with consumers on SNSs, thereby increasing product sales or brand value. Understanding consumer behavior in the context of virtual brand communities is required for firms to obtain tangible benefits and value from their interactive platforms. The results provide insights for brand companies in the creation of platforms on Facebook or other SNSs to enable B2C or C2C interactions. 5.3 Research contributions This study is based on trust theory and follows the concept of social exchange theory (Blau, 1964; Thibaut and Kelly, 1959). Blau (1964) advocated that trust within relationships is an important idea, especially in the process of exchange among individuals: it is cultivated through good rapport and provides people with a reason not to run away from social obligations. In the virtual environment, trust affects users' willingness to exchange messages with other members and is an important factor in continuing participation in the community (Blanchard et al., 2011; Ridings et al., 2002; Yeh and Choi, 2011). Trust is a psychological state (Rousseau et al., 1998) and a multi-faceted concept (Lewis and Weigert, 1985; McAllister, 1995; Riegelsberger et al., 2003). Cognitive trust and affective trust are considered to have a non-parallel relationship in organization research (Lewis and Weigert, 1985). This study successfully divides trust into cognitive trust and affective trust to clarify the non-parallel relationship between these two constructs in a virtual community. Past scholars have only investigated the group level of interpersonal relationships for virtual community members, but not other levels (Tonteri et al., 2011).
This study distinguishes among individual, group, and social levels and draws on trust theory to separate trust into cognitive trust and affective trust as antecedents of a sense of virtual community. In addition, social influence, regarded as susceptibility to interpersonal influence (Bearden et al., 1989), is examined as a result of a sense of virtual community. 5.4 Limitations and future research Some limitations exist in this study. First, this study adopts a cross-sectional survey, which means it does not explore the subsequent internal changes and actual usage behavior of Facebook brand fan page users. Subsequent researchers could use a longitudinal survey. Second, this study only surveys Taiwanese Facebook users and does not present a comprehensive picture of Facebook users in different countries. Subsequent researchers could investigate international brand fan page users. Third, this study only surveys Facebook fan page users; different virtual community platforms may yield different findings and inferences. This study recommends that researchers follow up on different virtual community platforms, such as Twitter and Plurk. Fourth, this study only designs the multi-level constructs using structural equation modeling. Future researchers can redesign the questionnaire to collect data from multiple groups of respondents. Finally, this study does not consider other constructs at the individual level, leaving a gap regarding users' personality characteristics, such as extraversion, introversion, narcissism, self-esteem, self-worth, and neuroticism (Nadkarni and Hofman, 2012). At the group level, the sense of virtual community could be divided into three facets, namely membership, influence, and immersion (Koh and Kim, 2003), to further verify the relationships.
This research adopts structural equation modeling to test the proposed model, and the structural model shows a good fit. The research sample consists of 312 members who have used Facebook for at least six months.
[SECTION: Findings] The emergence of social networking sites (SNSs) has resulted in the rapid evolution of online community platforms into popular forums for communication and entertainment, while users' word-of-mouth behavior has become an increasingly decisive influence. With the growing maturity of technologies related to SNSs, business managers have learned to leverage them to increase commercial profits (Trusov et al., 2009). In other words, firms are striving to increase interactivity among brands' SNSs, website users, and non-website users to generate positive outcomes through internet-enabled dissemination. For example, Global Web Index (2014), a market research institution, found on the basis of global social media data that social media had continued to grow and develop over the previous year. In 2013, the number of new registered users on popular social websites increased by 135 million. In 2014, the total number of Facebook users reached 1.393 billion, and the platform generated total revenue of USD$3.85 billion in the fourth quarter of the year, an increase of 3.18 percent compared with the third quarter (Facebook, 2015). That same year, the total revenue of Facebook reached USD$12.47 billion, an increase of 58 percent compared with the previous year. Daily users increased by 18 percent, compared with 13 percent total growth of non-daily users (Business Next, 2015). It has become popular for companies to use Facebook as a customer service channel to communicate their brands to customers. More companies build relationships with their customers, as well as respond to and solve customers' problems, through Facebook rather than other social media (Social Time, 2015). In addition, the development model of online marketing has gradually transformed from business-to-consumer (B2C) to consumer-to-consumer (C2C), a revolutionary and well-received model that enables interactive e-commerce (Chu and Liao, 2007).
Because of advances in internet technology, online WOM, which differs from traditional, offline WOM, allows internet users to transmit messages to hundreds or thousands of people with just a few clicks (Mangold and Faulds, 2009). Regarding media value, Vitrue, a firm specializing in social media, calculates that the impressions generated by one million fans are equivalent to a media value of USD$300,000 per month. For example, the Starbucks Facebook page has a fan base of approximately 6.5 million, translating into an annual media value of USD$23.4 million. On average, one fan generates USD$3.60 in annual media value, and one million fans are worth USD$3.6 million (Moorman et al., 1993). Therefore, the quicker a fan base expands, the greater the media value generated. In practice, firms use Facebook brand fan pages to create interaction and rapport with fans. The companies then combine these pages with other online marketing activities to transfer advertising from cyberspace to offline environments (Electronic Commerce Times, 2010). Because of Facebook's high reach rates, numerous firms have created pages to garner popularity. In addition, Facebook brand fan pages benefit firms by serving as a channel for managers to inform fans of new product information and to announce relevant activities (Social Media Marketing Co., 2011). In addition to maintaining positive trust relationships between brand manufacturers and consumers, online community platforms allow brands to communicate product information to consumers (thereby establishing information exchange and interactions with similar communities) and assist community members in their future purchase decisions. Previous studies investigating virtual brand communities have discussed cognitive trust and affective trust and whether these two factors are keys to the successful management of virtual communities (Lin, 2008; Yeh and Choi, 2011).
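The Vitrue figures above are internally consistent: USD$300,000 per month for one million fans works out to USD$3.60 per fan per year, which scales to Starbucks' reported USD$23.4 million. A quick arithmetic check:

```python
# Vitrue's estimate: one million fans generate USD$300,000 of media value per month
monthly_value_per_million_fans = 300_000
annual_per_fan = 12 * monthly_value_per_million_fans / 1_000_000
print(round(annual_per_fan, 2))       # 3.6 USD per fan per year

starbucks_fans = 6_500_000
starbucks_value = round(starbucks_fans * annual_per_fan)
print(starbucks_value)                # 23400000, i.e. USD$23.4 million annually
```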
However, these studies have typically examined the parallel relationship between cognitive and affective trust rather than the non-parallel aspect between them. Lewis and Weigert (1985) pointed out that the difference between cognitive and emotional attitudes forms a long-standing debate. The concept of cognitive trust and affective trust has been clarified as a non-parallel relationship in the organizational research literature. However, the cause-and-effect relationship between cognitive trust and affective trust has not been explored in the context of virtual community research (e.g. Venetis and Ghauri, 2004; Vesel and Zabkar, 2009; Yeh and Choi, 2011). Based on previous organizational research, trust is not a single holistic construct; it also includes a cause-and-effect relationship between cognitive and affective factors. Previous scholars have advocated that cognitive trust is an antecedent of affective trust (Johnson and Grayson, 2005; McAllister, 1995). The current study advocates that, in a virtual context, the attitudes of community members are formed by the same phenomenon, so it is necessary to clarify the relationship between cognitive trust and affective trust. This study examines the mutual influence of cognitive and affective trust to assess whether cognitive trust in a virtual brand community affects the establishment of users' affective trust. In addition, SNSs are characterized by frequent interpersonal interactions. During such interactions, virtual community users are susceptible to the influence of other users (Bearden et al., 1989). Thus, this study also explores whether social influence among virtual community users is affected by cognitive trust, affective trust, and the sense of virtual community among members. Social influence can be divided into normative influence and informative influence according to user motivation for participating in virtual communities.
In building up our theory and illustrating a research gap, this study considers three dimensions as exogenous variables in the model. These dimensions may affect and transform their relationships with each other. The dimensions were originally put forth by Wasko and Faraj (2005), who investigated the factors influencing voluntary knowledge sharing on SNSs at the individual, relational (social), and group levels. The research of Tsai et al. (2012) was a follow-up study to Wasko and Faraj (2005); it proposed extending the theory to different community environments and put forward the idea that community participation involves complex interpersonal exchange processes. Hsu (2012) continued to follow the studies of Wasko and Faraj (2005) and Tsai et al. (2012) in distinguishing the individual, social, and group levels, and combined them with the tricomponent attitude model proposed by Rosenberg and Hovland (1960) as a theoretical basis. The tricomponent attitude model is divided into three psychological processes: cognition, affection, and conation (action). For the purposes of this study, the personal level is divided into cognitive trust and affective trust in the cognition phase, and cognitive factors are important antecedents of affective factors. Hsu (2012) indicated that affective trust is an important antecedent of affection in developing an important community relationship. Previous research has typically targeted the group level through the sense of a virtual community, which involves individual influences on virtual communities, but has rarely investigated the influence exerted by virtual community members on their interpersonal relationships (Tonteri et al., 2011) or the finely formed social norms mechanism (Hsu et al., 2016). Therefore, this study develops a theoretical framework to explore the relationships among individual, group, and social factors.
Based on the research motivation previously mentioned, this study primarily aims to achieve the following purposes: to explore factors of trust among virtual community members from an individual perspective, a sense of virtual community from a group perspective, and the normative/informative influence in interpersonal interactions among members from a social influence perspective, and to examine the interaction process among these three perspectives; and to propose two mediating factors and conduct relevant tests. First, affective trust is a mediator between cognitive trust and sense of virtual community. Second, sense of virtual community is a mediator between trust (cognitive trust and affective trust) and social influence (normative influence and informative influence). 2.1 Community website and Facebook fan pages SNSs are a form of virtual community. SNS users can create a public profile and interact and share common interests with real-life friends as well as strangers (Kuss and Griffiths, 2011). Enabled by the internet, SNSs mainly provide users with the following three functions: to create a public or semi-public profile of personal information; to customize lists of users for sharing information; and to view and track information provided by other users (Boyd and Ellison, 2008). SNSs are defined as virtual communities that provide members with interactions based on the Web 2.0 concept. For example, social networking and multimedia content sharing not only preserve users' existing social networks, but also allow connections among strangers who share common interests. According to this definition, the first SNS, SixDegrees.com, originated in 1997. This website provided a platform for users to create profiles and friend lists. Although SNSs emerged in cyberspace in 1997, their rapid growth and popularity began in 2003 when websites such as MySpace, LinkedIn, Flickr, Facebook, and YouTube were launched.
In addition to the original social networking functions, certain SNSs included options that allowed users to share multimedia, such as uploading photos and videos. Subsequently, SNSs began to garner global attention and the number of SNS users grew exponentially. The launch of SNSs such as Facebook and MySpace changed the manner in which internet users communicated and interacted around the world. Facebook has since become the largest SNS in the world (Boyd and Ellison, 2008; Chiu et al., 2008; Kuss and Griffiths, 2011; Nadkarni and Hofman, 2012). Facebook pages, introduced in 2007, are public profiles that enable firms and users to share company news and product updates. Facebook members update their link status with their page(s) to share with their friends through real-time feeds. Facebook then continues to disseminate real-time updates to broader networks through online WOM when friends of these fans interact with the pages. In addition to Facebook's stunning growth in membership, pages are another feature that distinguishes the site. Sysomos Inc. (2009) conducted the first large-scale survey regarding Facebook pages, which at the time exceeded 630,000. The results indicate that each page has 4,596 fans on average and that page owners post on the page wall every 15.7 days on average, demonstrating rapid fan-base growth. Business Next (2015) held the second Facebook page poll and identified the strongest fan bases (i.e. Facebook pages) based on popularity, page content, and long-term operating outcomes. Pages can be used for business promotion, commercial marketing, or sharing professional knowledge. Members of a business, organization, or club share various social networking or marketing activities on their associated pages and announce upcoming activities.
These pages push updates about specific activities to fans' Facebook friends and to users viewing the pages, which then attract additional users with common interests, thereby achieving brand promotion (Pempek et al., 2009). 2.2 Trust theory Trust is a feeling of affirmation generated by verified evidence, and it is a key factor influencing the formation of relationships and partnerships (Giffin, 1967; McKnight and Chervany, 2002). Trust is vital for establishing interpersonal relationships and virtual communities, especially in uncertain or high-risk environments such as electronic markets (Ba and Pavlou, 2002; Moorman et al., 1993). Lewis and Weigert (1985) and McAllister (1995) asserted that interpersonal trust stems from cognitive and affective bases, and that networking on SNSs results from social interactions. Cognitive trust arises from calculations and rational assessments that originate from accumulated knowledge. Such knowledge enables people to predict with some degree of confidence that their partner in the relationship will conform to their expectations. This knowledge is amassed from previous observations of the partner's external behavior and reputation. In other words, cognitive trust in the SNS context refers to internet users' assessment of the reliability of information based on their existing capabilities and knowledge. Conversely, affective trust is formed by affection and social interactions, and is built on people's care and concern for each other; that is, affective trust arises from mutual affection and results in emotional connections in interpersonal relationships (Johnson and Grayson, 2005; Yeh and Choi, 2011). 2.3 Sense of virtual community A sense of virtual community was originally defined as the sense of belonging that members have toward their community, allowing them to convey beliefs and reach a mutual understanding, thereby demonstrating their commitment to the community.
A sense of community can be divided into four elements: membership, influence, integration and fulfillment of needs, and shared emotional connection. These elements have been used in theories regarding the sense of virtual community (Blanchard, 2007; Tonteri et al., 2011) and each is elucidated as follows: membership: the sense of belonging that members perceive regarding their community, which serves as a common symbol within the community that members self-reinforce to meet community needs and obtain approval; influence: the influence exerted on members by the community or other members, or members' belief that they are capable of influencing others in the community; integration and fulfillment of needs: members' belief that the community, or the resources and support provided by other members, can satisfy their needs (e.g. joining a community provides specific advantages or rewards); and shared emotional connection: members of a community share a common experience, history, time, and space; that is, they experience events together and engage in positive interactions that lead to enhanced relationships (Abfalter et al., 2011; Koh and Kim, 2003). 2.4 Social influence Generally, during a decision-making process, individuals consider not only the matter at hand but also the surrounding social group or environment. This phenomenon is called social influence. Although social influence entails numerous dimensions, in this study it is treated as susceptibility to interpersonal influence, to facilitate the discussion of SNSs and interactions among members. Bearden et al. (1989) indicated that when people interact in a group, the interaction induces changes in perception or behavior; this transformative process is social influence. Scholars have also assessed personality attributes, such as low self-esteem, that predispose a person to others' influence.
Dual-process theory in psychology postulates that a person receiving messages or information may be influenced by persuasion, and that such influence is divided into two types: normative influence and informative influence (Deutsch and Gerard, 1955). Normative influence refers to a person's conforming to social norms or others' expectations to obtain the approval of a group, thus adopting cognitive or behavioral patterns congruent with the group (Cheung et al., 2009). Informative influence arises from accepting obtained information as evidence of reality, and is primarily based on the recipient's assessment of the information received, including its content, source, and other recipients (Hovland et al., 1953). This research proposes an integrated model of the relationships among influencing factors at different levels. Based on the research of Wasko and Faraj (2005), this study takes factors at the individual, group, and social levels as its theoretical basis to develop a research model of community relationships. Previous studies have confirmed that social capital theory offers the most accurate explanation of interpersonal Facebook relationships on SNSs (e.g. Burke et al., 2010, 2011; Ellison et al., 2011; Zhao et al., 2016). Facebook is particularly well suited for "bridging" social capital. Social capital describes the capacity of individuals or groups to obtain resources embedded in their social networks (Bourdieu, 1986; Coleman, 1988). The establishment of a social network relationship includes different levels of viewpoints for individuals, groups, and social influence, wherein trust is seen as a key construct of social capital at the individual level (Zhao et al., 2016). Connections across different clusters or groups within a network are often called "bridging" ties (Burt, 1992), which are conducive to building strong relationships.
On the other hand, bonding ties are characterized by repeated interactions within trustworthy, highly supportive, and intimate relationships, which typically enable acquired capital to be transformed into a more substantive form of social relationship (Ellison et al., 2014). People build relationships through social interactions and, through social capital, build their expectations for future social resources. A key issue in changing people's attitudes or behavior is that such a change can transform the attitude or behavior of a group or community (Latkin et al., 2009). Through social diffusion, a point of view can change groups and entire social levels; diffusion alone can be enough to cause others to change their behavior, as social behaviors spread through a community along the lines of its social groups. Operationally, social diffusion works by shaping social norms, a core element of social cognitive theory (Bandura, 1986). The most successful examples involve altering social norms related to the behavior change from the perspective of a group or a society (Latkin et al., 2009). Therefore, the framework of this research is divided into three perspectives: the individual, the group, and the social influence perspectives. The individual perspective is used to explore trust among SNS users toward a specific virtual community. The group perspective is employed to observe the sense of virtual community. The social influence perspective, consisting of both normative and informative influence, is used to discuss the joint influence of the individual and group perspectives on the social influence perspective (Figure 1). 3.1 Individual perspective Previous studies have typically classified trust as both cognitive and affective trust. However, the relationship between cognitive and affective trust is seldom discussed.
In a study in an organizational context, Johnson and Grayson (2005) indicated that cognitive trust is an antecedent of affective trust. Scholars studying attitude theory have long debated the relationship between cognitive and affective trust in relation to attitude. Previous theoretical and empirical research has shown that cognitive trust positively and significantly influences affective trust (Johnson and Grayson, 2005; McAllister, 1995). Related studies regarding service relationships and e-commerce have also indicated that cognitive trust influences the formation of affective trust (Dabholkar et al., 2009; Johnson and Grayson, 2005). Chih et al. (2015) investigated online shoppers' buying behavior from positive- and negative-cognition perspectives and distinguished cognitive trust from affective trust. They treated cognitive trust as an antecedent of affective trust, and their empirical results show that cognitive trust must be established first in order to gain consumer trust and build relationships. For example, when a virtual community provides accurate and credible shopping information, consumers become willing to build an affective linkage with that virtual community. Therefore, this study proposes the following hypothesis: H1. In a virtual community, members' cognitive trust has a significant and positive effect on affective trust. 3.2 Connection between individual and group In highly uncertain environments, trust helps people build interactive relationship networks. Because activities in virtual communities lack face-to-face contact, online communication requires trust, and trust facilitates the successful implementation of virtual communities. For example, in an environment without norms, partners must trust each other to execute socially acceptable interactions (Lin, 2008).
Blanchard and Markus (2004) have asserted that in virtual communities, identification methods can enhance trust, thereby increasing members' sense of virtual community. This is reflected in contemporary SNSs' requirement that members provide their real names when registering to join a site. When members demonstrate trust toward a virtual community, they form a committed relationship with that community, which facilitates the formation of a sense of virtual community (Tsai et al., 2011; Wang and Tai, 2011). Ellonen et al. (2007) indicated that a deep sense of trust between community members allows them to assist one another because of the benefits of sharing a common social network and expectations, and thus fosters a sense of virtual community. According to the findings of Blanchard and Markus (2004), trust between members develops after mutual support is demonstrated, which results in a multifaceted sense of virtual community. In addition, Lin (2008) asserted that successful virtual communities must generate trust between members, thereby producing a sense of virtual community. McMillan and Chavis (1986) found that trust can alleviate community members' anxiety and insecurity. They also found that relationships between community members become closer, and that members feel a sense of belonging, when these relationships enhance trust between members and assistance is obtained during online interaction. Zhu et al. (2012) indicated that trust is an important antecedent of shaping members' sense of community. Zhao et al. (2012) also confirmed that trust has a significant and positive effect on the sense of belonging to a virtual community. In other words, trust makes the trustor more attached to the relationship with the virtual community. Thus, this study proposes the following hypothesis: H2.
In a virtual community, members' (a) cognitive trust and (b) affective trust have significant and positive effects on their sense of virtual community. 3.3 Connection between individual and society In virtual communities, establishing trust relationships with influential members is considered the foundation of interpersonal relationships because members frequently make decisions that conform to the opinions and suggestions provided by other members, who are strangers to them (Park and Feinberg, 2010). Casalo et al. (2011) indicated that trust toward online travel communities influences whether consumers accept the advice offered by a virtual community and subsequently purchase travel packages. Lascu and Zinkhan (1999) pointed out that when a group or community exhibits reliability, user conformity increases within the community. Because most online information is free, users' trust toward virtual communities affects the interpersonal relationships that they form within a community (Boush et al., 1993; Park and Feinberg, 2010). This suggests that members' assessment of the reliability of the information provided using their existing skills and knowledge, as well as the care and concern developed through emotional connections and social interactions, serve as factors that prompt members to conform to community norms and act upon suggestions provided by other members. These factors also act as references for members in purchase decisions. An individual will seek acceptance by others and change her/his attitude or behavior in order to meet the expectations of community members when the individual builds up a sense of trust with the community. Chin et al. (2009) found that online shoppers' trust has a significant and positive effect on social influence. Consumers tend to observe and gain information from others to understand products and services based on trust. Hsu et al. 
(2011), by studying bloggers' interaction networks, indicated that blog community members develop a sense of trust toward the community and comply with its common understandings and regulations, establishing standard behavior within the community. Therefore, this study proposes the following hypotheses: H3. In a virtual community, members' cognitive trust has significant and positive effects on (a) normative influence and (b) informative influence. H4. In a virtual community, members' affective trust has significant and positive effects on (a) normative influence and (b) informative influence. 3.4 Connection between group and society A sense of virtual community is a sentiment generated by experiences in a virtual community that induces a sense of belonging and deep attachment toward that community (Blanchard and Markus, 2004; Koh and Kim, 2003; Tonteri et al., 2011). A number of scholars have asserted that for virtual community users, a sense of belonging to such communities enhances their normative and informative influences (Lee and Park, 2008). A sense of belonging toward a virtual community is considered an antecedent to the formation of social influence, namely, normative and informative influence (Park and Feinberg, 2010). In a virtual community, a sense of community is generated when members recognize similarities and develop an intention to continue interacting, thus increasing their normative and informative influences within the community (Lascu and Zinkhan, 1999; Shen et al., 2010). Hsu et al. (2016) advocated that members' sense of virtual community and their community social influence both increase when Facebook fan page members treat the community as part of their daily lives and acquire others' affirmation and praise. Hsu et al. (2016) also confirmed that the sense of virtual community has significant and positive effects on normative and informative influence, respectively. Thus, this study proposes the following hypothesis: H5.
In a virtual community, members' sense of virtual community has significant and positive effects on (a) normative influence and (b) informative influence. 4.1 Research design and data collection Our target population is Facebook fan page members because Facebook is the largest virtual community in Taiwan. This research used the Google Docs online service (https://drive.google.com), which imposes no time or geographic limits, to create an online questionnaire, releasing it on a Facebook fan page and a PTT BBS station. Fan page users could connect to the questionnaire through a link. This study collected 422 samples, of which 312 were usable, yielding a response rate of 73.93 percent. Table I shows the respondents' demographic information, with males comprising 51.92 percent. Among the respondents, 57.37 percent were between 18 and 24 years old, and the largest proportion held a bachelor/associate degree (67.95 percent). About 36.22 percent of the respondents had used Facebook for one to two years, and the majority had not been members for more than three years. In addition, 44.23 percent of users surfed Facebook three to five hours each day. 4.2 Measure This study aims to explore internet users' behavior in a virtual community. To ensure the validity of the instrument, the items used to measure the constructs are drawn from scales developed in previous research. The items in the questionnaire are divided into six parts, comprising five scale dimensions and demographic variables. All items are measured using a seven-point Likert scale ranging from "strongly disagree" (1) to "strongly agree" (7). Demographic variables are categorical data with single-item measures and include gender, age, education, length of Facebook use, and daily time spent on Facebook (see Table I). In total, the questionnaire contains 36 items across five scales plus five demographic variables.
4.3 Sample validity For a precise reflection of the Facebook user population structure, this study used the gender ratio of the sample to determine whether it matches the population structure. Taking statistics from CheckFacebook (2013) as the population reference, we also follow Hsu et al. (2016) in using gender as the test variable. The population gender proportion is 50.6 percent male users and 49.4 percent female users. A χ² goodness-of-fit test yields a χ² statistic of 2.697 with a p-value of 0.101 (greater than 0.050). Thus, the null hypothesis cannot be rejected, and no significant difference exists between the sample structure and the population structure of Taiwanese Facebook users' gender ratio provided by CheckFacebook. To avoid or reduce problems generated by common method variance (CMV), a two-stage prevention procedure is conducted. First, we design the survey questionnaire by following these steps: the constructs are arranged randomly, and the research objective is not shown on the questionnaire; the survey is conducted anonymously in order to reduce consistent answering by respondents. Second, Harman's one-factor test is applied to examine whether a CMV problem exists in the sample data, using both an exploratory factor analysis (EFA) (Harman, 1967; Podsakoff and Organ, 1986) and a confirmatory factor analysis (CFA). The EFA includes all items; six factors are extracted, and the first factor explains 40.573 percent of the variance, which is lower than 50 percent (Wang et al., 2014). In the CFA, all items are subsumed into one factor, and not all item factor loadings are higher than 0.5.
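The gender goodness-of-fit comparison above can be sketched in a few lines. The observed counts below are reconstructed from the reported sample percentage (51.92 percent male of 312 respondents) and are therefore illustrative approximations, not the paper's exact data:

```python
import math

# Illustrative chi-square goodness-of-fit test (Section 4.3): compare the
# sample's gender counts against the CheckFacebook population proportions.
# Observed counts are reconstructed from reported percentages (approximate).
n = 312
observed = [162, 150]          # ~51.92% male, 48.08% female (illustrative)
pop_props = [0.506, 0.494]     # population: 50.6% male, 49.4% female
expected = [p * n for p in pop_props]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# With 1 degree of freedom, the chi-square survival function reduces to erfc.
p_value = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")
```

A p-value above 0.05 means the null hypothesis of equal sample and population structure cannot be rejected, which is the paper's conclusion (its reported statistic differs because it uses the exact survey counts).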
Furthermore, as the model fit of Harman's one-factor test (χ²=3,306.19, df=594, χ²/df=5.57, GFI=0.53, AGFI=0.47, RMSEA=0.12, CFI=0.63, NFI=0.59) is worse than that of the hypothesized model (χ²=799.65, df=535, χ²/df=1.50, GFI=0.88, AGFI=0.85, RMSEA=0.04, CFI=0.96, NFI=0.90), no significant CMV problem exists in the data. 4.4 Analysis of measurement model As essential prerequisites for achieving valid results, the reliability, convergent validity, and discriminant validity of the measurement model are assessed. Item reliability is assessed using factor loadings and squared multiple correlations (SMC), and construct reliability is assessed using Cronbach's α. As shown in Table II, all factor loadings are above 0.5 and all SMCs are above 0.2, indicating good item reliability (Bentler and Wu, 1993; Hair et al., 2010). The Cronbach's α values are all above 0.7, indicating good scale reliability (Nunnally, 1978). To test the ability of the measurement to reflect actual circumstances, this study assesses convergent validity and discriminant validity. Convergent validity can be assessed in terms of the average variance extracted (AVE) from the latent variables and the composite reliability (CR); discriminant validity refers to the correlations between constructs. Table II shows that the CRs are above 0.6 and the AVEs are above the acceptable value of 0.5, with the exception of normative influence and informative influence. However, according to Fornell and Larcker (1981), a scale still indicates convergent validity if its CR is higher than 0.6. Thus, convergent validity is confirmed for all scales (Bagozzi and Yi, 1988; Hulland, 1999).
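The CR and AVE thresholds used above follow directly from standardized factor loadings. A minimal sketch, using hypothetical loadings rather than the values in Table II:

```python
import math

def cr_and_ave(loadings):
    """Composite reliability and average variance extracted from
    standardized factor loadings (Fornell and Larcker, 1981)."""
    sum_sq = sum(loadings) ** 2                 # (sum of loadings)^2
    error = sum(1 - l ** 2 for l in loadings)   # item error variances
    cr = sum_sq / (sum_sq + error)
    ave = sum(l ** 2 for l in loadings) / len(loadings)
    return cr, ave

# Hypothetical loadings for one construct's four items:
cr, ave = cr_and_ave([0.82, 0.78, 0.75, 0.71])
# Convergent validity: CR > 0.6 and, ideally, AVE > 0.5.
# Discriminant validity: sqrt(AVE) should exceed the construct's
# correlations with the other constructs (the Table III criterion).
print(f"CR = {cr:.3f}, AVE = {ave:.3f}, sqrt(AVE) = {math.sqrt(ave):.3f}")
```

This also shows why a construct can pass on CR alone: CR rewards the summed loadings, while AVE averages squared loadings, so moderately loaded scales can clear 0.6 on CR yet fall just below 0.5 on AVE, as normative and informative influence do here.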
Furthermore, Table III indicates that the correlations among constructs are all less than 1 and that the square root of each construct's AVE is greater than that construct's correlations with the other constructs (Fornell and Larcker, 1981; Gaski and Nevin, 1985). 4.5 Structural model This study examines the structural model using the model-fit indices suggested by Hair et al. (2010): absolute fit measures, incremental fit measures, and parsimonious fit measures. All of the model-fit indices exceed the suggested criteria. A structural model is applied to examine the proposed hypotheses on the relationships among constructs; the result is shown in Figure 2. The model explains 43.6 percent of the variance in affective trust, 54.2 percent in sense of virtual community, 44.4 percent in normative influence, and 40.3 percent in informative influence. H1 posits that cognitive trust influences affective trust. Figure 2 indicates that the path coefficient is 0.661 (p<0.001), thus supporting H1. H2, which states that (a) cognitive trust and (b) affective trust affect the sense of virtual community, respectively, is also confirmed (γ21=0.381, p<0.001; β21=0.427, p<0.001). The positive effects of cognitive trust on (a) normative influence and (b) informative influence are also supported (γ31=0.287, p<0.001; γ41=0.182, p<0.05), thereby confirming H3. Furthermore, the results show positive influences of affective trust on (a) normative influence and (b) informative influence (β31=0.279, p<0.001; β41=0.334, p<0.001), thus confirming H4. Finally, the effects of sense of virtual community on (a) normative influence and (b) informative influence are significant (β32=0.189, p<0.05; β42=0.199, p<0.05), thereby supporting H5.
4.6 Mediating effects Based on the previously demonstrated effects of trust on sense of virtual community and social influence, this study tests the mediating effect of affective trust between cognitive trust and sense of virtual community, and of sense of virtual community between cognitive/affective trust and normative/informative influence. According to Table IV, both the percentile and bias-corrected bootstrapping 95 percent confidence intervals exclude zero. These results support the findings that affective trust mediates the relationship between cognitive trust and sense of virtual community, and that sense of virtual community mediates between cognitive/affective trust and normative/informative influence. Furthermore, following prior research, three steps are required to test a mediating effect (Baron and Kenny, 1986; Komiak and Benbasat, 2006). In step 1, this study treats cognitive trust as the independent variable and sense of virtual community as the dependent variable, and finds a significant relationship between them (β=0.584, p<0.001). In step 2, this study builds a model with cognitive trust as the independent variable and affective trust as the dependent variable, indicating a significant effect (β=0.622, p<0.001). In step 3, this study builds a model with both cognitive trust and affective trust as independent variables and sense of virtual community as the dependent variable; the effects of both cognitive and affective trust on sense of virtual community are significant. Thus, affective trust partially mediates between cognitive trust and sense of virtual community. Finally, this study also conducts the Sobel (1982) test to assess the significance of the mediating effect (Wood et al., 2008). The results demonstrate that affective trust significantly mediates between cognitive trust and sense of virtual community (Sobel z=7.114, p<0.001).
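The Sobel statistic used here has a simple closed form. A small sketch with hypothetical inputs: the path values echo the step-2/step-3 magnitudes reported above, but the standard errors are assumed for illustration, since the paper reports only the resulting z:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel (1982) test statistic for the indirect effect a*b, where a is
    the IV->mediator path and b the mediator->DV path (controlling the IV)."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

# Hypothetical coefficients and standard errors (not the paper's estimates):
z = sobel_z(a=0.622, se_a=0.05, b=0.427, se_b=0.06)
# |z| > 1.96 indicates a significant indirect (mediated) effect at p < 0.05.
print(f"Sobel z = {z:.3f}")
```

The bootstrapping confidence intervals reported alongside it are generally preferred for small samples, since the Sobel test assumes the product a*b is normally distributed.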
The mediating effects of sense of virtual community on the relationships between cognitive/affective trust and normative/informative influence are also examined. Both a bootstrapping analysis and the Sobel test are conducted, and the results in Table V show that sense of virtual community partially mediates the relationships between cognitive/affective trust and normative/informative influence. 5.1 Research implications This study contributes to the understanding of the sense of virtual community at the group level, normative/informative influence at the social level, and trust at the individual level, as well as the transformation among them. The findings indicate that the research model has good explanatory power and provides a more complete explanation of members' interactive relationships across levels in the context of Facebook brand fan pages. Regarding the individual perspective, this study draws on Lewis and Weigert (1985) in dividing trust into cognitive and affective trust to assess and clarify the non-parallel relationship between these two constructs in a virtual community. The results indicate that the formation of affective trust is partially dependent on cognitive trust. Similar to Chih et al.'s (2015) study, a virtual community must establish an accurate and reliable message source in order to build relationships with members. Virtual community members form affective trust, with emotional feeling and attachment, once their rational demands for product information are met. Second, regarding the individual-group perspective, this study identifies two salient factors, cognitive trust and affective trust, as important antecedents of the sense of virtual community. Similar to Zhao et al.'s (2012) study, trust plays a key role in increasing brand community members' sense of belonging. Cognitive trust and affective trust form the basis of interaction at the individual level.
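The percentile-bootstrap intervals behind Tables IV and V can be sketched on simulated data. Everything below, the data-generating coefficients, the sample size, and the resample count, is assumed for illustration and is not taken from the study:

```python
import random

random.seed(7)
n, true_a, true_b = 312, 0.6, 0.4   # assumed population paths (hypothetical)
x = [random.gauss(0, 1) for _ in range(n)]                      # IV
m = [true_a * xi + random.gauss(0, 1) for xi in x]              # mediator
y = [true_b * mi + 0.2 * xi + random.gauss(0, 1)                # DV
     for xi, mi in zip(x, m)]

def slope(u, v):
    """OLS slope of v on u (simple regression)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

def partial_b(x, m, y):
    """Slope of y on m controlling for x (two-predictor OLS)."""
    mx, mm, my = (sum(s) / len(s) for s in (x, m, y))
    sxx = sum((a - mx) ** 2 for a in x)
    smm = sum((b - mm) ** 2 for b in m)
    sxm = sum((a - mx) * (b - mm) for a, b in zip(x, m))
    sxy = sum((a - mx) * (c - my) for a, c in zip(x, y))
    smy = sum((b - mm) * (c - my) for b, c in zip(m, y))
    return (smy * sxx - sxy * sxm) / (smm * sxx - sxm ** 2)

boots = []
for _ in range(1000):               # percentile bootstrap of a*b
    idx = [random.randrange(n) for _ in range(n)]
    xs = [x[i] for i in idx]
    ms = [m[i] for i in idx]
    ys = [y[i] for i in idx]
    boots.append(slope(xs, ms) * partial_b(xs, ms, ys))
boots.sort()
ci_low, ci_high = boots[24], boots[974]   # 95% percentile interval
# If zero lies outside [ci_low, ci_high], the indirect effect is significant.
print(f"95% CI for a*b: [{ci_low:.3f}, {ci_high:.3f}]")
```

The bias-corrected interval reported in the paper follows the same resampling logic but shifts the percentile cut-points to correct for skew in the bootstrap distribution.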
When a virtual community manager devotes more effort to managing the community, consumers have greater confidence in the relevant information disclosed within it and interact more frequently with other members. In this scenario, virtual community members feel a stronger sense of belonging. Third, regarding the individual-social perspective, this study investigates the individual factors involved in the normative/informative influence that page fans have on other fans (i.e. members). This suggests that, because of trust in other members' abilities or trust stemming from emotional connections, fans of a brand page change their perspectives or behaviors to meet the expectations of other fans. The results for H3 and H4 are similar to those of Hsu et al. (2011). Although cognitive trust has significant and positive effects on both normative and informative influence, its effect on informative influence is comparatively weak. Therefore, a virtual community must build cognitive trust from a rational perspective in order to create a normative influence within the community and obtain acceptance by members. Affective trust has significant and positive influences on both normative and informative influence. A virtual community must build affective trust in order to create an informative influence among community members; members seek others' suggestions and adopt correct and reliable information to reduce uncertainty and fear in the community. Finally, this study finds a significant and positive effect of the sense of virtual community on normative/informative influence from the group-social perspective. The result for H5 is consistent with Hsu et al.'s (2016) findings. In addition, the sense of virtual community is a core concept in the research model: it not only strengthens the sense of belonging among virtual community members, but also establishes a bridge between trust and social influence.
Behavioral norms among virtual community members mean that members perceive a common understanding during interaction and comply with it in establishing their behaviors as part of community life. 5.2 Managerial implications This study advocates that brand fan pages must establish a platform that virtual community members trust and are willing to participate in over the long term, from the perspectives of both rational cognitive trust and emotional affective trust. More engaged customers establish closer trust relationships in the virtual community. Managers can provide brand and product-related knowledge, offer interactive discussions, reply to service-related questions on fan pages to increase members' cognitive trust in the brand, and organize fan page activities to increase opportunities for emotional exchange and enhance affective trust. Cognitive/affective trust and the sense of virtual community are key factors in establishing virtual community cohesion, building a reliable brand and belief, and developing the emotional relationships that underpin a sense of belonging. For example, Toyota cleverly used social media to resolve a vehicle recall crisis by building up customer trust. The company set up a team to track negative rumors on Facebook and elsewhere, responded with facts, and opened a dedicated Twitter account to communicate with consumers. Toyota recruited online fans to spread the discussion through the company's media channels, taking advantage of its decade-long performance reputation, that is, a sense of reliable trust and durable brand commitment. The company effectively used social media and other new media to offset the most negative messages. Toyota successfully met this brand challenge and averted a brand disaster, providing several important community management strategies for other companies.
A brand fan page must establish an ideal community environment in which virtual community members, given cognitive/affective trust and social influence, are more likely to share and deliver brand/product-related messages (Brown et al., 2007). The biggest benefit for consumers engaging in a brand community is the formation of strong relationships among brands, products, other customers, and companies. After creating a close relationship with the brand, customers deepen their trust in it and form implicit norms that bind the community together (Habibi et al., 2014). Virtual community members choose to gain product messages and knowledge from other members in order to understand their personal experiences of product use. Regarding the sense of virtual community and social influence, community managers must consider how to provide enough useful information and interactive activities to build up users' sense of belonging and cohesion, fostering altruistic behavior in social relationships. Virtual community members can communicate via the online platform to gain psychological support and attract other participants who are willing to maintain a relationship with the community. Many people publish their own messages in a virtual community. Firms must further promote commercial behavior within the virtual community so that, by participating, people can meet particular needs such as community, business transactions, knowledge, and entertainment (Hagel, 1999). This means that interpersonal relationships between virtual community members are the basis for community development and generate emotional exchanges among members (Lee and Chang, 2011). Managers must build a successful virtual community to attract the attention of internet users and provide sufficient incentives for virtual community fans to share information (Ho and Dempsey, 2010).
For example, managers not only encourage members to provide relevant electronic word-of-mouth (e-WOM) information, but can also promote recently launched brands through promotional activities on social media that provide relevant information to virtual community members. Virtual community fans are more willing to use e-WOM to promote the brand and create brand value when they are involved in the brand platform. Brand fan pages not only link firms with fans; they also help virtual community fans connect with one another, increasing discussion activity and maintaining the popularity of the fan page. For example, firms disclose ideas or comments on styles and features on Facebook when new products are introduced. It is then possible to investigate preferences and dissimilarities between fans and other consumers by collecting messages, comments, votes, and so forth. A virtual community creates membership, identification, and links between fans and brands by meeting consumer demands (Fournier and Lee, 2009). The results indicate that social influence among members or Facebook fan page fans is enhanced by cognitive trust, affective trust, and the sense of virtual community. Thus, a long-term sense of virtual community toward Facebook brand fan pages can be induced if online platform developers establish an environment in which users develop trust and participation. Methods of achieving this include providing knowledge related to brands or products, assigning staff to answer questions posed by users, and hosting activities that encourage fans to interact, thereby enhancing cognitive and affective trust between fans. After users develop cognitive trust, affective trust, and the sense of virtual community, the social influence among members is likely to increase. The thriving development of the internet and SNSs has transformed common communication methods and lifestyles, and SNSs have become popular platforms for interaction and communication. For firms, crowds are business opportunities.
Consequently, various firms aim to build platforms that facilitate positive interactions and communication with consumers on SNSs, thereby increasing product sales or brand value. Understanding consumer behavior in the context of virtual brand communities is required for firms to obtain tangible benefits and value from their interactive platforms. The results provide insights for brand companies in the creation of platforms on Facebook or other SNSs to enable B2C or C2C interactions. 5.3 Research contributions This study is based on trust theory and follows the concept of social exchange theory (Blau, 1964; Thibaut and Kelley, 1959). Blau (1964) argued that trust within relationships is an important idea: it is cultivated through good rapport in the process of exchange among individuals and gives people a reason not to evade social obligations. In the virtual environment, trust affects users' willingness to exchange messages with other members and is an important factor in continuing participation in the community (Blanchard et al., 2011; Ridings et al., 2002; Yeh and Choi, 2011). Trust is a psychological state (Rousseau et al., 1998) and a multi-faceted concept (Lewis and Weigert, 1985; McAllister, 1995; Riegelsberger et al., 2003). Cognitive trust and affective trust are considered to have a non-parallel relationship in organizational research (Lewis and Weigert, 1985). This study extends that distinction to a virtual community, dividing trust into cognitive trust and affective trust to clarify the non-parallel relationship between these two constructs. Past scholars have only investigated the group level of interpersonal relationships among virtual community members, but not other levels (Tonteri et al., 2011).
This study concerns different levels, distinguishing among the individual, group, and social levels, and draws on trust theory to separate trust into cognitive trust and affective trust as antecedents of a sense of virtual community. In addition, social influence, regarded as susceptibility to interpersonal influence (Bearden et al., 1989), is treated as a result of the sense of virtual community. 5.4 Limitations and future research Some limitations exist in this study. First, this study adopts a cross-sectional survey, which means it does not explore the subsequent internal changes and actual usage behavior of Facebook brand fan page users. Subsequent researchers could use a longitudinal survey. Second, this study only surveys Taiwanese Facebook users and therefore does not present a comprehensive picture of Facebook users in different countries. Subsequent researchers could investigate international brand fan page users. Third, this study only surveys Facebook fan page users; different virtual community platforms may produce different findings and inferences. This study recommends that researchers follow up on different virtual community platforms, such as Twitter and Plurk. Fourth, this study designs the multi-level constructs using structural equation modeling only. Future researchers could redesign the questionnaire to collect data from multiple groups of respondents. Finally, this study does not consider other constructs at the individual level, leaving a gap regarding users' personality characteristics, such as extraversion, introversion, narcissism, self-esteem, self-worth, and neuroticism (Nadkarni and Hofmann, 2012). At the group level, factors such as the sense of virtual community could be divided into three facets (membership, influence, and immersion; Koh and Kim, 2003) to further verify the relationships.
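To make the mediation pattern at the heart of the model concrete, the following illustrative sketch demonstrates partial mediation on simulated data using simple OLS regressions. This is not the authors' structural equation model; the variable roles (X for trust, M for the sense of virtual community, Y for social influence), the coefficients, and all names are hypothetical.

```python
# Illustrative sketch only: a Baron-Kenny-style partial-mediation check on
# simulated data. X stands in for trust, M for the sense of virtual community,
# and Y for social influence; all coefficients are hypothetical and this is
# NOT the authors' SEM analysis.
import random

random.seed(42)
n = 1000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.6 * x + random.gauss(0, 0.5) for x in X]      # path a: X -> M
Y = [0.4 * x + 0.5 * m + random.gauss(0, 0.5)        # direct + indirect effects
     for x, m in zip(X, M)]

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_predictor_slopes(x, m, y):
    """OLS slopes of y on (x, m) via the 2x2 normal equations on centered data."""
    cx = [a - sum(x) / len(x) for a in x]
    cm = [a - sum(m) / len(m) for a in m]
    cy = [a - sum(y) / len(y) for a in y]
    sxx = sum(a * a for a in cx)
    smm = sum(a * a for a in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy))
    smy = sum(a * b for a, b in zip(cm, cy))
    det = sxx * smm - sxm ** 2
    return (smm * sxy - sxm * smy) / det, (sxx * smy - sxm * sxy) / det

c_total = slope(X, Y)                        # total effect of X on Y
c_direct, b = two_predictor_slopes(X, M, Y)  # direct effect once M is included
# Partial mediation: the direct effect shrinks relative to the total effect
# but remains above zero once the mediator M enters the regression.
print(f"total={c_total:.2f}, direct={c_direct:.2f}, via-M slope={b:.2f}")
```

With OLS, the identity c_total = c_direct + a·b (where a is the slope of M on X) holds exactly, so the shrinkage of the direct effect equals the indirect effect through the mediator.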
|
The results indicate the following. Both cognitive trust and affective trust have effects on members' sense of virtual community. Cognitive trust, affective trust, and the sense of virtual community each have effects on both normative influence and informative influence. Members in a virtual community can develop a sense of virtual community via affective trust. Members' sense of virtual community partially mediates between cognitive/affective trust and normative/informative influence.
|
[SECTION: Value] The emergence of social networking sites (SNSs) has resulted in the rapid evolution of online community platforms into popular forums for communication and entertainment, while users' word-of-mouth behavior has become an increasingly decisive influence. With the growing maturity of technologies related to SNSs, business managers have learned to use them to increase commercial profits (Trusov et al., 2009). In other words, firms are striving to increase interactivity among brands' SNSs, website users, and non-website users to generate positive outcomes through internet-enabled dissemination. For example, Global Web Index (2014), a market research institution, reported that social media continued to grow and develop globally over the previous year. In 2013, the number of new registered users on popular social websites increased by 135 million. In 2014, the total number of Facebook users reached 1.393 billion, and the site generated total revenue of USD$3.85 billion in the fourth quarter of the year, an increase of 3.18 percent compared with the third quarter (Facebook, 2015). That same year, the total revenue of Facebook reached USD$12.47 billion, an increase of 58 percent compared with the previous year. Daily users increased by 18 percent, compared with 13 percent total growth of non-daily users (Business Next, 2015). It has become popular for companies to use Facebook as a customer service channel to communicate their brands to customers. More companies build up relationships with their customers, and respond to and solve customers' problems, through Facebook rather than through other social media (Social Time, 2015). In addition, the development model of online marketing has gradually transformed from business-to-consumer (B2C) to consumer-to-consumer (C2C), a revolutionary and well-received model that enables interactive e-commerce (Chu and Liao, 2007).
Because of advances in internet technology, online WOM, which differs from traditional offline WOM, allows internet users to transmit messages to hundreds or thousands of people with just a few clicks (Mangold and Faulds, 2009). Regarding media value, Vitrue, a firm specializing in social media, calculates that the impressions generated by one million fans are equivalent to a media value of USD$300,000 per month. For example, the Starbucks Facebook page has a fan base of approximately 6.5 million, translating into an annual media value of USD$23.4 million. On average, one fan generates USD$3.60 in media value, and one million fans are worth USD$3.6 million (Moorman et al., 1993). Therefore, the quicker a fan base expands, the greater the media value generated. In practice, firms use Facebook brand fan pages to create interaction and rapport with fans. The companies then combine these pages with other online marketing activities to transfer advertising from cyberspace to offline environments (Electronic Commerce Times, 2010). Because of Facebook's high reach rates, numerous firms have created pages to garner popularity. In addition, Facebook brand fan pages benefit firms by serving as a channel for managers to inform fans of new product information and to announce relevant activities (Social Media Marketing Co., 2011). In addition to maintaining positive trust relationships between brand manufacturers and consumers, online community platforms allow brands to communicate product information to consumers (thereby establishing information exchange and interactions with similar communities) and assist community members in their future purchase decisions. Previous studies investigating virtual brand communities have discussed cognitive trust and affective trust and whether these two factors are keys to the successful management of virtual communities (Lin, 2008; Yeh and Choi, 2011).
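The media-value figures reported by Vitrue are internally consistent, and the arithmetic can be verified in a few lines. The sketch below assumes Vitrue's flat rate of USD$300,000 per million fans per month; the helper function is illustrative, not Vitrue's actual valuation model.

```python
# Hedged sketch: reproducing the media-value arithmetic reported in the text,
# assuming Vitrue's flat estimate of USD$300,000 per month per million fans.
# `annual_media_value` is an illustrative helper, not Vitrue's actual model.

MONTHLY_VALUE_PER_MILLION_FANS = 300_000  # USD, per Vitrue's estimate

def annual_media_value(fans: int) -> float:
    """Annual media value in USD for a given fan count."""
    return fans / 1_000_000 * MONTHLY_VALUE_PER_MILLION_FANS * 12

print(annual_media_value(6_500_000))    # Starbucks fan base -> USD$23.4M/year
print(annual_media_value(1_000_000))    # one million fans -> USD$3.6M/year
print(round(annual_media_value(1), 2))  # one fan -> USD$3.60/year
```

Each figure in the text follows from the same monthly rate, which is why the per-fan, per-million, and Starbucks numbers all agree.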
However, these studies have typically examined the parallel relationship between cognitive and affective trust rather than the non-parallel relationship between them. Lewis and Weigert (1985) pointed out that the distinction between the cognitive and emotional components of attitude has been the subject of a long-standing debate. The concept of cognitive trust and affective trust has been clarified as a non-parallel relationship in the organizational research literature, but the cause-and-effect relationship between cognitive trust and affective trust has not been explored in the virtual context (e.g. Venetis and Ghauri, 2004; Vesel and Zabkar, 2009; Yeh and Choi, 2011). Based on previous organizational research, trust is not a single holistic construct; it also includes a cause-and-effect relationship between cognitive and affective factors. Previous scholars have advocated that cognitive trust is an antecedent of affective trust (Johnson and Grayson, 2005; McAllister, 1995). The current study advocates that, in a virtual context, the attitude of community members is formed by the same phenomenon, so it is necessary to clarify the relationship between cognitive trust and affective trust. This study examines the mutual influence of cognitive and affective trust to assess whether cognitive trust in a virtual brand community affects the establishment of users' affective trust. In addition, SNSs are characterized by frequent interpersonal interactions. During such interactions, virtual community users are susceptible to the influence of other users (Bearden et al., 1989). Thus, this study also explores whether social influence among virtual community users is affected by cognitive trust, affective trust, and the sense of virtual community among members. Social influence can be divided into normative influence and informative influence according to users' motivation for participating in virtual communities.
In building up our theory and identifying a research gap, this study considers three dimensions as exogenous variables in the model; these dimensions may affect and transform their relationships to each other. The dimensions were originally put forth by Wasko and Faraj (2005), who investigated the individual, relational (social), and group-level factors that influence voluntary knowledge sharing on SNSs. Tsai et al. (2012), in a follow-up to Wasko and Faraj (2005), proposed extending the theory to different community environments and put forward the idea that community participation involves complex interpersonal exchange processes. Hsu (2012) continued to follow Wasko and Faraj (2005) and Tsai et al. (2012) in distinguishing the individual, social, and group levels, and combined them with the tricomponent attitude model proposed by Rosenberg and Hovland (1960) as a theoretical basis. The tricomponent attitude model comprises three psychological components: cognition, affection, and conation (action). For the purposes of this study, the personal level is divided into cognitive trust and affective trust in the cognition phase, and cognitive factors are important antecedents of affective factors. Hsu (2012) indicated that affective trust is an important antecedent of affection in developing an important community relationship. Previous research has typically targeted the group level through the sense of a virtual community, which involves individual influences on virtual communities, but has rarely investigated the influence exerted by virtual community members on their interpersonal relationships (Tonteri et al., 2011) or the resulting mechanism of social norms (Hsu et al., 2016). Therefore, this study develops a theoretical framework to explore the relationships among individual, group, and social factors.
Based on the research motivation previously mentioned, this study primarily aims to achieve the following purposes: to explore factors of trust among virtual community members from an individual perspective, the sense of virtual community from a group perspective, and the normative/informative influence in interpersonal interactions among members from a social influence perspective, and to examine the interaction process among these three perspectives; and to propose two mediating factors and conduct relevant tests. First, affective trust is a mediator between cognitive trust and the sense of virtual community. Second, the sense of virtual community is a mediator between trust (cognitive trust and affective trust) and social influence (normative influence and informative influence). 2.1 Community website and Facebook fan pages SNSs are a form of virtual community. SNS users can create a public profile, interacting and sharing common interests with friends as well as with strangers from real life (Kuss and Griffiths, 2011). Enabled by the internet, SNSs mainly provide users with the following three functions: to create a public or semi-public profile of personal information; to customize lists of users for sharing information; and to view and track information provided by other users (Boyd and Ellison, 2008). SNSs are defined as virtual communities that provide members with interactions based on the Web 2.0 concept. For example, social networking and multimedia content sharing not only preserve users' existing social networks, but also allow connections among strangers who share common interests. According to this definition, the first SNS, SixDegrees.com, originated in 1997. This website provided a platform for users to create profiles and friend lists. Although SNSs emerged in cyberspace in 1997, their rapid growth and popularity began in 2003, when websites such as MySpace, LinkedIn, Flickr, Facebook, and YouTube were launched.
In addition to the original social networking functions, certain SNSs included options that allowed users to share multimedia, such as uploading photos and videos. Subsequently, SNSs began to garner global attention and the number of SNS users grew exponentially. The launch of SNSs such as Facebook and MySpace changed the manner in which internet users communicated and interacted around the world. Facebook has since become the largest SNS in the world (Boyd and Ellison, 2008; Chiu et al., 2008; Kuss and Griffiths, 2011; Nadkarni and Hofmann, 2012). Facebook pages, introduced in 2007, are public profiles that enable firms and users to share company news and product updates. Facebook members update their linking status with their page(s) to share with their friends through real-time feeds. Facebook then continues to disseminate real-time updates to broader networks through online WOM when friends of these fans interact with their pages. In addition to Facebook's stunning growth in membership, pages are another feature that distinguishes the site. Sysomos Inc. (2009) conducted the first large-scale survey regarding Facebook pages, which by then exceeded 630,000. The results indicate that each page has 4,596 fans on average and that page owners post on the page wall every 15.7 days on average, demonstrating rapid fan-base growth. Business Next (2015) held the second Facebook page poll and identified the strongest fan bases (i.e. Facebook pages) based on popularity, page content, and long-term operating outcomes. Pages can be used for business promotion, commercial marketing, or sharing professional knowledge. Members of a business, organization, or club share various social networking or marketing activities on their associated pages and announce upcoming activities.
These pages push updates to fans' Facebook friends and to users viewing the pages, informing them of specific activities, which then attracts additional users with common interests, thereby achieving brand promotion (Pempek et al., 2009). 2.2 Trust theory Trust verifies evidence and serves to generate a feeling of affirmation, which is a key factor influencing the formation of relationships and partnerships (Giffin, 1967; McKnight and Chervany, 2002). Trust is vital for establishing interpersonal relationships and virtual communities, especially in uncertain or high-risk environments such as electronic markets (Ba and Pavlou, 2002; Moorman et al., 1993). Lewis and Weigert (1985) and McAllister (1995) asserted that interpersonal trust stems from cognitive and affective bases, and that networking on SNSs results from social interactions. Cognitive trust arises from calculations and rational assessments that originate from accumulated knowledge. Such knowledge enables people to predict with some degree of confidence that their partner in the relationship will conform to their expectations. This knowledge is amassed from previous observations of the partner's external behavior and reputation. In other words, cognitive trust in the SNS context refers to internet users' assessment of the reliability of information based on their existing capabilities and knowledge. Conversely, affective trust is formed by affections and social interactions and is built on people's care and concern for each other; that is, affective trust arises from mutual affection and results in emotional connections in interpersonal relationships (Johnson and Grayson, 2005; Yeh and Choi, 2011). 2.3 Sense of virtual community A sense of virtual community was originally defined as the sense of belonging that members have toward their community, allowing them to convey beliefs and reach a mutual understanding, thereby demonstrating their commitment to the community.
A sense of community can be divided into four elements: membership, influence, integration and fulfillment of needs, and shared emotional connection. These elements have been used in theories regarding the sense of virtual community (Blanchard, 2007; Tonteri et al., 2011), and each is elucidated as follows: membership: a sense of belonging that members perceive regarding their community, serving as a common symbol within the community that members self-reinforce to meet community needs and obtain approval; influence: the influence exerted on members by the community or other members, or the belief that members hold that they are capable of influencing others in the community; integration and fulfillment of needs: members' belief that the community, or the resources and support provided by other members, can satisfy their needs (e.g. joining a community provides specific advantages or rewards); and shared emotional connection: members of a community share a common experience, history, time, and space; that is, they experience events together and engage in positive interactions that lead to enhanced relationships (Abfalter et al., 2011; Koh and Kim, 2003). 2.4 Social influence Generally, during a decision-making process, individuals consider not only the matter at hand but also the surrounding social group or environment. This phenomenon is called social influence. Although social influence entails numerous dimensions, in this study it is considered the susceptibility to inter-individual influence, to facilitate the discussion of SNSs and interactions among members. Bearden et al. (1989) indicated that when people interact in a group, it induces changes in perception or behavior; this transformative process constitutes social influence. Scholars have also assessed personality attributes that predispose a person to others' influence, such as low self-esteem.
Dual-process theory in psychology postulates that messages or information received by a person may exert influence through persuasion, and that such influence is divided into two types: normative influence and informative influence (Deutsch and Gerard, 1955). Normative influence refers to a person's conforming to social norms or others' expectations to obtain the approval of a group, thus adopting cognitive or behavioral patterns congruent with the group (Cheung et al., 2009). Informative influence arises from acknowledging obtained information as evidence of reality and is primarily based on the recipient's assessment of the information received, including its content, source, and other recipients (Hovland et al., 1953). This research proposes an integrated model of the relationship between different levels of influencing factors. Based on the research of Wasko and Faraj (2005), this study investigates factors at the individual, group, and social levels as its theoretical basis, developing a research model of community relationships. Previous studies have confirmed that social capital theory offers the most accurate explanation of interpersonal Facebook relationships on SNSs (e.g. Burke et al., 2010, 2011; Ellison et al., 2011; Zhao et al., 2016). Facebook is particularly well suited to bridging "social capital." Social capital describes the capacity of individuals or groups to obtain resources embedded in their social networks (Bourdieu, 1986; Coleman, 1988). The establishment of a social network relationship includes different levels of viewpoints for individuals, groups, and social influence, wherein trust is seen as a key construct of social capital at the individual level (Zhao et al., 2016). Connections across different clusters or groups within a network are often called "bridging" ties (Burt, 1992), which are conducive to building strong relationships.
On the other hand, bridging ties are also characterized by repeated interactions within trustworthy, highly supportive, and intimate relationships, which typically enable acquired capital to be transformed into a more substantive form of social relationship (Ellison et al., 2014). People build relationships through social interactions and, with social capital, build their expectations for future social resources. A key issue in changing people's attitudes or behavior is that such a change can transform the attitude or behavior of a group or community (Latkin et al., 2009). From a conceptual point of view, groups and social levels can change through social diffusion: presumed social diffusion is enough to cause others to change their behavior, and social behavior spreads through the community by way of social groups. Conceptually, social diffusion operates through social norms, which are an important part of the underlying theory (Bandura, 1986). The most successful examples involve altering social norms related to behavior change from the perspective of a group or a society (Latkin et al., 2009). Therefore, the framework of this research is divided into three perspectives: the individual, the group, and the social influence perspectives. The individual perspective is used to explore trust among SNS users toward a specific virtual community. The group perspective is employed to observe the sense of virtual community. The social influence perspective, consisting of both normative and informative influence, is used to discuss the joint influence of the individual and group perspectives on the social influence perspective (Figure 1). 3.1 Individual perspective Previous studies have typically classified trust as both cognitive and affective trust. However, the relationship between cognitive and affective trust is seldom discussed.
In a study in an organizational context, Johnson and Grayson (2005) indicated that cognitive trust is an antecedent of affective trust. Scholars studying attitude theory have long disputed the relationship between cognitive and affective trust in relation to attitude. Previous theoretical and empirical research has shown that cognitive trust positively and significantly influences affective trust (Johnson and Grayson, 2005; McAllister, 1995). Related studies regarding service relationships and e-commerce have also indicated that cognitive trust influences the formation of affective trust (Dabholkar et al., 2009; Johnson and Grayson, 2005). Chih et al. (2015) investigated online shoppers' buying behavior from positive- and negative-cognition perspectives and distinguished trust into cognitive trust and affective trust. They determined that cognitive trust is an antecedent of affective trust, and their empirical results show that cognitive trust must be established in order to gain consumer trust and build relationships. For example, when a virtual community provides accurate and credible shopping information, consumers become willing to build an affective linkage with that community. Therefore, this study proposes the following hypothesis: H1. In a virtual community, members' cognitive trust has a significant and positive effect on affective trust. 3.2 Connection between individual and group In highly uncertain environments, trust helps people build interactive relationship networks. Because activities in virtual communities lack face-to-face contact, online communication requires trust, and trust facilitates the successful implementation of virtual communities. For example, in an environment without norms, partners must trust each other to execute socially acceptable interactions (Lin, 2008).
Blanchard and Markus (2004) asserted that, in virtual communities, identification methods can enhance trust, thereby increasing members' sense of virtual community. This is reflected in contemporary SNSs' requirement that members provide their real names when registering to join a site. When members demonstrate trust toward a virtual community, they form a committed relationship with that community, which facilitates the formation of a sense of virtual community (Tsai et al., 2011; Wang and Tai, 2011). Ellonen et al. (2007) indicated that a deep sense of trust between community members allows them to assist one another because of the benefits of sharing a common social network and expectations, thus fostering a sense of virtual community. According to the findings of Blanchard and Markus (2004), trust between members develops after mutual support is demonstrated, which results in a multifaceted sense of virtual community. In addition, Lin (2008) asserted that successful virtual communities must generate trust between members, thereby producing a sense of virtual community. McMillan and Chavis (1986) found that trust can alleviate members' anxiety and insecurity. They also found that relationships between virtual community members become closer, and that members feel a sense of belonging, when these relationships enhance trust between members and provide assistance during the online interactive process. Zhu et al. (2012) indicated that trust is an important antecedent in shaping members' sense of community. Zhao et al. (2012) also confirmed that trust has a significant and positive effect on the sense of belonging to a virtual community. In other words, trust is more likely to prompt the trustor to become more attached to the relationship with the virtual community. Thus, this study proposes the following hypothesis: H2.
In a virtual community, members' (a) cognitive trust and (b) affective trust have significant and positive effects on their sense of virtual community. 3.3 Connection between individual and society In virtual communities, establishing trust relationships with influential members is considered the foundation of interpersonal relationships because members frequently make decisions that conform to the opinions and suggestions provided by other members, who are strangers to them (Park and Feinberg, 2010). Casalo et al. (2011) indicated that trust toward online travel communities influences whether consumers accept the advice offered by a virtual community and subsequently purchase travel packages. Lascu and Zinkhan (1999) pointed out that when a group or community exhibits reliability, user conformity within the community increases. Because most online information is free, users' trust toward virtual communities affects the interpersonal relationships that they form within a community (Boush et al., 1993; Park and Feinberg, 2010). This suggests that members' assessment of the reliability of the information provided, based on their existing skills and knowledge, as well as the care and concern developed through emotional connections and social interactions, serve as factors that prompt members to conform to community norms and act upon suggestions provided by other members. These factors also act as references for members in purchase decisions. An individual will seek acceptance by others and change her/his attitude or behavior in order to meet the expectations of community members once the individual builds up a sense of trust with the community. Chin et al. (2009) found that online shoppers' trust has a significant and positive effect on social influence. Consumers tend to observe and gain information from others to understand products and services based on trust. Hsu et al.
(2011), studying bloggers' interactive networks, indicated that blog community members develop a sense of trust toward the community and comply with its common understanding and regulations, establishing standard behavior within the community. Therefore, this study proposes the following hypotheses: H3. In a virtual community, members' cognitive trust has significant and positive effects on (a) normative influence and (b) informative influence. H4. In a virtual community, members' affective trust has significant and positive effects on (a) normative influence and (b) informative influence. 3.4 Connection between group and society A sense of virtual community is a sentiment generated by experiences in a virtual community that induces a sense of belonging and deep attachment toward that community (Blanchard and Markus, 2004; Koh and Kim, 2003; Tonteri et al., 2011). A number of scholars have asserted that, for virtual community users, a sense of belonging with respect to such communities enhances their normative and informative influences (Lee and Park, 2008). A sense of belonging toward a virtual community is considered an antecedent to the formation of social influence, namely normative and informative influence (Park and Feinberg, 2010). In a virtual community, a sense of community is generated when members recognize similarities and develop an intention to continue interacting, thus increasing their normative and informative influences within the community (Lascu and Zinkhan, 1999; Shen et al., 2010). Hsu et al. (2016) advocated the idea that members' sense of virtual community and their community social influence both increase if Facebook fan page members treat the community as part of their daily lives and acquire others' affirmation and praise. Hsu et al. (2016) also confirmed that the sense of virtual community has significant and positive effects on normative and informative influence, respectively. Thus, this study proposes the following hypothesis: H5.
In a virtual community, members' sense of virtual community has significant and positive effects on (a) normative influence and (b) informative influence. 4.1 Research design and data collection Our target population is Facebook fan page members because Facebook is the largest virtual community in Taiwan. This research uses the Google Docs (https://drive-google-com.pitt.idm.oclc.org) online service, which has no time or geographical limits, to create an online questionnaire and release it on a Facebook fan page and a PTT BBS station. Fan page users can connect to the questionnaire through a link. This study collected 422 responses, of which 312 were usable, yielding a usable response rate of 73.93 percent. Table I shows the respondents' demographic information, with males comprising 51.92 percent. Among the respondents, 57.37 percent were between 18 and 24 years old, and the largest proportion held a bachelor/associate degree, accounting for 67.95 percent. About 36.22 percent of the respondents had used Facebook for one to two years, and the majority had not been members for more than three years. In addition, 44.23 percent of users surfed Facebook three to five hours each day. 4.2 Measure This study aims to explore internet users' behavior in a virtual community. To ensure the validity of the instrument, the items used to measure the constructs are drawn from scales developed in previous research. The items in the questionnaire are divided into six parts, comprising five scale dimensions and demographic variables. All items are measured on a seven-point Likert scale ranging from "strongly disagree" (1) to "strongly agree" (7). Demographic variables are categorical data with single-item measures and include gender, age, education, length of Facebook use, and daily time spent on Facebook (see Table I). In total, the questionnaire contains 36 items across the five scales plus five demographic variables.
4.3 Sample validity For a precise reflection of the Facebook user population structure, this study uses the gender ratio of the sample to determine whether it matches the population structure. In addition to using statistics from CheckFacebook (2013) as the baseline reference, we also follow Hsu et al. (2016) in taking gender as the test variable. The population gender proportion is 50.6 percent male users and 49.4 percent female users. A χ² goodness-of-fit test yields a χ² statistic of 2.697 with a p-value of 0.101, which is larger than 0.050. Thus, the null hypothesis cannot be rejected, and no significant difference exists between the sample structure and the population structure of Taiwanese Facebook users' gender ratio provided by CheckFacebook. To avoid or reduce problems generated by common method variance (CMV), a two-stage prevention procedure is conducted. First, we design the survey questionnaire by following these steps: the constructs are arranged randomly, and the research objective is not shown on the questionnaire. The survey is conducted anonymously in order to reduce consistent answering by respondents. Second, Harman's one-factor test is applied to examine whether a CMV problem exists in the sample data, using both an exploratory factor analysis (EFA) (Harman, 1967; Podsakoff and Organ, 1986) and a confirmatory factor analysis (CFA). The EFA is conducted on all items, and the results show that six factors are extracted and that the first factor explains 40.573 percent of the variance, which is lower than 50 percent (Wang et al., 2014). In the CFA, all items are subsumed into one factor, and the results show that not all item factor loadings are higher than 0.5.
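The gender-ratio check above can be sketched as a chi-square goodness-of-fit computation. The following is a minimal Python illustration, assuming hypothetical observed counts consistent with the reported percentages (312 usable samples, 51.92 percent male; population split 50.6/49.4); it is a demonstration of the test, not a reproduction of the study's exact statistic.

```python
# Sketch of a chi-square goodness-of-fit test comparing the sample's
# gender ratio with the population ratio reported by CheckFacebook.
# The observed counts are illustrative assumptions, not the paper's data.

def chi_square_gof(observed, expected_props):
    """Chi-square statistic for observed counts vs expected proportions."""
    n = sum(observed)
    expected = [p * n for p in expected_props]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [162, 150]        # hypothetical male, female counts in the sample
population = [0.506, 0.494]  # CheckFacebook gender proportions

stat = chi_square_gof(observed, population)
critical_05 = 3.841          # chi-square critical value for df = 1, alpha = 0.05

# A statistic below the critical value means the null hypothesis (the
# sample matches the population gender ratio) cannot be rejected.
print(f"chi2 = {stat:.3f}, reject H0 at 0.05: {stat > critical_05}")
```

With these illustrative counts the statistic falls well below the critical value, mirroring the paper's conclusion that the sample and population structures do not differ significantly.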
Furthermore, as the model fit of Harman's one-factor test (χ²=3306.19, df=594, χ²/df=5.57, GFI=0.53, AGFI=0.47, RMSEA=0.12, CFI=0.63, NFI=0.59) is worse than the model fit of the hypothesized model (χ²=799.65, df=535, χ²/df=1.50, GFI=0.88, AGFI=0.85, RMSEA=0.04, CFI=0.96, NFI=0.90), no significant CMV problem exists in the data. 4.4 Analysis of measurement model As essential prerequisites for achieving valid results, the reliability, convergent validity, and discriminant validity of the measurement model are assessed. Item reliability is assessed using factor loadings and squared multiple correlations (SMC), and construct reliability is assessed using Cronbach's α. As shown in Table II, all factor loadings are above 0.5 and all SMCs are above 0.2, indicating good reliability of the items (Bentler and Wu, 1993; Hair et al., 2010). Also, the Cronbach's α values are above 0.7, indicating good reliability of the scales (Nunnally, 1978). To test the ability of the measurement to reflect actual circumstances, this study assesses convergent validity and discriminant validity. Convergent validity is assessed in terms of the average variance extracted (AVE) from the latent variables and the composite reliability (CR). Discriminant validity refers to the correlations between constructs. Table II shows that the CRs are above 0.6 and that the AVEs from the latent variables are above the acceptable value of 0.5, with the exception of normative influence and informative influence. However, according to Fornell and Larcker (1981), a scale still exhibits convergent validity if its CR is higher than 0.6. Thus, convergent validity can be confirmed for all scales (Bagozzi and Yi, 1988; Hulland, 1999).
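The convergent-validity criteria above rest on two quantities that can be computed directly from standardized factor loadings. A minimal sketch follows, using the standard formulas associated with Fornell and Larcker (1981); the loadings and function names are hypothetical illustrations, not the paper's estimates.

```python
# Composite reliability (CR) and average variance extracted (AVE)
# computed from standardized factor loadings. The loadings below are
# hypothetical values for one scale, chosen only for illustration.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.80, 0.70, 0.75]  # hypothetical standardized loadings
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)

# Thresholds used in the paper: CR above 0.6 and AVE above 0.5 support
# convergent validity; sqrt(AVE) exceeding a construct's correlations
# with other constructs supports discriminant validity.
print(f"CR = {cr:.3f}, AVE = {ave:.3f}, sqrt(AVE) = {ave ** 0.5:.3f}")
```

For these loadings both thresholds are met, which is the pattern the paper reports for most scales; the paper's exception (AVE below 0.5 for normative and informative influence) corresponds to weaker loadings on those items.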
Furthermore, Table III indicates that the correlations among constructs are all less than 1 and that the square root of the AVE of any particular construct is greater than that construct's correlations with the other constructs (Fornell and Larcker, 1981; Gaski and Nevin, 1985). 4.5 Structural model This study examines the structural model and considers the model-fit indices suggested by Hair et al. (2010), namely absolute fit measures, incremental fit measures, and parsimonious fit measures. All of the model-fit indices exceed the suggested criteria. To examine the proposed hypotheses on the relationships among constructs, a structural model is applied. The result of the structural model is shown in Figure 2. The model explains 43.6 percent of the variance in affective trust, 54.2 percent in sense of virtual community, 44.4 percent in normative influence, and 40.3 percent in informative influence. H1 posits that cognitive trust influences affective trust. Figure 2 indicates that the path coefficient is 0.661 (p<0.001), thus supporting H1. H2, which states that (a) cognitive trust and (b) affective trust affect the sense of virtual community, respectively, is also confirmed (γ21=0.381, p<0.001; β21=0.427, p<0.001). The positive effects of cognitive trust on (a) normative influence and (b) informative influence are also supported (γ31=0.287, p<0.001; γ41=0.182, p<0.05), thereby confirming H3. Furthermore, the results show positive influences of affective trust on (a) normative influence and (b) informative influence (β31=0.279, p<0.001; β41=0.334, p<0.001), thus confirming H4. Finally, the effects of sense of virtual community on (a) normative influence and (b) informative influence are significant (β32=0.189, p<0.05; β42=0.199, p<0.05), thereby supporting H5.
4.6 Mediating effects Based on the demonstrated effects of trust on sense of virtual community and social influence, this study tests the mediating effect of affective trust between cognitive trust and sense of virtual community, and of sense of virtual community between cognitive/affective trust and normative/informative influence. According to Table IV, the bootstrapping 95 percent confidence intervals from both the percentile and the bias-corrected methods do not contain zero. These results support the findings that affective trust mediates the relationship between cognitive trust and sense of virtual community and that sense of virtual community mediates between cognitive/affective trust and normative/informative influence. Furthermore, following prior research, three steps are required to test the mediating effect (Baron and Kenny, 1986; Komiak and Benbasat, 2006). In step 1, this study treats cognitive trust as the independent variable and sense of virtual community as the dependent variable, and finds a significant relationship between them (β=0.584, p<0.001). In step 2, this study builds a model with cognitive trust as the independent variable and affective trust as the dependent variable, indicating a significant effect (β=0.622, p<0.001). In step 3, this study builds a model with both cognitive trust and affective trust as independent variables and sense of virtual community as the dependent variable, and the effects of both cognitive and affective trust on sense of virtual community are significant. Thus, affective trust partially mediates between cognitive trust and sense of virtual community. Finally, this study also conducts the Sobel (1982) test to assess the significance of the mediating effect (Wood et al., 2008). The results demonstrate that affective trust significantly mediates between cognitive trust and sense of virtual community (Sobel=7.114, p<0.001).
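The two mediation checks described above, the Sobel z-test and the percentile bootstrap of the indirect effect, can be sketched as follows. All coefficients, standard errors, and the simulated data are hypothetical; for brevity the bootstrap estimates each path with a simple regression, whereas a full analysis would estimate the paths within the structural model.

```python
# Sketch of two mediation checks: Sobel (1982) z-test and a percentile
# bootstrap of the indirect effect a*b. Every number here is an
# illustrative assumption, not an estimate from the paper.
import math
import random

def sobel_z(a, se_a, b, se_b):
    """Sobel z = ab / sqrt(b^2 * se_a^2 + a^2 * se_b^2)."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

# Hypothetical path estimates: a = cognitive trust -> affective trust,
# b = affective trust -> sense of virtual community.
z = sobel_z(a=0.62, se_a=0.05, b=0.43, se_b=0.06)
print(f"Sobel z = {z:.2f}, significant: {abs(z) > 1.96}")

def slope(x, y):
    """OLS slope of y regressed on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Simulated mediation data: cognitive trust -> affective trust -> sense
# of virtual community (path coefficients 0.6 and 0.4 are arbitrary).
random.seed(1)
n = 300
ct = [random.gauss(0, 1) for _ in range(n)]
at = [0.6 * c + random.gauss(0, 0.8) for c in ct]
svc = [0.4 * m + random.gauss(0, 0.8) for m in at]

# Percentile bootstrap of the indirect effect: resample cases with
# replacement, re-estimate both paths, take 2.5th/97.5th percentiles.
boot = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    xs = [ct[i] for i in idx]
    ms = [at[i] for i in idx]
    ys = [svc[i] for i in idx]
    boot.append(slope(xs, ms) * slope(ms, ys))
boot.sort()
low, high = boot[50], boot[1949]  # 95 percent percentile interval
# An interval excluding zero indicates a significant indirect effect.
print(f"95% bootstrap CI for the indirect effect: [{low:.3f}, {high:.3f}]")
```

The two checks are complementary: the Sobel test assumes the product a*b is normally distributed, while the percentile bootstrap makes no such assumption, which is why the paper reports both.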
The mediating effects of sense of virtual community on the relationships between cognitive/affective trust and normative/informative influence are also examined. Both a bootstrapping analysis and the Sobel test are conducted, and the results in Table V show that sense of virtual community partially mediates the relationships between cognitive/affective trust and normative/informative influence. 5.1 Research implications This study improves understanding of trust at the individual level, the transformation of the sense of virtual community at the group level, and normative/informative influence at the social level. The findings indicate that the research model has good explanatory power and provides a more complete explanation of members' interactive relationships across different levels in the context of Facebook brand fan pages. Regarding the individual perspective, this study draws on the research of Lewis and Weigert (1985) and divides trust into cognitive and affective trust to assess and clarify the non-parallel relationship between these two constructs in a virtual community. The results indicate that the formation of affective trust is partially dependent on cognitive trust. Similar to Chih et al.'s (2015) study, a virtual community must establish an accurate and reliable message source in order to build relationships with members. Virtual community members form affective trust, characterized by emotional feeling and emotional attachment, once their rational demands for product information are met. Second, regarding the individual-group perspective, this study identifies two salient factors, cognitive trust and affective trust, as important antecedents of the sense of virtual community. Similar to Zhao et al.'s (2012) study, trust plays a key role in increasing brand community members' sense of belonging. The basis of interaction at the individual level is cognitive trust and affective trust.
Consumers have confidence in the relevant information disclosed within a virtual community and then interact frequently with other members if the virtual community manager devotes more effort to managing the community. In this scenario, virtual community members feel a stronger sense of belonging. Third, regarding the individual-social perspective, this study investigates individual factors involved in the normative/informative influence that page fans have on other fans (i.e. members). This suggests that because of trust in other members' abilities, or trust stemming from emotional connections, fans of a brand page change their perspectives or behaviors to meet the expectations of other fans. H3 and H4 of this study are similar to the results of Hsu et al. (2011). Cognitive trust has significant and positive effects on both normative influence and informative influence, although its effect on informative influence is comparatively weak. Therefore, a virtual community must build cognitive trust from a rational perspective in order to create a normative influence within the community and obtain acceptance by members. Affective trust has significant and positive influences on both normative influence and informative influence. A virtual community must build affective trust in order to create an informative influence among community members. Virtual community members seek others' suggestions and adopt correct and reliable information to reduce uncertainty and fear in the community. Finally, this study finds a significant and positive effect of a sense of virtual community on normative/informative influence from the group-social perspective. H5 of this study is consistent with Hsu et al.'s (2016) findings. In addition, the sense of virtual community is an important core concept in the research model. Such a sense not only strengthens the sense of belonging among virtual community members, but also establishes a bridge between trust and social influence.
Behavioral norms mean that virtual community members perceive a shared understanding during interaction and comply with it in establishing their behaviors from a community-life point of view. 5.2 Managerial implications This study advocates that brand fan pages must establish a platform that virtual community members trust and in which they are willing to participate in discussion over the long term, from the perspectives of both rational cognitive trust and emotional affective trust. More engaged customers will establish a closer trust relationship in the virtual community, so managers should provide brand- and product-related knowledge, offer interactive discussions, reply to service-related questions on fan pages to increase members' cognitive trust in the brand, and organize fan page activities to increase opportunities for emotional exchange and enhance affective trust. Cognitive/affective trust and the sense of virtual community are key factors in establishing virtual community cohesion, building reliable brand beliefs, and developing the emotional relationships that underpin a sense of belonging. For example, Toyota cleverly used social media to resolve a vehicle recall crisis by building up customer trust. The company set up a team to track negative rumors on Facebook and elsewhere, responded with the facts, and opened a dedicated Twitter account to communicate with consumers. Toyota recruited online fans to spread the discussion through the company's media channels, taking advantage of the company's decade-long performance reputation, that is, a sense of reliable trust and durable brand commitment. The company effectively used social media and other new media to offset the most negative messages. Toyota successfully met the brand challenge and defused the crisis, providing several important community management strategies for other companies.
A brand fan page must establish an ideal community environment in which virtual community members are more likely to share and deliver brand/product-related messages through cognitive/affective trust and social influence (Brown et al., 2007). The biggest benefit for consumers engaging in a brand community is therefore the formation of strong relationships among brands, products, other customers, and companies. After creating a close relationship with the brand, customers increase their trust in it and form an invisible norm that acts as a community force (Habibi et al., 2014). Virtual community members choose to gain product messages and knowledge from other members to understand their personal experiences of product use. Regarding the sense of virtual community and social influence, community managers must consider how to provide enough useful information and interactive activities to build up users' sense of belonging and cohesion through the altruistic behavior of social relationships. Virtual community members can communicate via the online platform to gain psychological support and attract other participating individuals who are willing to maintain a relationship with the community. Many people publish their own messages in a virtual community. Firms must further promote commercial behavior within the virtual community so that, by participating, people can meet their particular needs, such as community, business transactions, knowledge, and entertainment (Hagel, 1999). This means that interpersonal relationships between virtual community members are the basis for community development and generate emotional exchanges for community members (Lee and Chang, 2011). Managers must build a successful virtual community to attract the attention of internet users as well as to provide sufficient incentives for virtual community fans to share information (Ho and Dempsey, 2010).
For example, managers can not only encourage members to provide relevant electronic word-of-mouth (e-WOM) information, but also promote recently launched brands through promotional activities on social media that provide relevant information to virtual community members. Virtual community fans will be more willing to use e-WOM to promote the brand and create brand value when they are involved in the brand platform. Brand fan pages not only link with fans, but also help virtual community fans increase discussion activity and maintain the popularity of the fan page. For example, firms can disclose ideas or comments on styles and features on Facebook when new products are introduced. It is possible to investigate preferences and dissimilarities between fans and other consumers by collecting messages, comments, votes, and so forth. A virtual community creates membership, identification, and links between fans and brands by meeting consumer demands (Fournier and Lee, 2009). The results indicate that social influence among members or Facebook page fans is enhanced by cognitive trust, affective trust, and the sense of virtual community. Thus, a long-term sense of virtual community toward Facebook brand fan pages can be induced if online platform developers establish an environment in which users develop trust and participation. Methods of achieving this include providing knowledge related to brands or products, assigning staff to answer questions posed by users, and hosting activities that encourage fans to interact, thereby enhancing cognitive and affective trust between fans. After users develop cognitive trust, affective trust, and a sense of virtual community, the social influence among members is likely to increase. The thriving development of the internet and SNSs has transformed common communication methods and lifestyles, and SNSs have become popular platforms for interaction and communication. For firms, crowds are business opportunities.
Consequently, various firms aim to build platforms that facilitate positive interactions and communication with consumers on SNSs, thereby increasing product sales or brand value. Understanding consumer behavior in the context of virtual brand communities is required for firms to obtain tangible benefits and value from their interactive platforms. The results provide insights for brand companies in the creation of platforms on Facebook or other SNSs to enable B2C or C2C interactions. 5.3 Research contributions This study is based on trust theory and follows the concept of social exchange theory (Blau, 1964; Thibaut and Kelley, 1959). Blau (1964) advocated that trust within relationships is an important idea, especially in the process of exchange among individuals, because cultivating a good rapport provides people with a reason not to shirk social obligations. In the virtual environment, trust affects users' willingness to exchange messages with other members and is an important factor in continued participation in the community (Blanchard et al., 2011; Ridings et al., 2002; Yeh and Choi, 2011). Trust is a psychological state (Rousseau et al., 1998) and a multi-faceted concept (Lewis and Weigert, 1985; McAllister, 1995; Riegelsberger et al., 2003). Cognitive trust and affective trust are considered to have a non-parallel relationship in organization research (Lewis and Weigert, 1985). However, this study successfully divides trust into cognitive trust and affective trust to clarify the non-parallel relationship between these two constructs in a virtual community. Past scholars have investigated only the group level of interpersonal relationships among virtual community members, not other levels (Tonteri et al., 2011).
This study concerns different levels, distinguishes among the individual, group, and social levels, and applies trust theory to separate trust into cognitive trust and affective trust as antecedents of a sense of virtual community. In addition, social influence, regarded as susceptibility to interpersonal influence (Bearden et al., 1989), is treated as a result of a sense of virtual community. 5.4 Limitations and future research Some limitations exist in this study. First, this study adopts a cross-sectional survey, which means it does not explore the subsequent internal changes and actual usage behavior of Facebook brand fan page users. Subsequent researchers could use a longitudinal survey. Second, this study surveys only Taiwanese Facebook users and does not present a comprehensive picture of Facebook users in different countries. Subsequent researchers could investigate international brand fan page users. Third, this study conducts the survey only for Facebook fan page users; different virtual community platforms may produce different findings and inferences. This study recommends that researchers extend this work to other virtual community platforms, such as Twitter and Plurk. Fourth, this study designs the multi-level constructs using structural equation modeling only. Future researchers could redesign the questionnaire to collect data from multiple groups of respondents. Finally, this study does not consider other constructs at the individual level; a gap exists in considering users' personality characteristics, such as extraversion, introversion, narcissism, self-esteem, self-worth, and neuroticism (Nadkarni and Hofmann, 2012). At the group level, factors such as the sense of virtual community can be divided into three facets, membership, influence, and immersion (Koh and Kim, 2003), to further verify the relationships.
|
This study investigates, from multiple perspectives, the interpersonal interactions linking the individual, the community, and social influence.
|
[SECTION: Purpose] The process of knowledge acquisition and development as a mechanism of either uncertainty reduction or opportunity development is critical in understanding internationalization processes (Casillas et al., 2015; Fletcher et al., 2013). Initially, the accumulation of knowledge was perceived as a mechanism for uncertainty reduction, namely, knowledge gained from experience incrementally reduces the lack of information about a given foreign market (Johanson and Vahlne, 1977, 2009). More recently, the role of knowledge in the internationalization process has been associated with international opportunity development (Chandra, 2017), namely, the exploitation of international opportunities that leads to entry in new foreign markets and to new businesses in foreign markets serviced by the firm (Chandra et al., 2009). This conceptual paper subscribes to the latter viewpoint by looking at the internationalization process as an entrepreneurial process related to the development of international opportunities. More specifically, this paper brings together the internationalization theory broadly advanced by the international business (IB) field (e.g. Johanson and Vahlne, 1977) with the international entrepreneurship (IE) literature (Casillas et al., 2015; Forsgren, 2016). In doing so, it provides a model connecting three constructs - knowledge, international opportunities and the internationalization process - that are often analyzed separately. Particularly concerning knowledge, the proposed model separates the effects of both market and internationalization knowledge over time. While the bulk of the research has emphasized the effect of market knowledge, which is dependent on a specific foreign market, the effect of general internationalization knowledge has been overlooked (Hakanson and Kappen, 2017). And different types of knowledge should affect internationalization decisions differently (Evangelista and Mac, 2016). 
Moreover, the model assesses the internationalization process using a dynamic approach (Welch and Paavilainen-Mantymaki, 2014) that takes into consideration sequential moves in foreign markets (Casillas and Acedo, 2013). It comprises both new foreign market entry and sequential moves that happen after entry (i.e. either via mode continuation or modal shifts). It also introduces the idea of a threshold beyond which the development of international opportunities following the accumulation and combination of knowledge affects the internationalization process. Finally, this paper contributes to the knowledge-based view in internationalization processes (e.g. Casillas et al., 2009; Child and Hsieh, 2014) by showing that knowledge is not homogeneous and how it interacts to influence internationalization processes. In particular, this study posits that the combination of knowledge moderates the relationship between international opportunities and the internationalization process. Currently, the bulk of the literature suggests that the accumulation of knowledge gradually shapes the internationalization process (e.g. Hadjikhani, 1997). This study refines this view by arguing that the accumulation of knowledge shapes the identification and development of international opportunities, which will affect the internationalization process in distinct ways, before and after a threshold is achieved. The bulk of the IB research on internationalization has looked at knowledge as a mechanism for uncertainty reduction (Petersen et al., 2008). In the internationalization process of the firm conceptualized by Johanson and Vahlne (1977), knowledge is gained incrementally from experience in a foreign market (i.e. experiential knowledge). When a firm starts servicing a foreign market, it faces challenges associated with the new environment, such as cultural (Kogut and Singh, 1988) and behavioral differences (Sanchez-Peinado and Pla-Barber, 2006), which bring uncertainty. 
By accumulating experiential knowledge, the firm learns how to operate in the new environment and to deal with those differences, reducing such uncertainty (Johanson and Vahlne, 1977). In this context, the most critical type of knowledge is experiential market knowledge (e.g. Figueira-de-Lemos et al., 2011), that is, knowledge gained from prior experience in a given foreign market (e.g. Chandra et al., 2012; Johanson and Vahlne, 1977). Thus, market knowledge is a central element of the internationalization process described by Johanson and Vahlne (1977). Conversely, research on IE has looked at knowledge as one of several antecedents affecting the internationalization of the firm (Oviatt and McDougall, 1994). However, knowledge is not related to uncertainty reduction but to the identification and development of international opportunities (e.g. Ardichvili et al., 2003). International opportunities are here conceptualized as opportunities recognized by firms and connected actors to establish or expand their business outside of their home market. The recognition of international opportunities as the trigger of the internationalization process and how knowledge affects it, however, have not received systematic attention from the literature (Chandra et al., 2009; Forsgren, 2016). For instance, because studies on IE focus on international new ventures, they highlight the role of different types of knowledge, such as technological knowledge (Oviatt and McDougall, 1994), which do not necessarily matter for all firms seeking to internationalize. Moreover, because IE studies analyzing internationalization processes emphasize international new ventures, they focus on rapid internationalization processes. This stream of research is called dynamic internationalization and aims to "identify patterns considered anomalous to traditional internationalization" (Jones et al., 2011, p. 638).
Nonetheless, opportunity recognition and development that lead to the internationalization of the firm, comprising both new foreign market entry and sequential moves that happen after entry, may also be related to firms in general (e.g. manufacturing firms), which follow internationalization processes as defined by Johanson and Vahlne (1977). Such firms respond to smaller or less significant opportunities at first, which might emerge years after their foundation (Morschett et al., 2010). Throughout time, however, they increase their capabilities and resources to capture a larger number of opportunities or more significant ones (Dimitratos and Jones, 2005). It is of particular interest to examine how knowledge acquired and accumulated by the firm throughout time determines the recognition and development of international opportunities (Lamb et al., 2011; Sanz-Velasco, 2006) and consequently, the internationalization process of such firms (Forsgren, 2016). It should be noted that with the emergence of the IE field in the early 1990s, following the study of McDougall (1989), opportunities have also started to be incorporated into the IB literature (Ardichvili et al., 2003; Johanson and Vahlne, 2009; Vahlne and Johanson, 2013). For example, Johanson and Vahlne (2006, 2009), rethinking their initial model, suggested that the internationalization process could be considered as a process of recognition and development of international opportunities. Nonetheless, market knowledge continued to be emphasized, despite the recognition that other types of knowledge, such as internationalization knowledge, also affect the identification and development of international opportunities and the internationalization process. In fact, Johanson and Vahlne (2009) suggested that their initial internationalization process model overlooked the role of internationalization knowledge, which should be analyzed more thoroughly. 
Internationalization knowledge is not market-specific and is gained from the accumulation of experiences across foreign markets (Eriksson et al., 1997). Previous research has only recently acknowledged that internationalization knowledge is critical to the unfolding of the internationalization process over time (e.g. Hakanson and Kappen, 2017), which is consistent with this paper's analysis of not only new foreign market entry but also sequential moves that happen after entry. In conclusion, by bringing together insights from both IB and IE research, this paper provides a better understanding of the role of knowledge, particularly market and internationalization knowledge, in the development of international opportunities that leads to the internationalization process. First, knowledge accumulated from prior experience decreases the time needed to identify new opportunities (Bingham and Davis, 2012). Opportunities are recognized more quickly and more often when the firm has more familiarity with the foreign market. Such familiarity increases the alertness of firms and their capacity to perceive growth and expansion opportunities (Autio et al., 2000). Second, the accumulation of knowledge allows the establishment and recombination of resources that support new ways of seeing opportunities (Cassia and Minola, 2012; Chandra et al., 2012). Opportunities may be recognized not because the firm acquired new knowledge per se, but rather due to a recombination of resources that changes the way it perceives opportunities within foreign markets. For example, when a firm enters a foreign market serendipitously, it may perceive the benefits of internationalization and hence reallocate resources to it. Third, it is the continuous use of this accumulated knowledge that contributes to the development of opportunities (Ardichvili et al., 2003). According to organizational learning theory (e.g.
Cohen and Levinthal, 1990), the firm can only integrate knowledge into its processes when and if it continuously uses it. Therefore, international opportunity recognition and development should be assessed as a process that evolves over time (Ardichvili et al., 2003). Finally, the internationalization process, comprising both new foreign market entry and sequential moves that happen after entry, not only depends on the accumulation of knowledge but also on the types of knowledge that interact throughout time (Gao and Pan, 2010). Previous studies have acknowledged several different types of knowledge that affect the internationalization process (e.g. Eriksson et al., 1997). Nonetheless, the effects of each type of knowledge on the internationalization of the firm over time need to be better distinguished (Arte, 2017; Li et al., 2015; Perkins, 2014). As aforementioned, this study focuses on market and internationalization knowledge, which are fundamental types of knowledge affecting the internationalization of any firm operating abroad (Figueira-de-Lemos and Hadjikhani, 2014). Other types of knowledge, such as technological knowledge, may be more related to specific industries, markets or categories of firms (e.g. SMEs, INVs), and thus are not analyzed in this study. For example, firms that internationalize from inception (i.e. INVs) rely heavily on technological knowledge developed in-house (Oviatt and McDougall, 1994).

Market knowledge and opportunities

Market knowledge is defined by Johanson and Vahlne (1977) as information about the particularities of servicing a specific foreign market and is usually acquired by the firm as it operates in that market. This type of knowledge is specific to a particular foreign market and, therefore, cannot be easily transferred to other markets serviced by the firm (Eriksson et al., 1997; Johanson and Vahlne, 1977). 
Market knowledge was initially perceived as a mechanism for uncertainty reduction that gradually led to more commitment in a particular foreign market (Forsgren, 2002). In this tradition, the accumulation of market knowledge contributes to overcoming the liability of foreignness, which represents the additional costs faced by a firm due to its unfamiliarity with the host country environment (Zaheer, 1995; Johanson and Vahlne, 2009). Such liability may result in restricted access to the local market and local networks that challenge the firm's expansion in foreign markets (Johnsen and Johnsen, 1999; Zaheer and Mosakowski, 1997). With the development of the IE literature (Jones et al., 2011), market knowledge started to be perceived not only as a mechanism to reduce the liability of foreignness but also as an antecedent for the successful recognition of opportunities (Ardichvili et al., 2003; Arte, 2017). By knowing the culture, institutions, customers, competitors and market conditions of a particular foreign market, the firm becomes more aware of specific international opportunities existent in the market (Zhou, 2007). Previous studies have acknowledged that the uncertainty reduction from accumulated market knowledge may lead to the development of international opportunities (Johanson and Vahlne, 2006). Such studies are interested in the internationalization of the firm under uncertainty in general. However, this paper emphasizes the fact that firms that are internationalizing develop international opportunities under uncertainty (Alvarez and Barney, 2019). Rather than following a reactive internationalization process, they proactively engage in the development of opportunities when they build market knowledge in any given foreign market. Hence, this paper suggests that the accumulation of market knowledge over time allows firms to identify and exploit new opportunities in a given foreign market after entry, as detailed below. 
The influence of market knowledge on the internationalization process analyzed through an IE lens is a relatively recent strand of research, which focuses on rapid internationalization processes (Jones et al., 2011). Research on international new ventures suggests that market knowledge can be acquired from a variety of sources of information such as the firm's own predisposition to innovativeness, risk-taking, and proactiveness, its exposure to cultural diversity (Kropp et al., 2008; Zhou, 2007), and the use of focused research conducted by specialized agents (Spence and Crick, 2009). However, Lord and Ranft (2000, p. 576) argue that market knowledge may be hard to obtain, because of the lack of "well-developed and widely available sources of market information" in some foreign markets, such as emerging markets. This paper suggests that market knowledge that leads to the recognition of opportunities is principally formed either in everyday entrepreneurial practices or through social interactions (Mainela et al., 2014). The former relates to prior knowledge, built from increasing commitment in each foreign market (Spence and Crick, 2009). It is experiential and accumulated throughout time. Gao and Pan (2010) show that firms with a longer time of local operations accumulate more market knowledge, speeding up the pace of the internationalization process in that foreign market. Such experiential market knowledge is important to support a firm's activities in foreign markets (Evangelista and Mac, 2016), through both the recognition and development of international opportunities. It increases the proclivity of the firm to identify opportunities, while contributing to its ability to develop them (Nordman and Melen, 2008). The latter relates to networks of relationships (Hohenthal et al., 2014; Vasilchenko and Morrish, 2011), which makes embeddedness in the foreign market a pivotal issue in the recognition and development of international opportunities. 
By building relationships with customers, suppliers, agents and other actors which are embedded in that specific market, the firm may take advantage of opportunities initially identified by those actors (Johanson and Vahlne, 2006, 2009). Therefore, the acquisition and accumulation of market knowledge, either by experience or by social interactions with actors embedded in each foreign market serviced by the firm, is a critical aspect of the internationalization process (Blomstermo et al., 2004). By increasing the awareness and alertness of the firm, as well as its familiarity with a specific foreign market, this paper proposes that market knowledge will lead to the development of international opportunities that happen after new foreign market entry. Nonetheless, because market knowledge is difficult to transfer across markets without incurring substantial opportunity costs (Eriksson et al., 1997; Johanson and Vahlne, 1977), market knowledge will only lead to the development of international opportunities in foreign markets where the firm already operates. Although market knowledge has been extensively associated with internationalization, P1 highlights the role of market knowledge in identifying international opportunities, rather than triggering the internationalization process via the reduction of uncertainty in the commitment of resources in foreign markets. Hence:

P1. Market knowledge is positively related to the development of international opportunities.

Internationalization knowledge and opportunities

Differing from market knowledge, internationalization knowledge is related to accumulated experience from servicing foreign markets (Eriksson et al., 1997). It is not specific to a particular foreign market (Eriksson et al., 2000), but is rather based on general knowledge of how to conduct business abroad (Freeman et al., 2012; Nordman and Melen, 2008). 
By drawing on previous experiences in foreign markets, the firm incorporates learned routines that support not only its current internationalization processes (Sapienza et al., 2006) but also its entry and subsequent expansions in new foreign markets not yet serviced by the firm (Hakanson and Kappen, 2017; Prashantham and Young, 2011). General knowledge of servicing foreign markets gives the firm an overall understanding of the internationalization process that can be applied across all foreign markets (Eriksson et al., 1997; Johanson and Vahlne, 1977). Consequently, the firm may benefit from not only homogeneous experiences (i.e. from similar environments) but also from heterogeneous experiences (i.e. from very different environments) in foreign markets (Kim et al., 2012), since this knowledge can be more freely transferred throughout its internationalization projects (Child and Hsieh, 2014). Similarly to market knowledge, internationalization knowledge was initially perceived as a mechanism for uncertainty reduction (e.g. Kim et al., 2012). Hitt et al. (2006), for example, suggest that by accumulating internationalization knowledge, the firm facilitates its internationalization process via the creation of social capital and useful resources, reducing its risk. Nonetheless, internationalization knowledge not only encourages firms to enter new markets and to use different servicing modes (Casillas and Acedo, 2013) through uncertainty reduction, but also through the recognition and development of new international opportunities. Prashantham and Young (2011, p. 283) suggest that internationalization knowledge is a "versatile" and "country-neutral" knowledge that exposes the firm to unexpected international opportunities in general, which leads to the expansion in foreign markets. Nonetheless, it also exposes the firm to new knowledge and resources that increase its disposition to explore new and different opportunities (Nachum and Song, 2011). 
The accumulation of international experience contributes to the alertness and willingness of the firm in developing new international opportunities in general, even though they may be in different foreign markets where the firm has no specific market knowledge. As with market knowledge, internationalization knowledge will be beneficial in markets already serviced by the firm. Even when the firm is already well established in those markets, it can use knowledge acquired elsewhere to better manage its relationships (Johanson and Vahlne, 2009) and expand its operations through the development of new opportunities in those foreign markets. This happens because the firm develops a "mindset" to more proactively search for opportunities, planning the internationalization process rather than just reacting to sporadic opportunities (Freeman et al., 2012). Accordingly, when the firm accumulates internationalization knowledge, it becomes better prepared to take advantage of opportunities as well as to manage them. Moreover, internationalization knowledge will be equally important to support entry into new foreign markets. Prashantham and Young (2011, p. 283) suggest that internationalization knowledge leads to practices such as "country market screening, and evaluating strategic partners and distributors" that lead to new foreign market entries. Accordingly, Vasilchenko and Morrish (2011) showed that by building on previous experiences firms may develop a network of relationships abroad, engaging in cooperative behavior that ultimately leads to successful entry into new foreign markets. In sum, because internationalization knowledge reflects general knowledge of conducting business abroad (Freeman et al., 2012; Nordman and Melen, 2008), it can be used not only in foreign markets where the firm operates but also in new foreign markets. 
Therefore, internationalization knowledge will lead to the development of international opportunities both in foreign markets already serviced by the firm and in new foreign markets not yet serviced by the firm. Hence:

P2. Internationalization knowledge is positively related to the development of international opportunities.

As previously discussed, this paper looks at market knowledge and internationalization knowledge as antecedents of the development of international opportunities, which lead to the internationalization process, comprising both new foreign market entry and sequential moves that happen after entry. Foreign market entry is a central element of internationalization research, and therefore has been extensively studied (Shaver, 2013). Nevertheless, the recognition of international opportunities that leads to entry in new foreign markets is quite recent in IE (Jones et al., 2011). Studies on IE that connect opportunities and internationalization focus on international new ventures following a rapid internationalization process (Jones et al., 2011). However, Dimitratos et al. (2016, p. 1220) suggest that research in the IE literature "[...] can shift the attention from the study of INVs to that of all opportunity-driven types of internationalized firms." While previous research has brought attention to the role of opportunities as drivers of the internationalization process of not only international new ventures but also firms in general (e.g. Johanson and Vahlne, 2006), such a relationship is not explicitly stated. Chandra et al. (2012, p. 75), for example, state that "Other authors note, and we agree, that 'the opportunity side of the internationalization process is not very well developed' (Johanson and Vahlne, 2006, p. 167)." 
And such a side is not well developed because, although studies acknowledge the role of opportunities in the internationalization process, uncertainty reduction is still perceived as the main driver of the internationalization process (Figueira-de-Lemos and Hadjikhani, 2014). Using an IE lens to analyze foreign market entry, researchers have emphasized the mechanisms through which firms recognize opportunities in new foreign markets. Chandra et al. (2009), for example, suggest that firms exhibiting a strong entrepreneurial orientation will be more likely to recognize first-time opportunities in foreign markets. Accordingly, Sapienza et al. (2006) argue that the firms' proactivity in the pursuit of international opportunities will lead to new foreign market entry. Such entrepreneurial orientation or proactivity is developed throughout time, being considered as a "long-term behavioural phenomenon" (Jones and Coviello, 2005, p. 297). In this vein, internationalizing firms may act entrepreneurially, actively identifying and developing opportunities in new foreign markets. This study, therefore, suggests that by identifying and developing international opportunities, firms are more likely to actively internationalize. A natural pathway for pursuing internationalization is through entering new foreign markets. Firms that develop international opportunities are more prepared and prone to entering new foreign markets. In other words, by developing more international opportunities, firms will enter more new foreign markets. Hence:

P3a. The development of international opportunities is positively related to entry in new foreign markets.

Nevertheless, the internationalization process comprises not only new foreign market entry but also sequential moves that may happen throughout time in markets already serviced by the firm (Casillas and Acedo, 2013; Gao and Pan, 2010). 
The entrepreneurial internationalization process relates to its international pathway (Jones et al., 2011), and sequential moves, as much as the entry choice, shape such a process (Casillas and Acedo, 2013; Gao and Pan, 2010). The bulk of the literature, however, emphasizes entry modes, often ignoring the dynamics of the internationalization process by not considering the sequential moves in foreign markets where the firm already operates (for a critique, see Shaver, 2013; Welch and Paavilainen-Mantymaki, 2014; for exceptions, see Mtigwe, 2005; Suarez-Ortega and Alamo-Vera, 2005). Sequential moves refer to the activities that follow new foreign market entry (e.g. continued sales) and are associated with within-market expansion or eventual withdrawal (Gao and Pan, 2010). Considering sequential moves is important because the internationalization process is not static and its evolution after foreign market entry is equally important for a better understanding of such a process (Benito et al., 2009; Gao and Pan, 2010). Sequential moves may follow two paths - either mode continuation or modal shift (Benito and Welch, 1994; Benito et al., 2009). In the former, there is no modal change and the firm operates in a given foreign market by using its initial entry mode (Benito and Welch, 1994; Swoboda et al., 2015). It does so because the existing servicing mode is considered adequate for conducting business in a given foreign market. Switching costs may also be perceived as high, deterring the firm from switching the initial mode; or inertia may play a role, that is, the firm simply does not change the initial mode due to inertial forces (Benito et al., 2009). This paper suggests that mode continuation is associated with the development of opportunities below a threshold. This argument is developed in the following paragraphs. 
Mode continuation, hence, is a confirmation of the firm's initial choice of servicing mode, independently from the type of servicing mode initially chosen (Benito et al., 2009). In the latter, the firm shifts its servicing mode to adjust its operations within a given foreign market (e.g. by shifting from a sales agent to a wholly owned subsidiary). By shifting the mode of operation, firms may be more responsive to foreign market needs (Benito et al., 2009). Modal shifts, nonetheless, do not necessarily imply higher commitment in a given foreign market. Firms can also shift the entry mode to adjust operations when the servicing mode currently being used is not appropriate (e.g. by shifting from a sales agent to exporting). Consequently, this study suggests that the development of international opportunities will be related to sequential moves. More specifically, firms will invest resources, time, and effort to identify and develop opportunities not only in new foreign markets but also in markets where they already operate. The development of international opportunities related to sequential moves allows firms to maintain and even strengthen their position in any given foreign market. In this vein, sequential moves are the pathway after new foreign market entry. Initially, however, such sequential moves will happen only via mode continuation. By accumulating knowledge and building relationships in foreign markets throughout time, via the development of new opportunities in those markets, the firm will be more inclined to continue servicing them by the continued use of the initial entry mode. For example, a firm may enter a given foreign market via direct exporting to one customer. Throughout time, because the firm is more familiar with that foreign market and with the exporting process in general, it expands its customer base there by exporting to new customers. 
This is consistent with the general conceptualization of international opportunities as the development of new customers (Reuber et al., 2018). The firm, thus, developed new international opportunities in that foreign market but did not change the initial servicing mode (i.e. direct exporting), which corresponds to mode continuation. Sequential moves via mode continuation are an easier pathway of the internationalization process after new foreign market entry, since they do not require leaps of resources and knowledge. Formally stated:

P3b. The development of international opportunities is positively related to sequential moves using mode continuation in foreign markets serviced by the firm.

In the example above, the firm chose to keep using the same servicing mode of entry even after it develops more international opportunities in the given foreign market. This may be because the firm has not yet accumulated enough knowledge to develop a sufficient number of international opportunities. Shifts in the servicing mode require more combined knowledge and understanding of the market needs. The development of opportunities per se, however, will not lead to shifts in the entry mode chosen by the firm to enter a given foreign market. It is only after a threshold is achieved that the development of international opportunities will indeed allow for a change in the firm's current internationalization process that requires riskier decisions, such as shifting the entry mode (Benito et al., 2005; Clark et al., 1997). The idea of a threshold is in line with the use of self-organized criticality (SOC) in organizations advanced by Andriani and McKelvey (2009, p. 1061), in which organizational phenomena do not always follow a linear distribution - instead, they evolve "toward a critical state." When such a state is achieved, any additional related effort or interaction brings change. 
The authors argue that the IB "arena is especially vulnerable to SOC effects" (Andriani and McKelvey, 2007, p. 1215), in accordance with the argument presented in this study. Accordingly, prior studies have suggested that servicing modes are difficult to change, whether they represent higher or lower commitment (e.g. Anderson and Coughlan, 1987), which has been corroborated by empirical evidence (e.g. Pedersen et al., 2002). Nonetheless, after the firm builds enough knowledge that is translated into a consistent number of international opportunities developed in a given foreign market, it is able to overcome the difficulties associated with shifting the entry mode (e.g. switching costs and inertia). Hence, achieving such a threshold will lead to sequential moves where the firm changes its commitment to better serve the needs of a given foreign market by shifting the servicing mode. Formally stated:

P3c. The development of international opportunities beyond a threshold is positively related to modal shifts in foreign markets serviced by the firm.

In the example preceding P3b, the firm entered a given foreign market using direct exporting as the entry mode and continued using direct exporting. If the relationship between the development of international opportunities and the internationalization process were linear, the firm would have had to switch or at least adjust the servicing mode. Instead, it continues using the same servicing mode, even after it expands its customer base and increases the volume of foreign sales. Nonetheless, imagine that sales to the firm's current customers keep increasing and more new customers are established. The firm evolves toward a critical state where exporting may not be adequate or the satisficing solution anymore. It has achieved this threshold and, hence, switches its servicing mode to a sales subsidiary, for example. 
This example shows that the relationship between the development of international opportunities and modal shifts is not linear. The firm needed to achieve a threshold in order to change its servicing mode. Nonetheless, as previously stated, modal shifts do not necessarily imply higher commitment in a given foreign market. Continuing with the example above, the firm established a sales subsidiary and continued developing opportunities in the foreign market. Nonetheless, after developing a certain number of opportunities without changing or adjusting the servicing mode (i.e. sales subsidiary), the firm learnt it had made a bad decision since, for example, the marginal costs of the sales subsidiary outweighed its marginal returns. Again, the firm evolved toward a critical state where a sales subsidiary was not adequate anymore. After reaching this threshold, the firm decided to divest the sales subsidiary and use a local sales representative instead. In P1 and P2, market and internationalization knowledge are analyzed independently, to show how each type of knowledge relates to the development of international opportunities. In P3a and P3b, the development of international opportunities that follows the accumulation of market and internationalization knowledge is connected to two aspects of the internationalization process - new foreign market entry and sequential moves related to mode continuation, respectively. In P3c, the idea of a threshold that affects sequential moves related to modal shifts is introduced. In addition, this paper proposes that both types of knowledge may combine to form the international knowledge stockpile of the firm. It is assumed that both market and internationalization knowledge interact through a mutual influence process, in which the former contributes to the development of the latter and vice versa. For example, Barkema et al. (1996) suggested that learning effects in a foreign market (i.e. 
market knowledge) were associated with learning from internationalization processes in different markets (i.e. internationalization knowledge), even though the degree of learning differed depending on similarities between those markets. Accordingly, knowledge in different markets accumulates over time to become a firm-specific knowledge that can be relevant to foreign markets serviced by the firm or foreign markets it intends to service (Eriksson et al., 1997). Thus, the international knowledge stockpile as suggested here is a heterogeneous reservoir (Hutzschenreuter and Matt, 2017). It addresses two apparently contradictory aspects of knowledge development - diversity (i.e. market knowledge) and transferability (i.e. internationalization knowledge). By doing so, it increases the probability of growth and survival in foreign markets (Kogut and Zander, 1992). It may also provide the firm with differential efficiencies such as market diversification and innovation (Foss, 1996). The international knowledge stockpile can also be associated with pace in internationalization processes. First, it may enable the firm to respond to market turbulences more rapidly (Miller, 2002) and, most importantly, it may accelerate the firm's internationalization (Casillas et al., 2015), particularly in terms of the speed of modal shifts (Chetty et al., 2014). Considering that the international knowledge stockpile constitutes an important asset for internationalizing firms, this paper suggests that its effect is different from the effect of each type of knowledge that comprises it - market and internationalization knowledge - when analyzed separately. This is because the international knowledge stockpile enables the firm to capitalize on international opportunities, thus strengthening the relationship between international opportunities and the internationalization process. 
Therefore, this paper suggests that the international knowledge stockpile will moderate the relationship between the development of international opportunities and the internationalization process. It builds on the idea that the knowledge reservoir of the firm comprises both in-use and idle knowledge (Penrose, 1959), that is, utilized and underutilized knowledge. In this sense, the international knowledge stockpile is a mix of both market and internationalization knowledge in-use, which is knowledge the firm uses to advance its internationalization process, and market and internationalization idle knowledge, which is knowledge the firm can explore in near-future internationalization activities. In other words, the firm accumulates both market and internationalization knowledge over time. When those two types of knowledge are combined in a productive way, a baseline is achieved - the firm establishes its international knowledge stockpile - strengthening the relationship between the development of international opportunities and the internationalization process. "Durable and repetitive interactions" in foreign markets are the main drivers of this process (Eriksson et al., 1997, p. 354). First, whereas the development of international opportunities will lead to more new foreign market entries (P3a), the international knowledge stockpile will strengthen such a relationship (P4a). The international knowledge stockpile may lead the firm to enter more new foreign markets in shorter intervals. This is because the firm possesses knowledge that results from different environments and knows how to transfer such knowledge to new foreign markets. Moreover, the international knowledge stockpile may also enable the firm to not only enter more markets in shorter intervals, but also enter multiple markets simultaneously (Wang and Suh, 2009). 
The literature acknowledges that knowledge supports the development of multiple internationalization processes within a firm, that is, entering and servicing multiple foreign markets at the same time (Welch and Paavilainen-Mantymaki, 2014). Hence, instead of entering foreign markets sequentially, as posited by Johanson and Vahlne (1977), the international knowledge stockpile enables the firm to capitalize on opportunities in multiple foreign markets at the same time. And this is possible because the firm possesses this heterogeneous reservoir of in-use and idle knowledge, which can be used to address multiple activities - in this case, closely spaced or simultaneous entries in different foreign markets. Formally stated:

P4a. The relationship between the development of international opportunities and new foreign market entry is moderated by the firm's international knowledge stockpile.

Second, whereas the development of international opportunities will lead to more sequential moves via mode continuation in any given foreign market serviced by the firm (P3b), the international knowledge stockpile will strengthen such a relationship (P4b). The accumulation of knowledge has been associated with the internationalization process after new foreign market entry (Dimitratos et al., 2016). This means that such a combination of knowledge reinforces the fact that the firm has chosen the right foreign market entry mode to service that specific foreign market, because it understands not only the needs of that specific foreign market but also its options and their outcomes when using different foreign market entry modes. Such reinforcement will lead to a proactive search for opportunities that do not require change in the servicing mode but that still strengthen the internationalization process. In addition, the international knowledge stockpile enables the firm to better assess the switching costs associated with modal shifts before attempting to switch the servicing mode. 
If it believes that the switching costs are too high, it will avoid switching the servicing mode. In this context, the international knowledge stockpile allows the firm to assess such switching costs more efficiently. This is because the international knowledge stockpile is associated with heuristics and routines on how to measure and monitor such costs, due to previous experiences. And the firm, because of its international knowledge stockpile, may decide to continue using a given servicing mode while adjusting such mode (e.g. switching sales agents). Hence:

P4b. The relationship between the development of international opportunities and sequential moves using mode continuation in foreign markets serviced by the firm is moderated by the firm's international knowledge stockpile.

Third, only after a threshold determined by a certain number of developed international opportunities is achieved (Barkema and Drogendijk, 2007) will the firm engage in modal shifts (P3c). This paper also suggests that the international knowledge stockpile of the firm will strengthen this relationship (P4c). As discussed above, modal shifts reflect changes in the commitment toward the internationalization process, since the firm shifts its entry mode to better serve the needs of a given foreign market (Benito et al., 2009). After the international knowledge stockpile is built, the firm develops international opportunities that will reach the threshold faster, which will allow it to shift the servicing mode. This idea is coupled with the fact that the firm, because of its international knowledge stockpile, is more experienced in switching its servicing mode. In other words, there is a trigger for modal shifts after a certain threshold is achieved. And there is the accumulated experience after this threshold is achieved - the firm knows about different servicing modes and how to better use them. Hence, the firm is able to reduce searching costs (i.e. 
identifying new servicing modes) and implementation costs (i.e. shifting the servicing mode itself). This enables the firm to engage in modal shifts faster and more efficiently (Casillas et al., 2015). In other words, the threshold, in terms of the development of international opportunities, will be achieved in shorter intervals. If the time to reach the threshold is reduced by the international knowledge stockpile, modal shifts will not only happen more frequently but also at shorter intervals. Because modal shifts require not only a combination of market and internationalization knowledge but also an additional effort from the firm to proactively develop international opportunities and change the course of its internationalization process, as determined by the threshold, it is only after this threshold is achieved that the firm's international knowledge stockpile will moderate the relationship between international opportunities and modal shifts. Once the threshold is reached and the international knowledge stockpile is built, the firm will engage in more modal shifts in different foreign markets and/or will shift the servicing mode in a given foreign market faster. Formally stated: P4c. The relationship between the development of international opportunities beyond a threshold and modal shifts is moderated by the firm's international knowledge stockpile. In sum, the combination of market and internationalization knowledge configures the knowledge stockpile of the firm, which reinforces the relationship between the development of international opportunities and the internationalization process by positively moderating that relationship. Thus, over and above the direct relationship between knowledge and international opportunities, this paper proposes an indirect relationship between knowledge, international opportunities, and the internationalization process comprising both new foreign market entry and sequential moves that happen after entry.
Doing so is important because, while it recognizes that different types of knowledge have different effects on the internationalization process, it considers that these types also combine to form knowledge that is firm-specific - the firm's international knowledge stockpile. Moreover, the international knowledge stockpile connects the three constructs advanced by the paper - knowledge, international opportunities, and the internationalization process. P1 and P2 connect knowledge to international opportunities; the set of P3a-P3c connects international opportunities and the internationalization process; the international knowledge stockpile completes the model, offering a refined understanding of internationalization processes as a function of both knowledge (i.e. market and internationalization) and the development of international opportunities. The conceptual model is presented in Figure 1. It shows that both market and internationalization knowledge are positively related to international opportunities, which, in turn, are related to the internationalization process. From the IB field, the model borrows the notion of market knowledge and its relation to the overall internationalization process. From the IE field, it borrows the notion of international opportunities as an antecedent of the internationalization process. From a combination of both fields, it emphasizes the role of internationalization knowledge and the importance of looking not only at new foreign market entry but also at sequential moves that happen after entry. The main points of novelty introduced by the model are, first, showing that the path to sequential moves using mode continuation is different from that of sequential moves comprising modal shifts (i.e. the idea of a threshold) and, second, showing that market and internationalization knowledge combine to form the knowledge stockpile of the firm, which moderates the relationship between international opportunities and the internationalization process.
By doing so, it offers a finer-grained view of the internationalization process, which can be entrepreneurial (i.e. related to the identification and development of opportunities) even for firms in general. By combining the IB and IE literatures, this paper presents a novel framework connecting three constructs that are typically analyzed separately in the literature - knowledge, international opportunities, and the internationalization process. Grounded in the idea that the internationalization process is the result of the development of international opportunities, this study looks at the types of knowledge that shape such opportunities, as well as how those types interact to moderate the relationship between international opportunities and the internationalization process. On the one hand, research on IE often connects knowledge to the development of international opportunities (e.g. Chandra et al., 2009), but without highlighting how such development affects the internationalization process of the firm, comprising both new foreign market entry and sequential moves in all foreign markets where the firm operates. This literature also focuses on international new ventures, thus disregarding other types of internationalized firms (Dimitratos et al., 2016). On the other hand, research on IB connects knowledge directly to internationalization, usually focusing on new foreign market entry (e.g. Shaver, 2013). It also posits that the main driver of the internationalization process is uncertainty reduction via knowledge development (Figueira-de-Lemos et al., 2011). This study explicitly connects knowledge, international opportunities, and the internationalization process during and after new foreign market entry and provides testable propositions that contribute to advancing and bringing together the IB and IE literatures.
First, this study suggests that both market and internationalization knowledge will be positively related to the development of international opportunities, emphasizing the idea that the internationalization process may also be triggered and driven by the proactive identification and exploitation of opportunities, rather than mainly by uncertainty reduction. While uncertainty reduction has been important to comprehend internationalization (Figueira-de-Lemos et al., 2011), understanding how the development of international opportunities under uncertainty affects internationalization processes can better inform research on the internationalization of the firm (see Alvarez and Barney, 2019 for a discussion on uncertainty and opportunities). Second, it suggests that the development of international opportunities will be positively related to both new foreign market entry and sequential moves in foreign markets already serviced by the firm, thus showing that opportunities matter in earlier as well as in later epochs of the internationalization of the firm. Few studies do so (e.g. Benito et al., 2009). The bulk of the research on internationalization equates foreign market expansion to entry in a single or a few foreign markets (Shaver, 2013). Finally, this study explains how the relationship between knowledge and international opportunities leading to foreign market entries and sequential moves where the servicing mode does not change (i.e. sequential moves using mode continuation) is different from that leading to sequential moves where the servicing mode changes (i.e. modal shifts). Modal shifts will occur only after a certain threshold related to the number of international opportunities developed in foreign markets is achieved. 
And it suggests that the relationship between the development of international opportunities and both new foreign market entry and sequential moves, either via mode continuation or modal shifts, will be moderated by the international knowledge stockpile of the firm. By introducing the idea of a threshold and the concept of the international knowledge stockpile, this paper refines the view that the effects of the accumulation of knowledge are gradual and incremental throughout the internationalization process.
Theoretical contributions
This paper offers the following contributions. First, according to Forsgren (2016, p. 2), "incorporating [...] entrepreneurship into the [internationalization] model needs more consideration." The IB literature, which focuses on the internationalization process, and the IE literature, which focuses on the development of international opportunities, have evolved quite independently over time. By incorporating IE into the IB literature, this paper shifts the focus of the internationalization process from uncertainty reduction to an entrepreneurial process of developing international opportunities. This paper argues that the emphasis on the development of international opportunities is the right pathway to bring those literatures together, as suggested by Johanson and Vahlne (2009). This paper also shows that the entrepreneurial process of opportunity recognition and development is important for any internationalized firm. This extends the IE literature by suggesting that the understanding of internationalization processes of firms other than international new ventures will benefit from incorporating the idea that such processes result from the development of international opportunities (Dimitratos et al., 2016). Second, this study also answers recent calls to assess the internationalization process using a dynamic approach (e.g. Welch and Paavilainen-Mantymaki, 2014).
Sequential moves that happen after entry shape the long-term growth and hence the success or failure of internationalization processes (Casillas and Acedo, 2013). Nonetheless, the bulk of the IB literature emphasizes entry in foreign markets, ignoring the role of time and of the subsequent steps followed by a firm that indeed lead to its expansion in foreign markets (Gao and Pan, 2010). This study not only explains the antecedents of both new foreign market entry and sequential moves, but also disaggregates the latter into mode continuation and modal shift. Most importantly, by doing so it allows for different effects of the development of international opportunities on new foreign market entry and on each sequential move that happens after it. If the internationalization process were conflated with new foreign market entry, one would assume that the effects of the development of international opportunities during entry are the same as for sequential moves that happen after entry. Third, this study contributes to the knowledge-based view in internationalization processes (e.g. Casillas et al., 2009) by showing that knowledge in internationalization processes comprises both market and internationalization knowledge. This means that knowledge is not homogeneous in the internationalization process. Following the suggestion of Li et al. (2015, p. 919), this paper assumes that different types of knowledge may affect international opportunities in different ways. This differs from previous literature, which looks at international knowledge as a whole and connects it directly to the internationalization process. This paper argues that market knowledge will be positively related to the development of international opportunities by increasing the firm's familiarity with local practices and businesses in a specific foreign market.
Moreover, internationalization knowledge will also be positively related to the development of international opportunities, but by increasing the firm's ability to do business abroad. In addition, this paper introduces the concept of the firm's international knowledge stockpile. By showing that the combination of market and internationalization knowledge develops into the international knowledge stockpile of the firm, and that this stockpile will influence the relationship between international opportunities and the internationalization process, this paper helps show how knowledge interacts to shape that process. Extant literature assumes that the accumulation of experience abroad starts shaping the internationalization process from its very beginning. This study, however, focuses on the moderating effect of the accumulation of knowledge on the internationalization process over time. Fourth, this study introduces the idea of a threshold. It suggests that, before a threshold related to the number of developed international opportunities is achieved, the knowledge the firm possesses is not yet sufficient to shape modal shifts in foreign markets serviced by the firm. In other words, the accumulation of knowledge starts after the firm's first operations abroad, which may be due to emergent chance opportunities, but it is only after a certain number of international opportunities are developed that the internationalization process starts to be shaped by modal shifts (e.g. from sporadic sales to a planned internationalization process, either increasing or decreasing the level of commitment in a given foreign market). This is particularly important in that it allows for the possibility of different pathways of internationalization (Mathews and Zander, 2007), in which the firm may follow different trajectories in different foreign markets.
Such trajectories are not always linear and are often interdependent, suggesting that the internationalization process may be more complex than previously established in the IB literature.
Managerial implications
The propositions developed in this study offer insights for firms that wish to internationalize or that have already started internationalizing. First, firms may proactively search for opportunities abroad, even when they are uncertain about their ability or willingness to internationalize. This is because the internationalization process will only become systematic after a certain number of developed international opportunities is achieved. Initial opportunities may be developed via trial and error, without compromising the firm's internationalization process as a whole. In addition, acknowledging that the effect of the learning process associated with internationalization is not always gradual and incremental allows firms to better plan their internationalization processes to fit their strategic goals. For instance, firms can use internationalization knowledge acquired in servicing a specific set of foreign markets to purposefully develop international opportunities that enable them to shift their operation mode in a different foreign market. Doing so allows them to shape their internationalization processes in terms of their resource commitment (Acedo and Casillas, 2007; Casillas and Acedo, 2013; Gao and Pan, 2010). Finally, this study shows that firms do not necessarily need to build their internationalization process gradually and reactively. The accumulation of both market and internationalization knowledge allows firms to enter several foreign markets and to keep servicing foreign markets where they already operate. But because a more systematic internationalization process via modal shifts only happens after a threshold (i.e. a certain number of developed international opportunities) is reached, the sooner the firm reaches it, the better.
Doing so usually involves a willingness to take risks by entering foreign markets where the firm has little or no market knowledge but is supported by structural and management processes developed through the accumulation of internationalization knowledge. In sum, managers should be less cautious and proactively develop international opportunities that enable their firms to enter and evolve in foreign markets. The model developed in this study suggests that managers of firms that wish to internationalize should also proactively transfer knowledge across different markets and develop routines and heuristics that allow such knowledge to be integrated into the firm. Such a process of accumulation, transfer and integration of knowledge should be continuous, to facilitate the development of international opportunities that allows the firm to reach the threshold beyond which the internationalization process becomes more systematic (i.e. via modal shifts).
Limitations and further research
Because this is a conceptual paper, the propositions presented in this study have not been empirically tested. The first direction for further research is, hence, to test them quantitatively using panel data, since longitudinal data are needed to track the internationalization process over time. A second limitation is that only two types of knowledge are analyzed - market and internationalization. However, research suggests that other types of knowledge may also affect the internationalization process, such as technological knowledge (Fletcher and Harris, 2012). Previous studies also suggest that market knowledge should be differentiated from institutional knowledge, which is not easily acquired (Eriksson et al., 1997). By incorporating other types of knowledge and differentiating market knowledge from institutional knowledge, future research may inform the relationship between different types of knowledge and internationalization processes.
Doing so will contribute to recent research suggesting that the internationalization process is contingent on several different types of knowledge rather than on market knowledge only, which is the focus of the bulk of the research on the internationalization process of the firm. A third limitation is that the model looks at knowledge as an antecedent of the development of international opportunities and the internationalization process, but does not capture the recursive relationship that should exist between the three constructs. As firms internationalize, they learn how to better develop international opportunities and accumulate both market and internationalization knowledge. Likewise, the model does not capture withdrawal from foreign markets or de-internationalization (Benito and Welch, 1997). The latter, however, can also be an outcome of knowledge accumulation over time. Future research opportunities that inform how knowledge affects de-internationalization are numerous[1].
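Because the moderation propositions (P4a-P4c) are naturally operationalized as interaction terms, one minimal sketch of how a future panel-data test might be framed is given below. The variable names, proxies and functional form are illustrative assumptions of this sketch, not part of the paper:

```latex
\begin{aligned}
\text{Entry}_{it} = \beta_0 &+ \beta_1\,\text{IO}_{it} + \beta_2\,\text{IKS}_{it}
  + \beta_3\,\bigl(\text{IO}_{it} \times \text{IKS}_{it}\bigr) \\
&+ \boldsymbol{\gamma}'\mathbf{X}_{it} + \alpha_i + \varepsilon_{it}
\end{aligned}
```

Here, Entry_it would count firm i's new foreign market entries in period t, IO_it the number of international opportunities developed, IKS_it a proxy for the international knowledge stockpile (e.g. combined market and internationalization experience), X_it a vector of controls and alpha_i a firm fixed effect; P4a then corresponds to beta_3 > 0. P4c could be sketched analogously by replacing IO_it with a threshold indicator such as 1(IO_it > tau), with modal shifts as the dependent variable.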
[SECTION: Abstract] By bringing together the IB and IE literatures, the purpose of this paper is to examine the internationalization process as an entrepreneurial process related to the development of international opportunities. It explicitly connects different types of knowledge (i.e. market and internationalization), international opportunities and the internationalization process comprising both new foreign market entry and sequential moves that happen after entry.
[SECTION: Method] The process of knowledge acquisition and development as a mechanism of either uncertainty reduction or opportunity development is critical in understanding internationalization processes (Casillas et al., 2015; Fletcher et al., 2013). Initially, the accumulation of knowledge was perceived as a mechanism for uncertainty reduction, namely, knowledge gained from experience incrementally reduces the lack of information about a given foreign market (Johanson and Vahlne, 1977, 2009). More recently, the role of knowledge in the internationalization process has been associated with international opportunity development (Chandra, 2017), namely, the exploitation of international opportunities that leads to entry in new foreign markets and to new businesses in foreign markets serviced by the firm (Chandra et al., 2009). This conceptual paper subscribes to the latter viewpoint by looking at the internationalization process as an entrepreneurial process related to the development of international opportunities. More specifically, this paper brings together the internationalization theory broadly advanced by the international business (IB) field (e.g. Johanson and Vahlne, 1977) with the international entrepreneurship (IE) literature (Casillas et al., 2015; Forsgren, 2016). In doing so, it provides a model connecting three constructs - knowledge, international opportunities and the internationalization process - that are often analyzed separately. Particularly concerning knowledge, the proposed model separates the effects of both market and internationalization knowledge over time. While the bulk of the research has emphasized the effect of market knowledge, which is dependent on a specific foreign market, the effect of general internationalization knowledge has been overlooked (Hakanson and Kappen, 2017). And different types of knowledge should affect internationalization decisions differently (Evangelista and Mac, 2016). 
Moreover, the model assesses the internationalization process using a dynamic approach (Welch and Paavilainen-Mantymaki, 2014) that takes into consideration sequential moves in foreign markets (Casillas and Acedo, 2013). It comprises both new foreign market entry and sequential moves that happen after entry (i.e. either via mode continuation or modal shifts). It also introduces the idea of a threshold beyond which the development of international opportunities following the accumulation and combination of knowledge affects the internationalization process. Finally, this paper contributes to the knowledge-based view in internationalization processes (e.g. Casillas et al., 2009; Child and Hsieh, 2014) by showing that knowledge is not homogeneous and how it interacts to influence internationalization processes. In particular, this study posits that the combination of knowledge moderates the relationship between international opportunities and the internationalization process. Currently, the bulk of the literature suggests that the accumulation of knowledge gradually shapes the internationalization process (e.g. Hadjikhani, 1997). This study refines this view by arguing that the accumulation of knowledge shapes the identification and development of international opportunities, which will affect the internationalization process in distinct ways, before and after a threshold is achieved. The bulk of the IB research on internationalization has looked at knowledge as a mechanism for uncertainty reduction (Petersen et al., 2008). In the internationalization process of the firm conceptualized by Johanson and Vahlne (1977), knowledge is gained incrementally from experience in a foreign market (i.e. experiential knowledge). When a firm starts servicing a foreign market, it faces challenges associated with the new environment, such as cultural (Kogut and Singh, 1988) and behavioral differences (Sanchez-Peinado and Pla-Barber, 2006), which bring uncertainty. 
By accumulating experiential knowledge, the firm learns how to operate in the new environment and to deal with those differences, reducing such uncertainty (Johanson and Vahlne, 1977). In this context, the most critical type of knowledge is experiential market knowledge (e.g. Figueira-de-Lemos et al., 2011), that is, knowledge gained from prior experience in a given foreign market (e.g. Chandra et al., 2012; Johanson and Vahlne, 1977). Thus, market knowledge is a central element of the internationalization process described by Johanson and Vahlne (1977). Conversely, research on IE has looked at knowledge as one of several antecedents affecting the internationalization of the firm (Oviatt and McDougall, 1994). However, knowledge is not related to uncertainty reduction but to the identification and development of international opportunities (e.g. Ardichvili et al., 2003). International opportunities are here conceptualized as opportunities recognized by firms and connected actors to establish or expand their business outside of their home market. The recognition of international opportunities as the trigger of the internationalization process, and how knowledge affects it, has not received systematic attention in the literature (Chandra et al., 2009; Forsgren, 2016). For instance, because studies on IE focus on international new ventures, they highlight the role of different types of knowledge, such as technological knowledge (Oviatt and McDougall, 1994), which do not necessarily matter for all firms seeking to internationalize. Moreover, because IE studies analyzing internationalization processes emphasize international new ventures, they focus on rapid internationalization processes. This stream of research, called dynamic internationalization, seeks to "identify patterns considered anomalous to traditional internationalization" (Jones et al., 2011, p. 638).
Nonetheless, opportunity recognition and development that lead to the internationalization of the firm, comprising both new foreign market entry and sequential moves that happen after entry, may also be related to firms in general (e.g. manufacturing firms), which follow internationalization processes as defined by Johanson and Vahlne (1977). Such firms respond to smaller or less significant opportunities at first, which might emerge years after their foundation (Morschett et al., 2010). Throughout time, however, they increase their capabilities and resources to capture a larger number of opportunities or more significant ones (Dimitratos and Jones, 2005). It is of particular interest to examine how knowledge acquired and accumulated by the firm throughout time determines the recognition and development of international opportunities (Lamb et al., 2011; Sanz-Velasco, 2006) and consequently, the internationalization process of such firms (Forsgren, 2016). It should be noted that with the emergence of the IE field in the early 1990s, following the study of McDougall (1989), opportunities have also started to be incorporated into the IB literature (Ardichvili et al., 2003; Johanson and Vahlne, 2009; Vahlne and Johanson, 2013). For example, Johanson and Vahlne (2006, 2009), rethinking their initial model, suggested that the internationalization process could be considered as a process of recognition and development of international opportunities. Nonetheless, market knowledge continued to be emphasized, despite the recognition that other types of knowledge, such as internationalization knowledge, also affect the identification and development of international opportunities and the internationalization process. In fact, Johanson and Vahlne (2009) suggested that their initial internationalization process model overlooked the role of internationalization knowledge, which should be analyzed more thoroughly. 
Internationalization knowledge is not market-specific and is gained from the accumulation of experiences across foreign markets (Eriksson et al., 1997). Research has only recently acknowledged that internationalization knowledge is critical to the unfolding of the internationalization process over time (e.g. Hakanson and Kappen, 2017), which is consistent with this paper's analysis of not only new foreign market entry but also sequential moves that happen after entry. In sum, by bringing together insights from both IB and IE research, this paper provides a better understanding of the role of knowledge, particularly market and internationalization knowledge, in the development of international opportunities that leads to the internationalization process. First, knowledge accumulated from prior experience decreases the time needed to identify new opportunities (Bingham and Davis, 2012). Opportunities are recognized more quickly and more often when the firm has more familiarity with the foreign market. Such familiarity increases the alertness of firms and their capacity to perceive growth and expansion opportunities (Autio et al., 2000). Second, the accumulation of knowledge allows the establishment and recombination of resources that support new ways of seeing opportunities (Cassia and Minola, 2012; Chandra et al., 2012). Opportunities may be recognized not because the firm acquired new knowledge per se, but rather due to a recombination of resources that changes the way it perceives opportunities within foreign markets. For example, when a firm enters a foreign market serendipitously, it may perceive the benefits of internationalization, and hence reallocate resources to it. Third, it is the continuous use of this accumulated knowledge that contributes to the development of opportunities (Ardichvili et al., 2003). According to organizational learning theory (e.g.
Cohen and Levinthal, 1990), the firm can only integrate knowledge into its processes when and if it continuously uses it. Therefore, international opportunity recognition and development should be assessed as a process that evolves over time (Ardichvili et al., 2003). Finally, the internationalization process, comprising both new foreign market entry and sequential moves that happen after entry, depends not only on the accumulation of knowledge but also on the types of knowledge that interact throughout time (Gao and Pan, 2010). Previous studies have acknowledged several different types of knowledge that affect the internationalization process (e.g. Eriksson et al., 1997). Nonetheless, the effects played by each type of knowledge in the internationalization of the firm over time need to be better distinguished (Arte, 2017; Li et al., 2015; Perkins, 2014). As noted above, this study focuses on market and internationalization knowledge, which are fundamental types of knowledge affecting the internationalization of any firm operating abroad (Figueira-de-Lemos and Hadjikhani, 2014). Other types of knowledge, such as technological, may be more related to specific industries, markets or categories of firms (e.g. SMEs, INVs), and thus are not analyzed in this study. For example, firms that internationalize from inception (i.e. INVs) rely heavily on technological knowledge developed in-house (Oviatt and McDougall, 1994).
Market knowledge and opportunities
Market knowledge is defined by Johanson and Vahlne (1977) as information about the particularities of servicing a specific foreign market and is usually acquired by the firm as it operates in that market. Such knowledge is specific to a particular foreign market and, therefore, cannot be easily transferred to other markets serviced by the firm (Eriksson et al., 1997; Johanson and Vahlne, 1977).
Market knowledge was initially perceived as a mechanism for uncertainty reduction that gradually led to more commitment in a particular foreign market (Forsgren, 2002). In this tradition, the accumulation of market knowledge contributes to overcoming the liability of foreignness, which represents the additional costs faced by a firm due to its unfamiliarity with the host country environment (Zaheer, 1995; Johanson and Vahlne, 2009). Such liability may result in restricted access to the local market and local networks, which challenges the firm's expansion in foreign markets (Johnsen and Johnsen, 1999; Zaheer and Mosakowski, 1997). With the development of the IE literature (Jones et al., 2011), market knowledge started to be perceived not only as a mechanism to reduce the liability of foreignness but also as an antecedent of the successful recognition of opportunities (Ardichvili et al., 2003; Arte, 2017). By knowing the culture, institutions, customers, competitors and market conditions of a particular foreign market, the firm becomes more aware of specific international opportunities existing in the market (Zhou, 2007). Previous studies have acknowledged that the uncertainty reduction from accumulated market knowledge may lead to the development of international opportunities (Johanson and Vahlne, 2006). Such studies are interested in the internationalization of the firm under uncertainty in general. However, this paper emphasizes the fact that internationalizing firms develop international opportunities under uncertainty (Alvarez and Barney, 2019). Rather than following a reactive internationalization process, they proactively engage in the development of opportunities as they build market knowledge in any given foreign market. Hence, this paper suggests that the accumulation of market knowledge over time allows firms to identify and exploit new opportunities in a given foreign market after entry, as detailed below.
The influence of market knowledge on the internationalization process analyzed through an IE lens is a quite recent strand of research, which focuses on rapid internationalization processes (Jones et al., 2011). Research on international new ventures suggests that market knowledge can be acquired from a variety of sources, such as the firm's own predisposition to innovativeness, risk-taking and proactiveness, its exposure to cultural diversity (Kropp et al., 2008; Zhou, 2007), and the use of focused research conducted by specialized agents (Spence and Crick, 2009). However, Lord and Ranft (2000, p. 576) argue that market knowledge may be hard to obtain because of the lack of "well-developed and widely-available sources of market information" in some foreign markets, such as emerging markets. This paper suggests that the market knowledge that leads to the recognition of opportunities is principally formed either in everyday entrepreneurial practices or through social interactions (Mainela et al., 2014). The former relates to prior knowledge, built from increasing commitment in each foreign market (Spence and Crick, 2009). It is experiential and accumulated over time. Gao and Pan (2010) show that firms with a longer time of local operations accumulate more market knowledge, speeding up the pace of the internationalization process in that foreign market. Such experiential market knowledge is important to support a firm's activities in foreign markets (Evangelista and Mac, 2016), through both the recognition and the development of international opportunities. It increases the proclivity of the firm to identify opportunities, while contributing to its ability to develop them (Nordman and Melen, 2008). The latter relates to networks of relationships (Hohenthal et al., 2014; Vasilchenko and Morrish, 2011), which make embeddedness in the foreign market a pivotal issue in the recognition and development of international opportunities.
By building relationships with customers, suppliers, agents and other actors that are embedded in that specific market, the firm may take advantage of opportunities initially identified by those actors (Johanson and Vahlne, 2006, 2009). Therefore, the acquisition and accumulation of market knowledge, either through experience or through social interactions with actors embedded in each foreign market serviced by the firm, is a critical aspect of the internationalization process (Blomstermo et al., 2004). This paper proposes that market knowledge, by increasing the awareness and alertness of the firm as well as its familiarity with a specific foreign market, will lead to the development of international opportunities after new foreign market entry. Nonetheless, because market knowledge is difficult to transfer across markets without incurring substantial opportunity costs (Eriksson et al., 1997; Johanson and Vahlne, 1977), market knowledge will only lead to the development of international opportunities in foreign markets where the firm already operates. Although market knowledge has been extensively associated with internationalization, P1 highlights the role of market knowledge in identifying international opportunities, rather than in triggering the internationalization process via the reduction of uncertainty in the commitment of resources to foreign markets. Hence:

P1. Market knowledge is positively related to the development of international opportunities.

Internationalization knowledge and opportunities

Unlike market knowledge, internationalization knowledge is related to the accumulated experience of servicing foreign markets (Eriksson et al., 1997). It is not specific to a particular foreign market (Eriksson et al., 2000), but rather based on general knowledge of how to conduct business abroad (Freeman et al., 2012; Nordman and Melen, 2008).
By drawing on previous experiences in foreign markets, the firm incorporates learned routines that support not only its current internationalization processes (Sapienza et al., 2006) but also its entry and subsequent expansion into new foreign markets not yet serviced by the firm (Hakanson and Kappen, 2017; Prashantham and Young, 2011). General knowledge of servicing foreign markets gives the firm an overall understanding of the internationalization process that can be applied across all foreign markets (Eriksson et al., 1997; Johanson and Vahlne, 1977). Consequently, the firm may benefit not only from homogeneous experiences (i.e. from similar environments) but also from heterogeneous experiences (i.e. from very different environments) in foreign markets (Kim et al., 2012), since this knowledge can be more freely transferred across its internationalization projects (Child and Hsieh, 2014). As with market knowledge, internationalization knowledge was initially perceived as a mechanism for uncertainty reduction (e.g. Kim et al., 2012). Hitt et al. (2006), for example, suggest that by accumulating internationalization knowledge, the firm facilitates its internationalization process via the creation of social capital and useful resources, reducing its risk. Nonetheless, internationalization knowledge encourages firms to enter new markets and to use different servicing modes (Casillas and Acedo, 2013) not only through uncertainty reduction, but also through the recognition and development of new international opportunities. Prashantham and Young (2011, p. 283) suggest that internationalization knowledge is a "versatile" and "country-neutral" knowledge that exposes the firm to unexpected international opportunities in general, leading to expansion in foreign markets. Nonetheless, it also exposes the firm to new knowledge and resources that increase its disposition to explore new and different opportunities (Nachum and Song, 2011).
The accumulation of international experience contributes to the alertness and willingness of the firm in developing new international opportunities in general, even in foreign markets where the firm has no specific market knowledge. Like market knowledge, internationalization knowledge will be beneficial in markets already serviced by the firm. Even when the firm is already well established in those markets, it can use knowledge acquired elsewhere to better manage its relationships (Johanson and Vahlne, 2009) and expand its operations through the development of new opportunities in those foreign markets. This happens because the firm develops a "mindset" to search for opportunities more proactively, planning the internationalization process rather than just reacting to sporadic opportunities (Freeman et al., 2012). Accordingly, when the firm accumulates internationalization knowledge, it becomes better prepared to take advantage of opportunities as well as to manage them. Moreover, internationalization knowledge will be equally important in supporting entry into new foreign markets. Prashantham and Young (2011, p. 283) suggest that internationalization knowledge leads to practices such as "country market screening, and evaluating strategic partners and distributors" that lead to new foreign market entries. Accordingly, Vasilchenko and Morrish (2011) showed that by building on previous experiences firms may develop a network of relationships abroad, engaging in cooperative behavior that ultimately leads to successful entry into new foreign markets. In sum, because internationalization knowledge reflects general knowledge of conducting business abroad (Freeman et al., 2012; Nordman and Melen, 2008), it can be used not only in foreign markets where the firm operates but also in new foreign markets.
Therefore, internationalization knowledge will lead to the development of international opportunities both in foreign markets already serviced by the firm and in new foreign markets not yet serviced by the firm. Hence:

P2. Internationalization knowledge is positively related to the development of international opportunities.

As previously discussed, this paper looks at market knowledge and internationalization knowledge as antecedents of the development of international opportunities, which leads to the internationalization process, comprising both new foreign market entry and the sequential moves that happen after entry. Foreign market entry is a central element of internationalization research and has therefore been extensively studied (Shaver, 2013). Nevertheless, the recognition of international opportunities that leads to entry into new foreign markets is quite recent in IE (Jones et al., 2011). Studies in IE that connect opportunities and internationalization focus on international new ventures following a rapid internationalization process (Jones et al., 2011). However, Dimitratos et al. (2016, p. 1220) suggest that research in the IE literature "[...] can shift the attention from the study of INVs to that of all opportunity-driven types of internationalized firms." While previous research has brought attention to the role of opportunities as drivers of the internationalization process of not only international new ventures but also firms in general (e.g. Johanson and Vahlne, 2006), such a relationship is not explicitly stated. Chandra et al. (2012, p. 75), for example, state that "Other authors note, and we agree, that 'the opportunity side of the internationalization process is not very well developed' (Johanson and Vahlne, 2006, p. 167)".
And such a side is not well developed because, although studies acknowledge the role of opportunities in the internationalization process, uncertainty reduction is still perceived as the main driver of that process (Figueira-de-Lemos and Hadjikhani, 2014). Using an IE lens to analyze foreign market entry, researchers have emphasized the mechanisms through which firms recognize opportunities in new foreign markets. Chandra et al. (2009), for example, suggest that firms exhibiting a strong entrepreneurial orientation will be more likely to recognize first-time opportunities in foreign markets. Accordingly, Sapienza et al. (2006) argue that firms' proactivity in the pursuit of international opportunities will lead to new foreign market entry. Such entrepreneurial orientation or proactivity is developed over time, being considered a "long-term behavioural phenomenon" (Jones and Coviello, 2005, p. 297). In this vein, internationalizing firms may act entrepreneurially, actively identifying and developing opportunities in new foreign markets. This study, therefore, suggests that by identifying and developing international opportunities, firms are more likely to actively internationalize. A natural pathway for pursuing internationalization is entering new foreign markets. Firms that develop international opportunities are more prepared and more prone to entering new foreign markets. In other words, by developing more international opportunities, firms will enter more new foreign markets. Hence:

P3a. The development of international opportunities is positively related to entry into new foreign markets.

Nevertheless, the internationalization process comprises not only new foreign market entry but also sequential moves that may happen over time in markets already serviced by the firm (Casillas and Acedo, 2013; Gao and Pan, 2010).
The entrepreneurial internationalization process relates to the firm's international pathway (Jones et al., 2011), and sequential moves, as much as the entry choice, shape such a process (Casillas and Acedo, 2013; Gao and Pan, 2010). The bulk of the literature, however, emphasizes entry modes, often ignoring the dynamics of the internationalization process by not considering the sequential moves in foreign markets where the firm already operates (for a critique, see Shaver, 2013; Welch and Paavilainen-Mantymaki, 2014; for exceptions, see Mtigwe, 2005; Suarez-Ortega and Alamo-Vera, 2005). Sequential moves refer to the activities that follow new foreign market entry (e.g. continued sales) and are associated with within-market expansion or eventual withdrawal (Gao and Pan, 2010). Considering sequential moves is important because the internationalization process is not static, and its evolution after foreign market entry is equally important for a better understanding of such a process (Benito et al., 2009; Gao and Pan, 2010). Sequential moves may follow two paths - either mode continuation or modal shift (Benito and Welch, 1994; Benito et al., 2009). In the former, there is no modal change and the firm operates in a given foreign market using its initial entry mode (Benito and Welch, 1994; Swoboda et al., 2015). It does so because the existing servicing mode is considered adequate for conducting business in that foreign market. Switching costs may also be perceived as high, deterring the firm from switching the initial mode; or inertia may play a role, that is, the firm simply does not change the initial mode due to inertial forces (Benito et al., 2009). This paper suggests that mode continuation is associated with the development of opportunities below a threshold, an argument developed in the following paragraphs.
Mode continuation, hence, is a confirmation of the firm's initial choice of servicing mode, independently of the type of servicing mode initially chosen (Benito et al., 2009). In the latter, the firm shifts its servicing mode to adjust its operations within a given foreign market (e.g. by shifting from a sales agent to a wholly owned subsidiary). By shifting the mode of operation, firms may be more responsive to foreign market needs (Benito et al., 2009). Modal shifts, nonetheless, do not necessarily imply higher commitment in a given foreign market. Firms can also shift the entry mode to adjust operations when the servicing mode currently in use is not appropriate (e.g. by shifting from a sales agent to exporting). Consequently, this study suggests that the development of international opportunities will be related to sequential moves. More specifically, firms will invest resources, time and effort to identify and develop opportunities not only in new foreign markets but also in markets where they already operate. The development of international opportunities related to sequential moves allows firms to maintain and even strengthen their position in any given foreign market. In this vein, sequential moves are the pathway after new foreign market entry. Initially, however, such sequential moves will happen only via mode continuation. By accumulating knowledge and building relationships in foreign markets over time, via the development of new opportunities in those markets, the firm will be more inclined to continue servicing them through the continued use of the initial entry mode. For example, a firm may enter a given foreign market via direct exporting to one customer. Over time, because the firm is more familiar with that foreign market and with the exporting process in general, it expands its customer base there by exporting to new customers.
This is consistent with the general conceptualization of international opportunities as the development of new customers (Reuber et al., 2018). The firm thus developed new international opportunities in that foreign market but did not change the initial servicing mode (i.e. direct exporting), which corresponds to mode continuation. Sequential moves via mode continuation are an easier pathway of the internationalization process after new foreign market entry, since they do not require leaps of resources and knowledge. Formally stated:

P3b. The development of international opportunities is positively related to sequential moves using mode continuation in foreign markets serviced by the firm.

In the example above, the firm chose to keep using the same servicing mode of entry even after developing more international opportunities in the given foreign market. This may be because the firm has not yet developed enough international opportunities, having not accumulated enough knowledge. Shifts in the servicing mode require more combined knowledge and understanding of the market's needs. The development of opportunities per se, however, will not lead to shifts in the entry mode chosen by the firm to enter a given foreign market. It is only after a threshold is achieved that the development of international opportunities will indeed allow for a change in the firm's current internationalization process that requires riskier decisions, such as shifting the entry mode (Benito et al., 2005; Clark et al., 1997). The idea of a threshold is in line with the use of self-organized criticality (SOC) in organizations advanced by Andriani and McKelvey (2009, p. 1061), in which organizational phenomena do not always follow a linear distribution - instead, they evolve "toward a critical state." When such a state is achieved, any additional related effort or interaction brings change.
The authors argue that the IB "arena is especially vulnerable to SOC effects" (Andriani and McKelvey, 2007, p. 1215), in accordance with the argument presented in this study. Accordingly, prior studies have suggested that servicing modes are difficult to change, whether they represent higher or lower commitment (e.g. Anderson and Coughlan, 1987), which has been corroborated by empirical evidence (e.g. Pedersen et al., 2002). Nonetheless, after the firm builds enough knowledge, translated into a consistent number of international opportunities developed in a given foreign market, it is able to overcome the difficulties associated with shifting the entry mode (e.g. switching costs and inertia). Hence, achieving such a threshold will lead to sequential moves in which the firm changes its commitment to better serve the needs of a given foreign market by shifting the servicing mode. Formally stated:

P3c. The development of international opportunities beyond a threshold is positively related to modal shifts in foreign markets serviced by the firm.

In the example preceding P3b, the firm entered a given foreign market using direct exporting as the entry mode and continued using direct exporting. If the relationship between the development of international opportunities and the internationalization process were linear, the firm would have had to switch or at least adjust the servicing mode. Instead, it continues using the same servicing mode, even after it expands its customer base and increases the volume of foreign sales. Nonetheless, imagine that sales to the firm's current customers keep increasing and more new customers are established. The firm evolves toward a critical state where exporting may no longer be adequate or the satisficing solution. It has achieved the threshold and, hence, switches its servicing mode to a sales subsidiary, for example.
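The threshold logic behind P3c can be summarized informally. As an illustration only, and under notational assumptions not made in the original propositions, let $O_{jt}$ denote the cumulative number of international opportunities the firm has developed in foreign market $j$ up to time $t$, and let $O^{*}$ denote the critical threshold at which the "critical state" is reached:

```latex
% Illustrative sketch only; O_{jt} and O^{*} are notational assumptions,
% not constructs formally defined in this paper.
\[
\text{Sequential move}_{jt} =
\begin{cases}
\text{mode continuation}, & O_{jt} < O^{*} \\[4pt]
\text{modal shift},       & O_{jt} \geq O^{*}
\end{cases}
\]
```

Read this way, P3b covers the region below $O^{*}$ and P3c the region at or beyond it, while P4c (introduced later) can be understood as the international knowledge stockpile shortening the time needed for $O_{jt}$ to reach $O^{*}$.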
This example shows that the relationship between the development of international opportunities and modal shifts is not linear: the firm needed to reach a threshold in order to change its servicing mode. Nonetheless, as previously stated, modal shifts do not necessarily imply higher commitment in a given foreign market. Continuing with the example above, the firm established a sales subsidiary and continued developing opportunities in the foreign market. Nonetheless, after developing a certain number of opportunities without changing or adjusting the servicing mode (i.e. the sales subsidiary), the firm learnt it had made a bad decision since, for example, the marginal costs of the sales subsidiary outweighed its marginal returns. Again, the firm evolved toward a critical state where a sales subsidiary was no longer adequate. After reaching this threshold, the firm decided to divest the sales subsidiary and use a local sales representative instead. In P1 and P2, market and internationalization knowledge are analyzed independently, to show how each type of knowledge relates to the development of international opportunities. In P3a and P3b, the development of international opportunities that follows the accumulation of market and internationalization knowledge is connected to two aspects of the internationalization process - new foreign market entry and sequential moves related to mode continuation, respectively. In P3c, the idea of a threshold that affects sequential moves related to modal shifts is introduced. In addition, this paper proposes that both types of knowledge may combine to form the international knowledge stockpile of the firm. It is assumed that market and internationalization knowledge interact through a mutual influence process, in which the former contributes to the development of the latter and vice versa. For example, Barkema et al. (1996) suggested that learning effects in a foreign market (i.e.
market knowledge) were associated with learning from internationalization processes in different markets (i.e. internationalization knowledge), even though the degree of learning differed depending on the similarities between those markets. Accordingly, knowledge from different markets accumulates over time to become firm-specific knowledge that can be relevant to foreign markets serviced by the firm or foreign markets it intends to service (Eriksson et al., 1997). Thus, the international knowledge stockpile as suggested here is a heterogeneous reservoir (Hutzschenreuter and Matt, 2017). It addresses two apparently contradictory aspects of knowledge development - diversity (i.e. market knowledge) and transferability (i.e. internationalization knowledge). By doing so, it increases the probability of growth and survival in foreign markets (Kogut and Zander, 1992). It may also provide the firm with differential efficiencies such as market diversification and innovation (Foss, 1996). The international knowledge stockpile can also be associated with the pace of internationalization processes. First, it may enable the firm to respond to market turbulence more rapidly (Miller, 2002) and, most importantly, it may accelerate the firm's internationalization (Casillas et al., 2015), particularly in terms of the speed of modal shifts (Chetty et al., 2014). Considering that the international knowledge stockpile constitutes an important asset for internationalizing firms, this paper suggests that its effect is different from the effect of each type of knowledge that comprises it - market and internationalization knowledge - when analyzed separately. This is because the international knowledge stockpile enables the firm to capitalize on international opportunities, thus strengthening the relationship between international opportunities and the internationalization process.
Therefore, this paper suggests that the international knowledge stockpile will moderate the relationship between the development of international opportunities and the internationalization process. It builds on the idea that the knowledge reservoir of the firm comprises both in-use and idle knowledge (Penrose, 1959), that is, utilized and underutilized knowledge. In this sense, the international knowledge stockpile is a mix of market and internationalization knowledge in use, which the firm uses to advance its internationalization process, and idle market and internationalization knowledge, which the firm can explore in near-future internationalization activities. In other words, the firm accumulates both market and internationalization knowledge over time. When those two types of knowledge are combined in a productive way, a baseline is achieved - the firm establishes its international knowledge stockpile - strengthening the relationship between the development of international opportunities and the internationalization process. "Durable and repetitive interactions" in foreign markets are the main drivers of this process (Eriksson et al., 1997, p. 354). First, whereas the development of international opportunities will lead to more new foreign market entries (P3a), the international knowledge stockpile will strengthen such a relationship (P4a). The international knowledge stockpile may lead the firm to enter more new foreign markets at shorter intervals. This is because the firm possesses knowledge that results from different environments and knows how to transfer such knowledge to new foreign markets. Moreover, the international knowledge stockpile may enable the firm not only to enter more markets at shorter intervals, but also to enter multiple markets simultaneously (Wang and Suh, 2009).
The literature acknowledges that knowledge supports the development of multiple internationalization processes within a firm, that is, entering and servicing multiple foreign markets at the same time (Welch and Paavilainen-Mantymaki, 2014). Hence, instead of entering foreign markets sequentially, as posited by Johanson and Vahlne (1977), the international knowledge stockpile enables the firm to capitalize on opportunities in multiple foreign markets at the same time. This is possible because the firm possesses a heterogeneous reservoir of in-use and idle knowledge, which can be used to address multiple activities - in this case, closely spaced or simultaneous entries into different foreign markets. Formally stated:

P4a. The relationship between the development of international opportunities and new foreign market entry is moderated by the firm's international knowledge stockpile.

Second, whereas the development of international opportunities will lead to more sequential moves via mode continuation in any given foreign market serviced by the firm (P3b), the international knowledge stockpile will strengthen such a relationship (P4b). The accumulation of knowledge has been associated with the internationalization process after new foreign market entry (Dimitratos et al., 2016). This combination of knowledge reinforces the firm's confidence that it has chosen the right foreign market entry mode to service that specific foreign market, because the firm understands not only the needs of that market but also its options and their outcomes when using different foreign market entry modes. Such reinforcement will lead to a proactive search for opportunities that do not require a change in the servicing mode but that still strengthen the internationalization process. In addition, the international knowledge stockpile enables the firm to better assess the switching costs associated with modal shifts before attempting to switch the servicing mode.
If the firm believes that the switching costs are too high, it will avoid switching the servicing mode. In this context, the international knowledge stockpile allows the firm to assess such switching costs more efficiently, because the stockpile is associated with heuristics and routines for measuring and monitoring such costs, built from previous experiences. The firm, because of its international knowledge stockpile, may thus decide to continue using a given servicing mode while adjusting that mode (e.g. switching sales agents). Hence:

P4b. The relationship between the development of international opportunities and sequential moves using mode continuation in foreign markets serviced by the firm is moderated by the firm's international knowledge stockpile.

Third, only after a threshold determined by a certain number of developed international opportunities is achieved (Barkema and Drogendijk, 2007) will the firm engage in modal shifts (P3c). This paper also suggests that the international knowledge stockpile of the firm will strengthen this relationship (P4c). As discussed above, modal shifts reflect changes in commitment within the internationalization process, since the firm shifts its entry mode to better serve the needs of a given foreign market (Benito et al., 2009). After the international knowledge stockpile is built, the firm develops international opportunities that reach the threshold faster, allowing it to shift the servicing mode. This idea is coupled with the fact that the firm, because of its international knowledge stockpile, is more experienced in switching its servicing mode. In other words, there is a trigger for modal shifts once a certain threshold is achieved, and there is accumulated experience once this threshold is achieved - the firm knows about different servicing modes and how to better use them. Hence, the firm is able to reduce searching costs (i.e.
identifying new servicing modes) and implementation costs (i.e. shifting the servicing mode per se). This enables the firm to engage in modal shifts faster and more efficiently (Casillas et al., 2015). In other words, the threshold, in terms of the development of international opportunities, will be achieved at shorter intervals. Hence, if the time to reach the threshold is reduced due to the international knowledge stockpile, modal shifts will not only happen more frequently but also at shorter intervals. Because modal shifts require not only a combination of market and internationalization knowledge but also an additional effort from the firm in proactively developing international opportunities and changing the course of its internationalization process, as determined by the threshold, it is only after this threshold is achieved that the firm's international knowledge stockpile will moderate the relationship between international opportunities and modal shifts. After the threshold is achieved and the international knowledge stockpile is in place, the firm will engage in more modal shifts in different foreign markets and/or will shift the servicing mode in a given foreign market faster. Formally stated:

P4c. The relationship between the development of international opportunities beyond a threshold and modal shifts is moderated by the firm's international knowledge stockpile.

In sum, the combination of market and internationalization knowledge configures the knowledge stockpile of the firm, which reinforces the relationship between the development of international opportunities and the internationalization process by positively moderating such a relationship. Thus, over and above the direct relationship between knowledge and international opportunities, this paper proposes an indirect relationship between knowledge, international opportunities and the internationalization process, comprising both new foreign market entry and the sequential moves that happen after entry.
Doing so is important because, while it recognizes that different types of knowledge have different effects on the internationalization process, it also considers that these types combine to form knowledge that is firm-specific - the firm's international knowledge stockpile. Moreover, the international knowledge stockpile connects the three constructs advanced by the paper - knowledge, international opportunities and the internationalization process. P1 and P2 connect knowledge to international opportunities; the set of P3a-P3c connects international opportunities to the internationalization process; and the international knowledge stockpile completes the model, offering a refined understanding of internationalization processes as a function of both knowledge (i.e. market and internationalization) and the development of international opportunities. The conceptual model is presented in Figure 1. It shows that both market and internationalization knowledge are positively related to international opportunities, which, in turn, are related to the internationalization process. From the IB field, the model borrows the notion of market knowledge and its relation to the overall internationalization process. From the IE field, it borrows the notion of international opportunities as an antecedent of the internationalization process. From a combination of the IB and IE fields, it emphasizes the role of internationalization knowledge and the importance of looking not only at new foreign market entry but also at the sequential moves that happen after entry. The main points of novelty introduced by the model are showing that the path to sequential moves using mode continuation is different from that of sequential moves comprising modal shifts (i.e. the idea of a threshold), and showing that market and internationalization knowledge combine to form the knowledge stockpile of the firm, which moderates the relationship between international opportunities and the internationalization process.
By doing so, it offers a finer-grained view of the internationalization process, which can be entrepreneurial (i.e. related to the identification and development of opportunities) not only for international new ventures but for firms in general. By combining the IB and IE literatures, this paper presents a novel framework connecting three constructs that are typically analyzed separately in the literature - knowledge, international opportunities and the internationalization process. Grounded in the idea that the internationalization process is the result of the development of international opportunities, this study looks at the types of knowledge that shape such opportunities as well as at how those types interact to moderate the relationship between international opportunities and the internationalization process. On the one hand, research on IE often connects knowledge to the development of international opportunities (e.g. Chandra et al., 2009), but without highlighting how such development affects the internationalization process of the firm, comprising both new foreign market entry and sequential moves in all foreign markets where the firm operates. This literature also focuses on international new ventures, thus disregarding other types of internationalized firms (Dimitratos et al., 2016). On the other hand, research on IB connects knowledge directly to internationalization, usually focusing on new foreign market entry (e.g. Shaver, 2013). It also posits that the main driver of the internationalization process is uncertainty reduction via knowledge development (Figueira-de-Lemos et al., 2011). This study explicitly connects knowledge, international opportunities and the internationalization process during and after new foreign market entry and provides testable propositions that advance and bring together the IB and IE literatures.
First, this study suggests that both market and internationalization knowledge will be positively related to the development of international opportunities, emphasizing the idea that the internationalization process may also be triggered and driven by the proactive identification and exploitation of opportunities, rather than mainly by uncertainty reduction. While uncertainty reduction has been important to comprehend internationalization (Figueira-de-Lemos et al., 2011), understanding how the development of international opportunities under uncertainty affects internationalization processes can better inform research on the internationalization of the firm (see Alvarez and Barney, 2019 for a discussion on uncertainty and opportunities). Second, it suggests that the development of international opportunities will be positively related to both new foreign market entry and sequential moves in foreign markets already serviced by the firm, thus showing that opportunities matter in earlier as well as in later epochs of the internationalization of the firm. Few studies do so (e.g. Benito et al., 2009). The bulk of the research on internationalization equates foreign market expansion with entry into a single or a few foreign markets (Shaver, 2013). Finally, this study explains how the relationship between knowledge and international opportunities leading to foreign market entries and sequential moves where the servicing mode does not change (i.e. sequential moves using mode continuation) differs from that leading to sequential moves where the servicing mode changes (i.e. modal shifts). Modal shifts will occur only after a certain threshold related to the number of international opportunities developed in foreign markets is reached.
Moreover, it suggests that the relationship between the development of international opportunities and both new foreign market entry and sequential moves, either via mode continuation or modal shifts, will be moderated by the international knowledge stockpile of the firm. By introducing the idea of a threshold and the concept of international knowledge stockpile, this paper refines the view that the effects of the accumulation of knowledge are gradual and incremental throughout the internationalization process.

Theoretical contributions

This paper offers the following contributions. First, according to Forsgren (2016, p. 2), "incorporating [...] entrepreneurship into the [internationalization] model needs more consideration." The IB literature, which focuses on the internationalization process, and the IE literature, which focuses on the development of international opportunities, have evolved quite independently over time. By incorporating IE into the IB literature, this paper shifts the focus of the internationalization process from uncertainty reduction to an entrepreneurial process of developing international opportunities. This paper argues that the emphasis on the development of international opportunities is the right pathway to bring those literatures together, as suggested by Johanson and Vahlne (2009). This paper also shows that the entrepreneurial process of opportunity recognition and development is important for any internationalized firm. This extends the IE literature by suggesting that the understanding of internationalization processes of firms other than international new ventures will benefit from incorporating the idea that such processes result from the development of international opportunities (Dimitratos et al., 2016). Second, this study also answers recent calls to assess the internationalization process using a dynamic approach (e.g. Welch and Paavilainen-Mantymaki, 2014).
Sequential moves that happen after entry shape the long-term growth and hence the success or failure of internationalization processes (Casillas and Acedo, 2013). Nonetheless, the bulk of the IB literature emphasizes entry into foreign markets, ignoring the role of time and of the subsequent steps followed by a firm that actually lead to its expansion in foreign markets (Gao and Pan, 2010). This study not only explains the antecedents of both new foreign market entry and sequential moves, but also disaggregates the latter into mode continuation and modal shift. Most importantly, doing so allows for different effects of the development of international opportunities on new foreign market entry and on each sequential move that happens after it. If the internationalization process were conflated with new foreign market entry, the effects of the development of international opportunities during entry would be assumed to be the same for the sequential moves that happen after entry. Third, this study contributes to the knowledge-based view in internationalization processes (e.g. Casillas et al., 2009) by showing that knowledge in internationalization processes comprises both market and internationalization knowledge. This means that knowledge is not homogeneous in the internationalization process. Following the suggestion of Li et al. (2015, p. 919), this paper assumes that different types of knowledge may affect international opportunities in different ways. This differs from previous literature, which looks at international knowledge as a whole and connects it directly to the internationalization process. This paper argues that market knowledge will be positively related to the development of international opportunities by increasing the firm's familiarity with local practices and businesses in a specific foreign market.
Moreover, internationalization knowledge will also be positively related to the development of international opportunities, but by increasing the firm's ability to do business abroad. This paper also introduces the concept of the firm's international knowledge stockpile. By showing that the combination of market and internationalization knowledge develops into the international knowledge stockpile of the firm, and that this stockpile will influence the relationship between international opportunities and the internationalization process, this paper helps show how knowledge interacts to shape that process. Extant literature assumes that the accumulation of experience abroad starts shaping the internationalization process from its very beginning. This study, however, focuses on the moderating effect of the accumulation of knowledge on the internationalization process over time. Fourth, this study introduces the idea of a threshold. It suggests that, before a threshold related to the number of developed international opportunities is reached, the knowledge the firm possesses is not yet sufficient to shape modal shifts in foreign markets serviced by the firm. In other words, the accumulation of knowledge starts after the firm's first operations abroad, which may be due to emergent chance opportunities, but it is only after a certain number of international opportunities have been developed that the internationalization process starts to be shaped by modal shifts (e.g. from sporadic sales to a planned internationalization process, either increasing or decreasing the level of commitment in a given foreign market). This is particularly important in that it allows for the possibility of different pathways of internationalization (Mathews and Zander, 2007), in which the firm may follow different trajectories in different foreign markets.
Such trajectories are not always linear and are often interdependent, suggesting that the internationalization process may be more complex than previously established in the IB literature.

Managerial implications

The propositions developed in this study offer insights for firms that wish to internationalize or that have already started internationalizing. First, firms may proactively search for opportunities abroad, even when they are uncertain about their ability or willingness to internationalize. This is because the internationalization process will only become systematic after a certain number of international opportunities have been developed. Initial opportunities may be developed via trial and error, without compromising the firm's internationalization process as a whole. In addition, acknowledging that the effect of the learning process associated with internationalization is not always gradual and incremental allows firms to better plan their internationalization processes to fit their strategic goals. For instance, firms can use internationalization knowledge acquired in servicing a specific set of foreign markets to purposefully develop international opportunities that will enable them to shift their operation mode in a different foreign market. Doing so allows them to shape their internationalization processes in terms of their resource commitment (Acedo and Casillas, 2007; Casillas and Acedo, 2013; Gao and Pan, 2010). Finally, this study shows that firms do not necessarily need to build their internationalization process gradually and reactively. The accumulation of both market and internationalization knowledge allows firms to enter several foreign markets and to keep servicing foreign markets where they already operate. But because a more systematic internationalization process via modal shifts only happens after a threshold (i.e. a certain number of developed international opportunities) is reached, the sooner the firm reaches it, the better.
Doing so usually involves a willingness to take risks by entering foreign markets where the firm has little or no market knowledge but is supported by structural and management processes developed through the accumulation of internationalization knowledge. In sum, managers should be less cautious and proactively develop international opportunities that enable their firms to enter and evolve in foreign markets. The model developed in this study suggests that managers of firms that wish to internationalize should also proactively transfer knowledge across different markets and develop routines and heuristics that allow such knowledge to be integrated into the firm. Such a process of accumulation, transfer and integration of knowledge should be continuous in order to facilitate the development of international opportunities that allows the firm to reach the threshold beyond which the internationalization process becomes more systematic (i.e. via modal shifts).

Limitations and further research

Because this is a conceptual paper, the propositions presented in this study have not been empirically tested. The first direction for further research is, hence, to test them quantitatively using panel data, since longitudinal data are needed to track the internationalization process over time. A second limitation is that only two types of knowledge are analyzed - market and internationalization. However, research suggests that other types of knowledge may also affect the internationalization process, such as technological knowledge (Fletcher and Harris, 2012). Previous studies also suggest that market knowledge should be differentiated from institutional knowledge, which is not easily acquired (Eriksson et al., 1997). By incorporating other types of knowledge and differentiating market knowledge from institutional knowledge, future research may inform the relationship between different types of knowledge and internationalization processes.
Doing so will contribute to recent research suggesting that the internationalization process is contingent on several different types of knowledge rather than on market knowledge only, which has been the focus of the bulk of the research on the internationalization process of the firm. A third limitation is that the model looks at knowledge as an antecedent of the development of international opportunities and the internationalization process but does not capture the recursive relationship that should exist between the three constructs. As firms internationalize, they learn how to better develop international opportunities and accumulate both market and internationalization knowledge. Likewise, the model does not capture withdrawal from foreign markets or de-internationalization (Benito and Welch, 1997). The latter, however, can also be the outcome of knowledge accumulation over time. Future research opportunities that inform how knowledge affects de-internationalization are numerous[1].
[SECTION: Design/methodology/approach] This is a conceptual paper that reviews the literature on knowledge, opportunities and the internationalization process. Moreover, the paper identifies current gaps in the literature and builds new theory that sheds light on how these three concepts are related. The paper also presents a model and propositions that should guide future research.
[SECTION: Findings] The process of knowledge acquisition and development as a mechanism of either uncertainty reduction or opportunity development is critical in understanding internationalization processes (Casillas et al., 2015; Fletcher et al., 2013). Initially, the accumulation of knowledge was perceived as a mechanism for uncertainty reduction, namely, knowledge gained from experience incrementally reduces the lack of information about a given foreign market (Johanson and Vahlne, 1977, 2009). More recently, the role of knowledge in the internationalization process has been associated with international opportunity development (Chandra, 2017), that is, the exploitation of international opportunities that leads to entry into new foreign markets and to new businesses in foreign markets serviced by the firm (Chandra et al., 2009). This conceptual paper subscribes to the latter viewpoint by looking at the internationalization process as an entrepreneurial process related to the development of international opportunities. More specifically, this paper brings together the internationalization theory broadly advanced by the international business (IB) field (e.g. Johanson and Vahlne, 1977) with the international entrepreneurship (IE) literature (Casillas et al., 2015; Forsgren, 2016). In doing so, it provides a model connecting three constructs - knowledge, international opportunities and the internationalization process - that are often analyzed separately. Particularly concerning knowledge, the proposed model separates the effects of both market and internationalization knowledge over time. While the bulk of the research has emphasized the effect of market knowledge, which is dependent on a specific foreign market, the effect of general internationalization knowledge has been overlooked (Hakanson and Kappen, 2017), even though different types of knowledge should affect internationalization decisions differently (Evangelista and Mac, 2016).
Moreover, the model assesses the internationalization process using a dynamic approach (Welch and Paavilainen-Mantymaki, 2014) that takes into consideration sequential moves in foreign markets (Casillas and Acedo, 2013). It comprises both new foreign market entry and sequential moves that happen after entry (i.e. either via mode continuation or modal shifts). It also introduces the idea of a threshold beyond which the development of international opportunities following the accumulation and combination of knowledge affects the internationalization process. Finally, this paper contributes to the knowledge-based view in internationalization processes (e.g. Casillas et al., 2009; Child and Hsieh, 2014) by showing that knowledge is not homogeneous and how it interacts to influence internationalization processes. In particular, this study posits that the combination of knowledge moderates the relationship between international opportunities and the internationalization process. Currently, the bulk of the literature suggests that the accumulation of knowledge gradually shapes the internationalization process (e.g. Hadjikhani, 1997). This study refines this view by arguing that the accumulation of knowledge shapes the identification and development of international opportunities, which will affect the internationalization process in distinct ways, before and after a threshold is achieved. The bulk of the IB research on internationalization has looked at knowledge as a mechanism for uncertainty reduction (Petersen et al., 2008). In the internationalization process of the firm conceptualized by Johanson and Vahlne (1977), knowledge is gained incrementally from experience in a foreign market (i.e. experiential knowledge). When a firm starts servicing a foreign market, it faces challenges associated with the new environment, such as cultural (Kogut and Singh, 1988) and behavioral differences (Sanchez-Peinado and Pla-Barber, 2006), which bring uncertainty. 
By accumulating experiential knowledge, the firm learns how to operate in the new environment and to deal with those differences, reducing such uncertainty (Johanson and Vahlne, 1977). In this context, the most critical type of knowledge is experiential market knowledge (e.g. Figueira-de-Lemos et al., 2011), that is, knowledge gained from prior experience in a given foreign market (e.g. Chandra et al., 2012; Johanson and Vahlne, 1977). Thus, market knowledge is a central element of the internationalization process described by Johanson and Vahlne (1977). Conversely, research on IE has looked at knowledge as one of several antecedents affecting the internationalization of the firm (Oviatt and McDougall, 1994). However, knowledge is not related to uncertainty reduction but to the identification and development of international opportunities (e.g. Ardichvili et al., 2003). International opportunities are here conceptualized as opportunities recognized by firms and connected actors to establish or expand their business outside their home market. The recognition of international opportunities as the trigger of the internationalization process, and how knowledge affects it, however, have not received systematic attention in the literature (Chandra et al., 2009; Forsgren, 2016). For instance, because studies on IE focus on international new ventures, they highlight the role of different types of knowledge, such as technological knowledge (Oviatt and McDougall, 1994), which do not necessarily matter for all firms seeking to internationalize. Moreover, because IE studies analyzing internationalization processes emphasize international new ventures, they focus on rapid internationalization processes. This stream of research, called dynamic internationalization, seeks to "identify patterns considered anomalous to traditional internationalization" (Jones et al., 2011, p. 638).
Nonetheless, opportunity recognition and development that lead to the internationalization of the firm, comprising both new foreign market entry and sequential moves that happen after entry, may also be related to firms in general (e.g. manufacturing firms), which follow internationalization processes as defined by Johanson and Vahlne (1977). Such firms respond to smaller or less significant opportunities at first, which might emerge years after their foundation (Morschett et al., 2010). Throughout time, however, they increase their capabilities and resources to capture a larger number of opportunities or more significant ones (Dimitratos and Jones, 2005). It is of particular interest to examine how knowledge acquired and accumulated by the firm throughout time determines the recognition and development of international opportunities (Lamb et al., 2011; Sanz-Velasco, 2006) and consequently, the internationalization process of such firms (Forsgren, 2016). It should be noted that with the emergence of the IE field in the early 1990s, following the study of McDougall (1989), opportunities have also started to be incorporated into the IB literature (Ardichvili et al., 2003; Johanson and Vahlne, 2009; Vahlne and Johanson, 2013). For example, Johanson and Vahlne (2006, 2009), rethinking their initial model, suggested that the internationalization process could be considered as a process of recognition and development of international opportunities. Nonetheless, market knowledge continued to be emphasized, despite the recognition that other types of knowledge, such as internationalization knowledge, also affect the identification and development of international opportunities and the internationalization process. In fact, Johanson and Vahlne (2009) suggested that their initial internationalization process model overlooked the role of internationalization knowledge, which should be analyzed more thoroughly. 
Internationalization knowledge is not market-specific and is gained from the accumulation of experiences across foreign markets (Eriksson et al., 1997). Research has only recently acknowledged that internationalization knowledge is critical to the unfolding of the internationalization process over time (e.g. Hakanson and Kappen, 2017), which is consistent with this paper's analysis of not only new foreign market entry but also sequential moves that happen after entry. In conclusion, by bringing together insights from both IB and IE research, this paper provides a better understanding of the role of knowledge, particularly market and internationalization knowledge, in the development of international opportunities that leads to the internationalization process. First, knowledge accumulated from prior experience decreases the time needed to identify new opportunities (Bingham and Davis, 2012). Opportunities are recognized more quickly and more often when the firm has more familiarity with the foreign market. Such familiarity increases the alertness of firms and their capacity to perceive growth and expansion opportunities (Autio et al., 2000). Second, the accumulation of knowledge allows the establishment and recombination of resources that support new ways of seeing opportunities (Cassia and Minola, 2012; Chandra et al., 2012). Opportunities may be recognized not because the firm acquired new knowledge per se, but rather due to a recombination of resources that changes the way it perceives opportunities within foreign markets. For example, when a firm enters a foreign market serendipitously, it may perceive the benefits of internationalization and hence reallocate resources to it. Third, it is the continuous use of this accumulated knowledge that contributes to the development of opportunities (Ardichvili et al., 2003). According to organizational learning theory (e.g.
Cohen and Levinthal, 1990), the firm can only integrate knowledge into its processes when and if it continuously uses it. Therefore, international opportunity recognition and development should be assessed as a process that evolves over time (Ardichvili et al., 2003). Finally, the internationalization process, comprising both new foreign market entry and sequential moves that happen after entry, depends not only on the accumulation of knowledge but also on the types of knowledge that interact throughout time (Gao and Pan, 2010). Previous studies have acknowledged several different types of knowledge that affect the internationalization process (e.g. Eriksson et al., 1997). Nonetheless, the effects played by each type of knowledge in the internationalization of the firm over time need to be better distinguished (Arte, 2017; Li et al., 2015; Perkins, 2014). As noted above, this study focuses on market and internationalization knowledge, which are fundamental types of knowledge affecting the internationalization of any firm operating abroad (Figueira-de-Lemos and Hadjikhani, 2014). Other types of knowledge, such as technological knowledge, may be more related to specific industries, markets or categories of firms (e.g. SMEs, INVs) and thus are not analyzed in this study. For example, firms that internationalize from inception (i.e. INVs) rely heavily on technological knowledge developed in-house (Oviatt and McDougall, 1994).

Market knowledge and opportunities

Market knowledge is defined by Johanson and Vahlne (1977) as information about the particularities of servicing a specific foreign market and is usually acquired by the firm as it operates in that market. This type of knowledge is specific to a particular foreign market and, therefore, cannot be easily transferred to other markets serviced by the firm (Eriksson et al., 1997; Johanson and Vahlne, 1977).
Market knowledge was initially perceived as a mechanism for uncertainty reduction that gradually led to more commitment in a particular foreign market (Forsgren, 2002). In this tradition, the accumulation of market knowledge helps overcome the liability of foreignness, which represents the additional costs faced by a firm due to its unfamiliarity with the host country environment (Zaheer, 1995; Johanson and Vahlne, 2009). Such liability may result in restricted access to the local market and local networks, which challenges the firm's expansion in foreign markets (Johnsen and Johnsen, 1999; Zaheer and Mosakowski, 1997). With the development of the IE literature (Jones et al., 2011), market knowledge started to be perceived not only as a mechanism to reduce the liability of foreignness but also as an antecedent of the successful recognition of opportunities (Ardichvili et al., 2003; Arte, 2017). By knowing the culture, institutions, customers, competitors and market conditions of a particular foreign market, the firm becomes more aware of specific international opportunities existing in that market (Zhou, 2007). Previous studies have acknowledged that the uncertainty reduction from accumulated market knowledge may lead to the development of international opportunities (Johanson and Vahlne, 2006). Such studies are interested in the internationalization of the firm under uncertainty in general. This paper, however, emphasizes the fact that internationalizing firms develop international opportunities under uncertainty (Alvarez and Barney, 2019). Rather than following a reactive internationalization process, they proactively engage in the development of opportunities as they build market knowledge in any given foreign market. Hence, this paper suggests that the accumulation of market knowledge over time allows firms to identify and exploit new opportunities in a given foreign market after entry, as detailed below.
The influence of market knowledge on the internationalization process analyzed through an IE lens is a relatively recent strand of research, which focuses on rapid internationalization processes (Jones et al., 2011). Research on international new ventures suggests that market knowledge can be acquired from a variety of sources, such as the firm's own predisposition to innovativeness, risk-taking and proactiveness, its exposure to cultural diversity (Kropp et al., 2008; Zhou, 2007) and the use of focused research conducted by specialized agents (Spence and Crick, 2009). However, Lord and Ranft (2000, p. 576) argue that market knowledge may be hard to obtain because of the lack of "well-developed and widely-available sources of market information" in some foreign markets, such as emerging markets. This paper suggests that market knowledge that leads to the recognition of opportunities is principally formed either in everyday entrepreneurial practices or through social interactions (Mainela et al., 2014). The former relates to prior knowledge, built from increasing commitment in each foreign market (Spence and Crick, 2009). It is experiential and accumulated throughout time. Gao and Pan (2010) show that firms with a longer time of local operations accumulate more market knowledge, speeding up the pace of the internationalization process in that foreign market. Such experiential market knowledge is important to support a firm's activities in foreign markets (Evangelista and Mac, 2016), through both the recognition and development of international opportunities. It increases the proclivity of the firm to identify opportunities, while contributing to its ability to develop them (Nordman and Melen, 2008). The latter relates to networks of relationships (Hohenthal et al., 2014; Vasilchenko and Morrish, 2011), which makes embeddedness in the foreign market a pivotal issue in the recognition and development of international opportunities.
By building relationships with customers, suppliers, agents and other actors embedded in that specific market, the firm may take advantage of opportunities initially identified by those actors (Johanson and Vahlne, 2006, 2009). Therefore, the acquisition and accumulation of market knowledge, either by experience or by social interactions with actors embedded in each foreign market serviced by the firm, is a critical aspect of the internationalization process (Blomstermo et al., 2004). Because market knowledge increases the awareness and alertness of the firm, as well as its familiarity with a specific foreign market, this paper proposes that it will lead to the development of international opportunities that happen after new foreign market entry. Nonetheless, because market knowledge is difficult to transfer across markets without incurring substantial opportunity costs (Eriksson et al., 1997; Johanson and Vahlne, 1977), it will only lead to the development of international opportunities in foreign markets where the firm already operates. Although market knowledge has been extensively associated with internationalization, P1 highlights the role of market knowledge in identifying international opportunities, rather than in triggering the internationalization process via the reduction of uncertainty in the commitment of resources to foreign markets. Hence:

P1. Market knowledge is positively related to the development of international opportunities.

Internationalization knowledge and opportunities

Differing from market knowledge, internationalization knowledge is related to accumulated experience from servicing foreign markets (Eriksson et al., 1997). It is not specific to a particular foreign market (Eriksson et al., 2000), but rather based on general knowledge of how to conduct business abroad (Freeman et al., 2012; Nordman and Melen, 2008).
By drawing on previous experiences in foreign markets, the firm incorporates learned routines that support not only its current internationalization processes (Sapienza et al., 2006) but also its entry and subsequent expansion into new foreign markets not yet serviced by the firm (Hakanson and Kappen, 2017; Prashantham and Young, 2011). General knowledge of servicing foreign markets gives the firm an overall understanding of the internationalization process that can be applied across all foreign markets (Eriksson et al., 1997; Johanson and Vahlne, 1977). Consequently, the firm may benefit not only from homogeneous experiences (i.e. from similar environments) but also from heterogeneous experiences (i.e. from very different environments) in foreign markets (Kim et al., 2012), since this knowledge can be more freely transferred across its internationalization projects (Child and Hsieh, 2014). As with market knowledge, internationalization knowledge was initially perceived as a mechanism for uncertainty reduction (e.g. Kim et al., 2012). Hitt et al. (2006), for example, suggest that by accumulating internationalization knowledge, the firm facilitates its internationalization process via the creation of social capital and useful resources, reducing its risk. Nonetheless, internationalization knowledge encourages firms to enter new markets and to use different servicing modes (Casillas and Acedo, 2013) not only through uncertainty reduction but also through the recognition and development of new international opportunities. Prashantham and Young (2011, p. 283) suggest that internationalization knowledge is a "versatile" and "country-neutral" knowledge that exposes the firm to unexpected international opportunities in general, which leads to expansion in foreign markets. It also exposes the firm to new knowledge and resources that increase its disposition to explore new and different opportunities (Nachum and Song, 2011).
The accumulation of international experience contributes to the firm's alertness and willingness to develop new international opportunities in general, even in foreign markets where the firm has no specific market knowledge. As with market knowledge, internationalization knowledge will be beneficial in markets already serviced by the firm. Even when the firm is already well established in those markets, it can use knowledge acquired elsewhere to better manage its relationships (Johanson and Vahlne, 2009) and expand its operations through the development of new opportunities in those foreign markets. This happens because the firm develops a "mindset" to search for opportunities more proactively, planning the internationalization process rather than just reacting to sporadic opportunities (Freeman et al., 2012). Accordingly, when the firm accumulates internationalization knowledge, it becomes better prepared to take advantage of opportunities as well as to manage them. Moreover, internationalization knowledge will be equally important to support entry into new foreign markets. Prashantham and Young (2011, p. 283) suggest that internationalization knowledge leads to practices such as "country market screening, and evaluating strategic partners and distributors" that lead to new foreign market entries. Accordingly, Vasilchenko and Morrish (2011) showed that by building on previous experiences firms may develop a network of relationships abroad, engaging in cooperative behavior that ultimately leads to successful entry into new foreign markets. In sum, because internationalization knowledge reflects a general knowledge of conducting business abroad (Freeman et al., 2012; Nordman and Melen, 2008), it can be used not only in foreign markets where the firm operates but also in new foreign markets.
Therefore, internationalization knowledge will lead to the development of international opportunities both in foreign markets already serviced by the firm and in new foreign markets not yet serviced by the firm. Hence: P2. Internationalization knowledge is positively related to the development of international opportunities. As previously discussed, this paper looks at market knowledge and internationalization knowledge as antecedents of the development of international opportunities, which leads to the internationalization process, comprising both new foreign market entry and sequential moves that happen after entry. Foreign market entry is a central element of internationalization research, and therefore has been extensively studied (Shaver, 2013). Nevertheless, the recognition of international opportunities that leads to entry into new foreign markets is quite recent in IE (Jones et al., 2011). Studies on IE that connect opportunities and internationalization focus on international new ventures following a rapid internationalization process (Jones et al., 2011). However, Dimitratos et al. (2016, p. 1220) suggest that research in the IE literature "[...] can shift the attention from the study of INVs to that of all opportunity-driven types of internationalized firms." While previous research has brought attention to the role of opportunities as drivers of the internationalization process of not only international new ventures but also firms in general (e.g. Johanson and Vahlne, 2006), such a relationship is not explicitly stated. Chandra et al. (2012, p. 75), for example, state that "Other authors note, and we agree, that 'the opportunity side of the internationalization process is not very well developed' (Johanson and Vahlne, 2006, p. 167)".
This side is not well developed because, although studies acknowledge the role of opportunities in the internationalization process, uncertainty reduction is still perceived as the main driver of the internationalization process (Figueira-de-Lemos and Hadjikhani, 2014). Using an IE lens to analyze foreign market entry, researchers have emphasized the mechanisms through which firms recognize opportunities in new foreign markets. Chandra et al. (2009), for example, suggest that firms exhibiting a strong entrepreneurial orientation will be more likely to recognize first-time opportunities in foreign markets. Accordingly, Sapienza et al. (2006) argue that the firms' proactivity in the pursuit of international opportunities will lead to new foreign market entry. Such entrepreneurial orientation or proactivity is developed over time, being considered a "long-term behavioural phenomenon" (Jones and Coviello, 2005, p. 297). In this vein, internationalizing firms may act entrepreneurially, actively identifying and developing opportunities in new foreign markets. This study, therefore, suggests that by identifying and developing international opportunities, firms are more likely to actively internationalize. A natural pathway for pursuing internationalization is through entering new foreign markets. Firms that develop international opportunities are more prepared and prone to enter new foreign markets. In other words, by developing more international opportunities, firms will enter more new foreign markets. Hence: P3a. The development of international opportunities is positively related to entry into new foreign markets. Nevertheless, the internationalization process comprises not only new foreign market entry but also sequential moves that may happen over time in markets already serviced by the firm (Casillas and Acedo, 2013; Gao and Pan, 2010).
The entrepreneurial internationalization process relates to the firm's international pathway (Jones et al., 2011), and sequential moves, as much as the entry choice, shape such a process (Casillas and Acedo, 2013; Gao and Pan, 2010). The bulk of the literature, however, emphasizes entry modes, often ignoring the dynamics of the internationalization process by not considering the sequential moves in foreign markets where the firm already operates (for a critique, see Shaver, 2013; Welch and Paavilainen-Mantymaki, 2014; for exceptions, see Mtigwe, 2005; Suarez-Ortega and Alamo-Vera, 2005). Sequential moves refer to the activities that follow new foreign market entry (e.g. continued sales) and are associated with within-market expansion or eventual withdrawal (Gao and Pan, 2010). Considering sequential moves is important because the internationalization process is not static, and its evolution after foreign market entry is equally important for a better understanding of such a process (Benito et al., 2009; Gao and Pan, 2010). Sequential moves may follow two paths - either mode continuation or modal shift (Benito and Welch, 1994; Benito et al., 2009). In the former, there is no modal change and the firm operates in a given foreign market using its initial entry mode (Benito and Welch, 1994; Swoboda et al., 2015). It does so because the existing servicing mode is considered adequate for conducting business in that foreign market. Switching costs may also be perceived as high, deterring the firm from switching the initial mode; or inertia may play a role, that is, the firm simply does not change the initial mode due to inertial forces (Benito et al., 2009). This paper suggests that mode continuation is associated with the development of opportunities below a threshold, an argument developed in the following paragraphs.
Mode continuation, hence, is a confirmation of the firm's initial choice of servicing mode, independently of the type of servicing mode initially chosen (Benito et al., 2009). In the latter, the firm shifts its servicing mode to adjust its operations within a given foreign market (e.g. by shifting from a sales agent to a wholly-owned subsidiary). By shifting the mode of operation, firms may be more responsive to foreign market needs (Benito et al., 2009). Modal shifts, nonetheless, do not necessarily imply higher commitment in a given foreign market. Firms can also shift the entry mode to adjust operations when the servicing mode currently being used is not appropriate (e.g. by shifting from a sales agent to exporting). Consequently, this study suggests that the development of international opportunities will be related to sequential moves. More specifically, firms will invest resources, time, and effort to identify and develop opportunities not only in new foreign markets but also in markets where they already operate. The development of international opportunities related to sequential moves allows firms to maintain and even strengthen their position in any given foreign market. In this vein, sequential moves are the pathway after new foreign market entry. Initially, however, such sequential moves will happen only via mode continuation. By accumulating knowledge and building relationships in foreign markets over time, via the development of new opportunities in those markets, the firm will be more inclined to continue servicing them through the continued use of the initial entry mode. For example, a firm may enter a given foreign market via direct exporting to one customer. Over time, because the firm is more familiar with that foreign market and with the exporting process in general, it expands its customer base there by exporting to new customers.
This is consistent with the general conceptualization of international opportunities as the development of new customers (Reuber et al., 2018). The firm, thus, developed new international opportunities in that foreign market but did not change the initial servicing mode (i.e. direct exporting), which corresponds to mode continuation. Sequential moves via mode continuation are an easier pathway of the internationalization process after new foreign market entry, since they do not require leaps of resources and knowledge. Formally stated: P3b. The development of international opportunities is positively related to sequential moves using mode continuation in foreign markets serviced by the firm. In the example above, the firm chose to keep using the same servicing mode of entry even after it developed more international opportunities in the given foreign market. This may be because the firm has not yet accumulated enough knowledge to develop a sufficient number of international opportunities. Shifts in the servicing mode require more combined knowledge and understanding of the market needs. The development of opportunities per se, however, will not lead to shifts in the entry mode chosen by the firm to enter a given foreign market. It is only after a threshold is achieved that the development of international opportunities will indeed allow for a change in the firm's current internationalization process that requires riskier decisions, such as shifting the entry mode (Benito et al., 2005; Clark et al., 1997). The idea of a threshold is in line with the use of self-organized criticality (SOC) in organizations advanced by Andriani and McKelvey (2009, p. 1061), in which organizational phenomena do not always follow a linear distribution - instead, they evolve "toward a critical state." When such a state is achieved, any additional related effort or interaction brings change.
The authors argue that the IB "arena is especially vulnerable to SOC effects" (Andriani and McKelvey, 2007, p. 1215), in accordance with the argument presented in this study. Accordingly, prior studies have suggested that servicing modes are difficult to change, whether they represent higher or lower commitment (e.g. Anderson and Coughlan, 1987), which has been corroborated by empirical evidence (e.g. Pedersen et al., 2002). Nonetheless, after the firm builds enough knowledge, translated into a consistent number of international opportunities developed in a given foreign market, it is able to overcome the difficulties associated with shifting the entry mode (e.g. switching costs and inertia). Hence, achieving such a threshold will lead to sequential moves where the firm changes its commitment to better serve the needs of a given foreign market by shifting the servicing mode. Formally stated: P3c. The development of international opportunities beyond a threshold is positively related to modal shifts in foreign markets serviced by the firm. In the example preceding P3b, the firm entered a given foreign market using direct exporting as the entry mode and continued using direct exporting. If the relationship between the development of international opportunities and the internationalization process were linear, the firm would have switched or at least adjusted the servicing mode. Instead, it continues using the same servicing mode, even after it expands its customer base and increases the volume of foreign sales. Nonetheless, imagine that sales to the firm's current customers keep increasing and more new customers are established. The firm evolves toward a critical state where exporting may no longer be adequate or the satisficing solution. Having achieved this threshold, it switches its servicing mode to, for example, a sales subsidiary.
This example shows that the relationship between the development of international opportunities and modal shifts is not linear. The firm needed to reach a threshold in order to change its servicing mode. Nonetheless, as previously stated, modal shifts do not necessarily imply higher commitment in a given foreign market. Continuing with the example above, the firm established a sales subsidiary and continued developing opportunities in the foreign market. After developing a certain number of opportunities without changing or adjusting the servicing mode (i.e. sales subsidiary), however, the firm learnt it had made a bad decision since, for example, the marginal costs of the sales subsidiary outweighed its marginal returns. Again, the firm evolved toward a critical state where a sales subsidiary was no longer adequate. After reaching this threshold, the firm decided to divest the sales subsidiary and use a local sales representative instead. In P1 and P2, market and internationalization knowledge are analyzed independently, to show how each type of knowledge relates to the development of international opportunities. In P3a and P3b, the development of international opportunities that follows the accumulation of market and internationalization knowledge is connected to two aspects of the internationalization process - new foreign market entry and sequential moves related to mode continuation, respectively. In P3c, the idea of a threshold that affects sequential moves related to modal shifts is introduced. In addition, this paper proposes that both types of knowledge may combine to form the international knowledge stockpile of the firm. It is assumed that market and internationalization knowledge interact through a mutual influence process, in which the former contributes to the development of the latter and vice versa. For example, Barkema et al. (1996) suggested that learning effects in a foreign market (i.e.
market knowledge) were associated with learning from internationalization processes in different markets (i.e. internationalization knowledge), even though the degree of learning differed depending on similarities between those markets. Accordingly, knowledge in different markets accumulates over time to become a firm-specific knowledge that can be relevant to foreign markets serviced by the firm or foreign markets it intends to service (Eriksson et al., 1997). Thus, the international knowledge stockpile as suggested here is a heterogeneous reservoir (Hutzschenreuter and Matt, 2017). It addresses two apparently contradictory aspects of knowledge development - diversity (i.e. market knowledge) and transferability (i.e. internationalization knowledge). By doing so, it increases the probability of growth and survival in foreign markets (Kogut and Zander, 1992). It may also provide the firm with differential efficiencies such as market diversification and innovation (Foss, 1996). The international knowledge stockpile can also be associated with pace in internationalization processes. First, it may enable the firm to respond to market turbulence more rapidly (Miller, 2002) and, most importantly, it may accelerate the firm's internationalization (Casillas et al., 2015), particularly in terms of the speed of modal shifts (Chetty et al., 2014). Considering that the international knowledge stockpile constitutes an important asset for internationalizing firms, this paper suggests that its effect is different from the effect of each type of knowledge that comprises it - market and internationalization knowledge - when analyzed separately. This is because the international knowledge stockpile enables the firm to capitalize on international opportunities, thus strengthening the relationship between international opportunities and the internationalization process.
Therefore, this paper suggests that the international knowledge stockpile will moderate the relationship between the development of international opportunities and the internationalization process. It builds on the idea that the knowledge reservoir of the firm comprises both in-use and idle knowledge (Penrose, 1959), that is, utilized and underutilized knowledge. In this sense, the international knowledge stockpile is a mix of market and internationalization knowledge in-use, which is knowledge the firm uses to advance its internationalization process, and idle market and internationalization knowledge, which is knowledge the firm can explore in near-future internationalization activities. In other words, the firm accumulates both market and internationalization knowledge over time. When those two types of knowledge are combined in a productive way, a baseline is achieved - the firm establishes its international knowledge stockpile - strengthening the relationship between the development of international opportunities and the internationalization process. "Durable and repetitive interactions" in foreign markets are the main drivers of this process (Eriksson et al., 1997, p. 354). First, whereas the development of international opportunities will lead to more new foreign market entries (P3a), the international knowledge stockpile will strengthen this relationship (P4a). The international knowledge stockpile may lead the firm to enter more new foreign markets at shorter intervals. This is because the firm possesses knowledge that results from different environments and knows how to transfer such knowledge to new foreign markets. Moreover, the international knowledge stockpile may also enable the firm not only to enter more markets at shorter intervals, but also to enter multiple markets simultaneously (Wang and Suh, 2009).
The literature acknowledges that knowledge supports the development of multiple internationalization processes within a firm, that is, entering and servicing multiple foreign markets at the same time (Welch and Paavilainen-Mantymaki, 2014). Hence, instead of entering foreign markets sequentially, as posited by Johanson and Vahlne (1977), the international knowledge stockpile enables the firm to capitalize on opportunities in multiple foreign markets at the same time. This is possible because the firm possesses a heterogeneous reservoir of in-use and idle knowledge, which can be used to address multiple activities - in this case, closely spaced or simultaneous entries into different foreign markets. Formally stated: P4a. The relationship between the development of international opportunities and new foreign market entry is moderated by the firm's international knowledge stockpile. Second, whereas the development of international opportunities will lead to more sequential moves via mode continuation in any given foreign market serviced by the firm (P3b), the international knowledge stockpile will strengthen this relationship (P4b). The accumulation of knowledge has been associated with the internationalization process after new foreign market entry (Dimitratos et al., 2016). This means that such a combination of knowledge reinforces the firm's confidence that it has chosen the right entry mode to service that specific foreign market, because it understands not only the needs of that market but also its options and their outcomes when using different foreign market entry modes. Such reinforcement will lead to a proactive search for opportunities that do not require a change in the servicing mode but that still strengthen the internationalization process. In addition, the international knowledge stockpile enables the firm to better assess the switching costs associated with modal shifts before attempting to switch the servicing mode.
If the firm believes that the switching costs are too high, it will avoid switching the servicing mode. In this context, the international knowledge stockpile allows the firm to assess such switching costs more efficiently. This is because the international knowledge stockpile is associated with heuristics and routines on how to measure and monitor such costs, built from previous experiences. The firm, because of its international knowledge stockpile, may also decide to continue using a given servicing mode while adjusting such mode (e.g. switching sales agents). Hence: P4b. The relationship between the development of international opportunities and sequential moves using mode continuation in foreign markets serviced by the firm is moderated by the firm's international knowledge stockpile. Third, only after a threshold determined by a certain number of developed international opportunities is achieved (Barkema and Drogendijk, 2007) will the firm engage in modal shifts (P3c). This paper also suggests that the international knowledge stockpile of the firm will strengthen this relationship (P4c). As discussed above, modal shifts reflect changes in the commitment toward the internationalization process, since the firm shifts its entry mode to better serve the needs of a given foreign market (Benito et al., 2009). After the international knowledge stockpile is built, the firm develops international opportunities and reaches the threshold faster, which allows it to shift the servicing mode. This idea is coupled with the fact that the firm, because of its international knowledge stockpile, is more experienced in switching its servicing mode. In other words, there is a trigger for modal shifts after a certain threshold is achieved. And there is accumulated experience after this threshold is achieved - the firm knows about different servicing modes and how to better use them. Hence, the firm is able to reduce searching costs (i.e.
identifying new servicing modes) and implementation costs (i.e. shifting the servicing mode per se). This enables the firm to engage in modal shifts faster and more efficiently (Casillas et al., 2015). In other words, the threshold, in terms of the development of international opportunities, will be achieved at shorter intervals. Hence, if the time to reach the threshold is reduced due to the international knowledge stockpile, modal shifts will not only happen more frequently but also at shorter intervals. Modal shifts require not only a combination of market and internationalization knowledge but also an additional effort from the firm to proactively develop international opportunities and change the course of its internationalization process, as determined by the threshold. Hence, it is only after this threshold is achieved that the firm's international knowledge stockpile will moderate the relationship between international opportunities and modal shifts. Once the threshold is reached and the international knowledge stockpile is in place, the firm will engage in more modal shifts in different foreign markets and/or will shift the servicing modes in a given foreign market faster. Formally stated: P4c. The relationship between the development of international opportunities beyond a threshold and modal shifts is moderated by the firm's international knowledge stockpile. In sum, the combination of market and internationalization knowledge configures the knowledge stockpile of the firm, which reinforces the relationship between the development of international opportunities and the internationalization process by positively moderating such relationship. Thus, over and above the direct relationship between knowledge and international opportunities, this paper proposes an indirect relationship between knowledge, international opportunities, and the internationalization process, comprising both new foreign market entry and sequential moves that happen after entry.
Doing so is important because, while it recognizes that different types of knowledge have different effects on the internationalization process, it considers that these types also combine to form knowledge that is firm-specific - the firm's international knowledge stockpile. Moreover, the international knowledge stockpile connects the three constructs advanced by the paper - knowledge, international opportunities, and the internationalization process. P1 and P2 connect knowledge to international opportunities; the set of P3a-P3c connects international opportunities and the internationalization process; the international knowledge stockpile completes the model, offering a refined understanding of internationalization processes as a function of both knowledge (i.e. market and internationalization) and the development of international opportunities. The conceptual model is presented in Figure 1. It shows that both market and internationalization knowledge are positively related to international opportunities, which, in turn, are related to the internationalization process. From the IB field, the model borrows the notion of market knowledge and its relation to the overall internationalization process. From the IE field, it borrows the notion of international opportunities as an antecedent of the internationalization process. From a combination of both the IB and IE fields, it emphasizes the role of internationalization knowledge and the importance of looking at not only new foreign market entry but also sequential moves that happen after entry. The model introduces two main points of novelty: first, the path to sequential moves using mode continuation is different from that of sequential moves comprising modal shifts (i.e. the idea of a threshold); second, market and internationalization knowledge combine to form the knowledge stockpile of the firm, which moderates the relationship between international opportunities and the internationalization process.
By doing so, it offers a finer-grained view of the internationalization process, which can be entrepreneurial (i.e. related to the identification and development of opportunities) even for firms in general. By combining the IB and IE literatures, this paper presents a novel framework connecting three constructs that are typically analyzed separately in the literature - knowledge, international opportunities, and the internationalization process. Grounded in the idea that the internationalization process is the result of the development of international opportunities, this study looks at the types of knowledge that shape such opportunities as well as how those types interact to moderate the relationship between international opportunities and the internationalization process. On the one hand, research on IE often connects knowledge to the development of international opportunities (e.g. Chandra et al., 2009), but without highlighting how such development affects the internationalization process of the firm, comprising both new foreign market entry and sequential moves in all foreign markets where the firm operates. This literature also focuses on international new ventures, thus disregarding other types of internationalized firms (Dimitratos et al., 2016). On the other hand, research on IB connects knowledge directly to internationalization, usually focusing on new foreign market entry (e.g. Shaver, 2013). It also posits that the main driver of the internationalization process is uncertainty reduction via knowledge development (Figueira-de-Lemos et al., 2011). This study explicitly connects knowledge, international opportunities, and the internationalization process during and after new foreign market entry and provides testable propositions that help advance and bring together both the IB and IE literatures.
First, this study suggests that both market and internationalization knowledge will be positively related to the development of international opportunities, emphasizing the idea that the internationalization process may also be triggered and driven by the proactive identification and exploitation of opportunities, rather than mainly by uncertainty reduction. While uncertainty reduction has been important to comprehend internationalization (Figueira-de-Lemos et al., 2011), understanding how the development of international opportunities under uncertainty affects internationalization processes can better inform research on the internationalization of the firm (see Alvarez and Barney, 2019, for a discussion on uncertainty and opportunities). Second, it suggests that the development of international opportunities will be positively related to both new foreign market entry and sequential moves in foreign markets already serviced by the firm, thus showing that opportunities matter in earlier as well as in later epochs of the internationalization of the firm. Few studies do so (e.g. Benito et al., 2009). The bulk of the research on internationalization equates foreign market expansion with entry into a single or a few foreign markets (Shaver, 2013). Finally, this study explains how the relationship between knowledge and international opportunities leading to foreign market entries and sequential moves where the servicing mode does not change (i.e. sequential moves using mode continuation) is different from that leading to sequential moves where the servicing mode changes (i.e. modal shifts). Modal shifts will occur only after a certain threshold related to the number of international opportunities developed in foreign markets is achieved.
It further suggests that the relationship between the development of international opportunities and both new foreign market entry and sequential moves, either via mode continuation or modal shifts, will be moderated by the international knowledge stockpile of the firm. By introducing the idea of a threshold and the concept of international knowledge stockpile, this paper refines the view that the effects of the accumulation of knowledge are gradual and incremental throughout the internationalization process.
Theoretical contributions
This paper offers the following contributions. First, according to Forsgren (2016, p. 2), "incorporating [...] entrepreneurship into the [internationalization] model needs more consideration." The IB literature, which focuses on the internationalization process, and the IE literature, which focuses on the development of international opportunities, have evolved quite independently over time. By incorporating IE into the IB literature, this paper shifts the focus of the internationalization process from uncertainty reduction to an entrepreneurial process of developing international opportunities. This paper argues that the emphasis on the development of international opportunities is the right pathway to bring those literatures together, as suggested by Johanson and Vahlne (2009). This paper also shows that the entrepreneurial process of opportunity recognition and development is important for any internationalized firm. This extends the IE literature, suggesting that the understanding of internationalization processes of firms other than international new ventures will benefit from incorporating the idea that such processes result from the development of international opportunities (Dimitratos et al., 2016). Second, this study also answers recent calls to assess the internationalization process using a dynamic approach (e.g. Welch and Paavilainen-Mantymaki, 2014).
Sequential moves that happen after entry shape the long-term growth and hence the success or failure of internationalization processes (Casillas and Acedo, 2013). Nonetheless, the bulk of the IB literature emphasizes entry in foreign markets, ignoring the role of time and of the subsequent steps followed by a firm that indeed lead to its expansion in foreign markets (Gao and Pan, 2010). This study not only explains the antecedents of both new foreign market entry and sequential moves, but also disaggregates the latter into mode continuation and modal shift. Most importantly, by doing so it allows for different effects of the development of international opportunities on new foreign market entry and on each sequential move that happens after it. If the internationalization process were conflated with new foreign market entry, one would assume that the effects of the development of international opportunities during entry are the same for sequential moves that happen after entry. Third, this study contributes to the knowledge-based view of internationalization processes (e.g. Casillas et al., 2009) by showing that knowledge in internationalization processes comprises both market and internationalization knowledge. This means that knowledge is not homogeneous in the internationalization process. Following the suggestion of Li et al. (2015, p. 919), this paper assumes that different types of knowledge may affect international opportunities in different ways. This differs from previous literature, which looks at international knowledge as a whole and connects it directly to the internationalization process. This paper argues that market knowledge will be positively related to the development of international opportunities by increasing the firm's familiarity with local practices and businesses in a specific foreign market.
Moreover, internationalization knowledge will also be positively related to the development of international opportunities, but by increasing the firm's ability to do business abroad. This paper also introduces the concept of the firm's international knowledge stockpile. By showing that the combination of market and internationalization knowledge develops into the international knowledge stockpile of the firm, and that this international knowledge stockpile will influence the relationship between international opportunities and the internationalization process, this paper contributes to showing how knowledge interacts to shape that process. Extant literature assumes that the accumulation of experience abroad starts shaping the internationalization process from its very beginning. This study, however, focuses on the moderating effect of the accumulation of knowledge on the internationalization process over time. Fourth, this study introduces the idea of a threshold. It suggests that, before a threshold related to the number of developed international opportunities is achieved, the knowledge the firm possesses is not yet sufficient to shape modal shifts in foreign markets serviced by the firm. In other words, the accumulation of knowledge starts after the firm's first operations abroad, which may be due to emergent chance opportunities, but it is only after a certain number of international opportunities are developed that the internationalization process starts to be shaped by modal shifts (e.g. from sporadic sales to a planned internationalization process, either increasing or decreasing the level of commitment in a given foreign market). This is particularly important in that it allows for the possibility of different pathways of internationalization (Mathews and Zander, 2007), in which the firm may follow different trajectories in different foreign markets.
Such trajectories are not always linear and are often interdependent, suggesting that the internationalization process may be more complex than previously established in the IB literature.
Managerial implications
The propositions developed in this study offer insights for firms that wish to internationalize or that have already started internationalizing. First, firms may proactively search for opportunities abroad, even when they are uncertain about their ability or willingness to internationalize. This is because the internationalization process will only be systematic after a certain number of developed international opportunities is achieved. Initial opportunities may be developed via trial and error, without compromising the firm's internationalization process as a whole. In addition, acknowledging that the effect of the learning process associated with the internationalization process is not always gradual and incremental allows firms to better plan their internationalization processes to fit their strategic goals. For instance, firms can use internationalization knowledge acquired in servicing a specific set of foreign markets to purposefully develop international opportunities that will enable them to shift their operation mode in a different foreign market. Doing so allows them to shape their internationalization processes in terms of their resource commitment (Acedo and Casillas, 2007; Casillas and Acedo, 2013; Gao and Pan, 2010). Finally, this study shows that firms do not necessarily need to build their internationalization process gradually and reactively. The accumulation of both market and internationalization knowledge allows firms to enter several foreign markets and to keep servicing foreign markets where they already operate. But because a more systematic internationalization process via modal shifts only happens after a threshold (i.e. a certain number of developed international opportunities) is reached, the sooner the firm reaches it, the better.
Doing so usually involves a willingness to take risk by entering foreign markets where the firm has no or little market knowledge but is supported by structural and management processes developed through the accumulation of internationalization knowledge. In sum, managers should be less cautious and proactively develop international opportunities that enable their firms to enter and evolve in foreign markets. The model developed in this study suggests that managers of firms that wish to internationalize should also proactively transfer knowledge across different markets and develop routines and heuristics that allow such knowledge to be integrated into the firm. Such a process of accumulation, transfer and integration of knowledge should be continuous to facilitate the development of international opportunities that allows for the achievement of the threshold beyond which the internationalization process becomes more systematic (i.e. via modal shifts).
Limitations and further research
Because this is a conceptual paper, the propositions presented in this study have not been empirically tested. The first direction for further research is, hence, to test them quantitatively using panel data, since longitudinal data are needed to track the internationalization process over time. A second limitation is that only two types of knowledge are analyzed: market and internationalization knowledge. However, research suggests that other types of knowledge may also affect the internationalization process, such as technological knowledge (Fletcher and Harris, 2012). Previous studies also suggest that market knowledge should be differentiated from institutional knowledge, which is not easily acquired (Eriksson et al., 1997). By incorporating other types of knowledge and differentiating market knowledge from institutional knowledge, future research may inform the relationship between different types of knowledge and internationalization processes.
Doing so will contribute to recent research suggesting that the internationalization process is contingent on several different types of knowledge rather than on market knowledge only, which is the focus of the bulk of the research on the internationalization process of the firm. A third limitation is that the model looks at knowledge as an antecedent of the development of international opportunities and the internationalization process but does not capture the recursive relationship that should exist among the three constructs. As firms internationalize, they learn how to better develop international opportunities and accumulate both market and internationalization knowledge. Likewise, the model does not capture withdrawal from foreign markets or de-internationalization (Benito and Welch, 1997). The latter, however, can also be the outcome of knowledge accumulation over time. Future research opportunities that inform how knowledge affects de-internationalization are numerous[1].
The proposed model shows that market and internationalization knowledge combine to form the international knowledge stockpile of the firm, which moderates the relationship between the development of international opportunities and the internationalization process, comprising not only new foreign market entry but also sequential moves that happen after entry using either mode continuation or modal shift. Moreover, it shows that the development of opportunities only leads to modal shifts after a certain threshold is achieved.
[SECTION: Value] The process of knowledge acquisition and development as a mechanism of either uncertainty reduction or opportunity development is critical in understanding internationalization processes (Casillas et al., 2015; Fletcher et al., 2013). Initially, the accumulation of knowledge was perceived as a mechanism for uncertainty reduction, namely, knowledge gained from experience incrementally reduces the lack of information about a given foreign market (Johanson and Vahlne, 1977, 2009). More recently, the role of knowledge in the internationalization process has been associated with international opportunity development (Chandra, 2017), namely, the exploitation of international opportunities that leads to entry in new foreign markets and to new businesses in foreign markets serviced by the firm (Chandra et al., 2009). This conceptual paper subscribes to the latter viewpoint by looking at the internationalization process as an entrepreneurial process related to the development of international opportunities. More specifically, this paper brings together the internationalization theory broadly advanced by the international business (IB) field (e.g. Johanson and Vahlne, 1977) with the international entrepreneurship (IE) literature (Casillas et al., 2015; Forsgren, 2016). In doing so, it provides a model connecting three constructs - knowledge, international opportunities and the internationalization process - that are often analyzed separately. Particularly concerning knowledge, the proposed model separates the effects of both market and internationalization knowledge over time. While the bulk of the research has emphasized the effect of market knowledge, which is dependent on a specific foreign market, the effect of general internationalization knowledge has been overlooked (Hakanson and Kappen, 2017). And different types of knowledge should affect internationalization decisions differently (Evangelista and Mac, 2016). 
Moreover, the model assesses the internationalization process using a dynamic approach (Welch and Paavilainen-Mantymaki, 2014) that takes into consideration sequential moves in foreign markets (Casillas and Acedo, 2013). It comprises both new foreign market entry and sequential moves that happen after entry (i.e. either via mode continuation or modal shifts). It also introduces the idea of a threshold beyond which the development of international opportunities following the accumulation and combination of knowledge affects the internationalization process. Finally, this paper contributes to the knowledge-based view in internationalization processes (e.g. Casillas et al., 2009; Child and Hsieh, 2014) by showing that knowledge is not homogeneous and how it interacts to influence internationalization processes. In particular, this study posits that the combination of knowledge moderates the relationship between international opportunities and the internationalization process. Currently, the bulk of the literature suggests that the accumulation of knowledge gradually shapes the internationalization process (e.g. Hadjikhani, 1997). This study refines this view by arguing that the accumulation of knowledge shapes the identification and development of international opportunities, which will affect the internationalization process in distinct ways, before and after a threshold is achieved. The bulk of the IB research on internationalization has looked at knowledge as a mechanism for uncertainty reduction (Petersen et al., 2008). In the internationalization process of the firm conceptualized by Johanson and Vahlne (1977), knowledge is gained incrementally from experience in a foreign market (i.e. experiential knowledge). When a firm starts servicing a foreign market, it faces challenges associated with the new environment, such as cultural (Kogut and Singh, 1988) and behavioral differences (Sanchez-Peinado and Pla-Barber, 2006), which bring uncertainty. 
By accumulating experiential knowledge, the firm learns how to operate in the new environment and to deal with those differences, reducing such uncertainty (Johanson and Vahlne, 1977). In this context, the most critical type of knowledge is experiential market knowledge (e.g. Figueira-de-Lemos et al., 2011), that is, knowledge gained from prior experience in a given foreign market (e.g. Chandra et al., 2012; Johanson and Vahlne, 1977). Thus, market knowledge is a central element of the internationalization process described by Johanson and Vahlne (1977). Conversely, research on IE has looked at knowledge as one of several antecedents affecting the internationalization of the firm (Oviatt and McDougall, 1994). However, knowledge is not related to uncertainty reduction but to the identification and development of international opportunities (e.g. Ardichvili et al., 2003). International opportunities are here conceptualized as opportunities recognized by firms and connected actors to establish or expand their business outside of their home market. The recognition of international opportunities as the trigger of the internationalization process and how knowledge affects it, however, have not received systematic attention from the literature (Chandra et al., 2009; Forsgren, 2016). For instance, because studies on IE focus on international new ventures, they highlight the role of different types of knowledge, such as technological knowledge (Oviatt and McDougall, 1994), which do not necessarily matter for all firms seeking to internationalize. Moreover, because IE studies analyzing internationalization processes emphasize international new ventures, they focus on rapid internationalization processes. Studies in this stream of research, termed dynamic internationalization, "identify patterns considered anomalous to traditional internationalization" (Jones et al., 2011, p. 638).
Nonetheless, opportunity recognition and development that lead to the internationalization of the firm, comprising both new foreign market entry and sequential moves that happen after entry, may also be related to firms in general (e.g. manufacturing firms), which follow internationalization processes as defined by Johanson and Vahlne (1977). Such firms respond to smaller or less significant opportunities at first, which might emerge years after their foundation (Morschett et al., 2010). Throughout time, however, they increase their capabilities and resources to capture a larger number of opportunities or more significant ones (Dimitratos and Jones, 2005). It is of particular interest to examine how knowledge acquired and accumulated by the firm throughout time determines the recognition and development of international opportunities (Lamb et al., 2011; Sanz-Velasco, 2006) and consequently, the internationalization process of such firms (Forsgren, 2016). It should be noted that with the emergence of the IE field in the early 1990s, following the study of McDougall (1989), opportunities have also started to be incorporated into the IB literature (Ardichvili et al., 2003; Johanson and Vahlne, 2009; Vahlne and Johanson, 2013). For example, Johanson and Vahlne (2006, 2009), rethinking their initial model, suggested that the internationalization process could be considered as a process of recognition and development of international opportunities. Nonetheless, market knowledge continued to be emphasized, despite the recognition that other types of knowledge, such as internationalization knowledge, also affect the identification and development of international opportunities and the internationalization process. In fact, Johanson and Vahlne (2009) suggested that their initial internationalization process model overlooked the role of internationalization knowledge, which should be analyzed more thoroughly. 
Internationalization knowledge is not market-specific and is gained from the accumulation of experiences across foreign markets (Eriksson et al., 1997). Research has only recently acknowledged that internationalization knowledge is critical to the unfolding of the internationalization process over time (e.g. Hakanson and Kappen, 2017), which is consistent with this paper's analysis of not only new foreign market entry but also sequential moves that happen after entry. In conclusion, by bringing together insights from both IB and IE research, this paper provides a better understanding of the role of knowledge, particularly market and internationalization knowledge, in the development of international opportunities that leads to the internationalization process. First, knowledge accumulated from prior experience decreases the time needed to identify new opportunities (Bingham and Davis, 2012). Opportunities are recognized more quickly and more often when the firm has more familiarity with the foreign market. Such familiarity increases the alertness of firms and their capacity to perceive growth and expansion opportunities (Autio et al., 2000). Second, the accumulation of knowledge allows the establishment and recombination of resources that support new ways of seeing opportunities (Cassia and Minola, 2012; Chandra et al., 2012). Opportunities may be recognized not because the firm acquired new knowledge per se, but rather due to a recombination of resources that changes the way it perceives opportunities within foreign markets. For example, when a firm enters a foreign market serendipitously, it may perceive the benefits of internationalization, and hence reallocate resources to it. Third, it is the continuous use of this accumulated knowledge that contributes to the development of opportunities (Ardichvili et al., 2003). According to organizational learning theory (e.g.
Cohen and Levinthal, 1990), the firm can only integrate knowledge into its processes when and if it continuously uses it. Therefore, international opportunity recognition and development should be assessed as a process that evolves over time (Ardichvili et al., 2003). Finally, the internationalization process, comprising both new foreign market entry and sequential moves that happen after entry, depends not only on the accumulation of knowledge but also on the types of knowledge that interact throughout time (Gao and Pan, 2010). Previous studies have acknowledged several different types of knowledge that affect the internationalization process (e.g. Eriksson et al., 1997). Nonetheless, the effects played by each type of knowledge in the internationalization of the firm over time need to be better distinguished (Arte, 2017; Li et al., 2015; Perkins, 2014). As aforementioned, this study focuses on market and internationalization knowledge, which are fundamental types of knowledge affecting the internationalization of any firm operating abroad (Figueira-de-Lemos and Hadjikhani, 2014). Other types of knowledge, such as technological, may be more related to specific industries, markets or categories of firms (e.g. SMEs, INVs), and thus are not analyzed in this study. For example, firms that internationalize from inception (i.e. INVs) rely heavily on technological knowledge developed in-house (Oviatt and McDougall, 1994).
Market knowledge and opportunities
Market knowledge is defined by Johanson and Vahlne (1977) as information about the particularities of servicing a specific foreign market and is usually acquired by the firm as it operates in that market. This type of knowledge is specific to a particular foreign market and, therefore, cannot be easily transferred to other markets serviced by the firm (Eriksson et al., 1997; Johanson and Vahlne, 1977).
Market knowledge was initially perceived as a mechanism for uncertainty reduction that gradually led to more commitment in a particular foreign market (Forsgren, 2002). In this tradition, the accumulation of market knowledge contributes to overcoming the liability of foreignness, which represents the additional costs faced by a firm due to its unfamiliarity with the host country environment (Zaheer, 1995; Johanson and Vahlne, 2009). Such liability may result in restricted access to the local market and local networks that challenge the firm's expansion in foreign markets (Johnsen and Johnsen, 1999; Zaheer and Mosakowski, 1997). With the development of the IE literature (Jones et al., 2011), market knowledge started to be perceived not only as a mechanism to reduce the liability of foreignness but also as an antecedent for the successful recognition of opportunities (Ardichvili et al., 2003; Arte, 2017). By knowing the culture, institutions, customers, competitors and market conditions of a particular foreign market, the firm becomes more aware of specific international opportunities existent in the market (Zhou, 2007). Previous studies have acknowledged that the uncertainty reduction from accumulated market knowledge may lead to the development of international opportunities (Johanson and Vahlne, 2006). Such studies are interested in the internationalization of the firm under uncertainty in general. This paper, however, emphasizes that firms that are internationalizing develop international opportunities under uncertainty (Alvarez and Barney, 2019). Rather than following a reactive internationalization process, they proactively engage in the development of opportunities as they build market knowledge in any given foreign market. Hence, this paper suggests that the accumulation of market knowledge over time allows firms to identify and exploit new opportunities in a given foreign market after entry, as detailed below.
The influence of market knowledge on the internationalization process analyzed through an IE lens is a quite recent strand of research, which focuses on rapid internationalization processes (Jones et al., 2011). Research on international new ventures suggests that market knowledge can be acquired from a variety of sources of information, such as the firm's own predisposition to innovativeness, risk-taking, and proactiveness, its exposure to cultural diversity (Kropp et al., 2008; Zhou, 2007), and the use of focused research conducted by specialized agents (Spence and Crick, 2009). However, Lord and Ranft (2000, p. 576) argue that market knowledge may be hard to obtain, because of the lack of "well-developed and widely-available sources of market information" in some foreign markets, such as emerging markets. This paper suggests that market knowledge that leads to the recognition of opportunities is principally formed either in everyday entrepreneurial practices or through social interactions (Mainela et al., 2014). The former relates to prior knowledge, built from increasing commitment in each foreign market (Spence and Crick, 2009). It is experiential and accumulated throughout time. Gao and Pan (2010) show that firms with a longer time of local operations accumulate more market knowledge, speeding up the pace of the internationalization process in that foreign market. Such experiential market knowledge is important to support a firm's activities in foreign markets (Evangelista and Mac, 2016), through both the recognition and development of international opportunities. It increases the proclivity of the firm to identify opportunities, while contributing to its ability to develop them (Nordman and Melen, 2008). The latter relates to networks of relationships (Hohenthal et al., 2014; Vasilchenko and Morrish, 2011), which makes embeddedness in the foreign market a pivotal issue in the recognition and development of international opportunities.
By building relationships with customers, suppliers, agents and other actors which are embedded in that specific market, the firm may take advantage of opportunities initially identified by those actors (Johanson and Vahlne, 2006, 2009). Therefore, the acquisition and accumulation of market knowledge, either by experience or by social interactions with actors embedded in each foreign market serviced by the firm, is a critical aspect of the internationalization process (Blomstermo et al., 2004). Because market knowledge increases the awareness and alertness of the firm, as well as its familiarity with a specific foreign market, this paper proposes that it will lead to the development of international opportunities that happen after new foreign market entry. Nonetheless, because market knowledge is difficult to transfer across markets without incurring substantial opportunity costs (Eriksson et al., 1997; Johanson and Vahlne, 1977), it will only lead to the development of international opportunities in foreign markets where the firm already operates. Although market knowledge has been extensively associated with internationalization, P1 highlights the role of market knowledge in identifying international opportunities, rather than triggering the internationalization process via the reduction of uncertainty in the commitment of resources in foreign markets. Hence:
P1. Market knowledge is positively related to the development of international opportunities.
Internationalization knowledge and opportunities
Differing from market knowledge, internationalization knowledge is related to accumulated experience from servicing foreign markets (Eriksson et al., 1997). It is not specific to a particular foreign market (Eriksson et al., 2000), but rather based on a general knowledge of how to conduct business abroad (Freeman et al., 2012; Nordman and Melen, 2008).
By drawing on previous experiences in foreign markets, the firm incorporates learned routines that support not only its current internationalization processes (Sapienza et al., 2006) but also its entry and subsequent expansions in new foreign markets not yet serviced by the firm (Hakanson and Kappen, 2017; Prashantham and Young, 2011). General knowledge of servicing foreign markets gives the firm an overall understanding of the internationalization process that can be applied across all foreign markets (Eriksson et al., 1997; Johanson and Vahlne, 1977). Consequently, the firm may benefit not only from homogeneous experiences (i.e. from similar environments) but also from heterogeneous experiences (i.e. from very different environments) in foreign markets (Kim et al., 2012), since this knowledge can be more freely transferred throughout its internationalization projects (Child and Hsieh, 2014). As with market knowledge, internationalization knowledge was initially perceived as a mechanism for uncertainty reduction (e.g. Kim et al., 2012). Hitt et al. (2006), for example, suggest that by accumulating internationalization knowledge, the firm facilitates its internationalization process via the creation of social capital and useful resources, reducing its risk. Nonetheless, internationalization knowledge encourages firms to enter new markets and to use different servicing modes (Casillas and Acedo, 2013) not only through uncertainty reduction, but also through the recognition and development of new international opportunities. Prashantham and Young (2011, p. 283) suggest that internationalization knowledge is a "versatile" and "country-neutral" knowledge that exposes the firm to unexpected international opportunities in general, which leads to expansion in foreign markets. Moreover, it also exposes the firm to new knowledge and resources that increase its disposition to explore new and different opportunities (Nachum and Song, 2011).
The accumulation of international experience contributes to the alertness and willingness of the firm in developing new international opportunities in general, even in foreign markets where the firm has no specific market knowledge. As with market knowledge, internationalization knowledge will be beneficial in markets already serviced by the firm. Even when the firm is already well established in those markets, it can use knowledge acquired elsewhere to better manage its relationships (Johanson and Vahlne, 2009) and expand its operations through the development of new opportunities in those foreign markets. This happens because the firm develops a "mindset" to more proactively search for opportunities, planning the internationalization process rather than just reacting to sporadic opportunities (Freeman et al., 2012). Accordingly, when the firm accumulates internationalization knowledge, it becomes better prepared to take advantage of opportunities as well as to manage them. Moreover, internationalization knowledge will be equally important to support entry into new foreign markets. Prashantham and Young (2011, p. 283) suggest that internationalization knowledge leads to practices such as "country market screening, and evaluating strategic partners and distributors" that lead to new foreign market entries. Accordingly, Vasilchenko and Morrish (2011) showed that by building on previous experiences firms may develop a network of relationships abroad, engaging in cooperative behavior that ultimately leads to successful entry into new foreign markets. In sum, because internationalization knowledge reflects a general knowledge of conducting business abroad (Freeman et al., 2012; Nordman and Melen, 2008), it can be used not only in foreign markets where the firm operates but also in new foreign markets.
Therefore, internationalization knowledge will lead to the development of international opportunities both in foreign markets already serviced by the firm and in new foreign markets not yet serviced by the firm. Hence: P2. Internationalization knowledge is positively related to the development of international opportunities. As previously discussed, this paper looks at market knowledge and internationalization knowledge as antecedents of the development of international opportunities, which lead to the internationalization process, comprising both new foreign market entry and the sequential moves that happen after entry. Foreign market entry is a central element of internationalization research and has therefore been extensively studied (Shaver, 2013). Nevertheless, the recognition of international opportunities that leads to entry in new foreign markets is quite recent in IE (Jones et al., 2011). Studies in IE that connect opportunities and internationalization focus on international new ventures following a rapid internationalization process (Jones et al., 2011). However, Dimitratos et al. (2016, p. 1220) suggest that research in the IE literature "[...] can shift the attention from the study of INVs to that of all opportunity-driven types of internationalized firms." While previous research has brought attention to the role of opportunities as drivers of the internationalization process of not only international new ventures but also firms in general (e.g. Johanson and Vahlne, 2006), such a relationship is not explicitly stated. Chandra et al. (2012, p. 75), for example, state that "Other authors note, and we agree, that 'the opportunity side of the internationalization process is not very well developed' (Johanson and Vahlne, 2006, p. 167)."
And such a side is not well developed because, although studies acknowledge the role of opportunities in the internationalization process, uncertainty reduction is still perceived as its main driver (Figueira-de-Lemos and Hadjikhani, 2014). Using an IE lens to analyze foreign market entry, researchers have emphasized the mechanisms through which firms recognize opportunities in new foreign markets. Chandra et al. (2009), for example, suggest that firms exhibiting a strong entrepreneurial orientation will be more likely to recognize first-time opportunities in foreign markets. Accordingly, Sapienza et al. (2006) argue that firms' proactivity in the pursuit of international opportunities will lead to new foreign market entry. Such entrepreneurial orientation or proactivity develops over time, being considered a "long-term behavioural phenomenon" (Jones and Coviello, 2005, p. 297). In this vein, internationalizing firms may act entrepreneurially, actively identifying and developing opportunities in new foreign markets. This study, therefore, suggests that by identifying and developing international opportunities, firms are more likely to actively internationalize. A natural pathway for pursuing internationalization is entering new foreign markets. Firms that develop international opportunities are more prepared and more prone to entering new foreign markets. In other words, by developing more international opportunities, firms will enter more new foreign markets. Hence: P3a. The development of international opportunities is positively related to entry in new foreign markets. Nevertheless, the internationalization process comprises not only new foreign market entry but also sequential moves that may happen over time in markets already serviced by the firm (Casillas and Acedo, 2013; Gao and Pan, 2010).
The entrepreneurial internationalization process relates to the firm's international pathway (Jones et al., 2011), and sequential moves, as much as the entry choice, shape such a process (Casillas and Acedo, 2013; Gao and Pan, 2010). The bulk of the literature, however, emphasizes entry modes, often ignoring the dynamics of the internationalization process by not considering the sequential moves in foreign markets where the firm already operates (for a critique, see Shaver, 2013; Welch and Paavilainen-Mantymaki, 2014; for exceptions, see Mtigwe, 2005; Suarez-Ortega and Alamo-Vera, 2005). Sequential moves refer to the activities that follow new foreign market entry (e.g. continued sales) and are associated with within-market expansion or eventual withdrawal (Gao and Pan, 2010). Considering sequential moves is important because the internationalization process is not static, and its evolution after foreign market entry is equally important for a better understanding of such a process (Benito et al., 2009; Gao and Pan, 2010). Sequential moves may follow two paths - either mode continuation or modal shift (Benito and Welch, 1994; Benito et al., 2009). In the former, there is no modal change and the firm operates in a given foreign market using its initial entry mode (Benito and Welch, 1994; Swoboda et al., 2015). It does so because the existing servicing mode is considered adequate for conducting business in a given foreign market. Switching costs may also be perceived as high, deterring the firm from switching the initial mode; or inertia may play a role, that is, the firm simply does not change the initial mode due to inertial forces (Benito et al., 2009). This paper suggests that mode continuation is associated with the development of opportunities below a threshold, an argument developed in what follows.
Mode continuation, hence, is a confirmation of the firm's initial choice of servicing mode, independently of the type of servicing mode initially chosen (Benito et al., 2009). In the latter, the firm shifts its servicing mode to adjust its operations within a given foreign market (e.g. by shifting from a sales agent to a wholly owned subsidiary). By shifting the mode of operation, firms may be more responsive to foreign market needs (Benito et al., 2009). Modal shifts, nonetheless, do not necessarily imply higher commitment in a given foreign market. Firms can also shift the entry mode to adjust operations when the servicing mode currently being used is not appropriate (e.g. by shifting from a sales agent to exporting). Consequently, this study suggests that the development of international opportunities will be related to sequential moves. More specifically, firms will invest resources, time, and effort to identify and develop opportunities not only in new foreign markets but also in markets where they already operate. The development of international opportunities related to sequential moves allows firms to maintain and even strengthen their position in any given foreign market. In this vein, sequential moves are the pathway after new foreign market entry. Initially, however, such sequential moves will happen only via mode continuation. By accumulating knowledge and building relationships in foreign markets over time, via the development of new opportunities in those markets, the firm will be more inclined to continue servicing them through the continued use of the initial entry mode. For example, a firm may enter a given foreign market via direct exporting to one customer. Over time, because the firm is more familiar with that foreign market and with the exporting process in general, it expands its customer base there by exporting to new customers.
This is consistent with the general conceptualization of international opportunities as the development of new customers (Reuber et al., 2018). The firm thus developed new international opportunities in that foreign market but did not change the initial servicing mode (i.e. direct exporting), which corresponds to mode continuation. Sequential moves via mode continuation are an easier pathway of the internationalization process after new foreign market entry, since they do not require leaps in resources and knowledge. Formally stated: P3b. The development of international opportunities is positively related to sequential moves using mode continuation in foreign markets serviced by the firm. In the example above, the firm chose to keep using the same servicing mode of entry even after it developed more international opportunities in the given foreign market. This may be because the firm, not yet having accumulated enough knowledge, did not develop enough international opportunities. Shifts in the servicing mode require more combined knowledge and understanding of market needs. The development of opportunities per se, however, will not lead to shifts in the entry mode chosen by the firm to enter a given foreign market. It is only after a threshold is achieved that the development of international opportunities will indeed allow for a change in the firm's current internationalization process that requires riskier decisions, such as shifting the entry mode (Benito et al., 2005; Clark et al., 1997). The idea of a threshold is in line with the use of self-organized criticality (SOC) in organizations advanced by Andriani and McKelvey (2009, p. 1061), in which organizational phenomena do not always follow a linear distribution - instead, they evolve "toward a critical state." When such a state is achieved, any additional related effort or interaction brings change.
The authors argue that the IB "arena is especially vulnerable to SOC effects" (Andriani and McKelvey, 2007, p. 1215), in accordance with the argument presented in this study. Accordingly, prior studies have suggested that servicing modes are difficult to change, whether they represent higher or lower commitment (e.g. Anderson and Coughlan, 1987), which has been corroborated by empirical evidence (e.g. Pedersen et al., 2002). Nonetheless, after the firm builds enough knowledge, translated into a consistent number of international opportunities developed in a given foreign market, it is able to overcome the difficulties associated with shifting the entry mode (e.g. switching costs and inertia). Hence, achieving such a threshold will lead to sequential moves in which the firm changes its commitment to better serve the needs of a given foreign market by shifting the servicing mode. Formally stated: P3c. The development of international opportunities beyond a threshold is positively related to modal shifts in foreign markets serviced by the firm. In the example preceding P3b, the firm entered a given foreign market using direct exporting as the entry mode and continued using direct exporting. If the relationship between the development of international opportunities and the internationalization process were linear, the firm would have had to switch or at least adjust the servicing mode. Instead, it continues using the same servicing mode, even after it expands its customer base and increases the volume of foreign sales. Nonetheless, imagine that sales to the firm's current customers keep increasing and more new customers are established. The firm evolves toward a critical state where exporting may no longer be adequate or the satisficing solution. It has achieved this threshold and, hence, switches its servicing mode to a sales subsidiary, for example.
This example shows that the relationship between the development of international opportunities and modal shifts is not linear. The firm needed to achieve a threshold in order to change its servicing mode. Nonetheless, as previously stated, modal shifts do not necessarily imply higher commitment in a given foreign market. Continuing with the example above, the firm established a sales subsidiary and continued developing opportunities in the foreign market. However, after developing a certain number of opportunities without changing or adjusting the servicing mode (i.e. the sales subsidiary), the firm learnt it had made a bad decision since, for example, the marginal costs of the sales subsidiary outweighed its marginal returns. Again, the firm evolved toward a critical state where a sales subsidiary was no longer adequate. After reaching this threshold, the firm decided to divest the sales subsidiary and use a local sales representative instead. In P1 and P2, market and internationalization knowledge are analyzed independently, to show how each type of knowledge relates to the development of international opportunities. In P3a and P3b, the development of international opportunities that follows the accumulation of market and internationalization knowledge is connected to two aspects of the internationalization process - new foreign market entry and sequential moves related to mode continuation, respectively. In P3c, the idea of a threshold that affects sequential moves related to modal shifts is introduced. In addition, this paper proposes that both types of knowledge may combine to form the international knowledge stockpile of the firm. It is assumed that market and internationalization knowledge interact through a mutual influence process, in which the former contributes to the development of the latter and vice versa. For example, Barkema et al. (1996) suggested that learning effects in a foreign market (i.e.
market knowledge) were associated with learning from internationalization processes in different markets (i.e. internationalization knowledge), even though the degree of learning differed depending on similarities between those markets. Accordingly, knowledge in different markets accumulates over time to become firm-specific knowledge that can be relevant to foreign markets serviced by the firm or foreign markets it intends to service (Eriksson et al., 1997). Thus, the international knowledge stockpile as suggested here is a heterogeneous reservoir (Hutzschenreuter and Matt, 2017). It addresses two apparently contradictory aspects of knowledge development - diversity (i.e. market knowledge) and transferability (i.e. internationalization knowledge). By doing so, it increases the probability of growth and survival in foreign markets (Kogut and Zander, 1992). It may also provide the firm with differential efficiencies such as market diversification and innovation (Foss, 1996). The international knowledge stockpile can also be associated with pace in internationalization processes. First, it may enable the firm to respond to market turbulence more rapidly (Miller, 2002) and, most importantly, it may accelerate the firm's internationalization (Casillas et al., 2015), particularly in terms of the speed of modal shifts (Chetty et al., 2014). Considering that the international knowledge stockpile constitutes an important asset for internationalizing firms, this paper suggests that its effect is different from the effect of each type of knowledge that comprises it - market and internationalization knowledge - when analyzed separately. This is because the international knowledge stockpile enables the firm to capitalize on international opportunities, thus strengthening the relationship between international opportunities and the internationalization process.
Therefore, this paper suggests that the international knowledge stockpile will moderate the relationship between the development of international opportunities and the internationalization process. It builds on the idea that the knowledge reservoir of the firm comprises both in-use and idle knowledge (Penrose, 1959), that is, utilized and underutilized knowledge. In this sense, the international knowledge stockpile is a mix of market and internationalization knowledge in use, which the firm uses to advance its internationalization process, and idle market and internationalization knowledge, which the firm can explore in near-future internationalization activities. In other words, the firm accumulates both market and internationalization knowledge over time. When those two types of knowledge are combined in a productive way, a baseline is achieved - the firm establishes its international knowledge stockpile - strengthening the relationship between the development of international opportunities and the internationalization process. "Durable and repetitive interactions" in foreign markets are the main drivers of this process (Eriksson et al., 1997, p. 354). First, whereas the development of international opportunities will lead to more new foreign market entries (P3a), the international knowledge stockpile will strengthen such a relationship (P4a). The international knowledge stockpile may lead the firm to enter more new foreign markets at shorter intervals. This is because the firm possesses knowledge that results from different environments and knows how to transfer such knowledge to new foreign markets. Moreover, the international knowledge stockpile may enable the firm not only to enter more markets at shorter intervals, but also to enter multiple markets simultaneously (Wang and Suh, 2009).
The literature acknowledges that knowledge supports the development of multiple internationalization processes within a firm, that is, entering and servicing multiple foreign markets at the same time (Welch and Paavilainen-Mantymaki, 2014). Hence, instead of entering foreign markets sequentially, as posited by Johanson and Vahlne (1977), the international knowledge stockpile enables the firm to capitalize on opportunities in multiple foreign markets at the same time. This is possible because the firm possesses this heterogeneous reservoir of in-use and idle knowledge, which can be used to address multiple activities - in this case, closely spaced or simultaneous entries in different foreign markets. Formally stated: P4a. The relationship between the development of international opportunities and new foreign market entry is moderated by the firm's international knowledge stockpile. Second, whereas the development of international opportunities will lead to more sequential moves via mode continuation in any given foreign market serviced by the firm (P3b), the international knowledge stockpile will strengthen such a relationship (P4b). The accumulation of knowledge has been associated with the internationalization process after new foreign market entry (Dimitratos et al., 2016). This means that such a combination of knowledge reinforces the firm's confidence that it has chosen the right foreign market entry mode to service that specific foreign market, because it understands not only the needs of that specific foreign market but also its options and their outcomes when using different foreign market entry modes. Such reinforcement will lead to a proactive search for opportunities that do not require a change in the servicing mode but that still strengthen the internationalization process. In addition, the international knowledge stockpile enables the firm to better assess the switching costs associated with modal shifts before attempting to switch the servicing mode.
If it believes that the switching costs are too high, it will avoid switching the servicing mode. In this context, the international knowledge stockpile allows the firm to assess such switching costs more efficiently. This is because the international knowledge stockpile is associated with heuristics and routines for measuring and monitoring such costs, derived from previous experiences. The firm, because of its international knowledge stockpile, may thus decide to continue using a given servicing mode while adjusting it (e.g. switching sales agents). Hence: P4b. The relationship between the development of international opportunities and sequential moves using mode continuation in foreign markets serviced by the firm is moderated by the firm's international knowledge stockpile. Third, only after a threshold determined by a certain number of developed international opportunities is achieved (Barkema and Drogendijk, 2007) will the firm engage in modal shifts (P3c). This paper also suggests that the international knowledge stockpile of the firm will strengthen this relationship (P4c). As discussed above, modal shifts reflect changes in the commitment toward the internationalization process, since the firm shifts its entry mode to better serve the needs of a given foreign market (Benito et al., 2009). After the international knowledge stockpile is built, the firm develops international opportunities that will reach the threshold faster, which will allow it to shift the servicing mode. This idea is coupled with the fact that the firm, because of its international knowledge stockpile, is more experienced in switching its servicing mode. In other words, there is a trigger for modal shifts after a certain threshold is achieved, and there is accumulated experience once this threshold is achieved - the firm knows about different servicing modes and how to better use them. Hence, the firm is able to reduce searching costs (i.e.
identifying new servicing modes) and implementation costs (i.e. shifting the servicing mode per se). This enables the firm to engage in modal shifts faster and more efficiently (Casillas et al., 2015). In other words, the threshold, in terms of the development of international opportunities, will be achieved at shorter intervals. Hence, if the time to reach the threshold is reduced due to the international knowledge stockpile, modal shifts will happen not only more frequently but also at shorter intervals. Modal shifts require not only a combination of market and internationalization knowledge but also an additional effort from the firm to proactively develop international opportunities and change the course of its internationalization process, as determined by the threshold. It is therefore only after this threshold is achieved that the firm's international knowledge stockpile will moderate the relationship between international opportunities and modal shifts. After the threshold is achieved and the international knowledge stockpile is built, the firm will engage in more modal shifts in different foreign markets and/or will shift servicing modes in a given foreign market faster. Formally stated: P4c. The relationship between the development of international opportunities beyond a threshold and modal shifts is moderated by the firm's international knowledge stockpile. In sum, the combination of market and internationalization knowledge configures the knowledge stockpile of the firm, which reinforces the relationship between the development of international opportunities and the internationalization process by positively moderating such a relationship. Thus, over and above the direct relationship between knowledge and international opportunities, this paper proposes an indirect relationship between knowledge, international opportunities, and the internationalization process, comprising both new foreign market entry and the sequential moves that happen after entry.
Doing so is important because, while it recognizes that different types of knowledge have different effects on the internationalization process, it considers that these types also combine to form knowledge that is firm-specific - the firm's international knowledge stockpile. Moreover, the international knowledge stockpile connects the three constructs advanced by the paper - knowledge, international opportunities, and the internationalization process. P1 and P2 connect knowledge to international opportunities; the set of P3a-P3c connects international opportunities and the internationalization process; the international knowledge stockpile completes the model, offering a refined understanding of internationalization processes as a function of both knowledge (i.e. market and internationalization) and the development of international opportunities. The conceptual model is presented in Figure 1. It shows that both market and internationalization knowledge are positively related to international opportunities, which, in turn, are related to the internationalization process. From the IB field, the model borrows the notion of market knowledge and its relation to the overall internationalization process. From the IE field, it borrows the notion of international opportunities as an antecedent of the internationalization process. From a combination of both fields, it emphasizes the role of internationalization knowledge and the importance of looking at not only new foreign market entry but also the sequential moves that happen after entry. The main points of novelty introduced by the model are, first, showing that the path to sequential moves using mode continuation differs from that of sequential moves comprising modal shifts (i.e. the idea of a threshold) and, second, showing that market and internationalization knowledge combine to form the knowledge stockpile of the firm, which moderates the relationship between international opportunities and the internationalization process.
By doing so, it offers a finer-grained view of the internationalization process, which can be entrepreneurial (i.e. related to the identification and development of opportunities) even for firms in general. By combining the IB and IE literatures, this paper presents a novel framework connecting three constructs that are typically analyzed separately in the literature - knowledge, international opportunities, and the internationalization process. Grounded in the idea that the internationalization process is the result of the development of international opportunities, this study looks at the types of knowledge that shape such opportunities as well as how those types interact to moderate the relationship between international opportunities and the internationalization process. On the one hand, research on IE often connects knowledge to the development of international opportunities (e.g. Chandra et al., 2009), but without highlighting how such development affects the internationalization process of the firm, comprising both new foreign market entry and sequential moves in all foreign markets where the firm operates. This literature also focuses on international new ventures, thus disregarding other types of internationalized firms (Dimitratos et al., 2016). On the other hand, research on IB connects knowledge directly to internationalization, usually focusing on new foreign market entry (e.g. Shaver, 2013). It also posits that the main driver of the internationalization process is uncertainty reduction via knowledge development (Figueira-de-Lemos et al., 2011). This study explicitly connects knowledge, international opportunities, and the internationalization process during and after new foreign market entry and provides testable propositions that contribute to advancing and bringing together the IB and IE literatures.
First, this study suggests that both market and internationalization knowledge will be positively related to the development of international opportunities, emphasizing the idea that the internationalization process may also be triggered and driven by the proactive identification and exploitation of opportunities, rather than mainly by uncertainty reduction. While uncertainty reduction has been important for comprehending internationalization (Figueira-de-Lemos et al., 2011), understanding how the development of international opportunities under uncertainty affects internationalization processes can better inform research on the internationalization of the firm (see Alvarez and Barney, 2019 for a discussion on uncertainty and opportunities). Second, it suggests that the development of international opportunities will be positively related to both new foreign market entry and sequential moves in foreign markets already serviced by the firm, thus showing that opportunities matter in earlier as well as in later epochs of the internationalization of the firm. Few studies do so (e.g. Benito et al., 2009). The bulk of the research on internationalization equates foreign market expansion with entry in a single or a few foreign markets (Shaver, 2013). Finally, this study explains how the relationship between knowledge and international opportunities leading to foreign market entries and sequential moves where the servicing mode does not change (i.e. sequential moves using mode continuation) differs from that leading to sequential moves where the servicing mode changes (i.e. modal shifts). Modal shifts will occur only after a certain threshold related to the number of international opportunities developed in foreign markets is achieved.
Furthermore, it suggests that the relationship between the development of international opportunities and both new foreign market entry and sequential moves, either via mode continuation or modal shifts, will be moderated by the international knowledge stockpile of the firm. By introducing the idea of a threshold and the concept of the international knowledge stockpile, this paper refines the view that the effects of the accumulation of knowledge are gradual and incremental throughout the internationalization process.

Theoretical contributions

This paper offers the following contributions. First, according to Forsgren (2016, p. 2), "incorporating [...] entrepreneurship into the [internationalization] model needs more consideration." The IB literature, which focuses on the internationalization process, and the IE literature, which focuses on the development of international opportunities, have evolved quite independently over time. By incorporating IE into the IB literature, this paper shifts the focus of the internationalization process from uncertainty reduction to an entrepreneurial process of developing international opportunities. This paper argues that the emphasis on the development of international opportunities is the right pathway to bring those literatures together, as suggested by Johanson and Vahlne (2009). This paper also shows that the entrepreneurial process of opportunity recognition and development is important for any internationalized firm. This extends the IE literature by suggesting that the understanding of the internationalization processes of firms other than international new ventures will benefit from incorporating the idea that such processes result from the development of international opportunities (Dimitratos et al., 2016). Second, this study answers recent calls to assess the internationalization process using a dynamic approach (e.g. Welch and Paavilainen-Mantymaki, 2014).
Sequential moves that happen after entry shape the long-term growth and hence the success or failure of internationalization processes (Casillas and Acedo, 2013). Nonetheless, the bulk of the IB literature emphasizes entry in foreign markets, ignoring the role of time and of the subsequent steps followed by a firm that actually lead to its expansion in foreign markets (Gao and Pan, 2010). This study not only explains the antecedents of both new foreign market entry and sequential moves, but also disaggregates the latter into mode continuation and modal shift. Most importantly, by doing so it allows for different effects of the development of international opportunities on new foreign market entry and on each sequential move that happens after it. If the internationalization process were conflated with new foreign market entry, one would have to assume that the effects of the development of international opportunities during entry are the same as for the sequential moves that happen after entry. Third, this study contributes to the knowledge-based view of internationalization processes (e.g. Casillas et al., 2009) by showing that knowledge in internationalization processes comprises both market and internationalization knowledge. This means that knowledge is not homogeneous in the internationalization process. Following the suggestion of Li et al. (2015, p. 919), this paper assumes that different types of knowledge may affect international opportunities in different ways. This differs from previous literature, which looks at international knowledge as a whole and connects it directly to the internationalization process. This paper argues that market knowledge will be positively related to the development of international opportunities by increasing the firm's familiarity with local practices and businesses in a specific foreign market.
Moreover, internationalization knowledge will also be positively related to the development of international opportunities, but by increasing the firm's ability to do business abroad. This paper also introduces the concept of the firm's international knowledge stockpile. By showing that market and internationalization knowledge combine into the international knowledge stockpile of the firm, and that this stockpile influences the relationship between international opportunities and the internationalization process, this paper shows how knowledge interacts to shape that process. Extant literature assumes that the accumulation of experience abroad starts shaping the internationalization process from its very beginning. This study, however, focuses on the moderating effect of the accumulation of knowledge on the internationalization process over time. Fourth, this study introduces the idea of a threshold. It suggests that, before a threshold related to the number of developed international opportunities is reached, the knowledge the firm possesses is not yet sufficient to shape modal shifts in the foreign markets serviced by the firm. In other words, the accumulation of knowledge starts after the firm's first operations abroad, which may be due to emergent chance opportunities, but it is only after a certain number of international opportunities have been developed that the internationalization process starts to be shaped by modal shifts (e.g. from sporadic sales to a planned internationalization process, either increasing or decreasing the level of commitment in a given foreign market). This is particularly important in that it allows for the possibility of different pathways of internationalization (Mathews and Zander, 2007), in which the firm may follow different trajectories in different foreign markets. 
Such trajectories are not always linear and are often interdependent, suggesting that the internationalization process may be more complex than previously established in the IB literature. Managerial implications The propositions developed in this study offer insights for firms that wish to internationalize or that have already started internationalizing. First, firms may proactively search for opportunities abroad, even when they are uncertain about their ability or willingness to internationalize. This is because the internationalization process will only become systematic after a certain number of international opportunities have been developed. Initial opportunities may be developed via trial and error, without compromising the firm's internationalization process as a whole. In addition, acknowledging that the effect of the learning process associated with internationalization is not always gradual and incremental allows firms to better plan their internationalization processes to fit their strategic goals. For instance, firms can use internationalization knowledge acquired in servicing a specific set of foreign markets to purposefully develop international opportunities that enable them to shift their operation mode in a different foreign market. Doing so allows them to shape their internationalization processes in terms of their resource commitment (Acedo and Casillas, 2007; Casillas and Acedo, 2013; Gao and Pan, 2010). Finally, this study shows that firms do not necessarily need to build their internationalization process gradually and reactively. The accumulation of both market and internationalization knowledge allows firms to enter several foreign markets and to keep servicing the foreign markets where they already operate. But because a more systematic internationalization process via modal shifts only happens after a threshold (i.e. a certain number of developed international opportunities) is reached, the sooner the firm reaches it, the better. 
Doing so usually involves a willingness to take risks by entering foreign markets where the firm has little or no market knowledge but is supported by the structural and management processes developed through the accumulation of internationalization knowledge. In sum, managers should be less cautious and proactively develop international opportunities that enable their firms to enter and evolve in foreign markets. The model developed in this study suggests that managers of firms that wish to internationalize should also proactively transfer knowledge across different markets and develop routines and heuristics that allow such knowledge to be integrated into the firm. Such a process of accumulation, transfer and integration of knowledge should be continuous to facilitate the development of the international opportunities needed to reach the threshold beyond which the internationalization process becomes more systematic (i.e. via modal shifts). Limitations and further research Because this is a conceptual paper, the propositions presented in this study have not been empirically tested. The first direction for further research is, hence, to test them quantitatively using panel data, since longitudinal data are needed to track the internationalization process through time. A second limitation is that only two types of knowledge are analyzed: market and internationalization knowledge. However, research suggests that other types of knowledge may also affect the internationalization process, such as technological knowledge (Fletcher and Harris, 2012). Previous studies also suggest that market knowledge should be differentiated from institutional knowledge, which is not easily acquired (Eriksson et al., 1997). By incorporating other types of knowledge and differentiating market knowledge from institutional knowledge, future research may inform the relationship between different types of knowledge and internationalization processes. 
Doing so will contribute to recent research suggesting that the internationalization process is contingent on several different types of knowledge rather than on market knowledge only, which is the focus of the bulk of the research on the internationalization process of the firm. A third limitation is that the model treats knowledge as an antecedent of the development of international opportunities and the internationalization process but does not capture the recursive relationships that should exist between the three constructs. As firms internationalize, they learn how to better develop international opportunities and accumulate both market and internationalization knowledge. Likewise, the model does not capture withdrawal from foreign markets or de-internationalization (Benito and Welch, 1997). The latter, however, can also be an outcome of knowledge accumulation over time. Future research opportunities that inform how knowledge affects de-internationalization are numerous[1].
|
The propositions suggest that market and internationalization knowledge are positively related to international opportunities, which, in turn, are related to foreign market entry and sequential moves using mode continuation. International opportunities, however, are related to modal shifts only beyond a threshold. Moreover, the international knowledge stockpile of the firm moderates the relationship between international opportunities and the internationalization process. Because this is a conceptual paper, the propositions have not been tested and, therefore, lack empirical validation. Nonetheless, the model is a starting point to new research on internationalization distinguishing different types of knowledge as well as different sequential moves.
|
[SECTION: Purpose] The sharing economy is continuously emerging, and it has had substantial impacts on various industries (Tussyadiah and Park, 2018). In particular, this novel business model has generated a strong economic and social impact in the hospitality and tourism industry (Tussyadiah and Pesonen, 2016). As a leading business in the sharing economy, Airbnb provides millions of accommodations in 65,000 cities in approximately 200 countries, and its value is estimated at 10 billion US dollars (Gunter, 2018). As Airbnb has become a major player in the hospitality and tourism industry, it is one of the top priority research topics in the field (Liang et al., 2018). In contrast to the products of a conventional accommodation service, the products of Airbnb are individuals' private places; thus, it is much more difficult for potential travelers to obtain prior knowledge about them (Fagerstrom et al., 2017). Although hotels' experiential products are also hard to pretest, people have general knowledge about these products based on their past experiences, hotel brands, or hotel images. By contrast, even for potential travelers with past experience using Airbnb, the quality of Airbnb's idiosyncratic places is almost impossible to anticipate (Wang et al., 2016). This situation renders the communications between hosts (individuals lending their places) and guests (individuals renting hosts' places) critical in Airbnb, as the sharing transaction is processed primarily on the basis of these communications. By considering the information about both places and hosts that is created by the hosts themselves (host-created information), guests search for and select places to stay, indicating that host-created information is one of the core factors in the Airbnb system (Ert et al., 2016). 
Although many studies have attempted to investigate the communication process in Airbnb through host-created information, most have focused on specific parts of that information, such as the price (Wang and Nicolau, 2017) or the host profile (Fagerstrom et al., 2017). However, the process through which the totality of the host-created information is delivered and perceived has scarcely been investigated. Therefore, this study examines how various information appeals in host-created information are communicated in Airbnb to further the understanding of the communication process in a sharing economy. Specifically, we examine which appeals in the host-created information significantly influence the guest's decision-making. Based on Aristotle's appeals, general aspects of the host-created information in Airbnb are empirically analyzed. Sharing economy and Airbnb The sharing economy is defined as "peer-to-peer (P2P) based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services" (Hamari et al., 2016). Social media have enabled the emergence of P2P online networks as well as social sharing (Priporas et al., 2017). Airbnb is a successful example of a sharing economy business. The sharing economy website for short-term rentals was founded in 2008 (Libert et al., 2014). By enabling people to easily lease their own living spaces as short-term lodgings, Airbnb has made an impact on the sharing economy and, hence, on the hospitality and tourism business field (Morgan, 2011). With affordable prices and highly varied locations and types of lodgings, Airbnb has become a major player in the industry and has been expanding its business into relevant fields such as airlines (Rizzo, 2018). Currently, Airbnb accommodates approximately 5 million lodging listings throughout the world and facilitates over 260 million check-ins on average per year (Airbnb, 2018). 
Because the sharing economy is a P2P-based network, its value is created, distributed and consumed by users, and this feature makes interactive communication between users indispensable to the sharing economy (Xie and Mao, 2017). Lampinen et al. (2013) found that the communication flow between users is important for a successful online sharing service. Communication on a sharing economy website thus consists of the provider's informational message and the recipient's response (Poon and Huang, 2017). In Airbnb, hosts introduce their places and themselves through informative messages (i.e. host-created information), and potential guests make decisions by evaluating the places and hosts based on that information (Ert et al., 2016). Given its importance, the communication process in Airbnb has been of great interest to researchers (Xie and Mao, 2017). However, the existing studies have usually focused on specific appeals, such as the price (Gibbs et al., 2017; Wang and Nicolau, 2017) and the host profile (Gunter, 2018; Tussyadiah and Park, 2018; Wu et al., 2017). As a result, the role of host-created information has only been partially understood. To address this research gap, this study examines the influences of information appeals within host-created information in Airbnb. Aristotle's appeals in online information messages As social media has become prevalent in most activities of daily life, the wide adoption of social media services and their online information has generated substantial impacts on individuals and businesses (Colicev et al., 2018). Thus, the influence of social media information has been an important research topic and is recognized in various contexts (Thakur and Hale, 2018). 
Studies of this information's significant impact on individuals' perceptions and behavior in the pre-consumption (Swani et al., 2017), consumption (Chen et al., 2017) and post-consumption stages (Nadeem et al., 2015; VanMeter et al., 2015) have established that the information communicated on online platforms has a strong impact. As a result, it is important to understand how online information can stimulate, persuade and inspire people (Yang et al., 2018). Aristotle's appeals compose a proper framework for analyzing the persuasive influence of information (Otterbacher, 2011). According to Aristotle's appeals (Ramage et al., 2015), interpersonal messages can be persuasive and powerful through the following three components: ethos, pathos and logos (Xun and Reynolds, 2010). First, ethos is an ethical appeal that includes all of the proofs of the message sender's authority and credibility. Second, pathos is an emotional appeal to the recipient. Finally, logos is a rational appeal to the recipient; usually, facts, figures and examples are used to make the messages appear reasonable to recipients. Xun and Reynolds (2010) demonstrated how readers can be persuaded by mixing ethos, logos and pathos in an online forum, just as in offline messages. Otterbacher (2011) examined logos, ethos and pathos in online review communities, finding more ethos in reviews of experiential products, more pathos in reviews of daily necessities, and logos in the most prominent reviews. Bronstein (2013) revealed that candidates for President of the USA had expressed their identities to voters with an emotional and synchronous approach, using the Aristotelian language of persuasion on social networking sites. 
All of these studies confirmed that the propositions of Aristotle's appeals hold in online communications by identifying the significance of online information messages for users' reactions (Bronstein, 2013). However, these methods have not been extensively applied to the communication process in the sharing economy, despite the importance of understanding the persuasive impacts of information messages. In the case of Airbnb, the main platform is its online website, and the sharing transaction is processed primarily on the basis of communications between users (Ert et al., 2016). Individuals who want to share their places join Airbnb and become hosts by registering their places. During the registration, potential hosts input various pieces of information about their places and about themselves. Then, guests search, evaluate and select the places where they want to stay based on the host-created information. Because there are no sources other than the host-created information that enable one to become familiar with the places, this information is critical to a guest's decision-making (Chen and Xie, 2017). Given the expected importance of host-created information, a general proposition is suggested: host-created information significantly influences a guest's purchase decision in Airbnb. The influences of the various informative statements are empirically examined based on Aristotle's appeals to explain which pieces of information are effective at persuading people (Scott, 1967). In this research, host-created information is defined as the information available in Airbnb that is written by hosts to convince potential guests to select their places. In Airbnb, various information appeals are available in each post of host-created information. Based on Aristotle's appeals, we use three categories: ethos, pathos and logos. For the dependent variable, this research uses the number of times that places were shared. 
In Airbnb, only the guests who shared places can post reviews; hence, the number of reviews about places indicates the number of times the places have been shared. The research model is depicted in Figure 1. Ethos in host-created information Ethos symbolizes the message sender's credibility. If messages are from trustworthy sources, they tend to be more influential (O'Keefe, 1987). In online communications, the source's credibility is related to the user's reputation as perceived by the other users, which represents a signal of trustworthiness (Slee, 2013). Many researchers have verified the impact of a website user's reputation on other users' reactions (Liu and Park, 2015) and online behaviors (Jin and Phua, 2014). In addition, several studies proved the persuasive impact of the message sender's credibility in online information based on Aristotle's appeals (Yang et al., 2018). In Airbnb, users can establish their reputations with three types of proof: a super host badge, ID verification and host reviews. Since 2014, a super host badge system has existed in Airbnb. According to Airbnb, a super host badge is awarded to the "experienced hosts who are passionate about making your trip memorable" (Airbnb, 2017). There are four requirements to obtain a super host badge: complete at least ten stays in a year; achieve a rating greater than 4.8 out of five; respond within 24 hours at least 90 per cent of the time; and cancel no confirmed reservations without extenuating circumstances. Once these requirements are met, the badge is awarded automatically to the focal hosts. In addition, because the requirements are re-confirmed every year, the status is updated if there are any changes (Gunter, 2018). To guests, a super host badge could be perceived as a reliable indication of the host's experience and commitment and represent the quality of the place. 
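As a quick illustration of how the four badge criteria combine, the eligibility check can be sketched as follows (a hypothetical sketch only: the function and argument names are ours, and we ignore the extenuating-circumstances exception Airbnb applies to cancellations):

```python
def is_super_host(stays_per_year, avg_rating, response_rate_24h, cancellations):
    """Illustrative check of the four super host criteria described above.

    Names are hypothetical; Airbnb's real evaluation also allows
    cancellations with extenuating circumstances, which is ignored here.
    """
    return (
        stays_per_year >= 10          # at least ten stays in a year
        and avg_rating > 4.8          # rating greater than 4.8 out of five
        and response_rate_24h >= 0.9  # responds within 24 h at least 90% of the time
        and cancellations == 0        # no cancellations of confirmed reservations
    )

print(is_super_host(12, 4.9, 0.95, 0))  # True
print(is_super_host(12, 4.7, 0.95, 0))  # False: rating too low
```

Because all four criteria must hold simultaneously, failing any single one (as in the second call) forfeits the badge, which is consistent with the badge being held by only a small share of hosts.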
It has been found that guests are willing to pay a higher price for sharing with super hosts (Liang et al., 2017) and that the prices of super hosts' places tend to be higher (Wang and Nicolau, 2017). Based on these studies, it could be expected that a super host badge is a reliable signal of the host's credibility and the place's quality. H1.1. A super host badge will have a positive impact on a guest's decision-making. ID verification indicates whether the host has submitted his or her own personal information to Airbnb. Airbnb awards an ID verification badge to users who submit a government-issued ID; connect their Airbnb accounts to other online accounts such as Google or Facebook; and upload a profile photo, phone number and email address. Although the disclosed ID remains private, the signal that a host has verified his or her ID can improve a guest's perception of the host (Racherla and Friske, 2012). Previous studies demonstrated the positive reactions of users to ID disclosures by message providers. Because revealing personal information in online environments can reduce uncertainty (Tidwell and Walther, 2002) and enhance the source's credibility (Sussman and Siegal, 2003), information representing the ID disclosure of a message provider is a crucial factor in the recipient's perception. Hence, ID verification is evidence of the host's credibility. H1.2. ID verification will have a positive impact on a guest's decision-making. As mentioned above, the number of reviews indicates how many times the host has accommodated guests because only the guests who actually stayed in the places can post reviews. In other words, host reviews can indicate a host's hosting experience (Weiss et al., 2008). Guests would prefer to stay in places that are guided by more experienced hosts, and Tussyadiah and Park (2018) found that more qualified, skilled and experienced hosts are more positively perceived. H1.3. 
A host review will have a positive impact on a guest's decision-making. Pathos in host-created information Pathos indicates the elements affecting message recipients emotionally, and an emotional appeal is powerful in terms of its ability to persuade (Xun and Reynolds, 2010). Because Airbnb hosts have to write summaries of their places and of themselves in their own words, guests are able to perceive emotional or social stimuli through the hosts' language patterns in these descriptions. According to Tussyadiah and Pesonen (2016), one of the main reasons for using Airbnb is that guests want to find opportunities to explore local life by communicating with hosts. Hence, they would prefer places whose hosts are emotional, social and friendly, and these characteristics could be reflected in the hosts' writing styles (Ludwig et al., 2013). Tussyadiah and Park (2018) found that an Airbnb user's likelihood of booking is higher when hosts describe themselves in the host-created information as being willing to meet new people. H2.1. The use of emotional words will have a positive impact on a guest's decision-making. H2.2. The use of social words will have a positive impact on a guest's decision-making. Logos in host-created information Logos persuades people with reasoned discourse. In a product choice situation, the consumer's rational thinking is mostly involved with product awareness (Xun and Reynolds, 2010). In the host-created information of Airbnb, objective information regarding the place is provided: the price, occupancy, safety features, place pictures and star-rating. These accommodation characteristics are important for a guest's decision-making (Yang et al., 2018). In Airbnb, the price is the price per night, and it is set by the hosts. 
Inasmuch as the generally lower prices of places compared to conventional hotel rooms are responsible for the competitiveness of Airbnb (Guttentag, 2015), Airbnb users would expect lower costs for their stays and select the places with the most affordable prices (So et al., 2018). H3.1. The price will have a negative impact on the guest's decision-making. In Airbnb, the occupancy refers to the maximum number of guests a place can accommodate; higher occupancy thus signifies more space. Lu and Zhu (2006) indicated that guests usually consider the room size a crucial factor in the quality of an accommodation facility. H3.2. Occupancy will have a positive impact on the guest's decision-making. Airbnb promotes hosts who equip their places with basic safety features by providing information about the number of these features. Specifically, smoke detectors, carbon monoxide detectors, first aid kits, safety cards, fire extinguishers and locks on the bedroom door are listed as desirable safety features in Airbnb. Hosts can list the features they have installed in their places, and the equipped features are shown in the host-created information. Safety could be more important in Airbnb than in a hotel because each place within Airbnb is distinctive in terms of its quality, and safety incidents are usually outside the platform's control (Richard and Cleveland, 2016). H3.3. Safety features will have a positive impact on a guest's decision-making. The place picture variable indicates the uploaded pictures of the place. Hosts in Airbnb are able to upload as many photographs of their places as they want. When people search for intangible products to purchase, pictorial information can be an effective means of exhibiting the products' qualities (Jin and Phua, 2014). H3.4. Place pictures will have a positive impact on a guest's decision-making. Airbnb guests can evaluate the places where they have stayed with a star-rating. 
Guests who have stayed in specific places are encouraged to appraise the whole stay experience by giving from one to five stars. In online communities, peer evaluation has been widely used to provide helpful information that assists other users' decision-making (Tsao, 2018). Lee et al. (2015) demonstrated that the star-ratings of places are important to the sales of places in Airbnb. H3.5. The star-rating will have a positive impact on a guest's decision-making. Instrument development Table I describes each variable. The super host badge and ID verification are measured by checking whether each host has the badges. The host review variable is measured as the number of reviews that each host has received. The uses of emotional and social words are measured by Linguistic Inquiry and Word Count (LIWC). LIWC is automated word-analysis software that provides content analysis results based on 70 preset linguistic categories (Tausczik and Pennebaker, 2010). For emotional words, those words that describe an individual's emotions are considered, and they indicate how emotionally oriented the authors are. In the case of social words, those words that describe social relations are included to show how socially oriented the authors are (Tausczik and Pennebaker, 2010). By calculating the proportion of words in the whole text that belong to a focal category, LIWC can indicate what tendencies, inclinations or personalities the writers have (Mehl et al., 2006). Over 100 studies have applied this approach in various contexts (Cohn et al., 2004; Humphreys, 2010; Ludwig et al., 2013) and confirmed its reliability and validity. Thus, this research measures the use of emotional and social words by adopting the values resulting from LIWC's analyses of the textual descriptions in host-created information (Lee and van Dolen, 2015). The price, occupancy and star-rating are measured in numbers as shown in the host-created information. 
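A LIWC category score is essentially the percentage of a text's words that fall into a preset dictionary category. The proportion calculation can be sketched as follows (with a tiny made-up "social" word list standing in for LIWC's licensed dictionaries):

```python
import re

# Tiny hypothetical dictionary; real LIWC categories contain
# hundreds of licensed entries per category.
SOCIAL_WORDS = {"we", "our", "friend", "welcome", "meet", "people"}

def category_score(text, dictionary):
    """Percentage of tokens in `text` that belong to `dictionary`,
    mirroring how LIWC reports category scores."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in dictionary)
    return 100.0 * hits / len(tokens)

desc = "We are excited to welcome people to our cozy flat"
print(category_score(desc, SOCIAL_WORDS))  # 40.0 (4 social words out of 10 tokens)
```

A host description scoring high on this metric would count as more "socially oriented" in the sense used by H2.2, though the real LIWC dictionaries are far more comprehensive than this toy list.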
Similarly, the safety features and place pictures are counted in numbers based on the host-created information. Finally, the dependent variable is actual purchases, measured by the number of place reviews. The number of place reviews can represent the number of guests who actually stayed in a place because only these guests can write reviews in Airbnb. Thus, the number of place reviews can indicate the number of times the place has been shared (Chen and Xie, 2017; Liang et al., 2017). Host reviews are written to evaluate hosts, and leaving them is optional for guests, whereas place reviews are written to evaluate places, and leaving them is mandatory. Thus, the numbers of host and place reviews are not always consistent: guests can write only place reviews, and if a host registers more than one place, the number of host reviews can be higher than the number of place reviews because the former is the total number of reviews the host has received across all registered places. Data collection Airbnb was selected as the data source. From December 12, 2015 to December 24, 2015, 854 host-created information postings pertaining to places in Bangkok, London and New York were collected, as these cities are ranked as the top global destination cities in the Asia Pacific, Europe and the USA, respectively (Hedrick-Wong and Choong, 2015). On the Airbnb website, places were searched by location without dates or other filters. A total of 306 places were retrieved in each city (918 in total); 64 were excluded because some were duplicated results and others' host-created information was not written in English. Finally, 291 places in Bangkok, 288 in London and 275 in New York were subjected to data analysis. Data analysis The aim of this study is to examine the influences of various appeals in host-created information on the user's purchase decision in Airbnb. 
Therefore, empirical impacts of different appeals in host-created information on guests' decision-making were investigated in the context of Airbnb. A Tobit regression model was used because of the censored nature of the dependent variable (Qazi et al., 2016). The distribution of the dependent variable was skewed to the left, and its observed value was found to be within a certain range and censored. When the dependent variables have these features, ordinary least squares (OLS) analysis results in biased and inconsistent estimates. The Tobit model is known as the proper method to overcome these problems because it is a regression model with a censored variable and a non-negative dependent variable. In addition, we needed to resolve the selection biases. Although the dependent variable in this research is a proxy variable for an actual purchase, it is not the exact number of sharing transactions. The number of actual purchases for the place could be considerably greater than the number of place reviews. As a result, our sample has inherent selection biases, and we used a Tobit model to resolve them (Qazi et al., 2016). Results Table II presents the descriptive statistics for the variables, and Table III shows the correlation coefficients between the variables. The highest coefficient (coef.) was 0.32, indicating that singularity and multicollinearity were not problems within our data set (Tabachnick and Fidell, 2007). In addition, the tolerance and the variance inflation factor (VIF) were also checked to assess the multicollinearity. Because the tolerance ranged from 0.81 to 0.98 and the VIF ranged from 1.02 to 1.23, we have confirmed that there is no evidence of multicollinearity. 
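To illustrate the estimation approach (this is not the authors' code), a Tobit model for an outcome left-censored at zero can be fitted by maximizing the censored log-likelihood directly, for example with scipy on simulated data:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# Simulated data: latent y* = 1 + 2x + eps, observed y = max(y*, 0),
# mimicking a non-negative, censored outcome such as a review count.
n = 500
x = rng.normal(size=n)
y_star = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)
y = np.maximum(y_star, 0.0)

X = np.column_stack([np.ones(n), x])  # intercept + one regressor

def neg_loglik(params):
    """Negative Tobit log-likelihood with left censoring at zero."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)  # parameterize sigma on the log scale to keep it positive
    mu = X @ beta
    ll = np.where(
        y <= 0,
        stats.norm.logcdf(-mu / sigma),                   # censored: P(y* <= 0)
        stats.norm.logpdf((y - mu) / sigma) - log_sigma,  # uncensored: normal density
    )
    return -ll.sum()

res = optimize.minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
b0, b1 = res.x[0], res.x[1]
sigma = np.exp(res.x[2])
print(b0, b1, sigma)  # estimates should be close to the true values 1.0, 2.0 and 1.0
```

Censored observations contribute the probability mass P(y* <= 0) rather than a density, which is exactly the correction that makes Tobit estimates consistent where the OLS fit described above would be biased.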
A Tobit model with the following specification was proposed to test the hypothesized relationships:

Actual purchases (number of place reviews) = dummySH + dummyID + b1*HR + b2*Emotional + b3*Social + b4*Price + b5*Occupancy + b6*Safety + b7*Picture + b8*Rating + e

where dummySH = whether the host is a super host; dummyID = whether the host performed ID verification; HR = the number of host reviews; Emotional = the LIWC index of the use of emotional words in the summary describing the place or host; Social = the LIWC index of the use of social words in the summary describing the place or host; Price = the price of the place per night; Occupancy = the maximum number of people who can be accommodated in the place; Safety = the number of safety features with which the place is equipped; Picture = the number of pictures of the place; Rating = the average evaluation of the place made by experienced guests; and e = the random error. Table IV summarizes the results of our model (log likelihood = -3708.71). All of the variables together explained approximately 3.2 per cent of the variance in the dependent variable (pseudo R2 = 0.032). Ethos: except for ID verification (coef. = 0.32, not significant), the positive impacts of the super host badge (coef. = 9.32, p-value < 0.01) and host reviews (coef. = 0.02, p-value < 0.001) were significant. Thus, while H1.1 and H1.3 were accepted, H1.2 was rejected. Pathos: although the positive impact of the use of social words was significant (coef. = 0.34, p-value < 0.05), the use of emotional words was not significant (coef. = -0.18). Hence, among the hypotheses on the pathos appeals, only H2.2 was accepted. Logos: among the five variables, only the number of safety features (coef. = -0.44) was not significant. Among the significant variables, the price (coef. = 0.07, p-value < 0.01), place pictures (coef. = 0.32, p-value < 0.001) and star-rating (coef. = 6.28, p-value < 0.001) were positively significant, whereas occupancy (coef. = -4.12, p-value < 0.01) was negatively significant. 
As a result, H3.3 was rejected because of its insignificant impact, and H3.1 and H3.2 were rejected because the observed significant impacts ran in the opposite direction to the hypothesized ones. Among the five pertinent hypotheses, only H3.4 and H3.5 were accepted. This study hypothesized that various information appeals of host-created information have significant effects on sharing behavior in Airbnb. Specifically, three categories of appeals were expected to relate to the guest's actual purchase because the guest's decision-making on Airbnb would be affected by the host-created information. Airbnb hosts who had super host badges and more reviews sold more products. The significant impacts of the super host badges and host reviews have also been examined in previous studies (Wang and Nicolau, 2017). The appellation of "super host" is given to a very small number of hosts. Therefore, the places of super hosts tend to be selected more frequently because they appear more credible. In addition, inasmuch as having more host reviews implies more experience, guests are more likely to choose the hosts with more reviews. Furthermore, Airbnb is a service whereby people purchase unknown accommodation products from unfamiliar suppliers, and it is based on a review mechanism for the sake of trust (Guttentag, 2015). The results confirmed the importance of host reviews in Airbnb. Unexpectedly, ID verification had no significant effect on the dependent variable. The insignificance of the ID verification is in accordance with previous studies of Airbnb (Teubner et al., 2016) and online communities (Racherla and Friske, 2012). This finding could be attributed to the large number of users who have verified their IDs. Because ID verification was common among Airbnb hosts, it may have no discernible effect on either the customer's decision to trust a host or the purchase decision (Teubner et al., 2016).
For the pathos appeals, the more social words were used to describe hosts and places, the more likely those hosts were to be selected. However, emotional words had no impact on the purchase. To Airbnb guests, sociable hosts introducing themselves and their places with more social words were attractive, but emotional hosts were not highly preferred. In Airbnb, unlike hotels, guests demand local experiences (Guttentag, 2015). Thus, the social appeal is a crucial factor in a guest's choice. Tussyadiah and Park (2018) found significant influences of social appeals of hosts in the Airbnb context. The following sentences are actual examples in which hosts described their places or themselves socially: "Hello! Happy to hear from you! I and my husband delight to offer this lovely SINGLE ROOM on your consideration [sic]," "We're excited to provide a warm welcome, and a clean, secure, and comfortable stay while you are visiting New York," "We are creating this space for like minded people, Travellers, expats, explorers, students [...] people who enjoy a company of local and international hosts who know Bangkok well." In the case of emotional words, because hosts try to present themselves and their places attractively to guests, most such words are positive. According to relevant studies investigating the impacts of emotional online content on an individual's perception, negative information tends to be perceived as more persuasive than positive information (Lee et al., 2017). This negativity bias has also been found in the Airbnb context, where negative reviews have significant effects on decreases in the host's reputation (Abramova et al., 2015). In this regard, emotional word-laden information in Airbnb tends to be uniformly positive. Hence, host-created information that tries to present places attractively may be difficult for guests to perceive as trustworthy or authentic.
The following sentences are actual examples describing hosts and places with charming and fascinating words, which appear to be ineffective in attracting guests: "You will feel relaxed and inspired in this spacious yet cozy room with classic brick fireplace," "You'll love my place because of the modern apartment with nice new finishes and central Heat and Air Conditioning, high-speed internet, the great views of the city, and the quick ride to Manhattan," "Spectacular loft in the heart of LA! Great restaurants, bars and more right at your door. Walk to everything that is Downtown LA." Finally, logos is an appeal to the recipient's logic. This logos appeal is assumed to be an objective figure or obvious characteristic (Bronstein, 2013). Because places in Airbnb have no precise criteria (unlike other existing products), it is particularly important to describe what features the products have (Tussyadiah and Zach, 2017). Accordingly, this study took into account the prices, occupancies, safety features, place pictures and star-ratings as logos appeals. According to the study findings, a higher price, more place pictures, better star-ratings and lower occupancies increase the attractiveness of a place. The impacts of the price, place picture, star-rating and occupancy on the actual purchase in this study are consistent with the literature, which argues that the objective characteristics of a product influence the rational judgments of the recipients (Garvin, 1984). Indeed, the impacts of the price, place pictures and star-ratings have been demonstrated in some previous Airbnb studies (So et al., 2018; Jin and Phua, 2014). One of the most important factors in purchasing an accommodation is the price. Compared to hotels, Airbnb is competitive in its prices (Tussyadiah and Pesonen, 2016). However, this study found that the higher the price was, the more likely the product was to be purchased.
A higher price may imply higher quality, even though some people prefer a less expensive product (Lichtenstein et al., 1993). Furthermore, Airbnb users have difficulty identifying the quality of sharing economy goods. Therefore, the guests would have tried to infer a place's quality from its price because they were unable to ascertain the real quality. This situation also increases the importance of visual evidence with respect to the products in Airbnb. Therefore, we posit that higher prices and more pictures have significant effects on the purchase because the characteristics of the products in a sharing economy are understood differently from comparable products in the traditional economy. The impact of the star-rating can be understood in relation to the mechanism of the sharing economy platform. For the sharing economy to earn trust, it must have a review mechanism (Guttentag, 2015). An online review has a close relation with consumers' attitudes, evaluations and ratings (Liu and Park, 2015; Hlee et al., 2018). Consequently, the star-rating has a significant impact on purchases in Airbnb. Interestingly, whereas Chen and Rothschild (2010) found a positive impact of the place size information, the current results showed that places with lower occupancy are selected more frequently by Airbnb guests. The reason for the negative impact of the occupancy on the actual purchase may be as follows: according to Lu and Zhu (2006), the size of the accommodation was assumed to represent the standard of the accommodation facility. Our results showed that the smaller the accommodation was, the more reservations it had; that is, users of peer-to-peer accommodations prefer smaller rooms to larger ones. Compared with reservations for one or two persons, the purchase decision becomes harder as the number of people staying increases, because there is greater potential for disruption.
This difficulty may be the reason for the result showing that the lower the occupancy is, the higher the chance of a purchase will be. Finally, the safety features were found to have no influence on the purchase. This result differs from the findings of previous Airbnb research, where safety and security issues were significant to guest satisfaction (Birinci et al., 2018). However, that study adopted a survey approach and sampled through Mechanical Turk without screening questions such as "have you ever used Airbnb?" (Birinci et al., 2018). Thus, the current research result could be more reliable in that actual behavioral data from Airbnb have been used. In Airbnb, a host can list a maximum of six basic safety features. Therefore, the number of safety features is observed to have no meaningful impact on the purchase because it conveys only basic safety details. This research has several theoretical and practical implications. It could contribute to the theoretical literature on Airbnb and the sharing economy because it addresses the existing studies' limitations. Although many previous works have attempted to study information communications between users in Airbnb by measuring their importance, only fragmentary investigations have been performed and only a partial understanding has been provided, such as the impact of the price information or host profile (Ert et al., 2016; Fagerstrom et al., 2017; Tussyadiah and Park, 2018; Wang and Nicolau, 2017). In the sharing economy context, because the information messages available in online platforms are usually the only sources for checking products, interacting with others and making decisions, individuals tend to consider various components and aspects of information messages (Chen and Xie, 2017; Gibbs et al., 2017). Thus, to fully understand the communicative role of information in a sharing economy, a holistic perspective is required rather than a partial focus.
As a result, this research focused on the various information appeals in host-created information by considering different categories of appeals and examining how they are delivered and perceived by individuals. By addressing the limitations of the previous literature, this study furthers research into the sharing economy. As most everyday activities can now be carried out online, the importance of online information to each individual's decision-making has grown steadily (Li et al., 2017). Although a number of studies have investigated which information is helpful for an individual's decision-making in online environments or is effective at stimulating an individual's choice, only a few theoretical frameworks have been adopted, such as dual-coding theory and the heuristic-systematic model (Hong et al., 2017). Hence, most previous results and implications have been explained from limited perspectives (Park et al., 2007). By adopting an untapped theoretical background, this research articulates the persuasive impacts of the information components of message appeals. Considering that research topics can be explored differently with a fresh background, applying a new but proper theoretical framework is meaningful for the development of the research field (Haugh, 2012). We see major practical implications of our work for Airbnb's management and its host users. First, it would be better for Airbnb to introduce supportive measures for ordinary hosts. In our results, the super host badge was the most influential factor in helping guests select places to stay. Although increasing the general quality of places through attractive incentives is important, this pattern could make it too uncommon for beginner hosts to be selected by guests. Furthermore, because the ratio of super hosts is quite low in our data set, super host badges could create a situation in which the rich grow richer while the poor grow poorer.
The significant impact of the number of host reviews could make the problem worse. Because hosts with more reviews are more attractive, hosts registering more than one place would be able to receive more reviews, which could create a difficult situation for ordinary hosts. In conclusion, guests tend to choose places owned by experienced hosts with super host badges and several places. Airbnb has tried to encourage users to become new hosts to achieve broad coverage across various locations. To attract new hosts, Airbnb needs to assure potential hosts that they will be selected by guests. However, the current system mostly favors a small number of super hosts and commercial hosts. To accommodate more hosts and achieve its original goal of providing guests real local experiences by connecting them to ordinary local hosts, Airbnb needs to create measures for new and ordinary hosts. Another practical implication of our work concerns the star-rating system. According to the results, the star-rating had a significant positive impact on the guests' decision-making. Although it is reasonable that positively evaluated places are selected more often by guests, if an evaluation does not effectively reflect the quality of a place, it will be misleading. If guests choose specific places because they have earned higher ratings, they will have higher expectations about these places. In this situation, if the higher ratings are not accurate, the guests will be highly disappointed by the incorrect evaluations, which will decrease the reliability of the Airbnb system. Fradkin et al. (2015) found that the star-rating in Airbnb generally tends to be inflated because the Airbnb system not only permits guests to review hosts and places but also permits hosts to review guests. Thus, the reciprocal evaluation system creates a biased trend in the star-ratings, and this feature could lead guests to distrust the system.
Because the star-rating was found to be significant, Airbnb should consider this possibility. Beyond Airbnb itself, host users can take away some practical directions, especially on how to attract guests effectively. The results show the positive impacts of using social words, suitable prices and place pictures. If hosts make appeals to pathos by using social words and introduce places as if guests were staying in a friend's house, they will increase the rate of purchase. Thus, hosts are encouraged to describe themselves and their places with more social words and provide as much visual evidence as possible. Moreover, in Airbnb, the price can be an indication of place quality, which implies that an excessively low price could bring effects opposite to what the host intends, as guests may associate low prices with low-quality places. Hence, hosts need to consider the prices of their places carefully. Finally, the fewer guests a place accommodates, the higher the purchase rate will be, i.e. we found a negative influence of the occupancy. This pattern indicates that guests are more willing to stay in places that are not too large. Indeed, if places are crowded with guests, hosts could find it difficult to pay attention to every guest who might want to communicate with them to receive local information or experiences. Rather than accommodating as many guests as possible, hosts should provide a better experience to their guests by hosting a limited number of people or by using their room designs to achieve the optimal occupancy. Thus, hosts should focus on having few guests and high quality rather than many guests for high profit. Despite these findings, there are some limitations to the study. This study used the number of place reviews as a proxy variable; only a customer who actually bought a product can write such a review. Accordingly, the current measured value cannot capture all of the actual purchases.
Additionally, the numbers of place and host reviews are not always equal, but they are likely to be correlated with each other, and this can cause a multicollinearity problem. Although no significant multicollinearity problem was identified in the full model, one could arise when only these two measures are considered. Therefore, the results of this study should be interpreted carefully, and future research needs to adopt more reliable measures for representing actual purchases. Because this study examined the persuasive power of the host-created information in Airbnb, it could have overlooked other factors that are crucial for the actual purchase. Therefore, considering other factors, such as direct communications between hosts and guests, will make it easier to understand an actual purchase. Finally, although this study sampled several cities to generalize the results, the three cities are difficult to regard as sufficiently representative cases for proper generalization. Note that the results are inconsistent depending on the selected cities (Table AI). Thus, future research can enhance the current study by examining the results with more representative samples and comparing them with detailed explanations.
This paper aims to explain a guest's purchase decision in Airbnb from the perspective of Aristotle's appeals. In host-created information, the authors investigate which information appeals are significantly considered by guests.
[SECTION: Method] The sharing economy is continuously emerging, and it has had substantial impacts on various industries (Tussyadiah and Park, 2018). In particular, the novel business model has generated a strong economic and social impact in the hospitality and tourism industry (Tussyadiah and Pesonen, 2016). As a leading business in the sharing economy, Airbnb provides millions of accommodations in 65,000 cities in approximately 200 countries on a global scale, and its value is estimated at 10 billion US dollars (Gunter, 2018). As Airbnb has become a major player in the hospitality and tourism industry, it is one of the top priority research topics in the field (Liang et al., 2018). In contrast to the products of a conventional accommodation service, the products of Airbnb are individuals' private places; thus, it is much more difficult for potential travelers to obtain prior knowledge (Fagerstrom et al., 2017). Although hotels' experiential products are also hard to pretest, people have general knowledge about these products based on their past experiences, hotel brands, or hotel images. However, even for potential travelers who have past experience using Airbnb, Airbnb's idiosyncratic places are almost impossible to anticipate in terms of their qualities (Wang et al., 2016). This situation renders the communications between hosts (individuals lending their places) and guests (individuals renting hosts' places) critical in Airbnb, as the sharing transaction is processed primarily based on these communications. By considering the information about both places and hosts that is created by the hosts themselves (host-created information), the guests search for and select places to stay, indicating that host-created information is one of the core factors in the Airbnb system (Ert et al., 2016). 
Although many studies have attempted to investigate the communication process in Airbnb with host-created information, most have focused on specific parts of the information, such as the price (Wang and Nicolau, 2017) or host profile (Fagerstrom et al., 2017). However, the process through which the totality of the host-created information is delivered and perceived has scarcely been investigated. Therefore, this study examines how various information appeals in host-created information are communicated in Airbnb to further the understanding of the communication process in a sharing economy. Specifically, we examine which appeals in the host-created information significantly influence the guest's decision-making. Based on Aristotle's appeals, general aspects of the host-created information in Airbnb are empirically analyzed.
Sharing economy and Airbnb
The sharing economy is defined as "peer-to-peer (P2P) based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services" (Hamari et al., 2016). Social media have enabled the emergence of P2P online networks as well as social sharing (Priporas et al., 2017). Airbnb is a successful example of a sharing economy business. The sharing economy website for short-term rentals was founded in 2008 (Libert et al., 2014). By enabling people to easily lease short-term lodgings with their own living spaces, Airbnb has made an impact on the sharing economy and, hence, on the hospitality and tourism field (Morgan, 2011). With affordable prices and highly varied locations and types of lodgings, Airbnb has become a major player in the industry and has been expanding its business into relevant fields such as airlines (Rizzo, 2018). Currently, Airbnb accommodates approximately 5 million lodging listings throughout the world and facilitates over 260 million check-ins on average per year (Airbnb, 2018).
Because the sharing economy is a P2P-based network, its value is created, distributed and consumed by users, and this feature makes interactive communications between users indispensable to the sharing economy (Xie and Mao, 2017). Lampinen et al. (2013) found that the communication flow between users is important for a successful online sharing service. Thus, communication on a sharing economy website consists of the provider's informational message and the recipient's response (Poon and Huang, 2017). In Airbnb, hosts introduce their places and themselves through informative messages (i.e. host-created information), and potential guests make decisions by evaluating the places and hosts based on the host-created information (Ert et al., 2016). Given its importance, the communication process in Airbnb has been of great interest to researchers (Xie and Mao, 2017). However, the existing studies have usually focused on specific appeals, such as the price (Gibbs et al., 2017; Wang and Nicolau, 2017) and host profile (Gunter, 2018; Tussyadiah and Park, 2018; Wu et al., 2017). Consequently, the role of host-created information has only been partially understood. To address this research gap, this study examines the influences of information appeals within host-created information in Airbnb.
Aristotle's appeals in online information messages
As social media has become prevalent in most activities of daily life, the wide adoption of social media services and their online information has generated substantial impacts on individuals and businesses (Colicev et al., 2018). Thus, the influence of social media information has been an important research topic and is recognized in various contexts (Thakur and Hale, 2018).
By examining this information's significant impact on an individual's perception and behavior in the pre-consumption (Swani et al., 2017), consumption (Chen et al., 2017) and post-consumption stages (Nadeem et al., 2015; VanMeter et al., 2015), researchers have determined that the information communicated in online platforms has a strong impact. As a result, it is important to understand how online information can stimulate, persuade and inspire people (Yang et al., 2018). Aristotle's appeals provide a suitable framework for analyzing the persuasive influence of information (Otterbacher, 2011). According to Aristotle's appeals (Ramage et al., 2015), interpersonal messages can be persuasive and powerful through the following three components: ethos, pathos and logos (Xun and Reynolds, 2010). First, ethos is an ethical appeal that includes all of the proofs of the message sender's authority and credibility. Second, pathos is an emotional appeal to the recipient. Finally, logos is a rational appeal to the recipient; usually, facts, figures and examples are used so that the recipient perceives the message as reasonable. Xun and Reynolds (2010) demonstrated how readers in an online forum are persuaded by a mix of ethos, logos and pathos, as if the messages were offline. Otterbacher (2011) examined logos, ethos and pathos in online review communities and found more ethos in reviews of experience goods requiring references, more pathos in reviews of daily necessities, and logos as the most prominent appeal in reviews. Bronstein (2013) revealed that candidates for President of the USA had expressed their identities to voters with an emotional and synchronous approach, using the Aristotelian language of persuasion on SNS.
All of the studies confirmed that the propositions of Aristotle's appeals were supported in online communications by identifying the significance of the online information messages relative to the users' reactions (Bronstein, 2013). However, these methods have not been extensively applied to the communication process in the sharing economy despite the importance of understanding the persuasive impacts of information messages. In the case of Airbnb, the main platform is its online website, and the sharing transaction is processed primarily based on communications between users (Ert et al., 2016). Individuals who want to share their places join Airbnb and become hosts by registering them. During the registration, potential hosts input various pieces of information about their places and about themselves. Then, the guests search, evaluate and select the places where they want to stay based on the host-created information. Because there are no sources that enable one to become familiar with places other than the host-created information, this information is critical to a guest's decision making (Chen and Xie, 2017). Based on the highly expected importance of host-created information, a general proposition is suggested that assumes the significant influence of host-created information on a guest's purchase decision in Airbnb. The various informative statements' influences are empirically examined based on Aristotle's appeals to explain which piece of information is effective at persuading people (Scott, 1967). In this research, host-created information is defined as the information available in Airbnb that is written by hosts to convince potential guests to select their places. In Airbnb, various information appeals are available in each post of host-created information. Based on Aristotle's appeals, we use three categories: ethos; pathos; and logos. For the dependent variable, this research uses the number of times that places were shared. 
In Airbnb, only the guests who shared places can post reviews; hence, the number of reviews about places indicates the number of times the places have been shared. The research model is depicted in Figure 1.
Ethos in host-created information
Ethos symbolizes the message sender's credibility. If the messages are from trustworthy sources, they tend to be more influential (O'keefe, 1987). In online communications, the source's credibility is related to the user's reputation as perceived by the other users, which represents a signal of trustworthiness (Slee, 2013). Many researchers have verified the impact of a website user's reputation on other users' reactions (Liu and Park, 2015) and online behaviors (Jin and Phua, 2014). In addition, several studies proved the persuasive impact of the message sender's credibility in online information based on Aristotle's appeals (Yang et al., 2018). In Airbnb, users can establish their reputations with three types of proof: a super host badge, ID verification and host reviews. Since 2014, a super host badge system has existed in Airbnb. According to Airbnb, a super host badge is awarded to the "experienced hosts who are passionate about making your trip memorable" (Airbnb, 2017). There are four requirements to obtain a super host badge: complete at least ten stays in a year; achieve a rating greater than 4.8 out of five; respond within 24 hours at least 90 per cent of the time; and have no cancellations of confirmed reservations without extenuating circumstances. Once these requirements are met, the badge is awarded automatically to the focal hosts. In addition, because the requirements are re-confirmed every year, the status is updated if there are any changes (Gunter, 2018). To guests, a super host badge could be perceived as a reliable indication of the host's experience and commitment and represent the quality of the place.
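The four Superhost requirements listed above can be expressed as a simple eligibility check. This helper is a hypothetical illustration of the stated criteria only; Airbnb's internal eligibility logic is not public:

```python
def is_superhost(stays_last_year, avg_rating, response_rate, cancellations):
    """Check the four Superhost requirements described in the text.
    Hypothetical helper: names and thresholds follow the criteria as
    stated, not Airbnb's actual implementation.
    """
    return (
        stays_last_year >= 10      # at least ten completed stays in a year
        and avg_rating > 4.8       # rating greater than 4.8 out of 5
        and response_rate >= 0.90  # respond within 24h at least 90% of the time
        and cancellations == 0     # no cancellations of confirmed reservations
    )
```

Because every criterion is a hard threshold, failing any single one (e.g. a 4.8 rating, which does not exceed 4.8) disqualifies the host until the next yearly re-confirmation.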
It has been found that guests are willing to pay a higher price for sharing with super hosts (Liang et al., 2017) and that the prices of super hosts' places tend to be higher (Wang and Nicolau, 2017). Based on these studies, it could be expected that a super host badge is a reliable signal of the host's credibility and the place's quality. H1.1. A super host badge will have a positive impact on a guest's decision-making. ID verification indicates whether the host has submitted his or her own personal information to Airbnb. If users: submit government-issued ID; connect their Airbnb accounts to other online accounts such as Google or Facebook; and upload a profile photo, phone number and email address, Airbnb gives an ID verification badge. Although the ID disclosure remains private, the signal that this host verified his or her ID can improve a guest's perception of the host (Racherla and Friske, 2012). Previous studies demonstrated the positive reactions of users to ID disclosures of the message provider. Because revealing personal information in online environments can reduce uncertainty (Tidwell and Walther, 2002) and enhance the source's credibility (Sussman and Siegal, 2003), information representing the ID disclosure of a message provider is a crucial factor in the recipient's perception. Hence, ID verification is evidence of the host's credibility. H1.2. ID verification will have a positive impact on a guest's decision-making. As mentioned above, the number of reviews indicates how many times the host has accommodated guests because only the guests who actually stayed in the places can post reviews. In other words, a host review can indicate a host's hosting experience (Weiss et al., 2008). Guests would prefer to stay in places that are guided by more experienced hosts, and Tussyadiah and Park (2018) found that more qualified, skilled and experienced hosts are more positively perceived. H1.3. 
A host review will have a positive impact on a guest's decision-making.
Pathos in host-created information
Pathos indicates the elements affecting the message recipients emotionally, and an emotional appeal is powerful in terms of its ability to persuade (Xun and Reynolds, 2010). Because Airbnb hosts have to write summaries of the places or themselves in their own words, guests are able to perceive an emotional or social stimulus through the hosts' language patterns in their descriptions. According to Tussyadiah and Pesonen (2016), one of the main reasons for using Airbnb is that guests want to find opportunities to explore local life by communicating with hosts. Hence, they would prefer the places whose hosts are emotional, social and friendly, and these characteristics could be reflected by their writing styles (Ludwig et al., 2013). Tussyadiah and Park (2018) found that the Airbnb user's likelihood of booking is higher when the hosts describe themselves as being willing to meet new people in host-created information. H2.1. The use of emotional words will have a positive impact on a guest's decision-making. H2.2. The use of social words will have a positive impact on a guest's decision-making.
Logos in host-created information
Logos persuades people with reasoned discourse. In a product choice situation, the consumer's rational thinking is mostly involved with the product awareness (Xun and Reynolds, 2010). In the host-created information of Airbnb, objective information regarding the place is provided: the price, occupancy, safety features, place picture and star-rating. These accommodation characteristics are important for a guest's decision-making (Yang et al., 2018). In Airbnb, the price means the price per night, and it is decided by the hosts.
Inasmuch as the generally lower prices of places compared to conventional hotel rooms are responsible for the competitiveness of Airbnb (Guttentag, 2015), Airbnb users would expect lower costs for their stays and select the places with the most affordable prices (So et al., 2018). H3.1. The price will have a negative impact on the guest's decision-making. In Airbnb, the occupancy refers to the maximum number of guests a place can accommodate, i.e. higher occupancy signifies more space. Lu and Zhu (2006) indicated that guests usually consider the room size as a crucial factor in the quality of the accommodation facility. H3.2. Occupancy will have a positive impact on the guest's decision-making. Airbnb promotes hosts who equip basic safety features in their places by providing information about the number of these safety features. Specifically, smoke detectors, carbon monoxide detectors, first aid kits, safety cards, fire extinguishers and locks on the bedroom door are listed as desirable safety features in Airbnb. Hosts can list the features they have installed in their places, and the equipped features are shown in the host-created information. Safety could be more important in Airbnb than in a hotel because each place within Airbnb is distinctive in terms of its quality, and safety accidents are usually beyond the platform's control (Richard and Cleveland, 2016). H3.3. Safety features will have a positive impact on a guest's decision-making. The place picture indicates the uploaded picture of the place. Hosts in Airbnb are able to upload as many photographs of their places as they want. When people search for intangible products to make a purchase, pictorial information can be an effective means to exhibit the products' qualities (Jin and Phua, 2014). H3.4. Place pictures will have a positive impact on a guest's decision-making. Airbnb guests can evaluate the places where they have stayed with a star-rating.
Guests who have stayed in specific places are encouraged to appraise the whole stay experience by giving star points from one star to five stars. In online communities, peer evaluation has been widely used to give helpful information to assist other users' decision-making (Tsao, 2018). Lee et al. (2015) demonstrated that the star-ratings of places are important to the sales of places in Airbnb. H3.5. The star-rating will have a positive impact on a guest's decision-making.

Instrument development
Table I indicates each variable's description. The super host badge and ID verification are measured by checking whether each host has the badges. The host review is measured based on the number of reviews that each host has received. The uses of emotional and social words are measured with Linguistic Inquiry and Word Count (LIWC). LIWC is automated word analysis software that provides content analysis results based on 70 preset linguistic categories (Tausczik and Pennebaker, 2010). For emotional words, those words that describe an individual's emotions are considered, and they indicate how emotionally oriented the authors are. In the case of social words, those words that explain social relations are included to show how socially oriented the authors are (Tausczik and Pennebaker, 2010). By calculating the proportion of words in the whole text that fall into a focal category, LIWC can indicate what tendencies, inclinations or personalities the writers have (Mehl et al., 2006). Over 100 studies have applied this approach in various contexts (Cohn et al., 2004; Humphreys, 2010; Ludwig et al., 2013) and confirmed its reliability and validity. Thus, this research measures the use of emotional and social words by adopting the resulting values of LIWC's analyses of the textual descriptions in host-created information (Lee and van Dolen, 2015). The price, occupancy and star-rating are measured in numbers as shown in the host-created information.
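The LIWC measure used here reduces to a simple proportion: the share of a summary's words that fall into a category dictionary. A minimal sketch follows; the tiny word sets are illustrative stand-ins, since LIWC's actual category dictionaries are proprietary and far larger.

```python
import re

# Illustrative stand-ins for LIWC's proprietary category dictionaries.
EMOTIONAL = {"love", "happy", "cozy", "excited", "relaxed", "warm"}
SOCIAL = {"we", "friend", "welcome", "together", "meet", "people"}

def category_rate(text: str, category: set) -> float:
    """Return the percentage of words in `text` that belong to `category`."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in category)
    return 100.0 * hits / len(words)

summary = "We are excited to welcome you to our cozy place near the river."
emo = category_rate(summary, EMOTIONAL)  # "excited", "cozy" -> 2 of 13 words
soc = category_rate(summary, SOCIAL)     # "we", "welcome" -> 2 of 13 words
```

Each summary thus receives one score per category, and these scores serve as the independent variables for the pathos hypotheses.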
Similarly, the safety features and place pictures are counted in numbers based on the host-created information. Finally, the dependent variable is the actual purchases, and it is measured by the number of place reviews. The number of place reviews can represent the number of guests who actually stayed in the place because only these guests can write reviews in Airbnb. Thus, the number of place reviews could indicate the number of times the place has been shared (Chen and Xie, 2017; Liang et al., 2017). Host reviews evaluate hosts, and it is optional for guests to leave them; place reviews evaluate places, and it is mandatory for guests to leave them. Thus, the numbers of host and place reviews are not always consistent: guests can write only place reviews, and if a host registers more than one place, the number of host reviews can be higher than that of place reviews because the former shows the total number of reviews received from all of the registered places.

Data collection
Airbnb was selected as the data source. From December 12, 2015 to December 24, 2015, host-created information postings pertaining to places in Bangkok, London and New York were collected, as these cities are ranked as the top global destination cities in the Asia Pacific, Europe and the USA, respectively (Hedrick-Wong and Choong, 2015). At the Airbnb website, places were searched by location without dates or other filters. A total of 306 places was retrieved in each city (918 in total); 64 were excluded because some were duplicated results and others' host-created information was not written in English. Finally, 854 postings (291 places in Bangkok, 288 in London and 275 in New York) were subjected to data analysis.

Data analysis
The aim of this study is to examine the influences of various appeals in host-created information on the user's purchase decision in Airbnb.
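The screening step described above (dropping duplicated search results and non-English descriptions) can be sketched as a small filter. The records below are hypothetical, and the ASCII-ratio heuristic merely stands in for whatever language check was actually applied in the study.

```python
# Minimal sketch of the screening step: drop duplicate listings and
# non-English descriptions. The records and the ASCII-ratio heuristic
# are illustrative; the study's actual language check is not specified.
def is_mostly_ascii(text: str, threshold: float = 0.9) -> bool:
    if not text:
        return False
    return sum(c.isascii() for c in text) / len(text) >= threshold

def screen(listings):
    seen, kept = set(), []
    for item in listings:          # item: (listing_id, description)
        lid, desc = item
        if lid in seen:
            continue               # duplicated search result
        seen.add(lid)
        if not is_mostly_ascii(desc):
            continue               # description not written in English
        kept.append(item)
    return kept

raw = [(1, "Sunny room near the park"),
       (1, "Sunny room near the park"),                       # duplicate
       (2, "\u0e2b\u0e49\u0e2d\u0e07\u0e1e\u0e31\u0e01...")]  # Thai description
print(len(screen(raw)))  # prints 1
```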
Therefore, the empirical impacts of different appeals in host-created information on guests' decision-making were investigated in the context of Airbnb. A Tobit regression model was used because of the censored nature of the dependent variable (Qazi et al., 2016). The distribution of the dependent variable was skewed, and its observed value was found to be within a certain range and censored. When the dependent variable has these features, ordinary least squares (OLS) analysis results in biased and inconsistent estimates. The Tobit model is known as the proper method to overcome these problems because it is a regression model with a censored, non-negative dependent variable. In addition, we needed to address selection biases. Although the dependent variable in this research is a proxy variable for an actual purchase, it is not the exact number of sharing transactions. The number of actual purchases for a place could be considerably greater than the number of place reviews. As a result, our sample has inherent selection biases, and we used a Tobit model to mitigate them (Qazi et al., 2016).

Results
Table II presents the descriptive statistics for the variables, and Table III shows the correlation coefficients between the variables. The highest correlation coefficient (coef.) was 0.32, indicating that singularity and multicollinearity were not problems within our data set (Tabachnick and Fidell, 2007). In addition, the tolerance and the variance inflation factor (VIF) were also checked to assess multicollinearity. Because the tolerance ranged from 0.81 to 0.98 and the VIF ranged from 1.02 to 1.23, we confirmed that there is no evidence of multicollinearity.
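The tolerance and VIF diagnostics reported above can be reproduced from any predictor matrix. The sketch below computes the VIF for each column from first principles (tolerance is simply its reciprocal); the data are simulated, not the study's.

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of the predictor matrix X.
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept). Tolerance_j = 1 / VIF_j."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))    # three roughly independent predictors
print(np.round(vif(X), 2))       # values near 1 -> no multicollinearity
```

With predictors as weakly correlated as those in Table III, the VIFs stay close to 1, matching the reported range of 1.02 to 1.23.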
A Tobit model covering the ten hypothesized relationships was specified as follows:

Actual purchases (number of place reviews) = dummySH + dummyID + b1*HR + b2*Emotional + b3*Social + b4*Price + b5*Occupancy + b6*Safety + b7*Picture + b8*Rating + e

where dummySH = whether the host is a super host; dummyID = whether the host performed ID verification; HR = the number of host reviews; Emotional = LIWC index of the use of emotional words in the summary describing the place or host; Social = LIWC index of the use of social words in the summary describing the place or host; Price = the price of the place per night; Occupancy = the maximum number of people who can be accommodated in the place; Safety = the number of safety features with which the place is equipped; Picture = the number of pictures of the place; Rating = the average evaluation of the place made by experienced guests; and e = the random error. Table IV summarizes the results of our model (log likelihood = -3708.71). All of the variables explained approximately 3.2 per cent of the variance in the dependent variable (pseudo R2 = 0.032). Ethos: except for ID verification (coef. = 0.32), the positive impacts of the super host badge (coef. = 9.32, p-value < 0.01) and host review (coef. = 0.02, p-value < 0.001) were significant. Thus, while H1.1 and H1.3 were accepted, H1.2 was rejected. Pathos: although the positive impact of the use of social words was significant (coef. = 0.34, p-value < 0.05), the use of emotional words was not significant (coef. = -0.18). Hence, among the hypotheses on the pathos appeals, only H2.2 was accepted. Logos: among the five variables, only the number of safety features (coef. = -0.44) was not significant. Among the significant variables, whereas the price (coef. = 0.07, p-value < 0.01), place picture (coef. = 0.32, p-value < 0.001) and star-rating (coef. = 6.28, p-value < 0.001) were positively significant, the occupancy (coef. = -4.12, p-value < 0.01) was negatively significant.
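For readers unfamiliar with the estimator, the quantity a Tobit fit maximizes is the censored-normal log-likelihood. The sketch below assumes left-censoring at zero and uses illustrative variable names; it is not the authors' code, only the objective such an estimator would maximize.

```python
import math

def tobit_loglik(beta, sigma, X, y):
    """Log-likelihood of a Tobit model left-censored at zero.
    beta: coefficients (first entry is the intercept),
    X: rows of predictors (without an intercept column), y: outcomes."""
    norm_cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    ll = 0.0
    for xi, yi in zip(X, y):
        mu = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
        if yi <= 0:   # censored observation: contributes P(y* <= 0)
            ll += math.log(norm_cdf(-mu / sigma))
        else:         # uncensored observation: normal log-density
            z = (yi - mu) / sigma
            ll += -0.5 * z * z - 0.5 * math.log(2 * math.pi) - math.log(sigma)
    return ll

# Tiny illustrative data set: one predictor, one censored outcome.
X = [[2.0], [0.0], [5.0]]
y = [3.0, 0.0, 7.0]
ll = tobit_loglik([0.5, 1.2], 2.0, X, y)  # finite log-likelihood value
```

An optimizer searching over beta and sigma for the maximum of this function yields the coefficient estimates of the kind reported in Table IV.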
As a result, H3.3 was rejected because of its insignificant impact, while H3.1 and H3.2 were rejected because the observed significant impacts ran in directions opposite to those hypothesized. Among the five pertinent hypotheses, only H3.4 and H3.5 were accepted. This study hypothesized that various information appeals of host-created information have significant effects on sharing behavior in Airbnb. Specifically, three categories of appeals were expected to relate to the guest's actual purchase because the guest's decision-making on Airbnb would be affected by the host-created information. Airbnb hosts who had super host badges and more reviews sold more products. The significant impacts of the super host badges and host reviews have also been examined in previous studies (Wang and Nicolau, 2017). The appellation of "super host" is given to a very small number of hosts. Therefore, the places of super hosts tend to be more frequently selected because they appear more credible. In addition, inasmuch as having more host reviews implies more experience, guests are more likely to choose the hosts with more reviews. Furthermore, Airbnb is a service whereby people purchase unknown accommodation products from unfamiliar suppliers, and it is based on a review mechanism for the sake of trust (Guttentag, 2015). The results confirmed the importance of host reviews in Airbnb. Unexpectedly, ID verification had no significant effect on the dependent variable. The insignificance of ID verification is in accordance with previous studies of Airbnb (Teubner et al., 2016) and online communities (Racherla and Friske, 2012). This finding could be attributed to the large number of users who have verified their IDs. Because ID verification was common among Airbnb hosts, it may have had no discernible effect on either the customer's decision to trust a host or the purchase decision (Teubner et al., 2016).
For the pathos, the more social words were used to describe hosts and places, the more likely those hosts were to be selected. However, emotional words had no impact on the purchase. To Airbnb guests, sociable hosts introducing themselves and their places with more social words were attractive, but emotional hosts were not highly preferable. In Airbnb, unlike hotels, guests demand local experiences (Guttentag, 2015). Thus, the social appeal is a crucial factor in a guest's choice. Tussyadiah and Park (2018) likewise found significant influences of hosts' social appeals in the Airbnb context. The following sentences are actual examples with which hosts described their places or themselves socially: "Hello! Happy to hear from you! I and my husband delight to offer this lovely SINGLE ROOM on your consideration [sic]," "We're excited to provide a warm welcome, and a clean, secure, and comfortable stay while you are visiting New York," "We are creating this space for like minded people, Travellers, expats, explorers, students [...] people who enjoy a company of local and international hosts who know Bangkok well." In the case of emotional words, as hosts try to present themselves and their places attractively to guests, most such words are positive. According to relevant studies investigating the impacts of emotional online content on an individual's perception, negative information tends to be perceived as more persuasive than positive information (Lee et al., 2017). This negativity bias has also been found in the Airbnb context: negative reviews have significant effects on decreases in a host's reputation (Abramova et al., 2015). In this regard, emotional word-laden information in Airbnb tends to be uniformly positive, and host-created information that strains to present a place attractively may be difficult for guests to regard as trustworthy or authentic.
The following sentences are actual examples describing hosts and places with charming and fascinating words, which appear to be ineffective in attracting guests: "You will feel relaxed and inspired in this spacious yet cozy room with classic brick fireplace"; "You'll love my place because of the modern apartment with nice new finishes and central Heat and Air Conditioning, high-speed internet, the great views of the city, and the quick ride to Manhattan"; "Spectacular loft in the heart of LA! Great restaurants, bars and more right at your door. Walk to everything that is Downtown LA". Finally, logos is an appeal to the recipient's logic. This logos appeal is assumed to be an objective figure or obvious characteristic (Bronstein, 2013). Because places in Airbnb have no precise criteria (unlike other existing products), it is particularly important to describe what features the products have (Tussyadiah and Zach, 2017). Accordingly, this study took into account the prices, occupancies, safety features, place pictures and star-ratings as logos appeals. According to the study findings, a higher price, more place pictures, better star-ratings and lower occupancies increase the attractiveness of a place. The impact of the price, place picture, star-rating and occupancy on the actual purchase in this study is consistent with the literature, which argues that the objective characteristics of a product influence the theoretical judgments of the recipients (Garvin, 1984). Indeed, the impacts of the price, place pictures and star-ratings have been demonstrated in some previous Airbnb studies (So et al., 2018; Jin and Phua, 2014). One of the most important factors in purchasing an accommodation is the price. Compared to hotels, Airbnb is competitive in its prices (Tussyadiah and Pesonen, 2016). However, this study found that the higher the price was, the more likely the product was to be purchased.
A higher price may imply higher quality, even though some people prefer a less expensive product (Lichtenstein et al., 1993). Furthermore, Airbnb users have difficulty identifying the quality of sharing economy goods. Therefore, the guests would have tried to determine a place's quality according to the price because they were unable to ascertain the real quality. This situation also increases the importance of visual evidence with respect to the products in Airbnb. Therefore, we posit that higher prices and more pictures have significant effects on the purchase because the characteristics of the products in a sharing economy are understood differently from the comparable products in the traditional economy. The impact of the star-rating can be understood in relation to the mechanism of the sharing economy platform. For the sharing economy to earn trust, it must have a review mechanism (Guttentag, 2015). An online review has a close relation with consumers' attitudes, evaluations and ratings (Liu and Park, 2015; Hlee et al., 2018). Consequently, the star-rating has a significant impact on purchases in Airbnb. Interestingly, whereas Chen and Rothschild (2010) found a positive impact of place size information, the current results showed that places with lower occupancy are selected more frequently by Airbnb guests. The reason for the negative impact of the occupancy on the actual purchase is as follows: according to Lu and Zhu (2006), the size of an accommodation is assumed to represent the standard of the accommodation facility. The results showed that the smaller the accommodation was, the more reservations it had. That is, users of peer-to-peer accommodations prefer smaller rooms to larger ones. In comparison with reservations for one or two persons, it is not easy for guests to decide to purchase when the number of people staying there increases because there will be a greater potential for disruptions.
This difficulty may be the reason for the result showing that the lower the number of occupants is, the higher the chance of a purchase will be. Finally, the safety features were found to have no influence on the purchase. This result is different from the findings of the previous research of Airbnb, where safety and security issues are significant to the guest satisfaction (Birinci et al., 2018). However, the previous case adopted a survey approach and sampled through Mechanical Turk without screening questions such as "have you ever used Airbnb?" (Birinci et al., 2018). Thus, the current research result could be more reliable in that actual behavioral data in Airbnb have been used. In Airbnb, a host can list a maximum of six basic safety features. Therefore, the number of safety features is observed to have no meaningful impact on the purchase because it can only show the basic details for safety. This research has several theoretical and practical implications. This research could contribute to the theoretical literature on Airbnb and the sharing economy because it addresses the existing studies' limitations. Although many previous works have attempted to study information communications between users in Airbnb by measuring their importance, only fragmentary investigations have been performed and only a partial understanding has been provided, such as the impact of the price information or host profile (Ert et al., 2016; Fagerstrom et al., 2017; Tussyadiah and Park, 2018; Wang and Nicolau, 2017). In the sharing economy context, because the information messages available in online platforms are usually the only sources for checking products, interacting with others, and making decisions, individuals tend to consider various components and aspects of information messages (Chen and Xie, 2017; Gibbs et al., 2017). Thus, to fully understand the communicative role of information in a sharing economy, a holistic perspective is required rather than a partial focus. 
As a result, this research focused on the various information appeals in host-created information by considering different categories of appeals and examining how they are delivered and perceived by individuals. By addressing the limitations of the previous literature, this study furthers the research into the sharing economy. As most everyday activities have become possible online, the importance of online information to each individual's decision-making has been increasingly appreciated (Li et al., 2017). Although a number of studies have investigated which information is helpful for an individual's decision-making in online environments or is effective at stimulating an individual's choice, few theoretical frameworks have been adopted; examples include dual-coding theory and the heuristic-systematic model (Hong et al., 2017). Hence, most previous results and implications have been explained from limited perspectives (Park et al., 2007). By adopting an untapped theoretical background, this research articulates the persuasive impacts of the information components of message appeals. Considering that research topics can be explored differently with a new background, applying a new but appropriate theoretical framework is meaningful for the development of the research field (Haugh, 2012). We see major practical implications of our work for Airbnb's management and its host users. First, it would be better for Airbnb to introduce support measures for ordinary hosts. In our results, super host badges were the most influential factor helping guests select places to stay. Although increasing the general quality of places through attractive incentives is important, this pattern could make it too uncommon for beginner hosts to be selected by guests. Furthermore, because the ratio of super hosts is quite low in our data set, super host badges could create a situation in which the rich grow richer while the poor grow poorer.
The significant impact of the number of host reviews could make the problem worse. As hosts with more reviews are more attractive, hosts registering more than one place are able to receive more reviews, which could create a difficult situation for ordinary hosts. In conclusion, guests tend to choose places owned by experienced hosts with super host badges and several places. Airbnb has tried to encourage users to become new hosts to achieve broad coverage in various locations. To attract new hosts, Airbnb needs to assure potential hosts that they will be selected by guests. However, the current system is mostly favorable to a small number of super hosts and commercial hosts. To accommodate more hosts and achieve its original goal of providing guests real local experiences by connecting them to ordinary local hosts, Airbnb needs to create measures for new and ordinary hosts. Another practical implication of our work concerns the star-rating system. According to the results, the star-rating had a significant positive impact on guests' decision-making. Although it is reasonable that positively evaluated places are more often selected by guests, if an evaluation does not reflect the quality of a place effectively, it will be misleading. If guests choose specific places because they have earned higher ratings, they will have higher expectations about these places. In this situation, if the higher ratings are not accurate, the guests will be highly disappointed by the incorrect evaluations, which will decrease the reliability of the Airbnb system. Fradkin et al. (2015) found that the star-rating in Airbnb generally tends to be inflated because the Airbnb system not only permits guests to review hosts and places but also permits hosts to review guests. Thus, the reciprocal evaluation system creates a biased trend in the star-ratings, and this feature could lead guests to distrust the system.
Because the star-rating was examined as significant, Airbnb should consider this possibility. Beyond Airbnb itself, host users can be given some practical directions, especially on how to attract guests effectively. The results show the positive impacts of using social words, suitable prices and place pictures. If hosts make appeals to pathos by using social words and introduce places as if guests were staying in a friend's house, they will increase the rate of purchase. Thus, hosts are encouraged to describe themselves and their places with more social words and to provide as much visual evidence as possible. Moreover, in Airbnb, the price can be an indication of the place's quality, which implies that an excessively low price could bring effects opposite to what the host intends, as guests may associate low prices with low-quality places. Hence, hosts need to consider the prices of their places carefully. Finally, the lower the occupancy is, the higher the purchase rate will be, i.e. we found a negative influence of occupancy. This pattern indicates that guests are more willing to stay in smaller, less crowded places. Indeed, if places are crowded with guests, hosts could find it difficult to pay attention to every guest who might want to communicate with them to receive local information or experiences. Rather than accommodating as many guests as possible, hosts should provide a better experience to their guests by hosting a limited number of people or by using their room designs to achieve an optimal occupancy. Thus, hosts should focus on having few guests and high quality rather than many guests for high profit. Despite these findings, there are some limitations to the study. This study used the number of place reviews as a proxy variable; only a customer who actually bought a product can write such a review. Accordingly, the current measured value cannot capture all of the actual purchases.
Additionally, the numbers of place and host reviews are not always equal, but they are likely to be correlated with each other, and this can cause a multicollinearity problem. Although no significant multicollinearity problem was identified in the full model, one could arise when only these two measures are considered. Therefore, the results of this study should be interpreted carefully, and future research needs to adopt more reliable measures for representing the actual purchases. As this study examined the persuasive power of the host-created information in Airbnb, it could have overlooked other factors that are crucial for the actual purchase. Therefore, consideration of other factors, such as direct communications between hosts and guests, will make it easier to understand an actual purchase. Finally, although this study adopted several cities as data samples to generalize the results, the three cities are difficult to regard as sufficiently representative cases for proper generalization. Note that the results are inconsistent depending on the selected cities (Table AI). Thus, future research can enhance the current study by examining the results with more representative samples and comparing them with detailed explanations.
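The collinearity concern raised in these limitations, namely that host-review and place-review counts tend to move together, can be checked directly with a Pearson correlation. A minimal sketch using hypothetical counts (not the study's data):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical review counts for five listings (not the study's data).
host_reviews = [12, 40, 3, 25, 60]
place_reviews = [10, 35, 3, 20, 55]
r = pearson(host_reviews, place_reviews)   # strongly positive correlation
```

A correlation near 1 between the two counts would confirm that including both as predictors risks exactly the multicollinearity problem noted above.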
[SECTION: Findings] The sharing economy is continuously emerging, and it has had substantial impacts on various industries (Tussyadiah and Park, 2018). In particular, the novel business model has generated a strong economic and social impact in the hospitality and tourism industry (Tussyadiah and Pesonen, 2016). As a leading business in the sharing economy, Airbnb provides millions of accommodations in 65,000 cities in approximately 200 countries on a global scale, and its value is estimated at 10 billion US dollars (Gunter, 2018). As Airbnb has become a major player in the hospitality and tourism industry, it is one of the top priority research topics in the field (Liang et al., 2018). In contrast to the products of a conventional accommodation service, the products of Airbnb are individuals' private places; thus, it is much more difficult for potential travelers to obtain prior knowledge (Fagerstrom et al., 2017). Although hotels' experiential products are also hard to pretest, people have general knowledge about these products based on their past experiences, hotel brands, or hotel images. However, even for potential travelers who have past experience using Airbnb, Airbnb's idiosyncratic places are almost impossible to anticipate in terms of their qualities (Wang et al., 2016). This situation renders the communications between hosts (individuals lending their places) and guests (individuals renting hosts' places) critical in Airbnb, as the sharing transaction is processed primarily based on these communications. By considering the information about both places and hosts that is created by the hosts themselves (host-created information), the guests search for and select places to stay, indicating that host-created information is one of the core factors in the Airbnb system (Ert et al., 2016). 
Although many studies have attempted to investigate the communication process in Airbnb with host-created information, most cases have focused on specific parts of the information, such as the price (Wang and Nicolau, 2017) or the host profile (Fagerstrom et al., 2017). However, the process through which the totality of the host-created information is delivered and perceived has scarcely been investigated. Therefore, this study tries to examine how various information appeals in host-created information are communicated in Airbnb to further understand the communication process in a sharing economy. Specifically, we examine which appeals in the host-created information significantly influence the guest's decision-making. Based on Aristotle's appeals, general aspects of the host-created information in Airbnb are empirically analyzed.

Sharing economy and Airbnb
The sharing economy is defined as "peer-to-peer (P2P) based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services" (Hamari et al., 2016). Social media have enabled the emergence of P2P online networks as well as social sharing (Priporas et al., 2017). Airbnb is a successful example of a sharing economy business. The sharing economy website for short-term rentals was founded in 2008 (Libert et al., 2014). By enabling people to easily lease short-term lodgings with their own living spaces, Airbnb has made an impact on the sharing economy and, hence, on the hospitality and tourism business field (Morgan, 2011). With affordable prices and highly varied locations and types of lodgings, Airbnb has become a major player in the industry and has been expanding its business into relevant fields such as airlines (Rizzo, 2018). Currently, Airbnb accommodates approximately 5 million lodging listings throughout the world and facilitates over 260 million check-ins on average per year (Airbnb, 2018).
Because the sharing economy is a P2P-based network, its value is created, distributed and consumed by users, and this feature makes interactive communications between users indispensable to the sharing economy (Xie and Mao, 2017). Lampinen et al. (2013) found that the communication flow between users is important for a successful online sharing service. Thus, communication on a sharing economy website is performed through the provider's informational message and the recipient's response (Poon and Huang, 2017). In Airbnb, hosts introduce their places and themselves through informative messages (i.e. host-created information), and potential guests make decisions by evaluating the places and hosts based on the host-created information (Ert et al., 2016). Given its importance, the communication process in Airbnb has been of great interest to researchers (Xie and Mao, 2017). However, the existing studies have usually focused on specific appeals, such as the price (Gibbs et al., 2017; Wang and Nicolau, 2017) and host profile (Gunter, 2018; Tussyadiah and Park, 2018; Wu et al., 2017). Consequently, the role of host-created information has only been partially understood. To address this research gap, this study examines the influences of information appeals within host-created information in Airbnb.

Aristotle's appeals in online information messages
As social media has become prevalent in most activities of daily life, the wide adoption of social media services and their online information has generated substantial impacts on individuals and businesses (Colicev et al., 2018). Thus, the influence of social media information has been an important research topic and is recognized in various contexts (Thakur and Hale, 2018).
Studies have demonstrated this information's significant impact on an individual's perception and behavior in the pre-consumption (Swani et al., 2017), consumption (Chen et al., 2017) and post-consumption stages (Nadeem et al., 2015; VanMeter et al., 2015), confirming that the information communicated on online platforms has a strong impact. As a result, it is important to understand how online information can stimulate, persuade and inspire people (Yang et al., 2018). Aristotle's appeals compose a proper framework for analyzing the persuasive influence of information (Otterbacher, 2011). According to Aristotle's appeals (Ramage et al., 2015), interpersonal messages can be persuasive and powerful through the following three components: ethos, pathos and logos (Xun and Reynolds, 2010). First, ethos is an ethical appeal that comprises all proofs of the message sender's authority and credibility. Second, pathos is an emotional appeal to the recipient. Finally, logos is a rational appeal to the recipient; usually, facts, figures and examples are used to make the message appear reasonable to the recipient. Xun and Reynolds (2010) demonstrated how readers in an online forum are persuaded by mixing ethos, logos and pathos, much as in offline messages. Otterbacher (2011) examined logos, ethos and pathos in online review communities, showing more ethos in reviews of experience goods and references, more pathos in reviews of daily necessities, and logos in the most prominent reviews. Bronstein (2013) revealed that candidates for President of the USA expressed their identities to voters with an emotional and synchronous approach, using the Aristotelian language of persuasion on SNS.
All of the studies confirmed that the propositions of Aristotle's appeals were supported in online communications by identifying the significance of the online information messages relative to the users' reactions (Bronstein, 2013). However, these methods have not been extensively applied to the communication process in the sharing economy despite the importance of understanding the persuasive impacts of information messages. In the case of Airbnb, the main platform is its online website, and the sharing transaction is processed primarily based on communications between users (Ert et al., 2016). Individuals who want to share their places join Airbnb and become hosts by registering them. During the registration, potential hosts input various pieces of information about their places and about themselves. Then, the guests search, evaluate and select the places where they want to stay based on the host-created information. Because there are no sources that enable one to become familiar with places other than the host-created information, this information is critical to a guest's decision making (Chen and Xie, 2017). Based on the highly expected importance of host-created information, a general proposition is suggested that assumes the significant influence of host-created information on a guest's purchase decision in Airbnb. The various informative statements' influences are empirically examined based on Aristotle's appeals to explain which piece of information is effective at persuading people (Scott, 1967). In this research, host-created information is defined as the information available in Airbnb that is written by hosts to convince potential guests to select their places. In Airbnb, various information appeals are available in each post of host-created information. Based on Aristotle's appeals, we use three categories: ethos; pathos; and logos. For the dependent variable, this research uses the number of times that places were shared. 
In Airbnb, only the guests who shared places can post reviews; hence, the number of reviews about places indicates the number of times the places have been shared. The research model is depicted in Figure 1.

Ethos in host-created information

Ethos symbolizes the message sender's credibility. If messages come from trustworthy sources, they tend to be more influential (O'Keefe, 1987). In online communications, the source's credibility is related to the user's reputation as perceived by other users, which represents a signal of trustworthiness (Slee, 2013). Many researchers have verified the impact of a website user's reputation on other users' reactions (Liu and Park, 2015) and online behaviors (Jin and Phua, 2014). In addition, several studies have proved the persuasive impact of the message sender's credibility in online information based on Aristotle's appeals (Yang et al., 2018). In Airbnb, users can establish their reputations with three types of proof: a super host badge, ID verification and host reviews. Since 2014, a super host badge system has existed in Airbnb. According to Airbnb, a super host badge is awarded to the "experienced hosts who are passionate about making your trip memorable" (Airbnb, 2017). There are four requirements to obtain a super host badge: complete at least ten stays in a year; achieve a rating greater than 4.8 out of five; respond within 24 hours at least 90 per cent of the time; and have no cancellations of confirmed reservations without extenuating circumstances. Once these requirements are met, the badge is awarded automatically to the focal hosts. In addition, because the requirements are re-confirmed every year, the status is updated if there are any changes (Gunter, 2018). To guests, a super host badge could be perceived as a reliable indication of the host's experience and commitment and represent the quality of the place.
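As a compact illustration, the four super host requirements above can be expressed as a single predicate. This is a toy sketch following the textual description, not Airbnb's actual implementation; the function name and argument names are our own.

```python
# Illustrative sketch of the four super host requirements described above.
# Thresholds follow the text; this is not Airbnb's actual logic.

def qualifies_as_superhost(stays_per_year, avg_rating, response_rate, cancellations):
    """True only if a host meets all four criteria at once."""
    return (
        stays_per_year >= 10       # at least ten completed stays in a year
        and avg_rating > 4.8       # rating greater than 4.8 out of 5
        and response_rate >= 0.90  # responds within 24h at least 90% of the time
        and cancellations == 0     # no cancellations without extenuating circumstances
    )
```

Because all four conditions are conjoined, failing any single one (e.g. a 4.8 rating, which is not strictly greater than 4.8) forfeits the badge.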
It has been found that guests are willing to pay a higher price for sharing with super hosts (Liang et al., 2017) and that the prices of super hosts' places tend to be higher (Wang and Nicolau, 2017). Based on these studies, it could be expected that a super host badge is a reliable signal of the host's credibility and the place's quality. H1.1. A super host badge will have a positive impact on a guest's decision-making. ID verification indicates whether the host has submitted his or her own personal information to Airbnb. If users: submit government-issued ID; connect their Airbnb accounts to other online accounts such as Google or Facebook; and upload a profile photo, phone number and email address, Airbnb gives an ID verification badge. Although the ID disclosure remains private, the signal that this host verified his or her ID can improve a guest's perception of the host (Racherla and Friske, 2012). Previous studies demonstrated the positive reactions of users to ID disclosures of the message provider. Because revealing personal information in online environments can reduce uncertainty (Tidwell and Walther, 2002) and enhance the source's credibility (Sussman and Siegal, 2003), information representing the ID disclosure of a message provider is a crucial factor in the recipient's perception. Hence, ID verification is evidence of the host's credibility. H1.2. ID verification will have a positive impact on a guest's decision-making. As mentioned above, the number of reviews indicates how many times the host has accommodated guests because only the guests who actually stayed in the places can post reviews. In other words, a host review can indicate a host's hosting experience (Weiss et al., 2008). Guests would prefer to stay in places that are guided by more experienced hosts, and Tussyadiah and Park (2018) found that more qualified, skilled and experienced hosts are more positively perceived. H1.3. 
A host review will have a positive impact on a guest's decision-making.

Pathos in host-created information

Pathos indicates the elements affecting the message recipients emotionally, and an emotional appeal is powerful in terms of its ability to persuade (Xun and Reynolds, 2010). Because Airbnb hosts have to write summaries of their places or themselves in their own words, guests are able to perceive an emotional or social stimulus through the hosts' language patterns in their descriptions. According to Tussyadiah and Pesonen (2016), one of the main reasons for using Airbnb is that guests want to find opportunities to explore local life by communicating with hosts. Hence, they would prefer places whose hosts are emotional, social and friendly, and these characteristics could be reflected in their writing styles (Ludwig et al., 2013). Tussyadiah and Park (2018) found that an Airbnb user's likelihood of booking is higher when the hosts describe themselves in the host-created information as being willing to meet new people. H2.1. The use of emotional words will have a positive impact on a guest's decision-making. H2.2. The use of social words will have a positive impact on a guest's decision-making.

Logos in host-created information

Logos persuades people with reasoned discourse. In a product choice situation, the consumer's rational thinking is mostly involved with product awareness (Xun and Reynolds, 2010). In the host-created information of Airbnb, objective information regarding the place is provided: the price, occupancy, safety features, place pictures and star-rating. These accommodation characteristics are important for a guest's decision-making (Yang et al., 2018). In Airbnb, the price means the price per night, and it is decided by the hosts.
Inasmuch as the generally lower prices of places compared to conventional hotel rooms are responsible for the competitiveness of Airbnb (Guttentag, 2015), Airbnb users would expect lower costs for their stays and select the places with the most affordable prices (So et al., 2018). H3.1. The price will have a negative impact on the guest's decision-making. In Airbnb, the occupancy refers to the maximum number of guests a place can accommodate, i.e. higher occupancy signifies more space. Lu and Zhu (2006) indicated that guests usually consider the room size a crucial factor in the quality of an accommodation facility. H3.2. Occupancy will have a positive impact on the guest's decision-making. Airbnb promotes hosts who equip their places with basic safety features by providing information about the number of these features. Specifically, smoke detectors, carbon monoxide detectors, first aid kits, safety cards, fire extinguishers and locks on the bedroom door are listed as desirable safety features in Airbnb. Hosts can list the features they have installed in their places, and the equipped features are shown in the host-created information. Safety could be more important in Airbnb than in a hotel because each place within Airbnb is distinctive in terms of its quality, and safety incidents usually fall outside the platform's control (Richard and Cleveland, 2016). H3.3. Safety features will have a positive impact on a guest's decision-making. The place picture indicates the uploaded picture of the place. Hosts in Airbnb are able to upload as many photographs of their places as they want. When people search intangible products to make a purchase, pictorial information can be an effective means to exhibit the products' qualities (Jin and Phua, 2014). H3.4. Place pictures will have a positive impact on a guest's decision-making. Airbnb guests can evaluate the places where they have stayed with a star-rating.
Guests who have stayed in specific places are encouraged to appraise the whole stay experience by giving star points from one star to five stars. In online communities, peer evaluation has been widely used to provide helpful information that assists other users' decision-making (Tsao, 2018). Lee et al. (2015) demonstrated that the star-ratings of places are important to the sales of places in Airbnb. H3.5. The star-rating will have a positive impact on a guest's decision-making.

Instrument development

Table I describes each variable. The super host badge and ID verification are measured by checking whether each host has the badges. The host review is measured by the number of reviews each host has received. The uses of emotional and social words are measured by the Linguistic Inquiry and Word Count (LIWC). LIWC is automated word-analysis software that provides content-analysis results based on roughly 70 preset linguistic categories (Tausczik and Pennebaker, 2010). For emotional words, those words that describe an individual's emotions are considered, and they indicate how emotionally oriented the authors are. In the case of social words, those words that express social relations are included to show how socially oriented the authors are (Tausczik and Pennebaker, 2010). By calculating the proportion of words from the focal categories that appear in the whole text, LIWC can indicate what tendencies, inclinations or personalities the writers have (Mehl et al., 2006). Over 100 studies have applied this approach in various contexts (Cohn et al., 2004; Humphreys, 2010; Ludwig et al., 2013) and confirmed its reliability and validity. Thus, this research measures the use of emotional and social words by adopting the resulting values of LIWC's analyses of the textual descriptions in host-created information (Lee and van Dolen, 2015). The price, occupancy and star-rating are measured in numbers as shown in the host-created information.
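The LIWC measure described above is, at its core, a category-word proportion: the share of a text's words that fall into a preset dictionary. A minimal sketch of that idea follows; the word set is a toy example standing in for LIWC's proprietary dictionaries.

```python
# Toy sketch of a LIWC-style measure: the percentage of words in a text
# that belong to a preset category. The word set is illustrative only,
# not LIWC's actual social-words dictionary.

def category_share(text, category_words):
    """Return the percentage of words in `text` found in `category_words`."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in category_words)
    return 100.0 * hits / len(words)

social_words = {"we", "you", "friends", "welcome", "people"}
desc = "We are excited to welcome you and your friends"
share = category_share(desc, social_words)  # 4 of 9 words are "social"
```

LIWC computes such percentages over all of its categories at once; here a host description with a higher social-word share would score higher on the social-appeal variable.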
Similarly, the safety features and place pictures are counted in numbers based on the host-created information. Finally, the dependent variable is the actual purchases, measured by the number of place reviews. The number of place reviews can represent the number of guests who actually stayed in a place because only these guests can write reviews in Airbnb. Thus, the number of place reviews could indicate the number of times the place has been shared (Chen and Xie, 2017; Liang et al., 2017). Host reviews are written to evaluate hosts, and leaving them is optional for guests; place reviews are written to evaluate places, and leaving them is mandatory. Thus, the numbers of host and place reviews are not always consistent: guests can write only place reviews, and if a host registers more than one place, the number of host reviews can be higher than that of place reviews because the former totals the host reviews received across all the registered places.

Data collection

Airbnb was selected as the data source. From December 12, 2015 to December 24, 2015, 854 host-created information postings pertaining to places in Bangkok, London and New York were collected, as these cities are ranked as the top global destination cities in the Asia Pacific, Europe and the USA, respectively (Hedrick-Wong and Choong, 2015). On the Airbnb website, places were searched by location without dates or other filters. A total of 306 places were sought in each city; 64 rooms were excluded because some were duplicated results and others' host-created information was not written in English. Finally, 291 places in Bangkok, 288 in London and 275 in New York were subjected to data analysis.

Data analysis

The aim of this study is to examine the influences of various appeals in host-created information on the user's purchase decision in Airbnb.
Therefore, the empirical impacts of different appeals in host-created information on guests' decision-making were investigated in the context of Airbnb. A Tobit regression model was used because of the censored nature of the dependent variable (Qazi et al., 2016). The distribution of the dependent variable was highly skewed, and its observed values fell within a certain range and were censored. When the dependent variable has these features, ordinary least squares (OLS) analysis yields biased and inconsistent estimates. The Tobit model is a proper method for overcoming these problems because it is a regression model for a censored, non-negative dependent variable. In addition, we needed to address selection biases. Although the dependent variable in this research is a proxy for actual purchases, it is not the exact number of sharing transactions; the number of actual purchases of a place could be considerably greater than the number of place reviews. As a result, our sample has inherent selection biases, and we used a Tobit model to mitigate them (Qazi et al., 2016).

Results

Table II presents the descriptive statistics for the variables, and Table III shows the correlation coefficients between the variables. The highest coefficient (coef.) was 0.32, indicating that singularity and multicollinearity were not problems within our data set (Tabachnick and Fidell, 2007). In addition, the tolerance and the variance inflation factor (VIF) were also checked to assess multicollinearity. Because the tolerance ranged from 0.81 to 0.98 and the VIF ranged from 1.02 to 1.23, we confirmed that there is no evidence of multicollinearity.
A Tobit model was proposed for the hypothesized relationships as follows:

Tobit model: Actual purchases (number of place reviews) = a1*dummySH + a2*dummyID + b1*HR + b2*Emotional + b3*Social + b4*Price + b5*Occupancy + b6*Safety + b7*Picture + b8*Rating + e

where dummySH = whether the host is a super host; dummyID = whether the host performed ID verification; HR = the number of host reviews; Emotional = the LIWC index of the use of emotional words in the summary describing the place or host; Social = the LIWC index of the use of social words in the summary describing the place or host; Price = the price of the place per night; Occupancy = the maximum number of people who can be accommodated in the place; Safety = the number of safety features with which the place is equipped; Picture = the number of pictures of the place; Rating = the average evaluation of the place made by experienced guests; and e = the random error. Table IV summarizes the results of our model (log likelihood = -3708.71). All of the variables together explained approximately 3.2 per cent of the variance in the dependent variable (pseudo R2 = 0.032). Ethos: except for ID verification (coef. = 0.32), the positive impacts of the super host badge (coef. = 9.32, p-value < 0.01) and the host review (coef. = 0.02, p-value < 0.001) were significant. Thus, while H1.1 and H1.3 were accepted, H1.2 was rejected. Pathos: although the positive impact of the use of social words was significant (coef. = 0.34, p-value < 0.05), the use of emotional words was not significant (coef. = -0.18). Hence, among the hypotheses on the pathos appeals, only H2.2 was accepted. Logos: among the five variables, only the number of safety features (coef. = -0.44) was not significant. Among the significant variables, whereas the price (coef. = 0.07, p-value < 0.01), place picture (coef. = 0.32, p-value < 0.001) and star-rating (coef. = 6.28, p-value < 0.001) were positively significant, the occupancy (coef. = -4.12, p-value < 0.01) was negatively significant.
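The censoring that motivates the Tobit model can be made concrete through its log-likelihood: censored observations contribute the probability mass at or below the censoring point, while uncensored observations contribute the usual normal density. Below is a stdlib sketch for a single predictor; the study's full model simply has more regressors and was presumably estimated with standard statistical software.

```python
# Log-likelihood of a Tobit model left-censored at `lower`, with one
# predictor. An illustrative sketch, not the authors' estimation code.
import math

def _pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tobit_loglik(b0, b1, sigma, xs, ys, lower=0.0):
    ll = 0.0
    for x, y in zip(xs, ys):
        mu = b0 + b1 * x
        if y <= lower:
            # censored observation: probability that the latent y* <= lower
            ll += math.log(_cdf((lower - mu) / sigma))
        else:
            # uncensored observation: normal density of the observed y
            ll += math.log(_pdf((y - mu) / sigma) / sigma)
    return ll
```

Maximizing this function over (b0, b1, sigma) yields the Tobit estimates; OLS on the same data would treat the pile-up of observations at the censoring point as genuine values and bias the slope, which is why the Tobit model is preferred here.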
As a result, H3.3 was rejected because of the insignificant impact, while H3.1 and H3.2 were rejected because the observed significant impacts ran in directions opposite to those hypothesized. Among the five pertinent hypotheses, only H3.4 and H3.5 were accepted. This study hypothesized that various information appeals of host-created information have significant effects on sharing behavior in Airbnb. Specifically, three categories of appeals were expected to relate to the guest's actual purchase because the guest's decision-making on Airbnb would be affected by the host-created information. Airbnb hosts who had super host badges and more reviews sold more products. The significant impacts of super host badges and host reviews have also been examined in previous studies (Wang and Nicolau, 2017). The appellation of "super host" is given to a very small number of hosts. Therefore, the places of super hosts tend to be more frequently selected because they appear more credible. In addition, inasmuch as having more host reviews implies more experience, guests are more likely to choose hosts with more reviews. Furthermore, Airbnb is a service whereby people purchase unknown accommodation products from unfamiliar suppliers, and it relies on a review mechanism for the sake of trust (Guttentag, 2015). The results confirmed the importance of host reviews in Airbnb. Unexpectedly, ID verification had no significant effect on the dependent variable. The insignificance of ID verification is in accordance with previous studies of Airbnb (Teubner et al., 2016) and online communities (Racherla and Friske, 2012). This finding could be attributed to the large number of users who have verified their IDs. Because ID verification was common among Airbnb hosts, it may have no discernible effect on either the customer's decision to trust a host or the purchase decision (Teubner et al., 2016).
For the pathos, the more social words were used to describe hosts and places, the more likely those hosts were to be selected. However, emotional words had no impact on the purchase. To Airbnb guests, sociable hosts introducing themselves and their places with more social words were attractive, but emotional hosts were not highly preferable. In Airbnb, unlike hotels, guests demand local experiences (Guttentag, 2015). Thus, the social appeal is a crucial factor in a guest's choice. Tussyadiah and Park (2018) found significant influences of social appeals of hosts in the Airbnb context. The following sentences are actual examples with which hosts described their places or themselves socially: "Hello! Happy to hear from you! I and my husband delight to offer this lovely SINGLE ROOM on your consideration [sic]," "We're excited to provide a warm welcome, and a clean, secure, and comfortable stay while you are visiting New York," "We are creating this space for like minded people, Travellers, expats, explorers, students [...] people who enjoy a company of local and international hosts who know Bangkok well." In the case of emotional words, as hosts try to present themselves and their places attractively to guests, most words would be related to positive terms. According to relevant studies investigating the impacts of emotional online contents on an individual's perception, negative information tends to be perceived as more persuasive than positive information (Lee et al., 2017). This negativity bias has also been found in the Airbnb context, indicating that negative reviews have significant effects on decreases in the host's reputation (Abramova et al., 2015). In this regard, the emotional word-laden information tends to be positive and emotional in Airbnb. Hence, the host-created information trying to present the places as attractive would be difficult to consider trustworthy or authentic. 
The following sentences are actual examples describing hosts and places with charming and fascinating words, which may nonetheless be ineffective in attracting guests: "You will feel relaxed and inspired in this spacious yet cozy room with classic brick fireplace," "You'll love my place because of the modern apartment with nice new finishes and central Heat and Air Conditioning, high-speed internet, the great views of the city, and the quick ride to Manhattan" and "Spectacular loft in the heart of LA! Great restaurants, bars and more right at your door. Walk to everything that is Downtown LA". Finally, logos is an appeal to the recipient's logic. This logos appeal is assumed to be an objective figure or obvious characteristic (Bronstein, 2013). Because places in Airbnb have no precise criteria (unlike other existing products), it is particularly important to describe what features the products have (Tussyadiah and Zach, 2017). Accordingly, this study took into account the prices, occupancies, safety features, place pictures and star-ratings as logos appeals. According to the study findings, a higher price, more place pictures, better star-ratings and lower occupancies increase the attractiveness of a place. The impacts of the price, place picture, star-rating and occupancy on the actual purchase in this study are consistent with the literature, which argues that the objective characteristics of a product influence the rational judgments of the recipients (Garvin, 1984). Indeed, the impacts of the price, place pictures and star-ratings have been demonstrated in previous Airbnb studies (So et al., 2018; Jin and Phua, 2014). One of the most important factors in purchasing an accommodation is the price. Compared to hotels, Airbnb has been competitive in its prices (Tussyadiah and Pesonen, 2016). However, this study found that the higher the price was, the more likely the product was to be purchased.
A higher price may imply higher quality, even though some people prefer a less expensive product (Lichtenstein et al., 1993). Furthermore, Airbnb users have difficulty identifying the quality of shared economic goods. Therefore, the guests would have tried to determine a place's quality according to the price because they were unable to ascertain the real quality. This situation also increases the importance of visual evidence with respect to the products in Airbnb. Therefore, we posit that higher prices and more pictures have significant effects on the purchase because the characteristics of the products in a sharing economy are understood differently from the comparable products in the traditional economy. The impact of the star-rating can be understood in relation to the mechanism of the sharing economy platform. For the sharing economy to earn trust, it must have a review mechanism (Guttentag, 2015). An online review has a close relation with consumers' attitudes, evaluations and ratings (Liu and Park, 2015; Hlee et al., 2018). Consequently, the star-rating has a significant impact on the purchases in Airbnb. Interestingly, whereas Chen and Rothschild (2010) found a positive impact of the place size information, the current results showed that places with lower occupancy are selected more frequently by Airbnb guests. The reason for the negative impact of the occupancy on the actual purchase is as follows: according to Lu and Zhu (2006), the size of the accommodation was assumed to represent the standard of the accommodation facility. The results showed that the smaller the accommodation was, the more reservations it had. That is, users of peer-to-peer accommodations prefer smaller rooms to larger ones. In comparison with reservations for one or two persons, it is not easy for guests to decide to purchase when the number of people staying there increases because there will be a greater potential for disruptions. 
This difficulty may be the reason for the result showing that the lower the number of occupants is, the higher the chance of a purchase will be. Finally, the safety features were found to have no influence on the purchase. This result is different from the findings of the previous research of Airbnb, where safety and security issues are significant to the guest satisfaction (Birinci et al., 2018). However, the previous case adopted a survey approach and sampled through Mechanical Turk without screening questions such as "have you ever used Airbnb?" (Birinci et al., 2018). Thus, the current research result could be more reliable in that actual behavioral data in Airbnb have been used. In Airbnb, a host can list a maximum of six basic safety features. Therefore, the number of safety features is observed to have no meaningful impact on the purchase because it can only show the basic details for safety. This research has several theoretical and practical implications. This research could contribute to the theoretical literature on Airbnb and the sharing economy because it addresses the existing studies' limitations. Although many previous works have attempted to study information communications between users in Airbnb by measuring their importance, only fragmentary investigations have been performed and only a partial understanding has been provided, such as the impact of the price information or host profile (Ert et al., 2016; Fagerstrom et al., 2017; Tussyadiah and Park, 2018; Wang and Nicolau, 2017). In the sharing economy context, because the information messages available in online platforms are usually the only sources for checking products, interacting with others, and making decisions, individuals tend to consider various components and aspects of information messages (Chen and Xie, 2017; Gibbs et al., 2017). Thus, to fully understand the communicative role of information in a sharing economy, a holistic perspective is required rather than a partial focus. 
As a result, this research focused on the various information appeals in host-created information by considering different categories of appeals and examining how they are delivered and perceived by individuals. By addressing the limitations of the previous literature, this study furthers research into the sharing economy. As most everyday activities have moved online, the importance of online information to each individual's decision-making has been increasingly recognized (Li et al., 2017). Although a number of studies have investigated which information is helpful for an individual's decision-making in online environments or effective at stimulating an individual's choice, few theoretical frameworks have been adopted; examples include dual-coding theory and the heuristic-systematic model (Hong et al., 2017). Hence, most previous results and implications have been explained from limited perspectives (Park et al., 2007). By adopting an untapped theoretical background, this research articulates the persuasive impacts of the information components of message appeals. Considering that research topics can be explored differently with a fresh background, applying a new but proper theoretical framework is meaningful for the development of the research field (Haugh, 2012). We see major practical implications of our work for Airbnb's management and its host users. First, it would be better for Airbnb to introduce measures supporting normal hosts. In our results, the super host badge emerged as the most influential factor in guests' selection of places to stay. Although increasing the general quality of places through attractive incentives is important, this pattern could make it too uncommon for beginner hosts to be selected by guests. Furthermore, because the ratio of super hosts is quite low in our data set, super host badges could create a situation in which the rich grow richer while the poor grow poorer.
The significant impact of the number of host reviews could make the problem worse. Because hosts with more reviews are more attractive, hosts registering more than one place would be able to receive more reviews, creating a difficult situation for normal hosts. In conclusion, guests tend to choose places owned by experienced hosts with super host badges and several places. Airbnb has tried to encourage users to become new hosts to achieve broad coverage in various locations. To attract new hosts, Airbnb needs to assure potential hosts that they would be selected by guests. However, the current system is mostly favorable to a small number of super hosts and commercial hosts. To accommodate more hosts and achieve its original goal of providing guests real local experiences by connecting them with normal local hosts, Airbnb needs to create measures for new and normal hosts. Another practical implication of our work concerns the star-rating system. According to the results, the star-rating had a significant positive impact on guests' decision-making. Although it is reasonable that positively evaluated places are more often selected by guests, an evaluation that does not effectively reflect the quality of a place will be misleading. If guests choose specific places because they have earned higher ratings, they will have higher expectations of these places. In this situation, if the higher ratings are not accurate, the guests will be highly disappointed by the incorrect evaluations, which will decrease the reliability of the Airbnb system. Fradkin et al. (2015) found that the star-rating in Airbnb generally tends to be inflated because the Airbnb system permits not only guests to review hosts and places but also hosts to review guests. Thus, the reciprocal evaluation system creates a biased trend in the star-ratings, and this feature could lead guests to distrust the system.
Because the star-rating is found to be significant, Airbnb should consider this possibility. Beyond Airbnb itself, host users can draw some practical directions, especially on how to attract guests effectively. The results show the positive impacts of using social words, suitable prices and place pictures. If hosts appeal to pathos by using social words and introduce their places as if guests were staying in a friend's house, they can increase their purchase rates. Thus, hosts are encouraged to describe themselves and their places with more social words and to provide as much visual evidence as possible. Moreover, in Airbnb, the price can be an indication of place quality, which implies that an excessively low price could bring the opposite effect from what the host intends, as guests may associate low prices with low-quality places. Hence, hosts need to set the prices of their places carefully. Finally, the fewer guests there are, the higher the purchase rate will be, i.e. we found a negative influence of the occupancy. This pattern indicates that guests are more willing to stay in places that are neither too large nor too expensive. Indeed, if places are crowded with guests, hosts may find it difficult to pay attention to every guest who wants to communicate with them to receive local information or experiences. Rather than accommodating as many guests as possible, hosts should provide a better experience by hosting a limited number of people or designing their rooms for optimal occupancy. Thus, hosts should focus on few guests and high quality rather than many guests for high profit. Despite these findings, there are some limitations to the study. This study used the number of room reviews as a proxy variable; only a customer who actually bought a product can write such a review. Accordingly, the current measured value cannot capture all of the actual purchases.
Additionally, the numbers of place and host reviews are not always equal, but they are likely to be correlated with each other, which can cause a multicollinearity problem. Although no significant multicollinearity problem was identified in the full model, it could arise when only these two measures are considered. Therefore, the results of this study should be interpreted carefully, and future research needs to adopt more reliable measures representing actual purchases. As this study examined the persuasive power of host-created information in Airbnb, it may have overlooked other factors that are crucial for the actual purchase. Therefore, considering other factors, such as direct communications between hosts and users, will make it easier to understand an actual purchase. Finally, although this study adopted several cities as data samples to generalize the results, three cities are difficult to regard as sufficiently representative cases for proper generalization. Note that the results are inconsistent depending on the selected cities (Table AI). Thus, future research can enhance the current study by examining the results with more representative samples and comparing them with detailed explanations.
[SECTION: Findings] For the ethos, the super host badge and host review have positive impacts on the purchase; for the pathos, the positive impact of the use of social words is significant. For the logos, the authors have determined that although the price, place picture and star-rating have positive impacts on the likelihood of a purchase, the occupancy has a negative impact on it.
[SECTION: Value] The sharing economy is continuously emerging, and it has had substantial impacts on various industries (Tussyadiah and Park, 2018). In particular, the novel business model has generated a strong economic and social impact in the hospitality and tourism industry (Tussyadiah and Pesonen, 2016). As a leading business in the sharing economy, Airbnb provides millions of accommodations in 65,000 cities in approximately 200 countries on a global scale, and its value is estimated at 10 billion US dollars (Gunter, 2018). As Airbnb has become a major player in the hospitality and tourism industry, it is one of the top priority research topics in the field (Liang et al., 2018). In contrast to the products of a conventional accommodation service, the products of Airbnb are individuals' private places; thus, it is much more difficult for potential travelers to obtain prior knowledge (Fagerstrom et al., 2017). Although hotels' experiential products are also hard to pretest, people have general knowledge about these products based on their past experiences, hotel brands, or hotel images. However, even for potential travelers who have past experience using Airbnb, Airbnb's idiosyncratic places are almost impossible to anticipate in terms of their qualities (Wang et al., 2016). This situation renders the communications between hosts (individuals lending their places) and guests (individuals renting hosts' places) critical in Airbnb, as the sharing transaction is processed primarily based on these communications. By considering the information about both places and hosts that is created by the hosts themselves (host-created information), the guests search for and select places to stay, indicating that host-created information is one of the core factors in the Airbnb system (Ert et al., 2016). 
Although many studies have attempted to investigate the communication process in Airbnb with host-created information, most cases have focused on specific parts of the information such as the price (Wang and Nicolau, 2017) or host profile (Fagerstrom et al., 2017). However, the process through which the totality of the host-created information is delivered and perceived has scarcely been investigated. Therefore, this study tries to examine how various information appeals in host-created information are communicated in Airbnb to further understand the communication process in a sharing economy. Specifically, we examine which appeals in the host-created information significantly influence the guest's decision-making. Based on Aristotle's appeals, general aspects of the host-created information in Airbnb are empirically analyzed. Sharing economy and Airbnb The sharing economy is defined as "peer-to-peer (P2P) based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services" (Hamari et al., 2016). Social media have enabled the emergence of P2P online networks as well as social sharing (Priporas et al., 2017). Airbnb is a successful example of a sharing economy business. The sharing economy website for short-term rentals was founded in 2008 (Libert et al., 2014). By enabling people to easily lease short-term lodgings within their own living spaces, Airbnb has made an impact on the sharing economy and hence on the hospitality and tourism field (Morgan, 2011). With affordable prices and highly varied locations and types of lodgings, Airbnb has become a major player in the industry and has been expanding into relevant fields such as airlines (Rizzo, 2018). Currently, Airbnb accommodates approximately 5 million lodging listings throughout the world and facilitates over 260 million check-ins on average per year (Airbnb, 2018).
Because the sharing economy is a P2P-based network, its value is created, distributed and consumed by users, and this feature makes interactive communications between users indispensable to the sharing economy (Xie and Mao, 2017). Lampinen et al. (2013) found that the communication flow between users is important for a successful online sharing service. Thus, communication on a sharing economy website consists of the provider's informational message and the recipient's response (Poon and Huang, 2017). In Airbnb, hosts introduce their places and themselves through informative messages (i.e. host-created information), and potential guests make decisions by evaluating the places and hosts based on the host-created information (Ert et al., 2016). Given its importance, the communication process in Airbnb has been of great interest to researchers (Xie and Mao, 2017). However, the existing studies have usually focused on specific appeals, such as the price (Gibbs et al., 2017; Wang and Nicolau, 2017) and host profile (Gunter, 2018; Tussyadiah and Park, 2018; Wu et al., 2017). Consequently, the role of host-created information has only been partially understood. To address this research gap, this study examines the influences of information appeals within host-created information in Airbnb. Aristotle's appeals in online information messages As social media has become prevalent in most activities of daily life, the wide adoption of social media services and their online information has generated substantial impacts on individuals and businesses (Colicev et al., 2018). Thus, the influence of social media information has been an important research topic and is recognized in various contexts (Thakur and Hale, 2018).
By examining this information's significant impact on an individual's perception and behavior in the pre-consumption (Swani et al., 2017), consumption (Chen et al., 2017) and post-consumption stages (Nadeem et al., 2015; VanMeter et al., 2015), researchers have determined that the information communicated on online platforms has a strong impact. As a result, it is important to understand how online information can stimulate, persuade and inspire people (Yang et al., 2018). Aristotle's appeals compose a proper framework to analyze the persuasive influence of information (Otterbacher, 2011). According to Aristotle's appeals (Ramage et al., 2015), interpersonal messages can be persuasive and powerful through the following three components: ethos, pathos and logos (Xun and Reynolds, 2010). First, ethos is an ethical appeal that includes all of the proofs of the message sender's authority and credibility. Second, pathos is an emotional appeal to the recipient. Finally, logos is a rational appeal to the recipients; usually, facts, figures and examples are used to influence the recipients' perceptions of the messages as reasonable. Xun and Reynolds (2010) demonstrated how to persuade readers by mixing ethos, logos and pathos in an online forum as if the messages were offline. Otterbacher (2011) examined logos, ethos and pathos in online review communities, finding more ethos in reviews of experience and reference goods, more pathos in reviews of daily necessities, and logos in the most prominent reviews. Bronstein (2013) revealed that candidates for President of the USA had expressed their identities to voters with an emotional and synchronous approach using the Aristotelian language of persuasion in SNS.
All of the studies confirmed that the propositions of Aristotle's appeals were supported in online communications by identifying the significance of the online information messages relative to the users' reactions (Bronstein, 2013). However, these methods have not been extensively applied to the communication process in the sharing economy despite the importance of understanding the persuasive impacts of information messages. In the case of Airbnb, the main platform is its online website, and the sharing transaction is processed primarily based on communications between users (Ert et al., 2016). Individuals who want to share their places join Airbnb and become hosts by registering them. During the registration, potential hosts input various pieces of information about their places and about themselves. Then, the guests search, evaluate and select the places where they want to stay based on the host-created information. Because there are no sources that enable one to become familiar with places other than the host-created information, this information is critical to a guest's decision making (Chen and Xie, 2017). Based on the highly expected importance of host-created information, a general proposition is suggested that assumes the significant influence of host-created information on a guest's purchase decision in Airbnb. The various informative statements' influences are empirically examined based on Aristotle's appeals to explain which piece of information is effective at persuading people (Scott, 1967). In this research, host-created information is defined as the information available in Airbnb that is written by hosts to convince potential guests to select their places. In Airbnb, various information appeals are available in each post of host-created information. Based on Aristotle's appeals, we use three categories: ethos; pathos; and logos. For the dependent variable, this research uses the number of times that places were shared. 
In Airbnb, only the guests who shared places can post reviews; hence, the number of reviews about places indicates the number of times the places have been shared. The research model is depicted in Figure 1. Ethos in host-created information Ethos symbolizes the message sender's credibility. If messages are from trustworthy sources, they tend to be more influential (O'keefe, 1987). In online communications, the source's credibility is related to the user's reputation as perceived by other users, which represents a signal of trustworthiness (Slee, 2013). Many researchers have verified the impact of a website user's reputation on other users' reactions (Liu and Park, 2015) and online behaviors (Jin and Phua, 2014). In addition, several studies proved the persuasive impact of the message sender's credibility in online information based on Aristotle's appeals (Yang et al., 2018). In Airbnb, users can establish their reputations with three types of proof: a super host badge, ID verification and host reviews. Since 2014, a super host badge system has existed in Airbnb. According to Airbnb, a super host badge is awarded to the "experienced hosts who are passionate about making your trip memorable" (Airbnb, 2017). There are four requirements to obtain a super host badge: complete at least ten stays in a year; achieve a rating greater than 4.8 out of five; respond within 24 hours at least 90 per cent of the time; and cancel no confirmed reservations without extenuating circumstances. Once these requirements are met, the badge is awarded automatically to focal hosts. In addition, because the requirements are re-confirmed every year, the status is updated if there are any changes (Gunter, 2018). To guests, a super host badge could be perceived as a reliable indication of the host's experience and commitment and represent the quality of the place.
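The four badge requirements described above can be expressed as a simple eligibility check. The following is a minimal illustrative sketch; the class and field names are our own, not Airbnb's actual data model.

```python
from dataclasses import dataclass

@dataclass
class HostRecord:
    """Hypothetical yearly host statistics; field names are illustrative."""
    stays_completed: int        # stays hosted in the past year
    average_rating: float       # mean star-rating, out of five
    response_rate_24h: float    # share of inquiries answered within 24 hours
    cancellations: int          # confirmed reservations cancelled without extenuating circumstances

def is_super_host(h: HostRecord) -> bool:
    """Check the four badge criteria listed in the text."""
    return (h.stays_completed >= 10
            and h.average_rating > 4.8
            and h.response_rate_24h >= 0.90
            and h.cancellations == 0)

print(is_super_host(HostRecord(12, 4.9, 0.95, 0)))  # meets all four criteria
print(is_super_host(HostRecord(12, 4.7, 0.95, 0)))  # rating too low
```

Because every criterion must hold simultaneously and is re-evaluated yearly, even a single cancellation or a dip below the rating threshold removes the badge, which is consistent with the paper's observation that super hosts remain a small fraction of all hosts.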
It has been found that guests are willing to pay a higher price for sharing with super hosts (Liang et al., 2017) and that the prices of super hosts' places tend to be higher (Wang and Nicolau, 2017). Based on these studies, it can be expected that a super host badge is a reliable signal of the host's credibility and the place's quality. H1.1. A super host badge will have a positive impact on a guest's decision-making. ID verification indicates whether the host has submitted his or her own personal information to Airbnb. If users submit a government-issued ID, connect their Airbnb accounts to other online accounts such as Google or Facebook, and upload a profile photo, phone number and email address, Airbnb grants an ID verification badge. Although the disclosed ID remains private, the signal that a host has verified his or her ID can improve a guest's perception of the host (Racherla and Friske, 2012). Previous studies demonstrated the positive reactions of users to ID disclosures by the message provider. Because revealing personal information in online environments can reduce uncertainty (Tidwell and Walther, 2002) and enhance the source's credibility (Sussman and Siegal, 2003), information representing the ID disclosure of a message provider is a crucial factor in the recipient's perception. Hence, ID verification is evidence of the host's credibility. H1.2. ID verification will have a positive impact on a guest's decision-making. As mentioned above, the number of reviews indicates how many times the host has accommodated guests because only the guests who actually stayed in the places can post reviews. In other words, a host review can indicate a host's hosting experience (Weiss et al., 2008). Guests would prefer to stay in places that are guided by more experienced hosts, and Tussyadiah and Park (2018) found that more qualified, skilled and experienced hosts are perceived more positively. H1.3.
A host review will have a positive impact on a guest's decision-making. Pathos in host-created information Pathos indicates the elements affecting the message recipients emotionally, and an emotional appeal is powerful in terms of its ability to persuade (Xun and Reynolds, 2010). Because Airbnb hosts have to write summaries of their places or themselves in their own words, guests are able to perceive an emotional or social stimulus through the hosts' language patterns in their descriptions. According to Tussyadiah and Pesonen (2016), one of the main reasons for using Airbnb is that guests want to find opportunities to explore local life by communicating with hosts. Hence, they would prefer places whose hosts are emotional, social and friendly, and these characteristics could be reflected in their writing styles (Ludwig et al., 2013). Tussyadiah and Park (2018) found that an Airbnb user's likelihood of booking is higher when hosts describe themselves as being willing to meet new people in host-created information. H2.1. The use of emotional words will have a positive impact on a guest's decision-making. H2.2. The use of social words will have a positive impact on a guest's decision-making. Logos in host-created information Logos persuades people with reasoned discourse. In a product choice situation, the consumer's rational thinking is mostly involved with product awareness (Xun and Reynolds, 2010). In the host-created information of Airbnb, objective information regarding the place is provided: the price, occupancy, safety features, place picture and star-rating. These accommodation characteristics are important for a guest's decision-making (Yang et al., 2018). In Airbnb, the price means the price per night, and it is decided by the hosts.
Inasmuch as the generally lower prices of places compared to conventional hotel rooms are responsible for the competitiveness of Airbnb (Guttentag, 2015), Airbnb users would expect lower costs for their stays and select the places with the most affordable prices (So et al., 2018). H3.1. The price will have a negative impact on the guest's decision-making. In Airbnb, the occupancy refers to the maximum number of spaces available, i.e. higher occupancy signifies more space. Lu and Zhu (2006) indicated that guests usually consider room size a crucial factor in the quality of an accommodation facility. H3.2. Occupancy will have a positive impact on the guest's decision-making. Airbnb promotes hosts who equip basic safety features in their places by providing information about the number of these safety features. Specifically, smoke detectors, carbon monoxide detectors, first aid kits, safety cards, fire extinguishers and locks on the bedroom door are listed as desirable safety features in Airbnb. Hosts can list the features they have installed in their places, and the equipped features are shown in the host-created information. Safety could be more important in Airbnb than in a hotel because each place within Airbnb is distinctive in its quality, and safety incidents usually fall outside the platform's control (Richard and Cleveland, 2016). H3.3. Safety features will have a positive impact on a guest's decision-making. The place picture indicates the uploaded picture of the place. Hosts in Airbnb are able to upload as many photographs of their places as they want. When people search for intangible products to make a purchase, pictorial information can be an effective means to exhibit the products' qualities (Jin and Phua, 2014). H3.4. Place pictures will have a positive impact on a guest's decision-making. Airbnb guests can evaluate the places where they have stayed with a star-rating.
Guests who have stayed in specific places are encouraged to appraise the whole stay experience by giving star points from one star to five stars. In online communities, peer evaluation has been widely used to give helpful information to assist other users' decision-making (Tsao, 2018). Lee et al. (2015) demonstrated that the star-ratings of places are important to the sales of places in Airbnb. H3.5. The star-rating will have a positive impact on a guest's decision-making. Instrument development Table I describes each variable. The super host badge and ID verification are measured by checking whether each host has the badges. The host review is measured by the number of reviews that each host has received. The uses of emotional and social words are measured by Linguistic Inquiry and Word Count (LIWC). LIWC is automated word analysis software that provides content analysis results based on 70 preset linguistic categories (Tausczik and Pennebaker, 2010). For emotional words, those words that describe an individual's emotions are considered, and they indicate how emotionally oriented the authors are. In the case of social words, those words that explain social relations are included to show how socially oriented the authors are (Tausczik and Pennebaker, 2010). By calculating the proportion of words from a focal category within the whole text, LIWC can indicate what tendencies, inclinations or personalities the writers have (Mehl et al., 2006). Over 100 studies have applied this approach in various contexts (Cohn et al., 2004; Humphreys, 2010; Ludwig et al., 2013) and confirmed its reliability and validity. Thus, this research measures the use of emotional and social words by adopting the resulting values of LIWC's analyses of the textual descriptions in host-created information (Lee and van Dolen, 2015). The price, occupancy and star-rating are measured in numbers as shown in the host-created information.
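The LIWC-style measure described above, the share of a text's words that fall into a focal category, can be illustrated with a toy implementation. This is only a sketch: the tiny word sets below stand in for LIWC's proprietary dictionaries, which contain far more entries per category.

```python
import re

# Toy stand-ins for LIWC's preset "social" and "emotion" categories.
SOCIAL_WORDS = {"we", "friend", "welcome", "guest", "together", "share"}
EMOTION_WORDS = {"love", "happy", "cozy", "relaxed", "excited", "warm"}

def category_proportion(text: str, lexicon: set) -> float:
    """Share of tokens in `text` belonging to the category, as a percentage,
    mirroring how LIWC reports category scores relative to all words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in lexicon)
    return 100.0 * hits / len(tokens)

desc = "We welcome every guest like a friend and love to share the city together"
print(round(category_proportion(desc, SOCIAL_WORDS), 1))   # → 42.9
print(round(category_proportion(desc, EMOTION_WORDS), 1))  # → 7.1
```

A host description scoring high on the social category under such a measure is exactly the kind of text hypothesized in H2.2 to raise purchase rates.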
Similarly, the safety features and place pictures are counted in numbers based on the host-created information. Finally, the dependent variable is the actual purchases, measured by the number of place reviews. The number of place reviews can represent the number of guests who actually stayed in the place because only these guests can write reviews in Airbnb. Thus, the number of place reviews can indicate the number of times the place has been shared (Chen and Xie, 2017; Liang et al., 2017). Host reviews are written to evaluate hosts, and it is optional for guests to leave them; place reviews are written to evaluate places, and it is mandatory for guests to leave them. Thus, the numbers of host and place reviews are not always consistent: guests can write only place reviews, and if a host registers more than one place, the number of host reviews can be higher than the number of place reviews because the former totals the reviews received across all of the host's registered places. Data collection Airbnb is selected as the data source. From December 12, 2015 to December 24, 2015, 854 host-created information postings pertaining to places in Bangkok, London and New York were collected, as these cities are ranked as the top global destination cities in the Asia Pacific, Europe and the USA, respectively (Hedrick-Wong and Choong, 2015). At the Airbnb website, places were searched by location without dates or other filters. A total of 306 places were retrieved in each city; 64 rooms were excluded because some were duplicated results and others' host-created information was not written in English. Finally, 291 places in Bangkok, 288 in London and 275 in New York were subjected to data analysis. Data analysis The aim of this study is to examine the influences of various appeals in host-created information on the user's purchase decision in Airbnb.
Therefore, the empirical impacts of different appeals in host-created information on guests' decision-making were investigated in the context of Airbnb. A Tobit regression model was used because of the censored nature of the dependent variable (Qazi et al., 2016). The distribution of the dependent variable was skewed, and its observed values were found to lie within a certain range and to be censored. When the dependent variable has these features, ordinary least squares (OLS) analysis yields biased and inconsistent estimates. The Tobit model is a proper method to overcome these problems because it is a regression model for a censored, non-negative dependent variable. In addition, we needed to address selection biases. Although the dependent variable in this research is a proxy for an actual purchase, it is not the exact number of sharing transactions. The number of actual purchases for a place could be considerably greater than the number of place reviews. As a result, our sample has inherent selection biases, and we used a Tobit model to mitigate them (Qazi et al., 2016). Results Table II presents the descriptive statistics for the variables, and Table III shows the correlation coefficients between the variables. The highest coefficient (coef.) was 0.32, indicating that singularity and multicollinearity were not problems within our data set (Tabachnick and Fidell, 2007). In addition, the tolerance and the variance inflation factor (VIF) were also checked to assess multicollinearity. Because the tolerance ranged from 0.81 to 0.98 and the VIF ranged from 1.02 to 1.23, we confirmed that there is no evidence of multicollinearity.
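The tolerance/VIF diagnostic reported above can be reproduced with a short routine: each predictor is regressed on the remaining ones, and VIF = 1 / (1 - R^2), with tolerance being its reciprocal. This is a generic numpy sketch on synthetic data, not the authors' code.

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of X: regress each predictor
    on the others (plus an intercept) and report 1 / (1 - R^2)."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1 / (1 - r2)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                 # nearly independent predictors
print(np.round(vif(X), 2))                    # values close to 1: no multicollinearity
X2 = np.column_stack([X, X[:, 0] + 0.01 * rng.normal(size=500)])
print(np.round(vif(X2), 2))                   # near-duplicate column inflates its VIF
```

The paper's VIFs of 1.02 to 1.23 correspond to the first situation: predictors that are essentially uncorrelated, so each coefficient's variance is barely inflated.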
A Tobit model was proposed to test the ten hypothesized relationships:

Actual purchases (number of place reviews) = dummySH + dummyID + b1*HR + b2*Emotional + b3*Social + b4*Price + b5*Occupancy + b6*Safety + b7*Picture + b8*Rating + e

where dummySH = whether the host is a super host; dummyID = whether the host performed ID verification; HR = the number of host reviews; Emotional = LIWC index of the use of emotional words in the summary describing the place or host; Social = LIWC index of the use of social words in the summary describing the place or host; Price = the price of the place per night; Occupancy = the maximum number of people who can be accommodated in the place; Safety = the number of safety features with which the place is equipped; Picture = the number of pictures of the place; Rating = the average evaluation of the place made by experienced guests; and e = the random error.

Table IV summarizes the results of our model (log likelihood = -3708.71). All of the variables explained approximately 3.2 per cent of the variance in the dependent variable (pseudo R2 = 0.032). Ethos: except for ID verification (coef. = 0.32), the positive impacts of the super host badge (coef. = 9.32, p-value < 0.01) and host review (coef. = 0.02, p-value < 0.001) were significant. Thus, while H1.1 and H1.3 were accepted, H1.2 was rejected. Pathos: although the positive impact of the use of social words was significant (coef. = 0.34, p-value < 0.05), the use of emotional words was not significant (coef. = -0.18). Hence, among the hypotheses on the pathos appeals, only H2.2 was accepted. Logos: among the five variables, only the number of safety features (coef. = -0.44) was not significant. Among the significant variables, whereas the price (coef. = 0.07, p-value < 0.01), place picture (coef. = 0.32, p-value < 0.001) and star-rating (coef. = 6.28, p-value < 0.001) were positively significant, the occupancy (coef. = -4.12, p-value < 0.01) was negatively significant.
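A left-censored Tobit model of the kind used above can be estimated by maximizing a likelihood that treats zero observations as censored draws from a latent normal variable. The following is a generic sketch on synthetic data with illustrative variable names, not the authors' estimation code.

```python
import numpy as np
from scipy import optimize, stats

def tobit_negll(params, X, y, lower=0.0):
    """Negative log-likelihood of a Tobit model left-censored at `lower`.
    params = (beta..., log_sigma). Censored observations contribute the
    probability mass below the bound; uncensored ones the normal density."""
    beta, sigma = params[:-1], np.exp(params[-1])
    xb = X @ beta
    censored = y <= lower
    ll = np.where(
        censored,
        stats.norm.logcdf((lower - xb) / sigma),
        stats.norm.logpdf((y - xb) / sigma) - np.log(sigma),
    )
    return -ll.sum()

# Synthetic data: latent y* = 1 + 2*x + noise, observed y = max(y*, 0).
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
X = np.column_stack([np.ones(1000), x])
y = np.maximum(1 + 2 * x + rng.normal(size=1000), 0.0)

res = optimize.minimize(tobit_negll, x0=np.zeros(3), args=(X, y))
print(np.round(res.x[:2], 2))  # estimates should land near the true (1, 2)
```

An OLS fit on the same censored y would flatten the slope toward zero, which is the bias the paper avoids by choosing the Tobit specification for its review-count dependent variable.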
As a result, H3.3 was rejected because of its insignificant impact, while H3.1 and H3.2 were rejected because the observed significant impacts ran in directions opposite to those hypothesized. Among the five pertinent hypotheses, only H3.4 and H3.5 were accepted. This study hypothesizes that various information appeals of host-created information have significant effects on the sharing behavior in Airbnb. Specifically, the three categories of appeals would be expected to relate to the guest's actual purchase because the guest's decision-making on Airbnb would be affected by the host-created information. Airbnb hosts who had super host badges and more reviews sold more products. The significant impacts of the super host badges and host reviews have also been examined in previous studies (Wang and Nicolau, 2017). The appellation of "super host" is given to a very small number of hosts. Therefore, the places of super hosts tend to be more frequently selected because they appear more credible. In addition, inasmuch as having more host reviews implies more experience, guests are more likely to choose hosts with more reviews. Furthermore, Airbnb is a service whereby people purchase unknown accommodation products from unfamiliar suppliers, and it relies on a review mechanism for the sake of trust (Guttentag, 2015). The results confirmed the importance of host reviews in Airbnb. Unexpectedly, ID verification had no significant effect on the dependent variable. The insignificance of ID verification is in accordance with previous studies of Airbnb (Teubner et al., 2016) and online communities (Racherla and Friske, 2012). This finding could be attributed to the large number of users who have verified their IDs. Because ID verification was common among Airbnb hosts, it may have no discernible effect on either the customer's decision to trust a host or the purchase decision (Teubner et al., 2016).
For the pathos, the more social words were used to describe hosts and places, the more likely those hosts were to be selected. However, emotional words had no impact on the purchase. To Airbnb guests, sociable hosts introducing themselves and their places with more social words were attractive, but emotional hosts were not highly preferable. In Airbnb, unlike hotels, guests demand local experiences (Guttentag, 2015). Thus, the social appeal is a crucial factor in a guest's choice. Tussyadiah and Park (2018) found significant influences of social appeals of hosts in the Airbnb context. The following sentences are actual examples with which hosts described their places or themselves socially: "Hello! Happy to hear from you! I and my husband delight to offer this lovely SINGLE ROOM on your consideration [sic]," "We're excited to provide a warm welcome, and a clean, secure, and comfortable stay while you are visiting New York," "We are creating this space for like minded people, Travellers, expats, explorers, students [...] people who enjoy a company of local and international hosts who know Bangkok well." In the case of emotional words, as hosts try to present themselves and their places attractively to guests, most words would be related to positive terms. According to relevant studies investigating the impacts of emotional online contents on an individual's perception, negative information tends to be perceived as more persuasive than positive information (Lee et al., 2017). This negativity bias has also been found in the Airbnb context, indicating that negative reviews have significant effects on decreases in the host's reputation (Abramova et al., 2015). In this regard, emotional word-laden host-created information in Airbnb tends to be uniformly positive. Hence, such information, which tries to present places as attractive, may be difficult to perceive as trustworthy or authentic.
The following sentences are actual examples describing hosts and places with charming and fascinating words, which nevertheless appear ineffective in attracting guests: "You will feel relaxed and inspired in this spacious yet cozy room with classic brick fireplace"; "You'll love my place because of the modern apartment with nice new finishes and central heat and air conditioning, high-speed internet, the great views of the city, and the quick ride to Manhattan"; "Spectacular loft in the heart of LA! Great restaurants, bars and more right at your door. Walk to everything that is Downtown LA". Finally, logos is an appeal to the recipient's logic. This logos appeal is assumed to be an objective figure or obvious characteristic (Bronstein, 2013). Because places in Airbnb have no precise criteria (unlike other existing products), it is particularly important to describe what features the products have (Tussyadiah and Zach, 2017). Accordingly, this study took into account the prices, occupancies, safety features, place pictures and star-ratings as logos appeals. According to the study findings, a higher price, more place pictures, better star-ratings and lower occupancies increase the attractiveness of a place. The impact of the price, place picture, star-rating and occupancy on the actual purchase in this study is consistent with the literature, which argues that the objective characteristics of a product influence the rational judgments of the recipients (Garvin, 1984). Indeed, the impacts of the price, place pictures and star-ratings have been demonstrated in previous Airbnb studies (So et al., 2018; Jin and Phua, 2014). One of the most important factors in purchasing an accommodation is the price. Compared to hotels, Airbnb is competitive in its prices (Tussyadiah and Pesonen, 2016). However, this study found that the higher the price was, the more likely the product was to be purchased.
A higher price may imply higher quality, even though some people prefer a less expensive product (Lichtenstein et al., 1993). Furthermore, Airbnb users have difficulty identifying the quality of goods in the sharing economy. Therefore, guests would have tried to infer a place's quality from its price because they were unable to ascertain the real quality. This situation also increases the importance of visual evidence about the products in Airbnb. Therefore, we posit that higher prices and more pictures have significant effects on the purchase because the characteristics of products in a sharing economy are understood differently from comparable products in the traditional economy. The impact of the star-rating can be understood in relation to the mechanism of the sharing economy platform. For the sharing economy to earn trust, it must have a review mechanism (Guttentag, 2015). Online reviews are closely related to consumers' attitudes, evaluations and ratings (Liu and Park, 2015; Hlee et al., 2018). Consequently, the star-rating has a significant impact on purchases in Airbnb. Interestingly, whereas Chen and Rothschild (2010) found a positive impact of place size information, the current results showed that places with lower occupancy are selected more frequently by Airbnb guests. The reason for the negative impact of occupancy on the actual purchase is as follows: according to Lu and Zhu (2006), the size of an accommodation is assumed to represent the standard of the accommodation facility. The results showed that the smaller the accommodation was, the more reservations it had. That is, users of peer-to-peer accommodations prefer smaller rooms to larger ones. In comparison with reservations for one or two persons, it is not easy for guests to decide to purchase when the number of people staying increases, because there will be a greater potential for disruptions.
This difficulty may explain the result showing that the lower the number of occupants is, the higher the chance of a purchase will be. Finally, the safety features were found to have no influence on the purchase. This result differs from the findings of previous Airbnb research, where safety and security issues were significant to guest satisfaction (Birinci et al., 2018). However, that study adopted a survey approach and sampled through Mechanical Turk without screening questions such as "have you ever used Airbnb?" (Birinci et al., 2018). Thus, the current result could be more reliable in that it used actual behavioral data from Airbnb. In Airbnb, a host can list a maximum of six basic safety features. Therefore, the number of safety features is observed to have no meaningful impact on the purchase because it conveys only basic safety details. This research has several theoretical and practical implications. It contributes to the theoretical literature on Airbnb and the sharing economy because it addresses the limitations of existing studies. Although many previous works have attempted to study information communications between users in Airbnb by measuring their importance, only fragmentary investigations have been performed and only a partial understanding has been provided, such as the impact of price information or the host profile (Ert et al., 2016; Fagerstrom et al., 2017; Tussyadiah and Park, 2018; Wang and Nicolau, 2017). In the sharing economy context, because the information messages available on online platforms are usually the only sources for checking products, interacting with others and making decisions, individuals tend to consider various components and aspects of information messages (Chen and Xie, 2017; Gibbs et al., 2017). Thus, to fully understand the communicative role of information in a sharing economy, a holistic perspective is required rather than a partial focus.
As a result, this research focused on the various information appeals in host-created information by considering different categories of appeals and examining how they are delivered and perceived by individuals. By addressing the limitations of the previous literature, this study furthers research into the sharing economy. As most everyday activities have moved online, the importance of online information to individuals' decision-making has been increasingly recognized (Li et al., 2017). Although a number of studies have investigated which information is helpful for an individual's decision-making in online environments or is effective at stimulating an individual's choice, few theoretical frameworks have been adopted; examples include dual-coding theory and the heuristic-systematic model (Hong et al., 2017). Hence, most previous results and implications have been explained from limited perspectives (Park et al., 2007). By adopting an untapped theoretical background, this research articulates the persuasive impacts of the information components of message appeals. Considering that research topics can be explored differently under a new theoretical background, applying a new but appropriate theoretical framework is meaningful for the development of the research field (Haugh, 2012). We see major practical implications of our work for Airbnb's management and its host users. First, it would be advisable for Airbnb to introduce measures supporting normal hosts. In our results, super host badges emerged as the most influential factor helping guests select places to stay. Although increasing the general quality of places through attractive incentives is important, this pattern could make it very difficult for beginner hosts to be selected by guests. Furthermore, because the ratio of super hosts is quite low in our data set, super host badges could create a situation in which the rich grow richer while the poor grow poorer.
The significant impact of the number of host reviews could make the problem worse. Because hosts with more reviews are more attractive, hosts who register more than one place are able to receive more reviews, which creates a difficult situation for normal hosts. In conclusion, guests tend to choose places owned by experienced hosts with super host badges and several places. Airbnb has tried to encourage users to become new hosts to achieve broad coverage in various locations. To attract new hosts, Airbnb needs to assure potential hosts that they will be selected by guests. However, the current system mostly favors a small number of super hosts and commercial hosts. To accommodate more hosts and achieve its original goal of providing guests real local experiences by connecting them to normal local hosts, Airbnb needs to create measures for new and normal hosts. Another practical implication of our work concerns the star-rating system. According to the results, the star-rating had a significant positive impact on guests' decision making. Although it is reasonable that positively evaluated places are selected more often by guests, an evaluation that does not effectively reflect the quality of a place will be misleading. If guests choose specific places because they have earned higher ratings, they will have higher expectations about those places. In this situation, if the higher ratings are not accurate, the guests will be highly disappointed by the incorrect evaluations, which will decrease the reliability of the Airbnb system. Fradkin et al. (2015) found that star-ratings in Airbnb generally tend to be inflated because the Airbnb system not only permits guests to review hosts and places but also permits hosts to review guests. Thus, the reciprocal evaluation system creates a biased trend in the star-ratings, and this feature could lead guests to distrust the system.
Because the star-rating is found to be significant, Airbnb should consider this possibility. Beyond Airbnb itself, host users can draw some practical directions from this work, especially on how to attract guests effectively. The results show the positive impacts of using social words, suitable prices and place pictures. If hosts appeal to pathos by using social words and introduce their places as if guests were staying in a friend's house, they will increase purchase rates. Thus, hosts are encouraged to describe themselves and their places with more social words and to provide as much visual evidence as possible. Moreover, in Airbnb, the price can be an indication of place quality, which implies that an excessively low price could bring effects opposite to what the host intends, as guests may associate low prices with low-quality places. Hence, hosts need to consider the prices of their places carefully. Finally, the fewer guests there are, the higher the purchase rate will be, i.e. we found a negative influence of occupancy. This pattern indicates that guests are more willing to stay in places that are neither too large nor too expensive. Indeed, if places are crowded with guests, hosts could find it difficult to pay attention to every guest who might want to communicate with them to receive local information or experiences. Rather than accommodating as many guests as possible, hosts should provide a better experience to their guests by hosting a limited number of people or by designing their rooms for optimal occupancy. Thus, hosts should focus on having few guests and high quality rather than many guests for high profit. Despite these findings, there are some limitations to the study. This study used the number of room reviews as a proxy variable; only a customer who actually bought a product can write such a review. Accordingly, the current measured value cannot capture all actual purchases.
Additionally, the numbers of place and host reviews are not always equal, but they are likely to be correlated with each other, which can cause a multicollinearity problem. Although no significant multicollinearity problem was identified in the full model, one could arise when only these two measures are considered. Therefore, the results of this study should be interpreted carefully, and future research needs to adopt more reliable measures of actual purchases. Because this study examined the persuasive power of host-created information in Airbnb, it could have overlooked other factors that are crucial to the actual purchase. Therefore, considering other factors, such as direct communications between hosts and guests, will make it easier to understand an actual purchase. Finally, although this study adopted several cities as data samples to generalize the results, the three cities are difficult to regard as sufficiently representative cases for proper generalization. Note that the results are inconsistent depending on the selected cities (Table AI). Thus, future research can enhance the current study by examining the results with more representative samples and comparing them with detailed explanations.
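The multicollinearity check discussed above can be illustrated with a variance inflation factor (VIF) computation. The sketch below is not the authors' actual procedure; the simulated review counts and variable names are hypothetical. Each predictor is regressed on the others, and a VIF above the conventional threshold of 10 is commonly read as problematic collinearity, which is what highly correlated place and host review counts would produce.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X.

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on the remaining columns (plus an intercept).
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
# Hypothetical data: host review counts track place review counts closely,
# so the two predictors are nearly collinear; price is an unrelated control.
place_reviews = rng.poisson(20, 500).astype(float)
host_reviews = place_reviews + rng.normal(0, 1, 500)
price = rng.normal(100, 20, 500)

vifs = vif(np.column_stack([place_reviews, host_reviews, price]))
```

Here the two review counts would show VIFs far above 10, while the unrelated price variable stays near 1, mirroring the situation the limitation describes.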
|
The dependent variable, the number of place reviews, cannot represent the exact number of purchases. Other possible influential factors, such as direct communications between hosts and guests, are not examined.
|
[SECTION: Purpose] This paper examines the relationship between influencing behaviour (the influencing strategies used by people at work and the ways in which they combine them into influencing styles) and 360-degree assessments of their performance. It also considers gender and seniority differences in such assessments. It describes and discusses research carried out on a mixed gender group of senior and middle managers working in the public sector in the UK. The paper builds on an earlier series of articles on influencing behaviour (Manning et al., 2008a, b, c). These articles described how individual influencing behaviour tends to vary in different contexts. They readily acknowledged that simply finding that particular influencing behaviours tend to be used in specific contexts tells us nothing about the effectiveness of such behaviours in those contexts. In order to do this, it is necessary to provide some independent indicator of effectiveness or impact. 360-degree assessments of performance provide one such indicator. The paper begins by considering three underlying questions. First, why use 360-degree assessments to measure the effectiveness of particular influencing behaviours? Second, why consider gender differences in such assessments? Third, why consider seniority differences in such assessments? There is then a brief description of the research methods, including the instruments used to measure influencing behaviour and 360-degree assessments, as well as the statistical tools used to analyse findings. This is followed by a description of the research findings, including the overall findings and specific findings on gender and seniority differences. The article goes on to discuss the findings, including the extent to which they are consistent with the previous research on influencing behaviour by the authors, and why the relationships between influencing behaviours and 360-degree assessments differ according to both the gender and seniority of managers.
The article concludes by recapping the main themes examined and discussing their practical implications. There are a number of reasons for using 360-degree assessments to measure the effectiveness of particular influencing behaviours. Previous research by the authors found clear relationships between influencing behaviours and contextual factors, including the roles played by individuals at work, and what is expected of them in these roles. This research presents a social psychological model of the process of interpersonal influence, in which the influencer (or agent) seeks to influence the influencee (or target). The behaviour of the agent leads to a response from the target. Perceptions, in the form of expectancies, play an important part in both influencer behaviour and influencee response. Expectancies include prior judgements, hunches and predictions about the behaviours of others, as well as the social contexts within which interaction occurs. Of particular relevance to this study are the feedback loops between the behaviour of the agent and the expectancies of the target, as well as the response of the target and the expectancies of the agent. This study investigates the ways in which the agent's influencing behaviour impacts on the judgements, predictions and hunches of the person, or target, the agent seeks to influence. 360-degree assessments of performance offer an appropriate and interesting indicator to investigate this relationship between behaviour and impact. 360-degree assessments, sometimes referred to as "multi-rater performance appraisals" (McCarthy and Garavan, 2001), offer standardised measures of the judgements of behaviours. They are used in both performance appraisal and development planning systems (Huggett, 1998). They are widely accepted as a robust process for collecting feedback on perceptions of behaviour in the workplace and are utilised by almost all Fortune 500 companies (Newbold, 2008).
Goodge and Burr (1999), in their review of the literature on the usefulness of 360-degree feedback, concluded that there are a surprisingly large number of academic evaluations that show real changes in competencies and attitudes as a result of 360-degree feedback programmes. This suggests that while 360-degree assessments may be "subjective" they do tap into something relevant to the performance of people at work. The use of "subjective" assessments of individuals forms an integral element of organisational performance management systems and the methods of rating performance used in such systems and in 360-degree assessments typically have much in common. Utilising 360-degree assessments as an indicator of behavioural impact is particularly relevant to perceptual or "transactional" theories of leadership (see, for example, Greene, 1975). These theories see leadership as involving transactions between the leader and subordinates that affect their relationships. Particular attention is focussed on the way that leaders emerge and how this requires the consent of followers. In this sense, leadership exists only after it is acknowledged by followers. It is a perceptual phenomenon in the mind of followers. 360-degree assessment provides a standardised indicator of such perceptions. In conclusion, 360-degree assessments of performance provide one possible way of measuring the impact of particular behaviours. They are consistent with the authors' process model of interpersonal influence in that they shine light on the "expectancies" of those being influenced. There are a number of reasons for exploring gender differences in 360-degree assessments of influencing behaviours. There is, of course, an extensive body of theory and research on gender, sexuality and organisations, as discussed in Thompson and McHugh (2002). However, there are specific reasons for looking at 360-degree assessments.
In his review of the literature, Fletcher (1999) found that there are gender differences in self-assessment and 360-degree appraisal. In particular, he concluded that women tend to rate themselves less positively than do men, are less susceptible to leniency effects, and show greater congruence between their self-ratings and external ratings of behaviour and performance. Ford (2005) also calls into question the gendered nature of diagnostic and developmental processes, including 360-degree assessment tools. In the light of these findings, the exploration of gender differences in self-assessed influencing behaviour and 360-degree performance assessments warrants further research. Contingency theories of management and leadership suggest the need to explore seniority differences in 360-degree assessments of behaviour. There is a long tradition in management thought that argues against the idea that there is "one best approach" to management. These "contingency" or "relativist" theories propose that there are no universal management or leadership traits or behaviours. Appropriate behaviour is contingent on, or relative to, the particular context, situation or circumstances in which the individual operates. For examples of these theories see Reddin (1970), Tannenbaum and Schmidt (1973) and Vroom and Yetton (1973). These theories identify a variety of contextual factors, including characteristics of the leader, the led, the task and the wider situation. Contingency theories suggest that different sets of influencing behaviours may be appropriate in some leadership roles and situations. It may, for example, be that the jobs carried out by managers in more senior positions differ significantly from those carried out by more junior individuals, and that different behaviours may be more or less appropriate, depending on the seniority of the job holder.
Indeed, one of the findings in our earlier research was that influencing behaviour varied considerably according to the overall level of responsibility of the jobholder. Another reason for exploring seniority differences in 360-degree assessments of behaviour is that they may be linked, in some way(s), to gender differences in such assessments, given the well established fact that men are over-represented, and women under-represented, at the more senior levels of organisations. It may, therefore, be useful to consider if there is a stronger relationship between self-assessed behaviour and 360-degree assessments of performance among individuals at lower organisational levels. It may also be useful to try to control for differences in seniority when investigating gender differences. Our research looked at the relationship between an individual's influencing behaviour and 360-degree assessments of their performance. The information used in the research was collected from male and female senior and middle managers attending the People Management Skills for Senior Managers course run by the National School for Government, a large public sector training establishment in the UK. Influencing behaviour was assessed using the Influencing Strategies and Styles Profile, developed by Tony Manning, published by Management Learning Resources. 360-degree performance assessments were carried out using an instrument developed by Sir John Hunt of the London Business School and used by the National School for Government. Details of the models and measures that underpin this research can be obtained from the lead author, Tony Manning.

Influencing behaviour

The framework for analysing influencing behaviour is described in Manning and Robertson (2003) and Manning et al. (2008a). It identifies six sets of strategies that people may use in their attempts to influence others at work: reason, assertion, exchange, courting favour, coercion and partnership.
It argues that research on the ways individuals combine, or avoid, these strategies makes it possible to identify three broad dimensions of influence style. Thus an individual's influence style can be described by locating them, in three-dimensional space, on three dimensions of influence:

Factor 1. Bystander versus shotgun. The "bystander" engages in relatively infrequent influence attempts, using little of any of the strategies. In contrast, the "shotgun" engages in relatively frequent influence attempts, using all of the strategies and using them frequently.

Factor 2. Strategist versus opportunist. The "strategist" uses reason, assertion and partnership, while avoiding the use of courting favour and exchange. In contrast, the "opportunist" tends to use courting favour and exchange, while avoiding reason, assertion and partnership.

Factor 3. Collaborator versus battler. The "collaborator" uses reason, partnership, courting favour and exchange, while avoiding assertion and, above all, coercion. In contrast, the "battler" tends to use assertion and coercion, while avoiding reason, partnership, courting favour and exchange.

360-degree performance assessments

Information for the 360-degree assessments was typically collected from the individual's line manager, colleagues and members of staff. When appropriate, it was also collected from more senior managers and external stakeholders. Scores were produced on five broad aspects of performance and overall satisfaction.
The five aspects of performance were: task oriented behaviours; maintenance and motivation behaviours; appraisal and development behaviours; behaviours that stimulate innovation and differentiation; and leadership behaviours.

The statistical analysis of the data

In order to explore the relationship between influencing behaviour and 360-degree assessments of performance, we used two statistical tools: one established the degree of correlation between the two sets of variables and the other the statistical significance of the observed relationships. We initially looked at the degree of correlation and its statistical significance for the total data set. We then divided our data into male and female sub-sets, as well as senior and middle management sub-sets, and repeated the statistical analysis. However, we recognised that our sub-sets were not equal in size and that this might distort our findings. For example, we had more middle managers than senior managers, more male managers than female managers, and our senior manager sub-set had more male managers in it than female managers. We therefore divided our data into four sub-sets, namely, senior-male, senior-female, middle-male and middle-female, and repeated the statistical analysis on these groupings. This allowed us to produce what we called "adjusted" data as well as "raw" data. For example, the "adjusted" data for senior managers is the average of the findings for male senior and female senior managers. The findings from both "raw" and "adjusted" data are available on request from the lead author, Tony Manning. In the account of findings that follows, we only report on the "adjusted" data, as we consider this less subject to distortion arising from an unequal gender and/or seniority mix in the sub-sets. We present data on the observed relationships between both facets of influencing behaviour, the frequency of use of the six strategies and the three style dimensions, and 360-degree performance assessments.
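The "adjusted" procedure described above can be sketched in a few lines of analysis code. This is an illustrative reconstruction with simulated scores, not the authors' code or data; the cell sizes, true correlations and variable names are all assumptions. The idea is simply to compute the style-by-assessment correlation within each gender-by-seniority cell and then take the unweighted mean across genders, so the unequal gender mix cannot dominate the seniority estimate.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

rng = np.random.default_rng(1)

# Hypothetical scores: one influence-style dimension (e.g. collaborator)
# and one 360-degree sub-scale, for four gender-by-seniority cells.
def cell(n, true_r):
    style = rng.normal(0, 1, n)
    noise = rng.normal(0, 1, n)
    rating = true_r * style + np.sqrt(1 - true_r**2) * noise
    return style, rating

cells = {
    ("senior", "male"): cell(60, 0.1),
    ("senior", "female"): cell(25, 0.3),
    ("middle", "male"): cell(120, 0.3),
    ("middle", "female"): cell(80, 0.6),
}

raw_r = {k: pearson_r(*v) for k, v in cells.items()}

# "Adjusted" figure per seniority level: the unweighted mean of the
# male and female correlations, mirroring the averaging described above.
adjusted = {
    level: (raw_r[(level, "male")] + raw_r[(level, "female")]) / 2
    for level in ("senior", "middle")
}
```

In the actual study a significance test accompanied each correlation; here only the correlation step is sketched.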
The discussion focuses on the relationship between influencing style and 360-degree assessments.

The sample as a whole

The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample as a whole, including male and female managers at senior and middle management levels. Table I looks at influencing strategies and influencing styles. Table I suggests that, for the sample as a whole, the most positive 360-degree assessments are associated with bystander, strategist and collaborator styles of influence. This is illustrated in Figure 1. Figures 1-3 show the average degree of correlation between each of the three style dimensions and the six sub-scales included in the 360-degree assessment, along with the highest and lowest observed relationships with that style dimension.

Male and female managers

The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of female managers, including those at senior and middle management levels. Table II looks at influencing strategies and influencing styles. Table II suggests that, for the sample of female managers, the most positive 360-degree assessments are associated with bystander, strategist and collaborator styles of influence, particularly with collaborator styles, where the observed relationships are strongest and most wide-ranging. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of male managers, including those at senior and middle management levels. Table III looks at influencing strategies and influencing styles. Table III shows that, for the sample of male managers, there are no statistically significant relationships between influencing styles and 360-degree assessments.
However, a number of relationships approached statistical significance, specifically bystander (in relation to task), strategist (in relation to leadership and overall satisfaction) and battler (in relation to task). The main differences in the observed relationships between influencing styles and 360-degree assessments between male and female managers are on the collaborator-battler scale. In the female sample there is a very strong positive relationship with the collaborator style, whereas in the male sample there is a very weak relationship with the battler style. There are also much smaller differences on the strategist-opportunist scale. In both the male and female samples, there is a positive relationship between the strategist style and positive 360-degree assessments, although the relationship is much stronger in the female sample. This highlights another, more general difference between the male and female samples: the observed relationship between the two sets of variables is much stronger in the female sample. The general pattern of gender differences is illustrated in Figure 2.

Senior and middle managers

The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of senior managers, including male and female managers. Table IV looks at influencing strategies and influencing styles. Table IV indicates that, for the sample of senior managers, there are no statistically significant relationships, although there is a near significant relationship between the 360-degree assessment for overall satisfaction and the strategist style. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of middle managers, including male and female managers.
Table V looks at influencing strategies and influencing styles. Table V shows that, for the sample of middle managers, there are statistically significant relationships between all three influencing style dimensions and 360-degree assessments. The most positive relationships with 360-degree assessments, the strongest and most wide-ranging, are associated with the collaborator style, although there are also positive relationships with both the bystander and strategist styles. The main differences in the observed relationships between influencing styles and 360-degree assessments between senior and middle managers lie on the collaborator-battler scale. In the middle management sample there is a very strong positive relationship with the collaborator style, whereas in the senior management sample there is a weak relationship with the battler style. There are also much smaller differences on the bystander-shotgun scale. In the middle management sample, there is a positive relationship between the bystander style and positive 360-degree assessments, whereas the two variables are largely unrelated in the senior management sample. This highlights another, more general difference between the senior and middle management samples, in that the observed relationship between the two sets of variables is much stronger in the middle management sample. The general pattern of seniority differences is illustrated in Figure 3. The 360-degree assessment tool used in this research is designed to be used in leadership and management development activities rather than as a pure research instrument. It does, however, shine light on some of the observed relationships between influencing behaviour and contextual variables found by the present authors (Manning et al., 2008) in the previous series of articles, although the exact terms used may vary. In particular, the following findings from this study appear consistent with the earlier findings: 1.
Bystander styles of influence receive positive 360-degree assessments:
* when used by middle managers rather than senior managers; and
* in relation to task orientation.
2. Strategist styles of influence tend to receive positive 360-degree assessments:
* when used by both middle and senior managers; and
* in relation to task orientation.
3. Collaborator styles of influence tend to receive positive 360-degree assessments:
* when used by middle managers; and
* in relation to stimulating innovation, group maintenance and leadership.
The findings presented here support the findings of previous research on gender differences in self-assessment and 360-degree assessments (Fletcher, 1999). In particular, we found a much stronger relationship between self-assessed influencing behaviour and 360-degree assessments in our female sample than in our male sample. However, we also found a much stronger relationship between self-assessed influencing behaviour and 360-degree assessments in our middle management sample than in our senior management sample. Moreover, although both gender and seniority differences were clearly observable and independent, seniority differences appeared greater than gender differences, as there was a stronger relationship between self-assessed influencing behaviour and 360-degree assessments in the male middle manager sub-group than in the female senior manager sub-group. We found both similarities and differences between the observed relationships in male and senior managers, on the one hand, and female and middle managers, on the other. In the way of similarities, collaborator styles of influence tended to be rated very positively in both the female and middle manager groups, whereas battler styles were weakly related to negative assessments.
In the way of differences, we found gender differences on the strategist-opportunist dimension (with strategist styles being more positively assessed in the female sample) but much less in the way of seniority differences, and seniority differences on the bystander-shotgun dimension (with bystanders being more positively assessed in the middle management sample) but much less in the way of gender differences. The findings on both gender and seniority differences in the observed relationships between the two sets of factors raise a number of interesting questions about why such differences occur. First, why are there perceived gender differences? Why do some influencing behaviours appear to be more positively valued when done by men and others by women, and vice versa? We propose at least two possible reasons for such observed differences, and both may be true, at least in part. The first possibility is that men and women are judged by different gender stereotypes, while the second is that men and women tend to do different kinds of jobs. These two possibilities are considered briefly in the following. The observed gender differences may arise from the fact that men and women are judged by different gender stereotypes. One interpretation of our findings, put very simply, is that the stereotype of a "good" female manager is someone who is collaborative, whereas the stereotype of a "good" male manager is someone who is strong and decisive. The corollary is, of course, that female managers tend not to be valued if they are strong and decisive, while male managers tend not to be valued if they are collaborative. In consequence, individuals who conform to the "good" stereotype for their gender are likely to receive positive assessments from others, and vice versa. The second possible explanation of the gender differences is that male and female managers tend to do different jobs.
Wilson (2004) argues that organisations differ according to their gender regimes, in that they are structured according to the symbolism of gender, with women doing women's tasks (e.g. caring and cleaning) and occupying female jobs (e.g. nursing and secretarial), thereby perpetuating the symbolic system of subordination and subservience. Wilson (2004) considers research which illustrates that women are likely to occupy management positions in less prestigious organisations, occupy less prestigious positions in organisations, are found disproportionately at lower organisational levels and are likely to be dominated by males, even when holding more senior positions than those males. This argument is supported by Ford's (2005) considerations of the gendered nature of leadership models within organisational theory. It may be that collaborative behaviour was valued in our sample of female managers because they tend to hold jobs where collaboration is an integral part of their roles. Conversely, strong and decisive behaviour may have been more central to the roles played by male managers in our sample. Second, why are there perceived seniority differences? Why do some influencing behaviours appear to be more positively valued when done by senior managers and others by middle managers, and vice versa? It seems to us that there are at least four possible reasons for such observed differences. While each may be true, at least in part, some seem more plausible than others. One possibility is that middle management jobs may be more homogeneous than senior management jobs. In consequence, more clearly observable relationships are likely to be found between influencing behaviours and 360-degree performance assessments among middle managers than among senior managers. This is something we hope to explore further, differentiating between different types of jobs rather than seniority per se.
However, we would expect middle management jobs, as well as senior management jobs, to be varied, and so we doubt that this is a sufficient explanation for the findings. A second possibility is that factors other than influencing behaviours have a greater impact on 360-degree assessments in the senior management sample than in the middle management one. These other factors include leadership and management behaviours, as well as motivational factors. It is likely that leadership behaviours play a particularly important part in senior management jobs and that these behaviours are quite separate from the influencing behaviours explored here. Examples of such leadership behaviours include communicating a compelling vision, building teams, and involving and developing staff. We intend to look at leadership and management behaviours, and their relationship to 360-degree assessments, in a subsequent and related article. While we would expect both senior and middle management jobs to include leadership and/or management elements, we accept that "leadership" elements may be more important in senior management positions, while "management" elements may be more important in middle management positions. We follow Kotter's (1990) distinctions between these two separate but inter-related concepts. While these two hypothetical explanations for the observed seniority differences are possible and worthy of further research, they do not seem to provide sufficient explanations. We therefore turn our attention to what are sometimes called "anti-leadership" theories. According to Van Seters and Field's (1990) review of leadership theory, there are two main anti-leadership perspectives, the "ambiguity" and "substitute" perspectives. We argue that both may help explain the observed seniority differences found in our research. In the "ambiguity" perspective, leadership is seen as a purely perceptual phenomenon, something that exists only in the mind of the observer.
This idea has much in common with the "transactional" theory of leadership referred to previously. However, according to the "ambiguity" perspective, the leader is a symbol whose actual performance is of little or no consequence, while leadership itself is an encompassing term we use to describe organisational changes we do not understand. While we would not want to accept the extreme form of this view, that the concept of leadership be abandoned altogether, we do think it possible that senior management jobs may tend to be more symbolic and less substantive than middle management jobs. In the "substitute" perspective, the concept of leadership is retained, although it is argued that the characteristics of the task, subordinates and organisation can prevent the leader from affecting subordinate performance. We have some sympathy for this view and suspect that people often have very high expectations of their leaders but, in reality, leadership roles are highly constrained and/or impacted on by a wide range of factors outside the control of the leader. In contrast, people may have more realistic expectations of middle managers, whose roles may be more substantive and less symbolic. Thus, 360-degree assessments of the performance of middle managers may be more closely linked to their behaviour than is the case with senior managers. In this paper, we have reported on and discussed research on the relationship between influencing behaviour and 360-degree assessments of performance. In our sample as a whole, we found statistically significant relationships between these two sets of factors. However, we also found both gender and seniority differences in the pattern of relationships.
The point here is not that men and women and/or senior and middle managers behaved differently but that the same behaviours tended to be judged differently, according to the gender and seniority of those doing the influencing. We found that the observed relationships between influencing behaviour and 360-degree assessments were stronger and more wide-ranging in both our female and middle management samples than in our male and senior management samples. We also found pronounced similarities between our female and middle management samples, on the one hand, and between our male and senior management samples, on the other, while there were differences between the female/middle and male/senior samples. We went on to explore why there were both gender and seniority differences in the observed relationships. We concluded that the gender differences might be related to the ways in which men and women are judged by different gender stereotypes, as well as to the fact that men and women do different jobs. We concluded that seniority differences might also be due to the fact that senior and middle managers do different jobs. Another possibility was the differential impact of other factors, including managerial and leadership behaviours, as well as motivational factors, at different organisational levels. Anti-leadership theories suggested other explanations, including the possibility that senior management roles were more symbolic, less substantive and more constrained than middle management roles. Our findings have relevance to managers and leaders at all organisational levels, as well as to professionals involved in human resource management and development. Five major implications are briefly considered in the following. First, our findings reinforce the conclusion that there are few, if any, influencing behaviours that apply to all situations.
In consequence, it is essential to focus on individual behaviours appropriate to particular situations. Second, our findings support the interpersonal model previously proposed, and highlight the role of expectancies in the relationship between behaviour and impact. Third, our findings suggest that 360-degree assessments of performance are vulnerable to gender stereotyping. This suggests that the assessment of performance within organisational performance management systems may also be susceptible to such bias. It is, therefore, important for human resource managers, line managers and professionals who use 360-degree assessments in their work to be aware of the possibility of such bias and take the necessary steps to mitigate it. Fourth, if bias can arise from gender stereotypes, it may also arise from other stereotypes. It may, therefore, be worth carrying out further research on other stereotypes, including race, disability, age, sexual orientation, and religion and belief. Finally, there is a clear need to investigate factors other than influencing behaviours that may be related to 360-degree performance assessments. We hope to publish further research on the relationship between 360-degree assessments and leadership behaviour, as well as personality, in the near future.
Figure 1 The relationship between influencing styles and 360-degree assessments for the whole sample
Figure 2 Gender differences in relationships between influencing styles and 360-degree assessments
Figure 3 Seniority differences in relationships between influencing styles and 360-degree assessments
Table I Findings for the sample as a whole
Table II Findings for the sample of female managers
Table III Findings for the sample of male managers
Table IV Findings for the sample of senior managers
Table V Findings for the sample of middle managers
|
- The paper aims to present and discuss research into the relationship between influencing behaviour and impact, including gender and seniority differences.
|
[SECTION: Method] This paper is about the relationship between influencing behaviour (the influencing strategies used by people at work and the ways in which they combine them into influencing styles) and 360-degree assessments of their performance. It also considers gender and seniority differences in such assessments. It describes and discusses research carried out on a mixed gender group of senior and middle managers working in the public sector in the UK. The paper builds on an earlier series of articles on influencing behaviour (Manning et al., 2008a, b, c). These articles described how individual influencing behaviour tends to vary in different contexts. They readily acknowledged that simply finding that particular influencing behaviours tend to be used in specific contexts tells us nothing about the effectiveness of such behaviours in those contexts. In order to establish this, it is necessary to provide some independent indicator of effectiveness or impact. 360-degree assessments of performance provide one such indicator. The paper begins by considering three underlying questions. First, why use 360-degree assessments to measure the effectiveness of particular influencing behaviours? Second, why consider gender differences in such assessments? Third, why consider seniority differences in such assessments? There is then a brief description of the research methods, including the instruments used to measure influencing behaviour and 360-degree assessments, as well as the statistical tools used to analyse the findings. This is followed by a description of the research findings, including the overall findings and specific findings on gender and seniority differences. The article goes on to discuss the findings, including the extent to which they are consistent with the previous research on influencing behaviour by the authors, and why the relationships between influencing behaviours and 360-degree assessments differ according to both the gender and seniority of managers.
The article concludes by recapping the main themes examined and discussing their practical implications. There are a number of reasons for using 360-degree assessments to measure the effectiveness of particular influencing behaviours. Previous research by the authors found clear relationships between influencing behaviours and contextual factors, including the roles played by individuals at work and what is expected of them in these roles. That research presented a social psychological model of the process of interpersonal influence, in which the influencer (or agent) seeks to influence the influencee (or target). The behaviour of the agent leads to a response from the target. Perceptions, in the form of expectancies, play an important part in both influencer behaviour and influencee response. Expectancies include prior judgements, hunches and predictions about the behaviours of others, as well as the social contexts within which interaction occurs. Of particular relevance to this study are the feedback loops between the behaviour of the agent and the expectancies of the target, as well as between the response of the target and the expectancies of the agent. This study investigates the ways in which the agent's influencing behaviour impacts on the judgements, predictions and hunches of the person, or target, the agent seeks to influence. 360-degree assessments of performance offer an appropriate and interesting indicator with which to investigate this relationship between behaviour and impact. 360-degree assessments, sometimes referred to as "multi-rater performance appraisals" (McCarthy and Garavan, 2001), offer standardised measures of judgements of behaviour. They are used in both performance appraisal and development planning systems (Huggett, 1998). They are widely accepted as a robust process for collecting feedback on perceptions of behaviour in the workplace and are utilised by almost all Fortune 500 companies (Newbold, 2008).
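As multi-rater appraisals, 360-degree assessments combine standardised ratings of the same individual from several rater groups. The sketch below illustrates one common way of aggregating such ratings; the rater groups, competency names and scores are invented for illustration, and the actual instrument used in this research is not reproduced here:

```python
from statistics import mean

# Hypothetical multi-rater ratings on a 1-5 scale. The rater groups and
# competencies are invented; the Hunt / National School for Government
# instrument used in the study is not published here.
ratings = {
    # rater_group: {competency: [one score per rater in that group]}
    "line_manager": {"task": [4], "leadership": [3]},
    "colleagues":   {"task": [3, 4, 5], "leadership": [4, 4, 3]},
    "staff":        {"task": [4, 4], "leadership": [5, 4]},
}

def aggregate(ratings):
    """Average each competency first within, then across, rater groups."""
    competencies = {c for group in ratings.values() for c in group}
    return {
        c: mean(mean(scores[c]) for scores in ratings.values() if c in scores)
        for c in competencies
    }

print(aggregate(ratings))
```

Averaging within each rater group before averaging across groups is a deliberate design choice in many multi-rater schemes: it prevents a large group of colleagues from outweighing a single line manager.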
Goodge and Burr (1999), in their review of the literature on the usefulness of 360-degree feedback, concluded that a surprisingly large number of academic evaluations show real changes in competencies and attitudes as a result of 360-degree feedback programmes. This suggests that, while 360-degree assessments may be "subjective", they do tap into something relevant to the performance of people at work. The use of "subjective" assessments of individuals forms an integral element of organisational performance management systems, and the methods of rating performance used in such systems and in 360-degree assessments typically have much in common. Utilising 360-degree assessments as an indicator of behavioural impact is particularly relevant to perceptual or "transactional" theories of leadership (see, for example, Greene, 1975). These theories see leadership as involving transactions between the leader and subordinates that affect their relationships. Particular attention is focussed on the way that leaders emerge and how this requires the consent of followers. In this sense, leadership exists only after it is acknowledged by followers. It is a perceptual phenomenon in the mind of followers. 360-degree assessment provides a standardised indicator of such perceptions. In conclusion, 360-degree assessments of performance provide one possible way of measuring the impact of particular behaviours. They are consistent with the authors' process model of interpersonal influence in that they shine light on the "expectancies" of those being influenced. There are a number of reasons for exploring gender differences in 360-degree assessments of influencing behaviours. There is, of course, an extensive body of theory and research on gender, sexuality and organisations, as discussed in Thompson and McHugh (2002). However, there are specific reasons for looking at 360-degree assessments.
In his review of the literature, Fletcher (1999) found that there are gender differences in self-assessment and 360-degree appraisal. In particular, he concluded that women tend to rate themselves less positively than men do, are less susceptible to leniency effects, and show greater congruence between their self-ratings and external ratings of behaviour and performance. Ford (2005) also highlights the gendered nature of diagnostic and developmental processes, including 360-degree assessment tools. In the light of these findings, gender differences in self-assessed influencing behaviour and 360-degree performance assessments warrant further exploration. Contingency theories of management and leadership suggest the need to explore seniority differences in 360-degree assessments of behaviour. There is a long tradition in management thought that argues against the idea that there is "one best approach" to management. These "contingency" or "relativist" theories propose that there are no universal management or leadership traits or behaviours. Appropriate behaviour is contingent on, or relative to, the particular context, situation or circumstances in which the individual operates. For examples of these theories, see Reddin (1970), Tannenbaum and Schmidt (1973) and Vroom and Yetton (1973). These theories identify a variety of contextual factors, including characteristics of the leader, the led, the task and the wider situation. Contingency theories suggest that different sets of influencing behaviours may be appropriate in some leadership roles and situations. It may, for example, be that the jobs carried out by managers in more senior positions differ significantly from those carried out by more junior individuals, and that different behaviours may be more or less appropriate, depending on the seniority of the job holder.
Indeed, one of the findings of our earlier research was that influencing behaviour varied considerably according to the overall level of responsibility of the jobholder. Another reason for exploring seniority differences in 360-degree assessments of behaviour is that they may be linked, in some way(s), to gender differences in such assessments, given the well-established fact that men are over-represented, and women under-represented, at the more senior levels of organisations. It may, therefore, be useful to consider whether there is a stronger relationship between self-assessed behaviour and 360-degree assessments of performance among individuals at lower organisational levels. It may also be useful to try to control for differences in seniority when investigating gender differences. Our research looked at the relationship between an individual's influencing behaviour and 360-degree assessments of their performance. The information used in the research was collected from male and female senior and middle managers attending the People Management Skills for Senior Managers course run by the National School for Government, a large public sector training establishment in the UK. Influencing behaviour was assessed using the Influencing Strategies and Styles Profile, developed by Tony Manning and published by Management Learning Resources. 360-degree performance assessments were carried out using an instrument developed by Sir John Hunt of the London Business School and used by the National School for Government. Details of the models and measures that underpin this research can be obtained from the lead author, Tony Manning.
Influencing behaviour
The framework for analysing influencing behaviour is described in Manning and Robertson (2003) and Manning et al. (2008a). It identifies six sets of strategies that people may use in their attempts to influence others at work: reason, assertion, exchange, courting favour, coercion and partnership.
It argues that research on the ways individuals combine, or avoid, these strategies makes it possible to identify three broad dimensions of influence style. Thus, an individual's influence style can be described by locating them, in three-dimensional space, on three dimensions of influence:
Factor 1. Bystander versus shotgun. The "bystander" engages in relatively infrequent influence attempts, using little of any of the strategies. In contrast, the "shotgun" engages in relatively frequent influence attempts, using all of the strategies and using them frequently.
Factor 2. Strategist versus opportunist. The "strategist" uses reason, assertion and partnership, while avoiding the use of courting favour and exchange. In contrast, the "opportunist" tends to use courting favour and exchange, while avoiding reason, assertion and partnership.
Factor 3. Collaborator versus battler. The "collaborator" uses reason, partnership, courting favour and exchange, while avoiding assertion and, above all, coercion. In contrast, the "battler" tends to use assertion and coercion, while avoiding reason, partnership, courting favour and exchange.
360-degree performance assessments
Information for the 360-degree assessments was typically collected from the individual's line manager, colleagues and members of staff. When appropriate, it was also collected from more senior managers and external stakeholders. Scores were produced on five broad aspects of performance and overall satisfaction.
The five aspects of performance were: task-oriented behaviours; maintenance and motivation behaviours; appraisal and development behaviours; behaviours that stimulate innovation and differentiation; and leadership behaviours.
The statistical analysis of the data
In order to explore the relationship between influencing behaviour and 360-degree assessments of performance, we used two statistical tools: one established the degree of correlation between the two sets of variables and the other the statistical significance of the observed relationships. We initially looked at the degree of correlation and its statistical significance for the total data set. We then divided our data into male and female sub-sets, as well as senior and middle management sub-sets, and repeated the statistical analysis. However, we recognised that our sub-sets were not equal in size and that this might distort our findings. For example, we had more middle managers than senior managers, more male managers than female managers, and our senior manager sub-set had more male managers in it than female managers. We therefore divided our data into four sub-sets, namely senior-male, senior-female, middle-male and middle-female, and repeated the statistical analysis on these groupings. This allowed us to produce what we called "adjusted" data as well as "raw" data. For example, the "adjusted" data for senior managers is the average of the findings for male senior and female senior managers. The findings from both "raw" and "adjusted" data are available on request from the lead author, Tony Manning. In the account of findings that follows, we only report on the "adjusted" data, as we consider this less subject to distortion arising from an unequal gender and/or seniority mix in the sub-sets. We present data on the observed relationships between both facets of influencing behaviour, the frequency of use of the six strategies and the three style dimensions, and 360-degree performance assessments.
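The two-step analysis described above, correlating influencing scores with 360-degree scores within each gender-by-seniority sub-set and then averaging sub-set results to give "adjusted" figures, can be sketched as follows. All records and scores here are invented for illustration; the study's own data and its significance tests are not reproduced:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented records: (gender, seniority, self-assessed collaborator score,
# 360-degree leadership score).
records = [
    ("f", "senior", 2.0, 3.1), ("f", "senior", 3.5, 3.9), ("f", "senior", 1.0, 2.8),
    ("f", "middle", 1.5, 2.9), ("f", "middle", 4.0, 4.4), ("f", "middle", 2.5, 3.6),
    ("m", "senior", 2.0, 3.3), ("m", "senior", 3.0, 3.2), ("m", "senior", 4.0, 3.8),
    ("m", "middle", 1.0, 3.0), ("m", "middle", 2.0, 3.4), ("m", "middle", 3.5, 4.1),
]

def sub_r(gender, seniority):
    """Correlation within one gender-by-seniority sub-set."""
    sub = [(c, a) for g, s, c, a in records if g == gender and s == seniority]
    return pearson_r([c for c, _ in sub], [a for _, a in sub])

# "Raw" figure for senior managers pools male and female seniors together;
# the "adjusted" figure averages the two single-gender correlations, so an
# unequal gender mix cannot distort it.
raw_senior = pearson_r(
    [c for _, s, c, _ in records if s == "senior"],
    [a for _, s, _, a in records if s == "senior"],
)
adjusted_senior = mean([sub_r("f", "senior"), sub_r("m", "senior")])
print(round(raw_senior, 3), round(adjusted_senior, 3))
```

With real data, each correlation would also be accompanied by a significance test, as in the study; that step is omitted from this sketch.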
The discussion focuses on the relationship between influencing style and 360-degree assessments.
The sample as a whole
The following table shows the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample as a whole, including male and female managers at senior and middle management levels. Table I looks at influencing strategies and influencing styles. Table I suggests that, for the sample as a whole, the most positive 360-degree assessments are associated with bystander, strategist and collaborator styles of influence. This is illustrated in Figure 1. Figures 1-3 show the average degree of correlation between each of the three style dimensions and the six sub-scales included in the 360-degree assessment, along with the highest and lowest observed relationship with that style dimension.
Male and female managers
The following table shows the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of female managers, including those at senior and middle management levels. Table II looks at influencing strategies and influencing styles. Table II suggests that, for the sample of female managers, the most positive 360-degree assessments are associated with bystander, strategist and collaborator styles of influence, particularly with collaborator styles, where the observed relationships are strongest and most wide-ranging. The following table shows the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of male managers, including those at senior and middle management levels. Table III looks at influencing strategies and influencing styles. Table III shows that, for the sample of male managers, there are no statistically significant relationships between influencing styles and 360-degree assessments.
However, a number of relationships approached statistical significance, specifically bystander (in relation to task), strategist (in relation to leadership and overall satisfaction) and battler (in relation to task). The main differences between male and female managers in the observed relationships between influencing styles and 360-degree assessments are on the collaborator-battler scale. In the female sample there is a very strong positive relationship with the collaborator style, whereas in the male sample there is a very weak relationship with the battler style. There are also much smaller differences on the strategist-opportunist scale. In both the male and female samples, there is a positive relationship between the strategist style and positive 360-degree assessments, although the relationship is much stronger in the female sample. This highlights another, more general difference between the male and female samples: the observed relationship between the two sets of variables is much stronger in the female sample. The general pattern of gender differences is illustrated in Figure 2.
Senior and middle managers
The following table shows the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of senior managers, including both male and female managers. Table IV looks at influencing strategies and influencing styles. Table IV indicates that, for the sample of senior managers, there are no statistically significant relationships, although there is a near-significant relationship between the 360-degree assessment for overall satisfaction and the strategist style. The following table shows the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of middle managers, including both male and female managers.
Table V looks at influencing strategies and influencing styles. Table V shows that, for the sample of middle managers, there are statistically significant relationships between all three influencing style dimensions and 360-degree assessments. The strongest and most wide-ranging positive relationships with 360-degree assessments are associated with the collaborator style, although there are also positive relationships with both the bystander and strategist styles. The main differences between senior and middle managers in the observed relationships between influencing styles and 360-degree assessments lie on the collaborator-battler scale. In the middle management sample there is a very strong positive relationship with the collaborator style, whereas in the senior management sample there is a weak relationship with the battler style. There are also much smaller differences on the bystander-shotgun scale. In the middle management sample, there is a positive relationship between the bystander style and positive 360-degree assessments, whereas the two variables are largely unrelated in the senior management sample. This highlights another, more general difference between the senior and middle management samples: the observed relationship between the two sets of variables is much stronger in the middle management sample. The general pattern of seniority differences is illustrated in Figure 3. The 360-degree assessment tool used in this research is designed to be used in leadership and management development activities rather than as a pure research instrument. It does, however, shine light on some of the observed relationships between influencing behaviour and contextual variables found by the present authors (Manning et al., 2008) in the previous series of articles, although the exact terms used may vary. In particular, the following findings from this study appear consistent with the earlier findings:
1. Bystander styles of influence receive positive 360-degree assessments:
* when used by middle managers rather than senior managers; and
* in relation to task orientation.
2. Strategist styles of influence tend to receive positive 360-degree assessments:
* when used by both middle and senior managers; and
* in relation to task orientation.
3. Collaborator styles of influence tend to receive positive 360-degree assessments:
* when used by middle managers; and
* in relation to stimulating innovation, group maintenance and leadership.
The findings presented here support the findings of previous research on gender differences in self-assessment and 360-degree assessments (Fletcher, 1999). In particular, we found a much stronger relationship between self-assessed influencing behaviour and 360-degree assessments in our female sample than in our male sample. However, we also found a much stronger relationship between self-assessed influencing behaviour and 360-degree assessments in our middle management sample than in our senior management sample. Moreover, although both gender and seniority differences were clearly observable and independent, seniority differences appeared greater than gender differences, as there was a stronger relationship between self-assessed influencing behaviour and 360-degree assessments in the male middle manager sub-group than in the female senior manager sub-group. We found both similarities and differences between the observed relationships in male and senior managers, on the one hand, and female and middle managers, on the other. In the way of similarities, collaborator styles of influence tended to be rated very positively in both female and middle manager groups, whereas battler styles were weakly related to negative assessments.
In the way of differences, we found gender differences on the strategist-opportunist dimension (with strategist styles being more positively assessed in the female sample) but much less in the way of seniority differences, and seniority differences on the bystander-shotgun dimension (with bystanders being more positively assessed in the middle management sample) but much less in the way of gender differences.The findings on both gender and seniority differences in the observed relationships between the two sets of factors raise a number of interesting questions about why such differences occur.First, why are there perceived gender differences? Why do some influencing behaviours appear to be more positively valued when done by men and others by women, and vice versa? We propose at least two possible reasons for such observed differences and both may be true, at least in part. The first possibility is that men and women are judged by different gender stereotypes, while the second is that men and women tend to do different kinds of jobs. These two possibilities are considered briefly in the following.The observed gender differences may arise from the fact that men and women are judged by a different gender stereotype. One interpretation of our findings, put very simply, is that the stereotype of a "good" female manager is someone who is collaborative, whereas the stereotype of a "good" male manager is someone is strong and decisive. The corollary is, of course, that female managers tend not to be valued if they are strong and decisive, while male managers tend not to be valued if they are collaborative. In consequence, individuals who conform to the "good" stereotype for their gender are likely to receive positive assessments from others and vice versa.The second possible explanation of the gender differences is that male and female managers tend to do different jobs. 
Wilson (2004) argues that organisations differ according to their gender regimes, in that they are structured according to the symbolism of gender, with women doing women's tasks (e.g. caring and cleaning) and occupying female jobs (e.g. nursing and secretarial), thereby perpetuating the symbolic system of subordination and subservience. Wilson (2004) considers research which illustrates that women are likely to occupy management positions in less prestigious organisations, occupy less prestigious positions in organisations, are found disproportionately at lower organisational levels and are likely to be dominated by males, even when holding more senior positions than those males. This argument is supported by Ford's (2005) considerations of the gendered nature of leadership models within organizational theory. It may be that collaborative behaviour was valued in our sample of female managers because they tend to jobs where collaboration is an integral part of their roles. Conversely, strong and decisive behaviour may have been more central to the roles played by male managers in our sample.Second, why are there perceived seniority differences? Why do some influencing behaviours appear to be more positively valued when done by senior managers and others by middle managers, and vice versa? It seems to us that there are at least four possible reasons for such observed differences. While each may be true, at least in part, some seem more plausible than others.One possibility is that middle management jobs may be more homogenous than senior management jobs. In consequence, more clearly observable relationships are likely to be found between influencing behaviours and 360-degree performance assessments among middle managers than senior managers. This is something we hope to explore further, differentiating between different types of jobs rather than seniority per se. 
However, we would expect middle management jobs to be varied, as well as senior management jobs, and so we doubt that this is a sufficient explanation for the findings.

A second possibility is that factors other than influencing behaviours have a greater impact on 360-degree assessments in the senior management sample than in the middle management one. These other factors include leadership and management behaviours, as well as motivational factors. It is likely that leadership behaviours play a particularly important part in senior management jobs and that these behaviours are quite separate from the influencing behaviours explored here. Examples of such leadership behaviours include communicating a compelling vision, building teams, and involving and developing staff. We intend to look at leadership and management behaviours, and their relationship to 360-degree assessments, in a subsequent and related article. While we would expect both senior and middle management jobs to include leadership and/or management elements, we accept that "leadership" elements may be more important in senior management positions, while "management" elements may be more important in middle management positions. We follow Kotter's (1990) distinctions between these two separate but inter-related concepts.

While these two hypothetical explanations for the observed seniority differences are possible and worthy of further research, they do not seem to provide sufficient explanations. We therefore turn our attention to what are sometimes called "anti-leadership" theories. According to Van Seters and Field's (1990) review of leadership theory, there are two main anti-leadership perspectives, the "ambiguity" and "substitute" perspectives. We argue that both may help explain the observed seniority differences found in our research.

In the "ambiguity" perspective, leadership is seen as a purely perceptual phenomenon, something that exists only in the mind of the observer.
This idea has much in common with the "transactional" theory of leadership referred to previously. However, according to the "ambiguity" perspective, the leader is a symbol whose actual performance is of little or no consequence, while leadership itself is an encompassing term we use to describe organisational changes we do not understand. While we would not want to accept the extreme form of this view, that the concept of leadership should be abandoned altogether, we do think it possible that senior management jobs may tend to be more symbolic and less substantive than middle management jobs.

In the "substitute" perspective, the concept of leadership is retained, although it is argued that the characteristics of the task, subordinates and organisation can prevent the leader from affecting subordinate performance. We have some sympathy for this view and suspect that people often have very high expectations of their leaders but, in reality, leadership roles are highly constrained and/or affected by a wide range of factors outside the control of the leader. In contrast, people may have more realistic expectations of middle managers, whose roles may be more substantive and less symbolic. Thus, 360-degree assessments of the performance of middle managers may be more closely linked to their behaviour than is the case with senior managers.

In this paper, we have reported on and discussed research on the relationship between influencing behaviour and 360-degree assessments of performance. In our sample as a whole, we found statistically significant relationships between these two sets of factors. However, we also found both gender and seniority differences in the pattern of relationships.
The point here is not that men and women and/or senior and middle managers behaved differently but that the same behaviours tended to be judged differently, according to the gender and seniority of those doing the influencing. We found that the observed relationships between influencing behaviour and 360-degree assessments were stronger and more wide-ranging in both our female and middle management samples than in our male and senior management samples. We also found marked similarities between our female and middle management samples, on the one hand, and between our male and senior management samples, on the other, together with clear differences between these female/middle and male/senior pairings.

We went on to explore why there were both gender and seniority differences in the observed relationships. We concluded that the gender differences might be related to the ways in which men and women are judged by different gender stereotypes, as well as to the fact that men and women do different jobs. We concluded that seniority differences might also be due to the fact that senior and middle managers do different jobs. Another possibility was the differential impact of other factors, including managerial and leadership behaviours, as well as motivational factors, at different organisational levels. Anti-leadership theories suggested other explanations, including the possibility that senior management roles are more symbolic, less substantive and more constrained than middle management roles.

Our findings have relevance to managers and leaders at all organisational levels, as well as to professionals involved in human resource management and development. Five major implications are briefly considered in the following.

First, our findings reinforce the conclusion that there are few, if any, influencing behaviours that apply to all situations.
In consequence, it is essential to focus on individual behaviours appropriate to particular situations.

Second, our findings support the interpersonal model previously proposed, and highlight the role of expectancies in the relationship between behaviour and impact.

Third, our findings suggest that 360-degree assessments of performance are vulnerable to gender stereotyping. This suggests that the assessment of performance within organisational performance management systems may also be susceptible to such bias. It is, therefore, important for human resource managers, line managers and professionals who use 360-degree assessments in their work to be aware of the possibility of such bias and to take the necessary steps to mitigate it.

Fourth, if bias can arise from gender stereotypes, it may also arise from other stereotypes. It may, therefore, be worth carrying out further research on other stereotypes, including those relating to race, disability, age, sexual orientation, and religion and belief.

Finally, there is a clear need to investigate factors other than influencing behaviours that may be related to 360-degree performance assessments. We hope to publish further research on the relationship between 360-degree assessments and leadership behaviour, as well as personality, in the near future.
Figure 1 The relationship between influencing styles and 360-degree assessments for the whole sample
Figure 2 Gender differences in relationships between influencing styles and 360-degree assessments
Figure 3 Seniority differences in relationships between influencing styles and 360-degree assessments
Table I Findings for the sample as a whole
Table II Findings for the sample of female managers
Table III Findings for the sample of male managers
Table IV Findings for the sample of senior managers
Table V Findings for the sample of middle managers
- The paper builds on previous articles considering influencing behaviour in the workplace. These articles present a model of interpersonal influence and describe how individual influencing behaviour varies in different contexts. They identified the need for further investigation into the effectiveness of such behaviours in those contexts. This research utilises 360-degree performance assessments as an indicator of the "effectiveness" or impact of workplace influencing behaviours.
[SECTION: Findings] This paper is about the relationship between influencing behaviour (the influencing strategies used by people at work and the ways in which they combine them into influencing styles) and 360-degree assessments of their performance. It also considers gender and seniority differences in such assessments. It describes and discusses research carried out on a mixed-gender group of senior and middle managers working in the public sector in the UK. The paper builds on an earlier series of articles on influencing behaviour (Manning et al., 2008a, b, c). These articles described how individual influencing behaviour tends to vary in different contexts. They readily acknowledged that simply finding that particular influencing behaviours tend to be used in specific contexts tells us nothing about the effectiveness of such behaviours in those contexts. In order to assess effectiveness, it is necessary to provide some independent indicator of effectiveness or impact. 360-degree assessments of performance provide one such indicator.

The paper begins by considering three underlying questions. First, why use 360-degree assessments to measure the effectiveness of particular influencing behaviours? Second, why consider gender differences in such assessments? Third, why consider seniority differences in such assessments? There is then a brief description of the research methods, including the instruments used to measure influencing behaviour and 360-degree assessments, as well as the statistical tools used to analyse the findings. This is followed by a description of the research findings, including the overall findings and specific findings on gender and seniority differences. The article goes on to discuss the findings, including the extent to which they are consistent with the previous research on influencing behaviour by the authors, and why the relationships between influencing behaviours and 360-degree assessments differ according to both the gender and seniority of managers.
The article concludes by recapping the main themes examined and discussing their practical implications.

There are a number of reasons for using 360-degree assessments to measure the effectiveness of particular influencing behaviours. Previous research by the authors found clear relationships between influencing behaviours and contextual factors, including the roles played by individuals at work and what is expected of them in these roles. That research presented a social psychological model of the process of interpersonal influence, in which the influencer (or agent) seeks to influence the influencee (or target). The behaviour of the agent leads to a response from the target. Perceptions, in the form of expectancies, play an important part in both influencer behaviour and influencee response. Expectancies include prior judgements, hunches and predictions about the behaviours of others, as well as about the social contexts within which interaction occurs. Of particular relevance to this study are the feedback loops between the behaviour of the agent and the expectancies of the target, as well as between the response of the target and the expectancies of the agent. This study investigates the ways in which the agent's influencing behaviour impacts on the judgements, predictions and hunches of the person, or target, the agent seeks to influence.

360-degree assessments of performance offer an appropriate and interesting indicator with which to investigate this relationship between behaviour and impact. 360-degree assessments, sometimes referred to as "multi-rater performance appraisals" (McCarthy and Garavan, 2001), offer standardised measures of the judgements of behaviours. They are used in both performance appraisal and development planning systems (Huggett, 1998). They are widely accepted as a robust process for collecting feedback on perceptions of behaviour in the workplace and are utilised by almost all Fortune 500 companies (Newbold, 2008).
Goodge and Burr (1999), in their review of the literature on the usefulness of 360-degree feedback, concluded that there is a surprisingly large number of academic evaluations showing real changes in competencies and attitudes as a result of 360-degree feedback programmes. This suggests that while 360-degree assessments may be "subjective", they do tap into something relevant to the performance of people at work. The use of "subjective" assessments of individuals forms an integral element of organisational performance management systems, and the methods of rating performance used in such systems and in 360-degree assessments typically have much in common.

Utilising 360-degree assessments as an indicator of behavioural impact is particularly relevant to perceptual or "transactional" theories of leadership (see, for example, Greene, 1975). These theories see leadership as involving transactions between the leader and subordinates that affect their relationships. Particular attention is focussed on the way that leaders emerge and how this requires the consent of followers. In this sense, leadership exists only after it is acknowledged by followers. It is a perceptual phenomenon in the mind of followers. 360-degree assessment provides a standardised indicator of such perceptions. In conclusion, 360-degree assessments of performance provide one possible way of measuring the impact of particular behaviours. They are consistent with the authors' process model of interpersonal influence in that they shed light on the "expectancies" of those being influenced.

There are a number of reasons for exploring gender differences in 360-degree assessments of influencing behaviours. There is, of course, an extensive body of theory and research on gender, sexuality and organisations, as discussed in Thompson and McHugh (2002). However, there are specific reasons for looking at 360-degree assessments.
In his review of the literature, Fletcher (1999) found that there are gender differences in self-assessment and 360-degree appraisal. In particular, he concluded that women tend to rate themselves less positively than do men, are less susceptible to leniency effects, and show greater congruence between their self-ratings and external ratings of behaviour and performance. Ford (2005) also questions the gendered nature of diagnostic and developmental processes, including 360-degree assessment tools. In the light of these findings, the exploration of gender differences in self-assessed influencing behaviour and 360-degree performance assessments warrants further research.

Contingency theories of management and leadership suggest the need to explore seniority differences in 360-degree assessments of behaviour. There is a long tradition in management thought that argues against the idea that there is "one best approach" to management. These "contingency" or "relativist" theories propose that there are no universal management or leadership traits or behaviours. Appropriate behaviour is contingent on, or relative to, the particular context, situation or circumstances in which the individual operates. For examples of these theories, see Reddin (1970), Tannenbaum and Schmidt (1973) and Vroom and Yetton (1973). These theories identify a variety of contextual factors, including characteristics of the leader, the led, the task and the wider situation. Contingency theories suggest that different sets of influencing behaviours may be appropriate in different leadership roles and situations. It may, for example, be that the jobs carried out by managers in more senior positions differ significantly from those carried out by more junior individuals, and that different behaviours may be more or less appropriate, depending on the seniority of the job holder.
Indeed, one of the findings in our earlier research was that influencing behaviour varied considerably according to the overall level of responsibility of the job holder. Another reason for exploring seniority differences in 360-degree assessments of behaviour is that they may be linked, in some way(s), to gender differences in such assessments, given the well-established fact that men are over-represented, and women under-represented, at the more senior levels of organisations. It may, therefore, be useful to consider whether there is a stronger relationship between self-assessed behaviour and 360-degree assessments of performance among individuals at lower organisational levels. It may also be useful to try to control for differences in seniority when investigating gender differences.

Our research looked at the relationship between an individual's influencing behaviour and 360-degree assessments of their performance. The information used in the research was collected from male and female senior and middle managers attending the People Management Skills for Senior Managers course run by the National School of Government, a large public sector training establishment in the UK. Influencing behaviour was assessed using the Influencing Strategies and Styles Profile, developed by Tony Manning and published by Management Learning Resources. 360-degree performance assessments were carried out using an instrument developed by Sir John Hunt of the London Business School and used by the National School of Government. Details of the models and measures that underpin this research can be obtained from the lead author, Tony Manning.

Influencing behaviour. The framework for analysing influencing behaviour is described in Manning and Robertson (2003) and Manning et al. (2008a). It identifies six sets of strategies that people may use in their attempts to influence others at work: reason, assertion, exchange, courting favour, coercion and partnership.
It argues that research on the ways individuals combine, or avoid, these strategies makes it possible to identify three broad dimensions of influence style. Thus an individual's influence style can be described by locating them, in three-dimensional space, on three dimensions of influence:

Factor 1. Bystander versus shotgun. The "bystander" engages in relatively infrequent influence attempts, using little of any of the strategies. In contrast, the "shotgun" engages in relatively frequent influence attempts, using all of the strategies and using them frequently.

Factor 2. Strategist versus opportunist. The "strategist" uses reason, assertion and partnership, while avoiding the use of courting favour and exchange. In contrast, the "opportunist" tends to use courting favour and exchange, while avoiding reason, assertion and partnership.

Factor 3. Collaborator versus battler. The "collaborator" uses reason, partnership, courting favour and exchange, while avoiding assertion and, above all, coercion. In contrast, the "battler" tends to use assertion and coercion, while avoiding reason, partnership, courting favour and exchange.

360-degree performance assessments. Information for the 360-degree assessments was typically collected from the individual's line manager, colleagues and members of staff. When appropriate, it was also collected from more senior managers and external stakeholders. Scores were produced on five broad aspects of performance and overall satisfaction.
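The three style dimensions described above can be sketched as a simple scoring scheme. The fragment below is illustrative only: the actual scoring of the Influencing Strategies and Styles Profile is not published here, and the weights, function name and strategy keys are assumptions chosen solely to mirror the verbal definitions of the three dimensions.

```python
# Illustrative (hypothetical) scoring of the three influence-style
# dimensions from the frequencies of the six influencing strategies.
STRATEGIES = ("reason", "assertion", "exchange",
              "courting_favour", "coercion", "partnership")

def style_scores(freq):
    """freq maps each of the six strategies to a usage frequency
    (e.g. on a 0-10 scale). High scores correspond to the shotgun,
    strategist and collaborator poles; low (or negative) scores to
    the bystander, opportunist and battler poles."""
    def mean(keys):
        return sum(freq[k] for k in keys) / len(keys)

    # Bystander vs shotgun: overall frequency of influence attempts.
    shotgun = mean(STRATEGIES)
    # Strategist vs opportunist: reason/assertion/partnership against
    # courting favour/exchange.
    strategist = (mean(("reason", "assertion", "partnership"))
                  - mean(("courting_favour", "exchange")))
    # Collaborator vs battler: reason/partnership/courting favour/exchange
    # against assertion and coercion.
    collaborator = (mean(("reason", "partnership",
                          "courting_favour", "exchange"))
                    - mean(("assertion", "coercion")))
    return {"shotgun": shotgun, "strategist": strategist,
            "collaborator": collaborator}
```

For example, a profile with high reason and partnership but no coercion would score positively on the collaborator dimension, while the mirror-image profile would score negatively (i.e. towards the battler pole).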
The five aspects of performance were: task-oriented behaviours; maintenance and motivation behaviours; appraisal and development behaviours; behaviours that stimulate innovation and differentiation; and leadership behaviours.

The statistical analysis of the data. In order to explore the relationship between influencing behaviour and 360-degree assessments of performance, we used two statistical tools: one established the degree of correlation between the two sets of variables and the other the statistical significance of the observed relationships. We initially looked at the degree of correlation and its statistical significance for the total data set. We then divided our data into male and female sub-sets, as well as senior and middle management sub-sets, and repeated the statistical analysis.

However, we recognised that our sub-sets were not equal in size and that this might distort our findings. For example, we had more middle managers than senior managers, more male managers than female managers, and our senior manager sub-set had more male managers in it than female managers. We therefore divided our data into four sub-sets, namely senior-male, senior-female, middle-male and middle-female, and repeated the statistical analysis on these groupings. This allowed us to produce what we called "adjusted" data as well as "raw" data. For example, the "adjusted" data for senior managers is the average of the findings for male senior and female senior managers. The findings from both "raw" and "adjusted" data are available on request from the lead author, Tony Manning. In the account of findings that follows, we report only on the "adjusted" data, as we consider this less subject to distortion arising from an unequal gender and/or seniority mix in the sub-sets. We present data on the observed relationships between both facets of influencing behaviour, the frequency of use of the six strategies and the three style dimensions, and 360-degree performance assessments.
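The "adjusted" averaging described above can be sketched as follows. This is a minimal illustration, assuming Pearson correlations computed within each gender-by-seniority sub-group and then averaged with equal weight across genders; the record fields and function names are hypothetical, not the authors' actual procedure or instrument.

```python
import numpy as np

def subgroup_correlations(records, style_key, rating_key):
    """Pearson correlation between a self-assessed style score and a
    360-degree rating, computed separately within each gender-by-seniority
    sub-group (record fields are illustrative)."""
    groups = {}
    for rec in records:
        groups.setdefault((rec["gender"], rec["seniority"]), []).append(rec)
    corrs = {}
    for key, recs in groups.items():
        x = [r[style_key] for r in recs]
        y = [r[rating_key] for r in recs]
        corrs[key] = float(np.corrcoef(x, y)[0, 1])
    return corrs

def adjusted_correlation(corrs, seniority):
    """'Adjusted' figure for a seniority level: the unweighted mean of the
    male and female sub-group correlations, so an unequal gender mix
    within that level cannot dominate the result."""
    return (corrs[("male", seniority)] + corrs[("female", seniority)]) / 2
```

The same equal-weight averaging can be applied across seniority levels to produce "adjusted" figures for the male and female samples, and across all four sub-groups for the sample as a whole.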
The discussion focuses on the relationship between influencing style and 360-degree assessments.

The sample as a whole. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample as a whole, including male and female managers at senior and middle management levels. Table I looks at influencing strategies and influencing styles. Table I suggests that, for the sample as a whole, the most positive 360-degree assessments are associated with bystander, strategist and collaborator styles of influence. This is illustrated in Figure 1. Figures 1-3 show the average degree of correlation between each of the three style dimensions and the six sub-scales included in the 360-degree assessment, along with the highest and lowest observed relationship with that style dimension.

Male and female managers. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of female managers, including those at senior and middle management levels. Table II looks at influencing strategies and influencing styles. Table II suggests that, for the sample of female managers, the most positive 360-degree assessments are associated with bystander, strategist and collaborator styles of influence, particularly with collaborator styles, where the observed relationships are strongest and most wide-ranging. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of male managers, including those at senior and middle management levels. Table III looks at influencing strategies and influencing styles. Table III shows that, for the sample of male managers, there are no statistically significant relationships between influencing styles and 360-degree assessments.
However, a number of relationships approached statistical significance, specifically bystander (in relation to task), strategist (in relation to leadership and overall satisfaction) and battler (in relation to task). The main differences in the observed relationships between influencing styles and 360-degree assessments between male and female managers are on the collaborator-battler scale. In the female sample there is a very strong positive relationship with the collaborator style, whereas in the male sample there is a very weak relationship with the battler style. There are also much smaller differences on the strategist-opportunist scale. In both the male and female samples, there is a positive relationship between the strategist style and positive 360-degree assessments, although the relationship is much stronger in the female sample. This highlights another, more general difference between the male and female samples: the observed relationship between the two sets of variables is much stronger in the female sample. The general pattern of gender differences is illustrated in Figure 2.

Senior and middle managers. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of senior managers, including male and female managers. Table IV looks at influencing strategies and influencing styles. Table IV indicates that, for the sample of senior managers, there are no statistically significant relationships, although there is a near-significant relationship between the 360-degree assessment for overall satisfaction and the strategist style. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of middle managers, including male and female managers.
Table V looks at influencing strategies and influencing styles. Table V shows that, for the sample of middle managers, there are statistically significant relationships between all three influencing style dimensions and 360-degree assessments. The most positive 360-degree assessments, where the observed relationships are strongest and most wide-ranging, are associated with the collaborator style, although there are also positive relationships with both the bystander and strategist styles. The main differences in the observed relationships between influencing styles and 360-degree assessments between senior and middle managers lie on the collaborator-battler scale. In the middle management sample there is a very strong positive relationship with the collaborator style, whereas in the senior management sample there is a weak relationship with the battler style. There are also much smaller differences on the bystander-shotgun scale. In the middle management sample, there is a positive relationship between the bystander style and positive 360-degree assessments, whereas the two variables are largely unrelated in the senior management sample. This highlights another, more general difference between the senior and middle management samples: the observed relationship between the two sets of variables is much stronger in the middle management sample. The general pattern of seniority differences is illustrated in Figure 3.

The 360-degree assessment tool used in this research is designed to be used in leadership and management development activities rather than as a pure research instrument. It does, however, shed light on some of the observed relationships between influencing behaviour and contextual variables found by the present authors (Manning et al., 2008) in the previous series of articles, although the exact terms used may vary. In particular, the following findings from this study appear consistent with the earlier findings:

1.
Bystander styles of influence receive positive 360-degree assessments:
* when used by middle managers rather than senior managers; and
* in relation to task orientation.

2. Strategist styles of influence tend to receive positive 360-degree assessments:
* when used by both middle and senior managers; and
* in relation to task orientation.

3. Collaborator styles of influence tend to receive positive 360-degree assessments:
* when used by middle managers; and
* in relation to stimulating innovation, group maintenance and leadership.

The findings presented here support the findings of previous research on gender differences in self-assessment and 360-degree assessments (Fletcher, 1999). In particular, we found a much stronger relationship between self-assessed influencing behaviour and 360-degree assessments in our female sample than in our male sample. However, we also found a much stronger relationship between self-assessed influencing behaviour and 360-degree assessments in our middle management sample than in our senior management sample. Moreover, although both gender and seniority differences were clearly observable and independent, seniority differences appeared greater than gender differences, as there was a stronger relationship between self-assessed influencing behaviour and 360-degree assessments in the male middle manager sub-group than in the female senior manager sub-group. We found both similarities and differences between the observed relationships in male and senior managers, on the one hand, and female and middle managers, on the other. In the way of similarities, collaborator styles of influence tended to be rated very positively in both the female and middle manager groups, whereas battler styles were weakly related to negative assessments.
In the way of differences, we found gender differences on the strategist-opportunist dimension (with strategist styles being more positively assessed in the female sample) but much less in the way of seniority differences, and seniority differences on the bystander-shotgun dimension (with bystanders being more positively assessed in the middle management sample) but much less in the way of gender differences. The findings on both gender and seniority differences in the observed relationships between the two sets of factors raise a number of interesting questions about why such differences occur.

First, why are there perceived gender differences? Why do some influencing behaviours appear to be more positively valued when done by men and others by women, and vice versa? We propose at least two possible reasons for such observed differences, and both may be true, at least in part. The first possibility is that men and women are judged by different gender stereotypes, while the second is that men and women tend to do different kinds of jobs. These two possibilities are considered briefly in the following. The observed gender differences may arise from the fact that men and women are judged by different gender stereotypes. One interpretation of our findings, put very simply, is that the stereotype of a "good" female manager is someone who is collaborative, whereas the stereotype of a "good" male manager is someone who is strong and decisive. The corollary is, of course, that female managers tend not to be valued if they are strong and decisive, while male managers tend not to be valued if they are collaborative. In consequence, individuals who conform to the "good" stereotype for their gender are likely to receive positive assessments from others, and vice versa. The second possible explanation of the gender differences is that male and female managers tend to do different jobs. Wilson (2004) argues that organisations differ according to their gender regimes, in that they are structured according to the symbolism of gender, with women doing women's tasks (e.g. caring and cleaning) and occupying female jobs (e.g. nursing and secretarial), thereby perpetuating the symbolic system of subordination and subservience. Wilson (2004) considers research which illustrates that women are likely to occupy management positions in less prestigious organisations, occupy less prestigious positions in organisations, are found disproportionately at lower organisational levels and are likely to be dominated by males, even when holding more senior positions than those males. This argument is supported by Ford's (2005) considerations of the gendered nature of leadership models within organizational theory. It may be that collaborative behaviour was valued in our sample of female managers because they tend to do jobs where collaboration is an integral part of their roles. Conversely, strong and decisive behaviour may have been more central to the roles played by male managers in our sample.

Second, why are there perceived seniority differences? Why do some influencing behaviours appear to be more positively valued when done by senior managers and others by middle managers, and vice versa? It seems to us that there are at least four possible reasons for such observed differences. While each may be true, at least in part, some seem more plausible than others. One possibility is that middle management jobs may be more homogeneous than senior management jobs. In consequence, more clearly observable relationships are likely to be found between influencing behaviours and 360-degree performance assessments among middle managers than senior managers. This is something we hope to explore further, differentiating between different types of jobs rather than seniority per se.
However, we would expect middle management jobs to be varied, as well as senior management jobs, and so we doubt that this is a sufficient explanation for the findings.

A second possibility is that factors other than influencing behaviours have a greater impact on 360-degree assessments in the senior management sample than in the middle management one. These other factors include leadership and management behaviours, as well as motivational factors. It is likely that leadership behaviours play a particularly important part in senior management jobs and that these behaviours are quite separate from the influencing behaviours explored here. Examples of such leadership behaviours include communicating a compelling vision, building teams, and involving and developing staff. We intend to look at leadership and management behaviours, and their relationship to 360-degree assessments, in a subsequent and related article. While we would expect both senior and middle management jobs to include leadership and/or management elements, we accept that "leadership" elements may be more important in senior management positions, while "management" elements may be more important in middle management positions. We follow Kotter's (1990) distinctions between these two separate but inter-related concepts.

While these two hypothetical explanations for the observed seniority differences are possible and worthy of further research, they do not seem to provide sufficient explanations. We therefore turn our attention to what are sometimes called "anti-leadership" theories. According to Van Seters and Field's (1990) review of leadership theory, there are two main anti-leadership perspectives, the "ambiguity" and "substitute" perspectives. We argue that both may help explain the observed seniority differences found in our research.

In the "ambiguity" perspective, leadership is seen as a purely perceptual phenomenon, something that exists only in the mind of the observer. This idea has much in common with the "transactional" theory of leadership referred to previously. However, according to the "ambiguity" perspective, the leader is a symbol whose actual performance is of little or no consequence, while leadership itself is an encompassing term we use to describe organisational changes we do not understand. While we would not want to accept the extreme form of this view, that the concept of leadership be abandoned altogether, we do think it possible that senior management jobs may tend to be more symbolic and less substantive than middle management jobs.

In the "substitute" perspective, the concept of leadership is retained, although it is argued that the characteristics of the task, subordinates and organisation can prevent the leader from affecting subordinate performance. We have some sympathy for this view and suspect that people often have very high expectations of their leaders but, in reality, leadership roles are highly constrained and/or impacted on by a wide range of factors outside the control of the leader. In contrast, people may have more realistic expectations of middle managers, whose roles may be more substantive and less symbolic. Thus, 360-degree assessments of the performance of middle managers may be more closely linked to their behaviour than is the case with senior managers.

In this paper, we have reported on and discussed research on the relationship between influencing behaviour and 360-degree assessments of performance. In our sample as a whole, we found statistically significant relationships between these two sets of factors. However, we also found both gender and seniority differences in the pattern of relationships.
The point here is not that men and women and/or senior and middle managers behaved differently but that the same behaviours tended to be judged differently, according to the gender and seniority of those doing the influencing. We found that the observed relationships between influencing behaviour and 360-degree assessments were stronger and more wide-ranging in both our female and middle management samples than in our male and senior management samples. We also found pronounced similarities between our female and middle management samples, on the one hand, and between our male and senior management samples, on the other, while there were differences between the female/middle and male/senior groupings.

We went on to explore why there were both gender and seniority differences in the observed relationships. We concluded that the gender differences might be related to the ways in which men and women are judged by different gender stereotypes, as well as to the fact that men and women do different jobs. We concluded that seniority differences might also be due to the fact that senior and middle managers do different jobs. Another possibility was the differential impact of other factors, including managerial and leadership behaviours, as well as motivational factors, at different organisational levels. Anti-leadership theories suggested other explanations, including the possibility that senior management roles were more symbolic, less substantive and more constrained than middle management roles.

Our findings have relevance to managers and leaders at all organisational levels, as well as to professionals involved in human resource management and development. Five major implications are briefly considered in the following. First, our findings reinforce the conclusion that there are few, if any, influencing behaviours that apply to all situations. In consequence, it is essential to focus on individual behaviours appropriate to particular situations. Second, our findings support the interpersonal model previously proposed, and highlight the role of expectancies in the relationship between behaviour and impact. Third, our findings suggest that 360-degree assessments of performance are vulnerable to gender stereotyping. This suggests that the assessment of performance within organisational performance management systems may also be susceptible to such bias. It is, therefore, important for human resource managers, line managers and professionals who use 360-degree assessments in their work to be aware of the possibility of such bias and take the necessary steps to mitigate it. Fourth, if bias arises from gender stereotypes, it may also arise from other stereotypes. It may, therefore, be worth carrying out further research on other stereotypes, including race, disability, age, sexual orientation, and religion and belief. Finally, there is a clear need to investigate factors other than influencing behaviours that may be related to 360-degree performance assessments. We hope to publish further research on the relationship between 360-degree assessments and leadership behaviour, as well as personality, in the near future.
Figure 1 The relationship between influencing styles and 360-degree assessments for the whole sample
Figure 2 Gender differences in relationships between influencing styles and 360-degree assessments
Figure 3 Seniority differences in relationships between influencing styles and 360-degree assessments
Table I Findings for the sample as a whole
Table II Findings for the sample of female managers
Table III Findings for the sample of male managers
Table IV Findings for the sample of senior managers
Table V Findings for the sample of middle managers
- The findings extend previous work supporting the idea that there are few, if any, influencing behaviours that apply to all situations and highlight the role of expectancies in work place assessments of influencing behaviours.
[SECTION: Value] This paper is about the relationship between influencing behaviour, the influencing strategies used by people at work and the ways in which they combine them into influencing styles, and 360-degree assessments of their performance. It also considers gender and seniority differences in such assessments. It describes and discusses research carried out on a mixed-gender group of senior and middle managers working in the public sector in the UK. The paper builds on an earlier series of articles on influencing behaviour (Manning et al., 2008a, b, c). These articles described how individual influencing behaviour tends to vary in different contexts. They readily acknowledged that simply finding that particular influencing behaviours tend to be used in specific contexts tells us nothing about the effectiveness of such behaviours in those contexts. In order to assess effectiveness, it is necessary to provide some independent indicator of effectiveness or impact. 360-degree assessments of performance provide one such indicator.

The paper begins by considering three underlying questions. First, why use 360-degree assessments to measure the effectiveness of particular influencing behaviours? Second, why consider gender differences in such assessments? Third, why consider seniority differences in such assessments? There is then a brief description of the research methods, including the instruments used to measure influencing behaviour and 360-degree assessments, as well as the statistical tools used to analyse the findings. This is followed by a description of the research findings, including the overall findings and specific findings on gender and seniority differences. The article goes on to discuss the findings, including the extent to which they are consistent with the previous research on influencing behaviour by the authors, and why the relationships between influencing behaviours and 360-degree assessments differ according to both the gender and seniority of managers.
The article concludes by recapping the main themes examined and discussing their practical implications. There are a number of reasons for using 360-degree assessments to measure the effectiveness of particular influencing behaviours. Previous research by the authors found clear relationships between influencing behaviours and contextual factors, including the roles played by individuals at work and what is expected of them in these roles. That research presented a social psychological model of the process of interpersonal influence, in which the influencer (or agent) seeks to influence the influencee (or target). The behaviour of the agent leads to a response from the target. Perceptions, in the form of expectancies, play an important part in both influencer behaviour and influencee response. Expectancies include prior judgements, hunches and predictions about the behaviours of others, as well as the social contexts within which interaction occurs. Of particular relevance to this study are the feedback loops between the behaviour of the agent and the expectancies of the target, as well as the response of the target and the expectancies of the agent. This study investigates the ways in which the agent's influencing behaviour impacts on the judgements, predictions and hunches of the person, or target, the agent seeks to influence.

360-degree assessments of performance offer an appropriate and interesting indicator with which to investigate this relationship between behaviour and impact. 360-degree assessments, sometimes referred to as "multi-rater performance appraisals" (McCarthy and Garavan, 2001), offer standardised measures of the judgements of behaviours. They are used in both performance appraisal and development planning systems (Huggett, 1998). They are widely accepted as a robust process for collecting feedback on perceptions of behaviour in the workplace and are utilised by almost all Fortune 500 companies (Newbold, 2008).
Goodge and Burr (1999), in their review of the literature on the usefulness of 360-degree feedback, concluded that there is a surprisingly large number of academic evaluations that show real changes in competencies and attitudes as a result of 360-degree feedback programmes. This suggests that while 360-degree assessments may be "subjective", they do tap into something relevant to the performance of people at work. The use of "subjective" assessments of individuals forms an integral element of organisational performance management systems, and the methods of rating performance used in such systems and in 360-degree assessments typically have much in common.

Utilising 360-degree assessments as an indicator of behavioural impact is particularly relevant to perceptual or "transactional" theories of leadership (see, for example, Greene, 1975). These theories see leadership as involving transactions between the leader and subordinates that affect their relationships. Particular attention is focussed on the way that leaders emerge and how this requires the consent of followers. In this sense, leadership exists only after it is acknowledged by followers. It is a perceptual phenomenon in the mind of followers. 360-degree assessment provides a standardised indicator of such perceptions. In conclusion, 360-degree assessments of performance provide one possible way of measuring the impact of particular behaviours. They are consistent with the authors' process model of interpersonal influence in that they shine light on the "expectancies" of those being influenced.

There are a number of reasons for exploring gender differences in 360-degree assessments of influencing behaviours. There is, of course, an extensive body of theory and research on gender, sexuality and organisations, as discussed in Thompson and McHugh (2002). However, there are specific reasons for looking at 360-degree assessments.
In his review of the literature, Fletcher (1999) found that there are gender differences in self-assessment and 360-degree appraisal. In particular, he concluded that women tend to rate themselves less positively than do men, are less susceptible to leniency effects, and show greater congruence between their self-ratings and external ratings of behaviour and performance. Ford (2005) also calls into question the gendered nature of diagnostic and developmental processes, including 360-degree assessment tools. In the light of these findings, the exploration of gender differences in self-assessed influencing behaviour and 360-degree performance assessments warrants further research.

Contingency theories of management and leadership suggest the need to explore seniority differences in 360-degree assessments of behaviour. There is a long tradition in management thought that argues against the idea that there is "one best approach" to management. These "contingency" or "relativist" theories propose that there are no universal management or leadership traits or behaviours. Appropriate behaviour is contingent on, or relative to, the particular context, situation or circumstances in which the individual operates. For examples of these theories, see Reddin (1970), Tannenbaum and Schmidt (1973) and Vroom and Yetton (1973). These theories identify a variety of contextual factors, including characteristics of the leader, the led, the task and the wider situation. Contingency theories suggest that different sets of influencing behaviours may be appropriate in some leadership roles and situations. It may, for example, be that the jobs carried out by managers in more senior positions differ significantly from those carried out by more junior individuals, and that different behaviours may be more or less appropriate, depending on the seniority of the job holder.
Indeed, one of the findings in our earlier research was that influencing behaviour varied considerably according to the overall level of responsibility of the jobholder. Another reason for exploring seniority differences in 360-degree assessments of behaviour is that they may be linked, in some way(s), to gender differences in such assessments, given the well-established fact that men are over-represented, and women under-represented, at the more senior levels of organisations. It may, therefore, be useful to consider whether there is a stronger relationship between self-assessed behaviour and 360-degree assessments of performance among individuals at lower organisational levels. It may also be useful to try to control for differences in seniority when investigating gender differences.

Our research looked at the relationship between an individual's influencing behaviour and 360-degree assessments of their performance. The information used in the research was collected from male and female senior and middle managers attending the People Management Skills for Senior Managers course run by the National School of Government, a large public sector training establishment in the UK. Influencing behaviour was assessed using the Influencing Strategies and Styles Profile, developed by Tony Manning and published by Management Learning Resources. 360-degree performance assessments were carried out using an instrument developed by Sir John Hunt of the London Business School and used by the National School of Government. Details of the models and measures that underpin this research can be obtained from the lead author, Tony Manning.

Influencing behaviour
The framework for analysing influencing behaviour is described in Manning and Robertson (2003) and Manning et al. (2008a). It identifies six sets of strategies that people may use in their attempts to influence others at work: reason, assertion, exchange, courting favour, coercion and partnership.
It argues that research on the ways individuals combine, or avoid, these strategies makes it possible to identify three broad dimensions of influence style. Thus an individual's influence style can be described by locating them, in three-dimensional space, on three dimensions of influence:

Factor 1. Bystander versus shotgun. The "bystander" engages in relatively infrequent influence attempts, using little of any of the strategies. In contrast, the "shotgun" engages in relatively frequent influence attempts, using all of the strategies and using them frequently.

Factor 2. Strategist versus opportunist. The "strategist" uses reason, assertion and partnership, while avoiding the use of courting favour and exchange. In contrast, the "opportunist" tends to use courting favour and exchange, while avoiding reason, assertion and partnership.

Factor 3. Collaborator versus battler. The "collaborator" uses reason, partnership, courting favour and exchange, while avoiding assertion and, above all, coercion. In contrast, the "battler" tends to use assertion and coercion, while avoiding reason, partnership, courting favour and exchange.

360-degree performance assessments
Information for the 360-degree assessments was typically collected from the individual's line manager, colleagues and members of staff. When appropriate, it was also collected from more senior managers and external stakeholders. Scores were produced on five broad aspects of performance and overall satisfaction.
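To make the three style dimensions concrete, the sketch below shows one plausible way to locate an individual on them from the six strategy scores. The actual scoring rules of the Influencing Strategies and Styles Profile are not published in this text, so the simple sums and differences used here, and the example ratings, are illustrative assumptions only, following the verbal definitions of the three factors given earlier.

```python
# Hypothetical scoring sketch: NOT the proprietary profile's actual algorithm.
# Each strategy score is a frequency-of-use rating; higher = used more often.

def style_profile(scores):
    """Locate an individual on the three assumed style dimensions.

    scores: dict with keys 'reason', 'assertion', 'exchange',
    'courting_favour', 'coercion', 'partnership'.
    """
    s = scores
    # Factor 1: low overall use of all strategies -> bystander; high -> shotgun.
    bystander_shotgun = sum(s.values())
    # Factor 2: reason/assertion/partnership versus courting favour/exchange.
    strategist_opportunist = (s["reason"] + s["assertion"] + s["partnership"]
                              - s["courting_favour"] - s["exchange"])
    # Factor 3: reason/partnership/courting favour/exchange versus
    # assertion and, above all, coercion.
    collaborator_battler = (s["reason"] + s["partnership"]
                            + s["courting_favour"] + s["exchange"]
                            - s["assertion"] - s["coercion"])
    return {"bystander_shotgun": bystander_shotgun,
            "strategist_opportunist": strategist_opportunist,
            "collaborator_battler": collaborator_battler}

# Invented example: frequent reason and partnership, little assertion or coercion,
# which the assumed scoring reads as a strategist-leaning collaborator.
example = style_profile({"reason": 4, "assertion": 1, "exchange": 2,
                         "courting_favour": 3, "coercion": 0, "partnership": 4})
```

Under these assumed weightings, positive scores on factors 2 and 3 indicate strategist and collaborator tendencies respectively, matching the direction of the dimensions as described in the text.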
The five aspects of performance were: task-oriented behaviours; maintenance and motivation behaviours; appraisal and development behaviours; behaviours that stimulate innovation and differentiation; and leadership behaviours.

The statistical analysis of the data
In order to explore the relationship between influencing behaviour and 360-degree assessments of performance, we used two statistical tools: one established the degree of correlation between the two sets of variables and the other the statistical significance of the observed relationships. We initially looked at the degree of correlation and its statistical significance for the total data set. We then divided our data into male and female sub-sets, as well as senior and middle management sub-sets, and repeated the statistical analysis. However, we recognised that our sub-sets were not equal in size and that this might distort our findings. For example, we had more middle managers than senior managers, more male managers than female managers, and our senior manager sub-set had more male managers in it than female managers. We therefore divided our data into four sub-sets, namely, senior-male, senior-female, middle-male and middle-female, and repeated the statistical analysis on these groupings. This allowed us to produce what we called "adjusted" data as well as "raw" data. For example, the "adjusted" data for senior managers is the average of the findings for male senior and female senior managers. The findings from both "raw" and "adjusted" data are available on request from the lead author, Tony Manning. In the account of findings that follows, we only report on the "adjusted" data, as we consider this less subject to distortion arising from an unequal gender and/or seniority mix in the sub-sets. We present data on the observed relationships between both facets of influencing behaviour, the frequency of use of the six strategies and the three style dimensions, and 360-degree performance assessments.
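The "raw" versus "adjusted" analysis described above can be sketched as follows. This is a minimal illustration on fabricated data: the text does not name the correlation statistic used, so Pearson's r is an assumption here, and the group sizes and scores are invented.

```python
# Sketch of the sub-group correlation analysis; data and statistic are assumptions.
from itertools import product
import math
import random

random.seed(1)

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated records: (gender, seniority, style score, 360-degree score).
records = [("F" if random.random() < 0.5 else "M",
            "middle" if random.random() < 0.6 else "senior",
            random.gauss(0, 1), random.gauss(0, 1))
           for _ in range(200)]

# "Raw" correlations for each of the four gender-by-seniority sub-sets.
raw = {}
for g, lvl in product("FM", ("senior", "middle")):
    sub = [(x, y) for gg, ss, x, y in records if gg == g and ss == lvl]
    raw[(g, lvl)] = pearson_r([x for x, _ in sub], [y for _, y in sub])

# "Adjusted" seniority figures: average the male and female sub-set results,
# so an unequal gender mix cannot distort the seniority comparison.
adjusted_senior = (raw[("F", "senior")] + raw[("M", "senior")]) / 2
adjusted_middle = (raw[("F", "middle")] + raw[("M", "middle")]) / 2
```

The design choice mirrors the authors' stated rationale: because the sub-sets are unequal in size, each "adjusted" figure weights the male and female sub-group results equally rather than pooling individuals, limiting the distortion from the gender mix within each seniority level.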
The discussion focuses on the relationship between influencing style and 360-degree assessments.

The sample as a whole
The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample as a whole, including male and female managers at senior and middle management levels. Table I looks at influencing strategies and influencing styles. Table I suggests that, for the sample as a whole, the most positive 360-degree assessments are associated with bystander, strategist and collaborator styles of influence. This is illustrated in Figure 1. Figures 1-3 show the average degree of correlation between each of the three style dimensions and the six sub-scales included in the 360-degree assessment, along with the highest and lowest observed relationship with that style dimension.

Male and female managers
The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of female managers, including those at senior and middle management levels. Table II looks at influencing strategies and influencing styles. Table II suggests that, for the sample of female managers, the most positive 360-degree assessments are associated with bystander, strategist and collaborator styles of influence, particularly with collaborator styles, where the observed relationships are strongest and most wide-ranging. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of male managers, including those at senior and middle management levels. Table III looks at influencing strategies and influencing styles. Table III shows that, for the sample of male managers, there are no statistically significant relationships between influencing styles and 360-degree assessments. However, a number of relationships approached statistical significance, specifically bystander (in relation to task), strategist (in relation to leadership and overall satisfaction) and battler (in relation to task). The main differences in the observed relationships between influencing styles and 360-degree assessments between male and female managers are on the collaborator-battler scale. In the female sample there is a very strong positive relationship with the collaborator style, whereas in the male sample there is a very weak relationship with the battler style. There are also much smaller differences on the strategist-opportunist scale. In both the male and female samples, there is a positive relationship between the strategist style and positive 360-degree assessments, although the relationship is much stronger in the female sample. This highlights another, more general difference between the male and female samples: the observed relationship between the two sets of variables is much stronger in the female sample. The general pattern of gender differences is illustrated in Figure 2.

Senior and middle managers
The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of senior managers, including both male and female managers. Table IV looks at influencing strategies and influencing styles. Table IV indicates that, for the sample of senior managers, there are no statistically significant relationships, although there is a near-significant relationship between the 360-degree assessment for overall satisfaction and the strategist style. The following tables show the statistically significant relationships between influencing behaviour and 360-degree performance assessments for the (adjusted) sample of middle managers, including both male and female managers.
Table V looks at influencing strategies and influencing styles.Table V shows that, for the sample of middle managers, there are statistically significant relationships between all three influencing style dimensions and 360-degree assessments. The most positive relationships 360-degree assessments, the strongest and most wide-ranging, are associated with the collaborator style, although there are also positive relationships with both the bystander and strategist styles.The main differences in the observed relationships between influencing styles and 360-degree assessment between senior and middle managers lie on the collaborator-battler scale. In the middle management sample there is a very strong positive relationship with the collaborator style, whereas in the senior management sample there is a weak relationship with the battler style. There are also much smaller differences on the bystander-shotgun scale. In the middle management sample, there is a positive relationship between the bystander style and positive 360-degree assessments, whereas the two variables are largely unrelated in the senior management sample. This highlights another more general difference between the senior and middle management samples, in that the observed relationship between the two sets of variables is much stronger in the middle management sample. The general pattern of seniority differences is illustrated in the Figure 3. The 360-degree assessment tool used in this research is designed-to-be-used in leadership and management development activities rather than as a pure research instrument. It does, however, shine light on some of the observed relationships between influencing behaviour and contextual variables found by the present authors (Manning et al., 2008) in the previous series of articles, although the exact terms used may vary. In particular, the following findings from this study appear consistent with the earlier findings:1. 
Bystander styles of influence receive positive 360-degree assessments:* when used by middle managers rather than senior managers; and* in relation to task orientation.2. Strategist styles of influence tend to receive positive 360-degree assessment:* when used by both middle and senior managers; and* in relation to task orientation.3. Collaborator styles of influence tend to receive positive 360-degree assessments:* when used by middle managers; and* in relation to stimulating innovation, group maintenance and leadership.The findings presented here support the findings of previous research on gender differences in self-assessment and 360-degree assessments (Fletcher, 1999). In particular, we found a much stronger relationship between self-assessed influencing behaviour and 360-degree assessments in our female sample than in our male sample. However, we also found a much stronger relationship between self-assessed influencing behaviour and 360-degree assessments in our middle management sample then in our senior management sample. Moreover, although both gender and seniority differences were clearly observable and independent, seniority differences appeared greater than gender differences, as there was a stronger relationship between self-assessed influencing behaviour and 360-degree assessments in the male middle manager sub-group than in the female senior manager sub-group.We found both similarities and differences between the observed relationships in male and senior managers, on the one hand, and female and middle managers, on the other. In the way of similarities, collaborator styles of influence tended-to-be-rated very positively in both female and middle manager groups, whereas battler styles were weakly related to negative assessments. 
By way of differences, we found gender differences on the strategist-opportunist dimension (with strategist styles being more positively assessed in the female sample) but much less in the way of seniority differences, and seniority differences on the bystander-shotgun dimension (with bystander styles being more positively assessed in the middle management sample) but much less in the way of gender differences. The findings on both gender and seniority differences in the observed relationships between the two sets of factors raise a number of interesting questions about why such differences occur. First, why are there perceived gender differences? Why do some influencing behaviours appear to be more positively valued when performed by men, and others when performed by women? We propose at least two possible reasons for such observed differences, and both may be true, at least in part. The first possibility is that men and women are judged by different gender stereotypes, while the second is that men and women tend to do different kinds of jobs. These two possibilities are considered briefly in the following. The observed gender differences may arise from the fact that men and women are judged by different gender stereotypes. One interpretation of our findings, put very simply, is that the stereotype of a "good" female manager is someone who is collaborative, whereas the stereotype of a "good" male manager is someone who is strong and decisive. The corollary is, of course, that female managers tend not to be valued if they are strong and decisive, while male managers tend not to be valued if they are collaborative. In consequence, individuals who conform to the "good" stereotype for their gender are likely to receive positive assessments from others, and vice versa. The second possible explanation of the gender differences is that male and female managers tend to do different jobs.
Wilson (2004) argues that organisations differ according to their gender regimes, in that they are structured according to the symbolism of gender, with women doing women's tasks (e.g. caring and cleaning) and occupying female jobs (e.g. nursing and secretarial), thereby perpetuating the symbolic system of subordination and subservience. Wilson (2004) considers research illustrating that women are likely to occupy management positions in less prestigious organisations, occupy less prestigious positions within organisations, are found disproportionately at lower organisational levels and are likely to be dominated by males, even when holding more senior positions than those males. This argument is supported by Ford's (2005) consideration of the gendered nature of leadership models within organizational theory. It may be that collaborative behaviour was valued in our sample of female managers because they tend to do jobs where collaboration is an integral part of the role. Conversely, strong and decisive behaviour may have been more central to the roles played by male managers in our sample. Second, why are there perceived seniority differences? Why do some influencing behaviours appear to be more positively valued when performed by senior managers, and others when performed by middle managers? It seems to us that there are at least four possible reasons for such observed differences. While each may be true, at least in part, some seem more plausible than others. One possibility is that middle management jobs may be more homogeneous than senior management jobs. In consequence, more clearly observable relationships are likely to be found between influencing behaviours and 360-degree performance assessments among middle managers than among senior managers. This is something we hope to explore further, differentiating between different types of jobs rather than seniority per se.
However, we would expect middle management jobs to be varied, as well as senior management jobs, and so we doubt that this is a sufficient explanation for the findings. A second possibility is that factors other than influencing behaviours have a greater impact on 360-degree assessments in the senior management sample than in the middle management one. These other factors include leadership and management behaviours, as well as motivational factors. It is likely that leadership behaviours play a particularly important part in senior management jobs and that these behaviours are quite separate from the influencing behaviours explored here. Examples of such leadership behaviours include communicating a compelling vision, building teams, and involving and developing staff. We intend to look at leadership and management behaviours, and their relationship to 360-degree assessments, in a subsequent and related article. While we would expect both senior and middle management jobs to include leadership and/or management elements, we accept that "leadership" elements may be more important in senior management positions, while "management" elements may be more important in middle management positions; here we follow Kotter's (1990) distinction between these two separate but inter-related concepts. While these two hypothetical explanations for the observed seniority differences are possible and worthy of further research, they do not seem sufficient. We therefore turn our attention to what are sometimes called "anti-leadership" theories. According to Van Seters and Field's (1990) review of leadership theory, there are two main anti-leadership perspectives, the "ambiguity" and "substitute" perspectives. We argue that both may help explain the observed seniority differences found in our research. In the "ambiguity" perspective, leadership is seen as a purely perceptual phenomenon, something that exists only in the mind of the observer.
This idea has much in common with the "transactional" theory of leadership referred to previously. However, according to the "ambiguity" perspective, the leader is a symbol whose actual performance is of little or no consequence, while leadership itself is an encompassing term we use to describe organisational changes we do not understand. While we would not want to accept the extreme form of this view - that the concept of leadership be abandoned altogether - we do think it possible that senior management jobs may tend to be more symbolic and less substantive than middle management jobs. In the "substitute" perspective, the concept of leadership is retained, although it is argued that the characteristics of the task, subordinates and organisation can prevent the leader from affecting subordinate performance. We have some sympathy for this view and suspect that people often have very high expectations of their leaders but that, in reality, leadership roles are highly constrained and/or affected by a wide range of factors outside the leader's control. In contrast, people may have more realistic expectations of middle managers, whose roles may be more substantive and less symbolic. Thus, 360-degree assessments of the performance of middle managers may be more closely linked to their behaviour than is the case with senior managers. In this paper, we have reported on and discussed research on the relationship between influencing behaviour and 360-degree assessments of performance. In our sample as a whole, we found statistically significant relationships between these two sets of factors. However, we also found both gender and seniority differences in the pattern of relationships.
The point here is not that men and women and/or senior and middle managers behaved differently, but that the same behaviours tended to be judged differently according to the gender and seniority of those doing the influencing. We found that the observed relationships between influencing behaviour and 360-degree assessments were stronger and more wide-ranging in both our female and middle management samples than in our male and senior management samples. We also found pronounced similarities between our female and middle management samples, on the one hand, and our male and senior management samples, on the other, while there were differences between the female/middle and male/senior samples. We went on to explore why there were both gender and seniority differences in the observed relationships. We concluded that the gender differences might be related to the ways in which men and women are judged by different gender stereotypes, as well as to the fact that men and women do different jobs. We concluded that seniority differences might also be due to the fact that senior and middle managers do different jobs. Another possibility was the differential impact of other factors, including managerial and leadership behaviours, as well as motivational factors, at different organisational levels. Anti-leadership theories suggested further explanations, including the possibility that senior management roles are more symbolic, less substantive and more constrained than middle management roles. Our findings have relevance to managers and leaders at all organisational levels, as well as to professionals involved in human resource management and development. Five major implications are briefly considered in the following. First, our findings reinforce the conclusion that there are few, if any, influencing behaviours that apply to all situations.
In consequence, it is essential to focus on individual behaviours appropriate to particular situations. Second, our findings support the interpersonal model previously proposed, and highlight the role of expectancies in the relationship between behaviour and impact. Third, our findings suggest that 360-degree assessments of performance are vulnerable to gender stereotyping. This suggests that the assessment of performance within organisational performance management systems may also be susceptible to such bias. It is, therefore, important for human resource managers, line managers and professionals who use 360-degree assessments in their work to be aware of the possibility of such bias and take the necessary steps to mitigate it. Fourth, if there is bias arising from gender stereotypes, bias may also arise from other stereotypes. It may, therefore, be worth carrying out further research on other stereotypes, including those relating to race, disability, age, sexual orientation, and religion and belief. Finally, there is a clear need to investigate factors other than influencing behaviours that may be related to 360-degree performance assessments. We hope to publish further research on the relationship between 360-degree assessments and leadership behaviour, as well as personality, in the near future.
Figure 1 The relationship between influencing styles and 360-degree assessments for the whole sample
Figure 2 Gender differences in relationships between influencing styles and 360-degree assessments
Figure 3 Seniority differences in relationships between influencing styles and 360-degree assessments
Table I Findings for the sample as a whole
Table II Findings for the sample of female managers
Table III Findings for the sample of male managers
Table IV Findings for the sample of senior managers
Table V Findings for the sample of middle managers
- The research highlights ways in which the relationship between influencing behaviour and impact differs according to both the gender and seniority of those seeking to influence. This indicates that the "expectancies" of the influencer or target affect perceptions of influencing behaviour and assessments of impact. This is consistent with the model of interpersonal influence previously developed, which includes explicit reference to feedback loops between behaviour, responses and expectancies. It raises further questions as to the impact of expectancies on 360-degree assessment, and the nature and fairness of assessment within organisational performance management systems.
[SECTION: Purpose] Supply chain uncertainty is becoming an increasingly popular topic in business management. However, few studies have so far provided an in-depth discussion of uncertainty. By its nature, uncertainty cannot be forecast or anticipated beforehand (Knight, 1921). Most business decisions include an element of uncertainty, but there is no formal method for dealing with it (Erasmus et al., 2013). Many business studies have relied heavily on risk management theory and crisis management theory to discuss uncertainty in business management. Most managers treat uncertainty as an important aspect of risk, and about half (54 per cent) of the managers interviewed by Shapira considered uncertainty a factor in risk (March and Shapira, 1987). Giddens (2002) asserted that traditional cultures did not have a concept of risk because they did not need one: danger and hazard were associated with the past and the loss of faith, whereas risk is linked to modernisation and the desire to control the future. Following this logic, uncertainty is strongly associated with the future. Erasmus et al. (2013) described uncertainty in business as a situation where managers are simply unable to identify the various deviations and unable to assess the likelihood of their occurrence. Typically, risks sit somewhere in the middle of the risk-uncertainty spectrum (Juttner et al., 2003; Peck, 2006; Ritchie and Brindley, 2007). In addition, most risks have an element of uncertainty (Erasmus et al., 2013). Risks occur because people never know exactly what will happen in the future. People can use the best forecasts and do every possible analysis, but there is always uncertainty about future events. It is this uncertainty that brings risks (Waters, 2011). Aven (2011) suggested that risk could be defined through probabilities and uncertainties.
In economics, risk is when decision makers are not able or willing to define their utility function (Aven, 2011, p. 21). Uncertainty increases the possibility of risk occurrence, and risk is a consequence of uncertainty. In other words, risk occurs because of uncertainty about the future; this uncertainty means that unexpected events may occur, and when they do, they cause some kind of damage. Both the terms uncertainty and risk may include sources, events and impacts, and they can be used to indicate concepts and/or objects (Saminian-Darash and Rabinow, 2015). Therefore, the term uncertainty is sometimes confused with risk (Sanchez-Rodrigues Vasco et al., 2008). In addition, uncertainty and risk are terms that in practice are often used interchangeably (Peck, 2006). The impacts of supply chain uncertainty and risk on logistics performance are often similar. Tang (2006) suggested that risks in the supply chain are inherent uncertainties: in other words, managers have to face and manage them. In addition, they may influence logistics performance in terms of on-time delivery, freight safety, information and customers. This study examines the impacts of both supply chain uncertainty and risk simultaneously on logistics performance. A courier company is a less-than-truckload third-party logistics (3PL) carrier. 3PLs are sorted into different types, including freight forwarders, courier companies and other companies that integrate and offer subcontracted logistics and transportation services. Courier companies are one of the most significant modules among all 3PL types (Wang et al., 2015b). Moreover, today's courier delivery differs from traditional rail, sea, road or air transport: courier delivery offers fast door-to-door delivery, and customers may be directly involved in the delivery processes. Courier delivery may therefore face more uncertainties and risks than other delivery methods, including air, sea and road.
Moreover, with the rapid development of online business, courier delivery has become a very important method for express small and medium package delivery internationally. However, there are very few studies on the courier business. To provide an intuitive understanding of the impacts of supply chain uncertainty and risk on logistics performance in the courier industry, we surveyed 98 courier companies in Australia; this paper presents an empirical study of supply chain uncertainty and risk in the Australian courier industry. Quantitative methods are deployed to analyse the impacts of supply chain uncertainty and risk on logistics performance. Risk and uncertainty are becoming increasingly popular topics in the supply chain literature (Davis, 1993; Prater, 2005; Sanchez-Rodrigues Vasco et al., 2008; Simangunsong et al., 2012). The definition of supply chain uncertainty was given by Vorst and Beulens (2002) as follows: "Decision-making situations in the supply-chain in which the decision-maker does not know definitely what to decide as he/she is indistinct about the objectives; lacks information about (or understanding of) the supply-chain or its environment; lacks information processing capacities; is unable to accurately predict the impact of possible control actions on supply-chain behavior; or lacks effective control actions (non-controllability)" (Vorst and Beulens, 2002, p. 413). In looking at the emergence of risk in early studies, Miller (1992) distinguished between risk and uncertainty: risks in business refer to unanticipated or negative variation that may influence business performance such as revenues, costs, profit and market share, whereas uncertainty refers to the unpredictability of environmental or organisational variables that impact business performance, or to insufficient information about these variables. Risk is when we do not know what will happen next, but we do know what the distribution looks like.
Uncertainty is when we do not know what will happen next, and we do not even know what the possible distribution looks like (Ritholtz, 2012). In February 2002, Donald Rumsfeld, the then US Secretary of Defense, stated at a Defense Department briefing: "[Reports that say that] something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don't know. But there are also unknown unknowns - there are things we do not know we don't know" (Logan, 2009, p. 712). Moreover, Simchi-Levi et al. (2008) regarded the unknown-unknown as a type of risk associated with scenarios where one cannot identify the likelihood of occurrence; of course, due to their nature, unknown-unknowns are difficult to control, while known-unknowns are more controllable. March and Shapira (1987, p. 1404) defined "risk as the variation in the distribution of possible supply chain outcomes, their likelihood, and their subjective values". Supply chain risks comprise "any risks for the information, material and product flows from original supplier to the delivery of the final product for the end user" (Juttner et al., 2003, p. 200). In the supply chain risk management literature, "risk is unreliable and uncertain resources creating supply chain interruption, whereas uncertainty is matching risk between supply and demand in supply chain processes" (Tang and Nurmaya Musa, 2011, p. 26). Sanchez-Rodrigues Vasco et al. (2008) clarify how these two concepts differ in the supply chain: risk is a function of outcome and probability and hence something that can be estimated. If the probability that an event could occur is low, but the outcome of that event can have a highly detrimental impact on the supply chain, the occurrence of that event represents a considerable risk for the chain.
Uncertainty occurs when decision makers cannot estimate the outcome of an event or the probability of its occurrence (Sanchez-Rodrigues Vasco et al., 2008, p. 390). Supply chain uncertainty and risk are usually interchangeable in practice (Juttner et al., 2003; Peck, 2006; Ritchie and Brindley, 2007). Juttner et al. (2003), Peck (2006) and Prater (2005) suggested that the difference between supply chain uncertainty and risk is blurred to the extent that it is not important to distinguish them. Many supply chain risks are related to uncertainty, and the two are inseparable (McManus and Hastings, 2006; Prater, 2005; Rodrigues et al., 2010; Sanchez-Rodrigues Vasco et al., 2008; Simangunsong et al., 2012). However, some authors (Miller, 1992; Peck, 2006; Wagner and Bode, 2008) suggest that risk is only associated with issues that may lead to negative outcomes. Traditionally, risk and supply chain risk refer to two attributes: first, the expected value does not adequately capture events with low probability but high consequences; and second, rare and extreme events cause substantial negative consequences (Aven, 2011; Tang and Nurmaya Musa, 2011). This study does not only focus on these two extreme situations but is also concerned with day-to-day operational risk and uncertainty in logistics and transport service providers. These are the supply chain risks and uncertainties that most managers want to deal with imminently in a real-world environment (Wang et al., 2014a). Supply chain uncertainty and risk are complex notions that come in many different forms and may include supply chain uncertainty and risk sources, risk consequences and risk drivers (Christopher and Lee, 2004; Juttner et al., 2003; Manuj and Mentzer, 2008; Rodrigues et al., 2008). Several different supply chain uncertainties and risks have been identified in previous research.
Lee (2002) focused on narrower aspects of supply chain uncertainty, identifying two types: supply and demand uncertainty. Davis (1993) illustrated that sources of supply-chain uncertainty were relevant to internal manufacturing processes, supply-side processes or demand-side issues (usually end-customer demand). Mason-Jones and Towill (1998) added a further source, control uncertainty, concerned with the capability of an organisation, in a supply chain uncertainty circle. The model comprised four quadrants - demand side, supply side, manufacturing process and control systems - and suggested that reducing these uncertainties would reduce cost. Wilding (1998) proposed a supply chain complexity triangle, which introduced a new source of uncertainty: parallel interaction. Juttner et al. (2003) suggested three categories: environmental risk, network-related risk and organisational risk. Prater (2005) divided supply chain uncertainty into two separate levels: macro-level uncertainty, a higher-level category referring to risks due to disruptions, and micro-level uncertainty, which relates to more specific sources of uncertainty. Sanchez-Rodrigues et al. (2010) investigated the main causes of contingent uncertainty in transport operations. Murugesan et al. (2013) stated six categories of supply chain risk: supply-side risk, manufacturing-side risk, demand-side risk, information risk, logistics risk and environment risk. Wang et al. (2014b) identified three types of supply chain uncertainty and risk: company-side, customer-side and environment uncertainty and risk. Rangel et al. (2015) identified 16 supply chain risk classifications, within which 56 risk types were sorted in terms of existing conceptual similarities.
Uncertainty increases the risk within supply chains, and risk is a consequence of the external and internal uncertainties that affect a supply chain (Rodrigues et al., 2008). Many authors have recognised that uncertainty is an issue in the supply chain and logistics industry (Davis, 1993; de Leeuw and van den Berg, 2011; Hult et al., 2010; Joseph, 2004; Lee, 2002; Prater, 2005; Rahman, 2011; Rodrigues et al., 2010; Rodrigues et al., 2008; Sanchez-Rodrigues et al., 2010; Simangunsong et al., 2012; Vorst and Beulens, 2002). Hult et al. (2010) illustrated that the uncertainty inherent in the supply chain has an exogenous element for any given participant. For managers, risk is a threat that something might happen to disrupt normal activities or stop things happening as planned (Waters, 2011, p. 12). Prater (2005) demonstrated that many other distinct sources of uncertainty had received insufficient attention in the supply chain. Based on the literature review, supply chain uncertainty and risk is defined in this paper as the potential disturbances to the flow of goods, information and money (Ellegaard, 2008; McKinnon and Ge, 2004; Murugesan et al., 2013; Sanchez-Rodrigues et al., 2010; Simangunsong et al., 2012). Vorst and Beulens (2002) identified sources of uncertainty for supply chain redesign strategies. Rodrigues et al. (2008) developed a logistics-oriented uncertainty model - the logistics uncertainty pyramid model - which includes five sources of uncertainty related to suppliers, customers, carriers, control systems and the external environment. Sanchez-Rodrigues et al. (2010) evaluated the causes of uncertainty in logistics operations. McManus and Hastings (2006) illustrated that risks and opportunities are the consequences of uncertainties to a programme or system.
Supply chain risk and uncertainty has become a major topic in the supply chain literature (Davis, 1993; Prater, 2005; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). Supply chain uncertainty and risk can be categorised in many different ways and from different perspectives (Christopher and Peck, 2004; Juttner et al., 2003). Previous studies of supply chain uncertainty and risk are summarised in Table I. Supply chain risk mainly reflects negative impacts on logistics performance, such as delays, damage and loss (Sanchez-Rodrigues et al., 2010); several studies (Hoffman, 2006; McKinnon and Ge, 2004; Rodrigues et al., 2008; Simangunsong et al., 2012; Juttner et al., 2003) argued that supply chain uncertainty and risk have negative impacts on logistics performance. Others, such as Merschmann and Thonemann (2011), found no significant relationship between uncertainty and performance, while Saminian-Darash and Rabinow (2015) argued that present uncertainty may have positive impacts in the future. This empirical study focuses on the relationship between supply chain uncertainty and risk and logistics performance in the courier business. Findings from the study provide a starting point for understanding supply chain uncertainty and risk in the courier industry. The quantitative study uses the structural equation modelling approach to examine the relationship between supply chain uncertainty and risk and logistics performance in the Australian courier industry. According to an extensive literature review and previous studies, supply chain uncertainty and risk is associated with logistics performance (Prater, 2005; Sanchez-Rodrigues et al., 2010; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). Risk has long been discussed in the financial industry: it is calculable, and in practice statistical risk measures often depend on the probability of loss and the size of loss (Beneplanc and Rochet, 2011).
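The calculable notion of risk mentioned above - probability of loss combined with size of loss - can be sketched as a simple expected-loss computation. The event names and figures below are hypothetical, purely for illustration; they are not data from this study.

```python
# Expected-loss risk measure: risk score = probability of loss x size of loss.
# Event names and figures are hypothetical, for illustration only.

def expected_loss(events):
    """Sum probability * impact over a list of (probability, impact) pairs."""
    return sum(p * impact for p, impact in events)

# Hypothetical courier-delivery disruption events: (probability, loss in $)
events = [
    (0.10, 500.0),   # delayed pickup/delivery
    (0.02, 5000.0),  # damaged or lost freight
    (0.05, 1200.0),  # road congestion/closure rerouting cost
]

print(round(expected_loss(events), 2))  # 50 + 100 + 60 = 210.0
```

Under this measure, a rare but costly event (freight loss) can dominate a frequent but cheap one, which is exactly the low-probability, high-consequence attribute the literature highlights.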
Traditional quantitative risk assessment is based on the probability and severity of the risk (Aven, 2011; Juttner et al., 2003). Another way to assess risk and uncertainty is to focus on the consequences: different risks and uncertainties can be categorised according to their consequences (Juttner et al., 2003; Simangunsong et al., 2012). In order to measure supply chain uncertainty and risk accurately, the scale of supply chain uncertainties and risks was adopted and developed from previous studies (Murugesan et al., 2013; Rodrigues et al., 2008; Simangunsong et al., 2012). Participants were invited to rate the impacts of both supply chain uncertainty and risk in terms of their severity for their companies. The study demonstrated and tested the research framework in Figure 1. The exogenous variable is supply chain uncertainty and risk; the measurement scale is developed from previous studies (Murugesan et al., 2013; Simangunsong et al., 2012; Wang et al., 2014b). It consists of a three-dimensional structure - company-side uncertainty and risk, customer-side uncertainty and risk and environment uncertainty and risk - to measure supply chain uncertainty and risk in the Australian courier industry. The endogenous variable is logistics performance. The measurement scale of logistics performance was empirically validated in Australian courier firms before we used it in this study (Wang et al., 2015a). According to the literature review and actual courier operations, logistics performance is measured by delivery performance, information accuracy, customer satisfaction and freight safety (Holmberg, 2000; Lai, 2004). There is debate about the impacts of supply chain uncertainty and risk: Saminian-Darash and Rabinow (2015) argued that uncertainty may bring positive impacts in the future. Supply chain uncertainty and risk is an issue in the logistics and supply chain industry (Prater, 2005; Lee, 2002; Sanchez-Rodrigues et al., 2010).
Uncertainty and risk in the supply chain may lead to poor logistics performance (Christopher and Lee, 2004; Simangunsong et al., 2012). For example, Sanchez-Rodrigues et al. (2010) examined transport-related uncertainty and found that the main drivers impacting sustainability and transport operations are delays, variable demand/poor information, delivery constraints and insufficient supply chain integration. Therefore, we propose that there is a significant relationship between supply chain uncertainty and risk and logistics performance. The conceptual model is then tested in the Australian courier industry. The partial least squares approach to structural equation modelling is used for the empirical data analysis. We follow the procedures of Hair (2010) to validate the measurement models and the structural model. Measurement models are used to assess the reliability and validity of the scale items. The proposed hypotheses are tested in the structural model. Confirmatory factor analysis (CFA) is conducted to validate the measurement models. Path analysis is used to validate the relationship between supply chain uncertainty and risk and logistics performance. Path analysis is the basis of structural equation modelling; it is a technique for estimating the unknown parameters of a system of simultaneous equations (Lowery and Gaskin, 2014). The measurement models of supply chain uncertainty and risk and logistics performance are drawn from previous studies (Fawcett and Cooper, 1998; Murugesan et al., 2013; Pichet and Shinya, 2008; Simangunsong et al., 2012; Wang et al., 2014b, 2015a).

The instrument

To ensure the reliability and validity of the instruments, the instrument development focuses on the courier industry. An extensive literature review is conducted to identify supply chain uncertainty and risk variables.
In addition, a pilot study is undertaken to refine the variables and questionnaire with supply chain and logistics academics and managers who have extensive experience in the transport and logistics industry. Based on previous studies, supply chain uncertainty and risk have very similar impacts on logistics performance. Moreover, they are inseparable, and managers often identify and manage them simultaneously in a real-world environment. A seven-point Likert scale is used to assess the impacts of supply chain uncertainty and risk in terms of their severity in the company, where "1" represents "No problem" and "7" represents "Very severe problem". The perception of logistics performance is measured by a Likert scale ranging from 1 "strongly disagree" to 7 "strongly agree".

Data

According to the latest business transport report of the Australian Bureau of Statistics, businesses employed 80,000 persons in the postal and courier pickup and delivery services industry subdivision at end June 2011 (Australian Bureau of Statistics, 2012). The Australian courier industry is thus relatively small compared to other traditional industries. We surveyed 98 Australian courier companies, employing an online survey for data collection. A total of 229 responses were recorded on the website, of which 162 surveys were fully completed; the 67 incomplete surveys were deleted from the data set. The sample characteristics are as follows: 80 respondents (49 per cent) are general/branch/operations managers, 27 (17 per cent) are sales/customer service/other managers, and 14 (9 per cent) are supervisors/team leaders. In total, 121 responses (75 per cent) are from management or supervisory roles in the Australian courier industry. The participants are from all over Australia; the top three states are Victoria (66 responses, 41 per cent), New South Wales (40 responses, 25 per cent) and Queensland (14 responses, 9 per cent).
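The screening step described above - keeping only fully completed surveys and then profiling the remaining sample - can be sketched as follows. The field names and toy records are hypothetical stand-ins, not the study's actual data file.

```python
# Screening step for the survey data: drop incomplete responses, then
# profile the retained sample. Field names and records are hypothetical.

def screen(responses):
    """Keep only fully completed surveys (no missing answers)."""
    return [r for r in responses if all(v is not None for v in r.values())]

# Toy records standing in for the recorded online responses
responses = [
    {"role": "operations manager", "state": "VIC", "q1": 5},
    {"role": "supervisor", "state": "NSW", "q1": None},  # incomplete -> deleted
    {"role": "sales manager", "state": "QLD", "q1": 3},
]

complete = screen(responses)
share_vic = sum(r["state"] == "VIC" for r in complete) / len(complete)
print(len(complete), round(share_vic * 100))  # 2 50
```

The same filter-then-profile pattern yields the role and state percentages reported for the 162 completed surveys.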
In total, 107 participants (66 per cent) have more than five years of experience in the transport/supply chain/logistics industry. Data examination is the initial stage of data analysis: we screen the data, evaluate the impact of different types of data and test the assumptions underlying the multivariate techniques (Hair, 2010). Analysing empirical data using standardised procedures is important for successful quantitative research (Dornyei, 2007). This section presents the findings from the survey and the research models. The questionnaire measures supply chain uncertainty and risk under the different categories, and the top five supply chain uncertainties and risks are identified in each category; the results are shown in Table II. These top uncertainties and risks may point to potential problems in the Australian courier industry. Table III provides the descriptive statistics, Cronbach's α values, composite reliability and average variance extracted (AVE) values. The validity of the constructs is further tested by a CFA in a path model. Top supply chain uncertainties and risks: The descriptive findings identify the top five supply chain uncertainties and risks in each category: company-side, customer-side and environment uncertainty and risk. For example, delay in pickup and delivery is the top company-side uncertainty and risk in the Australian courier industry. Overall, the top supply chain uncertainties and risks in terms of their impacts on logistics performance in the courier industry are delays due to customers' mistakes, road congestion/closures, higher customer expectations, unstable fuel prices, delays in pickup/delivery, labour/driver shortage, incorrect delivery information and delay or unavailability of delivery information. The results are in line with previous studies (Sanchez-Rodrigues et al., 2009).
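Ranking the items by mean severity, as in Table II's top-five lists, is a simple sort; the item names and mean values below are invented placeholders, not the study's figures.

```python
import pandas as pd

# Invented mean severity scores on the 1-7 scale for company-side items.
means = pd.Series({
    "delays in pickup/delivery": 3.1,
    "customer complaint": 2.8,
    "damaged/lost freight": 2.6,
    "labour/driver shortage": 2.4,
    "vehicle breakdown": 2.2,
    "IT outage": 1.9,
})

# Top five items by mean severity within the category.
top5 = means.sort_values(ascending=False).head(5)
print(top5.index[0])  # → delays in pickup/delivery
```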
Overall, environment uncertainty and risk has the highest average mean value (2.472), followed by customer-related risk (2.406). The greatest impact of supply chain uncertainty and risk in the Australian courier industry therefore comes from outside the company. According to the survey, most Australian courier firms understand the impact of company-side risk, and managers try to deal with the internal aspects, so internal risks, including logistics and information risk, have less impact than external aspects such as customer and environment risk. Operating cost may be a challenge in the Australian courier industry, and managers may need to focus on incidents such as delays in pickup/delivery, customer complaints and damaged/lost freight within the company. Reliability and validity: Table III presents the descriptive statistics, reliability and validity, including factor loadings, t-values, means and standard deviations for the items, and Cronbach's α, composite reliability and average variance extracted values for the constructs. Reliability is an assessment of the degree of consistency between multiple measurements of a variable (Hair, 2010). Two reliability test methods are available: test-retest and the reliability coefficient. This study applies the reliability coefficient, Cronbach's α, to test the reliability of the scale. Reliability is demonstrated by a composite reliability greater than 0.700; in this study, all composite reliability (CR) values exceed 0.900. Validity indicates the degree of accuracy of measurements. Convergent validity assesses the degree to which two measures of the same concept are correlated (Hair, 2010); high correlations are required, and a value greater than 0.7 is considered satisfactory. In contrast, discriminant validity is the degree to which two conceptually similar concepts are distinct (Hair, 2010).
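The reliability and convergent-validity statistics named here can be computed as follows. The item scores and loadings are synthetic, and the formulas are the standard ones: Cronbach's α from item and total variances, CR and AVE from standardised factor loadings.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """Standardised loadings; error variance = 1 - loading^2."""
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings**2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean squared standardised loading."""
    return float((loadings**2).mean())

# Synthetic example: four well-correlated 7-point-style items.
rng = np.random.default_rng(0)
factor = rng.normal(size=200)
items = np.column_stack([factor + rng.normal(scale=0.5, size=200) for _ in range(4)])

loadings = np.array([0.85, 0.82, 0.88, 0.80])  # illustrative loadings
print(round(cronbach_alpha(items), 3),
      round(composite_reliability(loadings), 3),
      round(ave(loadings), 3))
```

With these illustrative loadings, CR is about 0.90 and AVE about 0.70, consistent with the thresholds quoted in the text (CR above 0.700, AVE above 0.500).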
Discriminant validity indicates that a scale is sufficiently different from other, similar concepts; a correlation below 0.7 between such concepts is normally considered satisfactory (Hair, 2010). In this study, all AVE values are greater than 0.500, and communalities are greater than 0.500. Discriminant validity is demonstrated by the square root of the AVE being greater than any of the inter-construct correlations (Hair et al., 2012). Path model: The path model results are presented in Figure 2. A confidence interval indicates how reliable survey results are; in applied practice, confidence intervals are typically stated at the 95 per cent confidence level (p<0.05 for t>1.96) (Zar, 1984). The structural relationships in the model are estimated using a bootstrap routine with 1,000 iterations. The bootstrap samples yield significance levels of p<0.10 for t>1.65, p<0.05 for t>1.96 and p<0.01 for t>2.58 (Hair and Anderson, 2010). Path coefficients are used to test the hypotheses in this paper. The standardised path estimates (β) represent the strength, direction and significance of the relationships between constructs; β is considered large, medium or small for values greater than 0.37, 0.24 and 0.10, respectively. The absolute value of a path coefficient should not exceed 1; a negative value indicates a negative relationship between two concepts and a positive value a positive relationship. With a standardised coefficient (β) of -0.43 and a t-value of 5.48, the hypothesis is supported: the significant path coefficient shows that logistics performance is influenced by supply chain uncertainty and risk. The results demonstrate a significant negative relationship between supply chain uncertainty and risk and logistics performance in the Australian courier industry. The paper presents a study of supply chain uncertainty and risk in the Australian courier industry.
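The bootstrap significance test described for the path model can be sketched as follows: resample respondents with replacement, re-estimate the standardised coefficient each time, and form t as the original estimate over the bootstrap standard error, compared against the 1.65/1.96/2.58 thresholds. The data are synthetic, with a negative relationship built in.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic standardised scores for 162 respondents with a
# built-in negative uncertainty-performance relationship.
n = 162
scur = rng.normal(size=n)
lp = -0.43 * scur + rng.normal(scale=0.9, size=n)

def std_beta(x, y):
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(np.mean(zx * zy))  # standardised slope = correlation

beta = std_beta(scur, lp)

# Bootstrap with 1,000 iterations, as in the study.
boots = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)        # resample respondents
    boots.append(std_beta(scur[idx], lp[idx]))

t = abs(beta) / np.std(boots, ddof=1)       # estimate / bootstrap SE
print(round(beta, 2), round(t, 2), t > 1.96)  # significant at p < 0.05 if t > 1.96
```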
Although courier services have become a fast-growing segment of logistics and transport, there are very few studies of the courier industry. This study clarifies the impact of supply chain uncertainty and risk on logistics performance, and helps both researchers and managers to understand and manage supply chain uncertainty and risk in the courier industry. In addition, managers may use logistics performance to evaluate and monitor their performance as service providers. The study focuses on supply chain uncertainty and risk, and on logistics performance, and we find empirical evidence supporting the negative impact of supply chain uncertainty and risk on logistics performance in the Australian courier industry. The evidence on this relationship is mixed in the literature: Merschmann and Thonemann (2011) could not find a significant relationship between supply chain uncertainty and performance, Saminian-Darash and Rabinow (2015) argued that uncertainty may have a positive impact, and Simangunsong et al. (2012) argued that supply chain uncertainty and risk may have negative impacts. In addition, the courier industry may need more attention to improve its efficiency (Chang and Yen, 2012). It is therefore important to examine and clarify the impacts of supply chain uncertainty and risk in the courier industry. The empirical results provide insights into supply chain risk management in the courier industry, and management may use the scales of supply chain uncertainty and risk and logistics performance to improve internal operations. This may enhance the overall efficiency and effectiveness of logistics performance in the courier industry. Based on the mean values, the supply chain uncertainties and risks are identified in the different categories; the top supply chain uncertainties and risks include delays due to customers' mistakes, road congestion/closures, higher customer expectations, unstable fuel prices and delays in pickup/delivery.
The results are consistent with Simangunsong et al. (2012) in that supply chain uncertainty influences logistics performance. Similar results were found by Aven (2012) and Beneplanc and Rochet (2011), in that supply chain risk may increase operating costs and affect performance. As the study focuses on the Australian courier industry, supply chain uncertainty and risk are categorised in terms of actual courier operations and the literature (Joseph, 2004; Manuj and Mentzer, 2008; Sanchez-Rodrigues et al., 2009; Sodhi and Tang, 2012). The courier industry has unique characteristics, including fast door-to-door delivery and customers' direct involvement in the delivery process, which distinguish it from traditional logistics and transport businesses; it is therefore important to investigate the impacts of supply chain uncertainty and risk in this industry. The results confirm that supply chain uncertainty and risk in the Australian courier industry consists of three main dimensions, company-side, customer-side and environment uncertainty and risk, and that the measures have high reliability and validity. According to the empirical analysis, external supply chain uncertainty and risk, comprising customer-side and environment uncertainty and risk, has a higher severity than company-side uncertainty and risk in Australian courier companies. This directs managers to focus on external supply chain uncertainty and risk. For example, customers' mistakes are among the top uncertainties and risks on the customer side; Rangel et al. (2015) likewise identified customer risk in the delivery process. Management may consider how to improve communication with customers and provide additional instructions to guide customers in using the services.
For company-side uncertainty and risk, managers may improve service flexibility, customer service and quick response to unexpected events and/or problems; this covers risks internal to the organisation and the supply chain (Christopher and Peck, 2004; Rangel et al., 2015). For environment uncertainty and risk, the survey results show that road congestion is the top supply chain uncertainty and risk in the environment category, and it has become a major challenge in the Australian courier industry. The results are in line with a recent report in the Australian newspaper The Age that the cost of road congestion is set to triple to more than $9 billion a year by 2031 in Victoria, Australia (Carey, 2015). Environment uncertainty and risk covers risks external to the supply chain (Christopher and Peck, 2004; Manuj and Mentzer, 2008). Moreover, courier firms rely heavily on road transport, so it is important to pay attention to external supply chain uncertainty and risk in the courier industry. There is a close relationship between supply chain risks and performance in the transport industry (Naim et al., 2010; Sanchez-Rodrigues et al., 2009). In addition, logistics performance plays a vital role in courier service performance; for example, delivery performance may directly influence customer satisfaction and quality of service (Pichet and Shinya, 2008). Moreover, logistics performance measurement has gained popularity in logistics and supply chain management, and the logistics performance of courier firms in particular may provide valuable insights into 3PL management (Bolumole, 2003). In this study, the logistics performance assessment is based on the actual operations of courier firms, and we consider customer, freight, information and delivery performance (Fawcett and Cooper, 1998; Jayaram and Tan, 2010; Lai, 2004; Pichet and Shinya, 2008).
This may provide a way to assess 3PL providers' performance and help management to monitor and control courier service performance (Pichet and Shinya, 2008). The data analysis shows high reliability and validity, and we find that supply chain uncertainty and risk has a statistically significant negative relationship with logistics performance. This implies negative impacts on customer satisfaction, operating costs, on-time delivery, freight security and information accuracy. The findings provide directions for implementing strategies to manage supply chain uncertainty and risk and to improve logistics performance; further research should seek alternative configurations and solutions for managing supply chain uncertainty and risk. Because the study focuses on the Australian courier industry, the implications of the findings for other sectors may be limited. However, the research models may be examined and validated in different contexts, which would help to refine the study and enrich the supply chain literature. In addition, it may encourage both academics and practitioners to understand and pay attention to supply chain uncertainty and risk management in the courier industry.
|
The purpose of this paper is to present empirical evidence of the impacts of supply chain uncertainty and risk on logistics performance in the Australian courier industry.
|
[SECTION: Method] Supply chain uncertainty is becoming an increasingly popular topic in business management. However, few studies have so far provided an in-depth discussion of uncertainty. By its nature, uncertainty cannot be forecast or anticipated beforehand (Knight, 1921). Most business decisions include an element of uncertainty, but there is no formal solution for dealing with it (Erasmus et al., 2013). Many business studies have relied heavily on risk management theory and crisis management theory to discuss uncertainty in business management. Most managers treat uncertainty as an important aspect of risk, and about half (54 per cent) of the managers interviewed by Shapira considered uncertainty a factor in risk (March and Shapira, 1987). Giddens (2002) asserted that traditional cultures did not have a concept of risk because they did not need one: danger and hazard were associated with the past and the loss of faith, whereas risk is linked to modernisation and the desire to control the future. Following this logic, uncertainty is strongly associated with the future. Erasmus et al. (2013) described uncertainty in business as a situation in which managers are simply unable to identify the various deviations and unable to assess the likelihood of their occurrence. Typically, risks sit somewhere in the middle of the risk-uncertainty spectrum (Juttner et al., 2003; Peck, 2006; Ritchie and Brindley, 2007), and most risks have an element of uncertainty (Erasmus et al., 2013). Risks occur because people never know exactly what will happen in the future. People can use the best forecasts and do every possible analysis, but there is always uncertainty about future events, and it is this uncertainty that brings risks (Waters, 2011). Aven (2011) suggested that risk could be defined through probabilities and uncertainties.
In economics, risk arises when decision makers are unable or unwilling to define their utility function (Aven, 2011, p. 21). Uncertainty increases the possibility of risk occurring, and risk is a consequence of uncertainty. In other words, risk occurs because of uncertainty about the future: this uncertainty means that unexpected events may occur, and when they do, they cause some kind of damage. Both uncertainty and risk may include sources, events and impacts, and both terms can be used to indicate concepts and/or objects (Saminian-Darash and Rabinow, 2015). Therefore, the term uncertainty is sometimes confused with risk (Sanchez-Rodrigues et al., 2008), and in practice the two terms are often used interchangeably (Peck, 2006). The impacts of supply chain uncertainty and risk on logistics performance are often similar. Tang (2006) suggested that risks in the supply chain are inherent uncertainties: in other words, managers have to face and manage them. They may influence logistics performance through on-time delivery, freight safety, information and customers. The study examines the impacts of both supply chain uncertainty and risk simultaneously on logistics performance. A courier company is a less-than-truckload third-party logistics (3PL) carrier. 3PLs can be classified into different types, including freight forwarders, courier companies and other companies that integrate and offer subcontracted logistics and transportation services; courier companies are one of the most significant types of 3PL (Wang et al., 2015b). In addition, today's courier delivery differs from traditional rail, sea, road or air transport: it offers fast door-to-door delivery, and customers may be directly involved in the delivery processes. Courier delivery may therefore face more uncertainties and risks than other delivery methods, including air, sea and road.
Moreover, with the rapid development of online business, courier delivery has become a very important method for express international delivery of small and medium packages. However, there are very few studies of the courier business. To provide an intuitive understanding of the impacts of supply chain uncertainty and risk on logistics performance in the courier industry, we surveyed 98 courier companies in Australia, and the paper presents an empirical study of supply chain uncertainty and risk in the Australian courier industry. Quantitative methods are used to analyse the impacts of supply chain uncertainty and risk on logistics performance. Risk and uncertainty are increasingly prominent topics in the supply chain literature (Davis, 1993; Prater, 2005; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). Vorst and Beulens (2002) define supply chain uncertainty as follows: "Decision-making situations in the supply-chain in which the decision-maker does not know definitely what to decide as he/she is indistinct about the objectives; lacks information about (or understanding of) the supply-chain or its environment; lacks information processing capacities; is unable to accurately predict the impact of possible control actions on supply-chain behavior; or lacks effective control actions (non-controllability)" (Vorst and Beulens, 2002, p. 413). Looking at the emergence of risk in early studies, Miller (1992) distinguished risk from uncertainty: risks in business refer to unanticipated or negative variation that may influence business performance, such as revenues, costs, profit and market share, whereas uncertainty refers to the unpredictability of environmental or organisational variables that affect business performance, or to insufficient information about these variables. Risk is when we do not know what will happen next, but we do know what the distribution looks like.
Uncertainty is when we do not know what will happen next, and we do not even know what the possible distribution looks like (Ritholtz, 2012). In February 2002, Donald Rumsfeld, the then US Secretary of Defense, stated at a Defense Department briefing: "[Reports that say] something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don't know. But there are also unknown unknowns - there are things we do not know we don't know" (Logan, 2009, p. 712). Moreover, Simchi-Levi et al. (2008) regarded the unknown-unknown as a type of risk associated with scenarios whose likelihood of occurrence cannot be identified; by their nature, unknown-unknowns are difficult to control, while known-unknowns are more controllable. March and Shapira (1987, p. 1404) defined "risk as the variation in the distribution of possible supply chain outcomes, their likelihood, and their subjective values". Supply chain risks comprise "any risks for the information, material and product flows from original supplier to the delivery of the final product for the end user" (Juttner et al., 2003, p. 200). In the supply chain risk management literature, "risk is unreliable and uncertain resources creating supply chain interruption, whereas uncertainty is matching risk between supply and demand in supply chain processes" (Tang and Nurmaya Musa, 2011, p. 26). Sanchez-Rodrigues et al. (2008) clarify how the two concepts differ in the supply chain: risk is a function of outcome and probability and hence something that can be estimated. If the probability that an event could occur is low, but the outcome of that event can have a highly detrimental impact on the supply chain, the occurrence of that event represents a considerable risk for the chain.
Uncertainty occurs when decision makers cannot estimate the outcome of an event or the probability of its occurrence (Sanchez-Rodrigues et al., 2008, p. 390). Supply chain uncertainty and risk are usually interchangeable in practice (Juttner et al., 2003; Peck, 2006; Ritchie and Brindley, 2007). Juttner et al. (2003), Peck (2006) and Prater (2005) suggested that the difference between supply chain uncertainty and risk is blurred to the extent that it is not important to distinguish between them. Many supply chain risks are related to uncertainty, and the two are inseparable (McManus and Hastings, 2006; Prater, 2005; Rodrigues et al., 2010; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). However, some authors (Miller, 1992; Peck, 2006; Wagner and Bode, 2008) suggest that risk is only associated with issues that may lead to negative outcomes. Traditionally, risk and supply chain risk refer to two attributes: first, the expected value does not adequately capture events with low probability but high consequences; and second, rare and extreme events cause substantial negative consequences (Aven, 2011; Tang and Nurmaya Musa, 2011). This study focuses not only on these two extreme situations but also on the day-to-day operational risk and uncertainty faced by logistics and transport service providers: the supply chain risks and uncertainties that most managers must deal with imminently in a real-world environment (Wang et al., 2014a). Supply chain uncertainty and risk are complex notions that come in many different forms and may include supply chain uncertainty and risk sources, risk consequences and risk drivers (Christopher and Lee, 2004; Juttner et al., 2003; Manuj and Mentzer, 2008; Rodrigues et al., 2008). Several different supply chain uncertainties and risks have been identified in previous research.
Focusing on narrower aspects of supply chain uncertainty, Lee (2002) identified two types: supply and demand uncertainty. Davis (1993) showed that sources of supply-chain uncertainty relate to internal manufacturing processes, supply-side processes or demand-side issues (usually end-customer demand). Mason-Jones and Towill (1998) added a further source, control uncertainty, concerned with the capability of an organisation, within a supply chain uncertainty circle; the model comprises four quadrants (demand side, supply side, manufacturing process and control systems) and suggests that reducing these uncertainties would reduce cost. Wilding (1998) proposed a supply chain complexity triangle, which introduces a new source of uncertainty: parallel interaction. Juttner et al. (2003) suggested three categories: environmental risk, network-related risk and organisational risk. Prater (2005) divided supply chain uncertainty into two separate levels: macro-level uncertainty, a higher-level category referring to risks due to disruptions, and micro-level uncertainty, which relates to more specific sources of uncertainty. Sanchez-Rodrigues et al. (2010) investigated the main causes of contingent uncertainty in transport operations. Murugesan et al. (2013) stated six categories of supply chain risk: supply-side risk, manufacturing-side risk, demand-side risk, information risk, logistics risk and environment risk. Wang et al. (2014b) identified three types of supply chain uncertainty and risk: company-side, customer-side and environment uncertainty and risk. Rangel et al. (2015) identified 16 supply chain risk classifications, under which 56 risk types were sorted in terms of their conceptual similarities.
Uncertainty increases the risk within supply chains, and risk is a consequence of the external and internal uncertainties that affect a supply chain (Rodrigues et al., 2008). Many authors have recognised uncertainty as an issue in the supply chain and logistics industry (Davis, 1993; de Leeuw and van den Berg, 2011; Hult et al., 2010; Joseph, 2004; Lee, 2002; Prater, 2005; Rahman, 2011; Rodrigues et al., 2010; Rodrigues et al., 2008; Sanchez-Rodrigues et al., 2010; Simangunsong et al., 2012; Vorst and Beulens, 2002). Hult et al. (2010) illustrated that the uncertainty inherent in the supply chain has an exogenous element for any given participant. For managers, risk is a threat that something might happen to disrupt normal activities or stop things happening as planned (Waters, 2011, p. 12). Prater (2005) demonstrated that many other distinct sources of uncertainty had received insufficient attention in the supply chain literature. Based on the literature review, supply chain uncertainty and risk is defined in this paper as potential disturbances to the flow of goods, information and money (Ellegaard, 2008; McKinnon and Ge, 2004; Murugesan et al., 2013; Sanchez-Rodrigues et al., 2010; Simangunsong et al., 2012). Vorst and Beulens (2002) identified sources of uncertainty for supply chain redesign strategies. Rodrigues et al. (2008) developed a logistics-oriented uncertainty model, the logistics uncertainty pyramid model, which includes five sources of uncertainty related to suppliers, customers, carriers, control systems and the external environment. Sanchez-Rodrigues et al. (2010) evaluated the causes of uncertainty in logistics operations. McManus and Hastings (2006) illustrated that risks and opportunities are the consequences of the uncertainties in a programme or system.
Supply chain risk and uncertainty have become a major topic in the supply chain literature (Davis, 1993; Prater, 2005; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012), and supply chain uncertainty and risk can be categorised in many different ways and from different perspectives (Christopher and Peck, 2004; Juttner et al., 2003). Previous studies of supply chain uncertainty and risk are summarised in Table I. Supply chain risk mainly has negative impacts on logistics performance, such as delays, damage and loss (Sanchez-Rodrigues et al., 2010); several studies (Hoffman, 2006; McKinnon and Ge, 2004; Rodrigues et al., 2008; Simangunsong et al., 2012; Juttner et al., 2003) argued that supply chain uncertainty and risk have negative impacts on logistics performance. Others, like Merschmann and Thonemann (2011), found no significant relationship between uncertainty and performance, while Saminian-Darash and Rabinow (2015) argued that present uncertainty may have positive impacts in the future. This empirical study focuses on the relationship between supply chain uncertainty and risk and logistics performance in the courier business, and its findings provide a starting point for understanding supply chain uncertainty and risk in the courier industry. The quantitative study uses a structural equation modelling approach to examine this relationship in the Australian courier industry. According to an extensive literature review and previous studies, supply chain uncertainty and risk is associated with logistics performance (Prater, 2005; Sanchez-Rodrigues et al., 2010; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). Risk has long been discussed in the financial industry: it is calculable, and in practice statistical risk measures often depend on the probability of loss and the size of loss (Beneplanc and Rochet, 2011).
Traditional quantitative risk assessment is based on the probability and severity of the risk (Aven, 2011; Juttner et al., 2003). Another way to assess risk and uncertainty is to focus on the consequences; different risks and uncertainties can then be categorised according to those consequences (Juttner et al., 2003; Simangunsong et al., 2012). To measure supply chain uncertainty and risk accurately, the scale of supply chain uncertainties and risks was adopted and developed from previous studies (Murugesan et al., 2013; Rodrigues et al., 2008; Simangunsong et al., 2012). Participants are invited to rate the impacts of both supply chain uncertainty and risk in terms of their severity in their companies. The study tests the research framework shown in Figure 1. The exogenous variable is supply chain uncertainty and risk; its measurement scale is developed from previous studies (Murugesan et al., 2013; Simangunsong et al., 2012; Wang et al., 2014b) and consists of a three-dimensional structure comprising company-side, customer-side and environment uncertainty and risk. The endogenous variable is logistics performance. The measurement scale of logistics performance was empirically validated in Australian courier firms before being used in this study (Wang et al., 2015a). Based on the literature review and actual courier operations, logistics performance is measured by delivery performance, information accuracy, customer satisfaction and freight safety (Holmberg, 2000; Lai, 2004). There is debate about the impacts of supply chain uncertainty and risk: Saminian-Darash and Rabinow (2015) argued that uncertainty may bring positive impacts in the future, yet supply chain uncertainty and risk remains an issue in the logistics and supply chain industry (Prater, 2005; Lee, 2002; Sanchez-Rodrigues et al., 2010).
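The hypothesised framework (three uncertainty-and-risk dimensions feeding one exogenous construct, with a single path to logistics performance) can be written down in lavaan-style SEM syntax. The item names below (cs1..ev3, lp1..lp4) and the second-order formulation are placeholders, not the study's actual questionnaire items or estimated model.

```python
# lavaan-style specification of the hypothesised model; item names
# are placeholders for the questionnaire items.
MODEL_SPEC = """
# measurement model: three dimensions of supply chain uncertainty and risk
COMPANY  =~ cs1 + cs2 + cs3
CUSTOMER =~ cu1 + cu2 + cu3
ENVIRON  =~ ev1 + ev2 + ev3

# second-order construct for supply chain uncertainty and risk
SCUR =~ COMPANY + CUSTOMER + ENVIRON

# logistics performance: delivery, information, customer, freight safety
LP =~ lp1 + lp2 + lp3 + lp4

# structural model: hypothesised negative path
LP ~ SCUR
"""
print("LP ~ SCUR" in MODEL_SPEC)
```

A specification string of this shape could, under these naming assumptions, be handed to an SEM package that accepts lavaan-style syntax for estimation.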
Uncertainty and risk in the supply chain may lead to its poor logistics performance (Christopher and Lee, 2004; Simangunsong et al., 2012). For example, Sanchez-Rodrigues et al. (2010) urged transport-related uncertainty, and the main drivers impacting the sustainability and transport operations are delays, variable demand/poor information, delivery constraints and insufficient supply chain integration. Therefore, we proposed that there is a significant relationship between supply chain uncertainty and risk and logistics performance. And then the conceptual model is tested in the Australian courier industry. Partial least squares approach for structural equation modelling is used to assist empirical data analysis. We follow the procedures from Hair (2010) to validate the measurement models and structure model. Measurement models are used to assess the reliability and validity of the scale items. The proposed hypotheses are tested in the structural model. Confirmatory factor analysis (CFA) is conducted for validating measurement models. Path analysis is used to validate the relationship between supply chain uncertainty and risk and logistics performance. Path analysis is the basis for structural equation model. It is a technique for estimating the unknown parameters of a system of simultaneous equations (Lowery and Gaskin, 2014). The measurement models of supply chain uncertainty and risk and logistics performance have been drawn from previous studies (Fawcett and Cooper, 1998; Murugesan et al., 2013; Pichet and Shinya, 2008; Simangunsong et al., 2012; Wang et al., 2014b, 2015a). The instrument To ensure the reliability and validity of the instruments, the instrument development focuses on the courier industry. An extensive literature review is conducted to identify supply chain uncertainty and risk variables. 
In addition, a pilot study was undertaken to refine the variables and questionnaire with supply chain and logistics academics and managers who have extensive experience in the transport and logistics industry. Based on previous studies, supply chain uncertainty and risk have very similar impacts on logistics performance. Moreover, they are inseparable, and managers often identify and manage them simultaneously in a real-world environment. A seven-point Likert scale was used to assess the impacts of supply chain uncertainty and risk in terms of their severity in the company, where "1" represented "No problem" and "7" represented "Very severe problem". The perception of logistics performance was measured by a Likert scale ranging from 1 "strongly disagree" to 7 "strongly agree".
Data
According to the latest business transport report of the Australian Bureau of Statistics, businesses employed 80,000 persons in the postal and courier pickup and delivery services industry subdivision at the end of June 2011 (Australian Bureau of Statistics, 2012). The Australian courier industry is relatively small compared to other, more traditional industries. We surveyed 98 Australian courier companies, using an online survey for data collection. A total of 229 responses were recorded on the website, of which 162 surveys were fully completed; the 67 incomplete surveys were deleted from the data set. Of the completed responses, 80 (49 per cent) were from general/branch/operations managers, 27 (17 per cent) from sales/customer service/other managers and 14 (9 per cent) from supervisors/team leaders; in total, 121 responses (75 per cent) came from management or supervisory roles in the Australian courier industry. The participants were from all over Australia, with the top three states being Victoria (66 responses, 41 per cent), New South Wales (40 responses, 25 per cent) and Queensland (14 responses, 9 per cent).
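The deletion of incomplete surveys described above can be sketched as a simple completeness filter. The response records and question fields below are hypothetical illustrations, not the study's actual instrument or data:

```python
# Hypothetical raw survey responses; None marks an unanswered item.
responses = [
    {"role": "operations manager", "q_delay": 5, "q_fuel": 4},
    {"role": "team leader",        "q_delay": 3, "q_fuel": None},
    {"role": None,                 "q_delay": 4, "q_fuel": 2},
    {"role": "sales manager",      "q_delay": 6, "q_fuel": 5},
]

# Keep only fully completed surveys, mirroring the study's deletion of
# the 67 incomplete responses from the 229 recorded (162 retained).
complete = [r for r in responses if all(v is not None for v in r.values())]
completion_rate = len(complete) / len(responses)
print(len(complete), completion_rate)  # → 2 0.5
```

In the study this filter corresponds to a completion rate of roughly 71 per cent (162 of 229).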
In total, 107 participants (66 per cent) in this survey have more than five years of experience in the transport/supply chain/logistics industry. Data examination is the initial stage of data analysis: we screen the data, evaluate the impact of different types of data and test the assumptions underlying the multivariate techniques (Hair, 2010). Analysing empirical data using standardised procedures is important for successful quantitative research (Dornyei, 2007). This section presents the findings from the survey and research models. The questionnaire measures supply chain uncertainty and risk under the different categories, and the top five supply chain uncertainties and risks are identified in each category. The results are shown in Table II. The top supply chain uncertainties and risks may indicate potential problems in the Australian courier industry. Table III provides the descriptive statistics, Cronbach's α values, composite reliability and average variance extracted values. The validity of the constructs is further tested by a CFA in a path model.
Top supply chain uncertainties and risks
The descriptive findings identify the top five supply chain uncertainties and risks in each of the categories: company-side, customer-side and environment uncertainty and risk. For example, delay in pickup and delivery is the top company-side uncertainty and risk in the Australian courier industry. Overall, we identified the top supply chain uncertainties and risks in terms of their impacts on logistics performance in the courier industry: delays due to customers' mistakes, road congestion/closures, higher customer expectations, unstable fuel prices, delays in pickup/delivery, labour/driver shortages, incorrect delivery information and delay or unavailability of delivery information. These results are in line with previous studies (Sanchez-Rodrigues et al., 2009).
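The reliability statistics reported in Table III (Cronbach's α, composite reliability and average variance extracted) can be computed from item data and standardised factor loadings with the standard formulas. The loadings below are hypothetical values for illustration only, not those of the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
    variance of the summed scale)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), with error variance 1 - loading^2 per item."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def ave(loadings):
    """Average variance extracted: mean of the squared loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical standardised loadings for one four-item construct.
loadings = [0.82, 0.85, 0.79, 0.88]
print(round(composite_reliability(loadings), 3))  # → 0.902
print(round(ave(loadings), 3))                    # → 0.698
```

Against the thresholds cited in the paper, this hypothetical construct would pass: CR above 0.700 and AVE above 0.500.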
Overall, environment uncertainty and risk had the highest average mean value (2.472), followed by customer-related risk (2.406). The greatest impact of supply chain uncertainty and risk therefore came from outside the company in the Australian courier industry. According to the survey, most Australian courier firms understand the impact of company-side risk, and managers try to deal with internal aspects, so company-side risks, including logistics and information risk, have less impact than external aspects such as customer and environment risk. Operating costs could be a challenge in the Australian courier industry. In addition, managers may need to focus on incidents including delays in pickup/delivery, customer complaints and damaged/lost freight in the company.
Reliability and validity
Table III presents the descriptive statistics, reliability and validity, including the factor loading, t-value, mean and standard deviation for the items, and the Cronbach's α value, composite reliability and average variance extracted value for the constructs. Reliability is an assessment of the degree of consistency between multiple measurements of a variable (Hair, 2010). There are two reliability test methods: test-retest and the reliability coefficient. This study applies the reliability coefficient, Cronbach's α, to test the reliability of the scale. Reliability is demonstrated by a composite reliability greater than 0.700; in this study, all CR values are greater than 0.9. Validity indicates the degree of accuracy of measurements. Convergent validity assesses the degree to which two measures of the same concept are correlated (Hair, 2010); high correlations are required to ensure convergent validity, and a value greater than 0.7 is considered satisfactory. In contrast, discriminant validity is the degree to which two conceptually similar concepts are distinct (Hair, 2010).
This indicates that the scale is sufficiently different from other similar concepts; a correlation between conceptually similar constructs of less than 0.7 is normally considered satisfactory (Hair, 2010). In this study, the AVE values are greater than 0.500, and the communalities are greater than 0.500. Discriminant validity is demonstrated by the square root of the AVE being greater than any of the inter-construct correlations (Hair et al., 2012).
Path model
The path model results are presented in Figure 2. A confidence interval indicates how reliable survey results are; in applied practice, confidence intervals are typically stated at the 95% confidence level (p<0.05 for t>1.96) (Zar, 1984). The structural relationships in the model were estimated using a bootstrap routine with 1,000 iterations. The bootstrapping sample relates to significance levels of p<0.1 for t>1.65, p<0.05 for t>1.96 and p<0.01 for t>2.58 (Hair and Anderson, 2010). Path coefficients are used to test the hypothesis in this paper. The standardised path estimate (β) represents the strength, direction and significance of the relationship between constructs; β is considered large, medium or small for values greater than 0.37, 0.24 and 0.1, respectively. The absolute value of a path coefficient should not be greater than 1; a negative value indicates a negative relationship between two concepts, and a positive value a positive one. According to the data analysis, with a standardised coefficient (β) of -0.43 and a t-value of 5.48, the hypothesis is supported. The significant path coefficient shows that logistics performance is influenced by supply chain uncertainty and risk. The results show that there is a significant negative relationship between supply chain uncertainty and risk and logistics performance in the Australian courier industry. The paper presents a study of supply chain uncertainty and risk in the Australian courier industry.
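The bootstrap routine described above can be sketched as follows: the original path estimate is divided by the standard deviation of estimates across resamples to obtain a t-value, which is then compared against the 1.96 threshold for p<0.05. The construct scores below are simulated with a built-in negative relationship of roughly the reported magnitude; they are not the study's data, and a single-predictor standardised path is simplified here to a correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t(x, y, iterations=1000):
    """Bootstrap t-value for a standardised path coefficient:
    original estimate divided by the bootstrap standard error,
    mirroring a 1,000-iteration resampling routine."""
    n = len(x)
    beta = np.corrcoef(x, y)[0, 1]
    boot = np.empty(iterations)
    for i in range(iterations):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        boot[i] = np.corrcoef(x[idx], y[idx])[0, 1]
    return beta, abs(beta) / boot.std(ddof=1)

# Simulated construct scores for 162 respondents with a negative
# relationship of roughly the reported size (hypothetical data).
x = rng.normal(size=162)                       # uncertainty and risk
y = -0.43 * x + rng.normal(scale=0.9, size=162)  # logistics performance
beta, t = bootstrap_t(x, y)
print(beta < 0, t > 1.96)
```

With t above 1.96 the relationship would be deemed significant at p<0.05, following the convention cited from Hair and Anderson (2010).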
Although courier services have become a fast-growing segment of the logistics and transport industry, there are very few studies of the courier industry. This study clarifies the impact of supply chain uncertainty and risk on logistics performance, and assists both researchers and managers in understanding and managing supply chain uncertainty and risk in the courier industry. In addition, managers may use logistics performance to evaluate and monitor their performance as service providers. The study focuses on supply chain uncertainty and risk and on logistics performance. We found empirical evidence supporting the negative impact of supply chain uncertainty and risk on logistics performance in the Australian courier industry. Merschmann and Thonemann (2011) could not find a significant relationship between supply chain uncertainty and performance in an industry; Saminian-Darash and Rabinow (2015) argued that uncertainty may have a positive impact; and Simangunsong et al. (2012) argued that supply chain uncertainty and risk may have negative impacts. In addition, the courier industry may need more attention to improve its efficiency (Chang and Yen, 2012). Therefore, it is important to examine and clarify the impacts of supply chain uncertainty and risk in the courier industry. The empirical results provide insights into supply chain risk management in the courier industry. Moreover, management may use the scales of supply chain uncertainty and risk and logistics performance to improve internal operations, which may enhance the overall efficiency and effectiveness of logistics performance in the courier industry. According to the mean values, the supply chain uncertainties and risks have been identified in the different categories; the top supply chain uncertainties and risks include delays due to customers' mistakes, road congestion/closures, higher customer expectations, unstable fuel prices and delays in pickup/delivery.
The results are consistent with Simangunsong et al. (2012) in that supply chain uncertainty influences logistics performance. Similar results were found by Aven (2012) and Beneplanc and Rochet (2011), in that supply chain risk may increase operating costs and affect performance. The study focuses on the Australian courier industry, and the supply chain uncertainties and risks are categorised in terms of actual courier operations and the literature review (Joseph, 2004; Manuj and Mentzer, 2008; Sanchez-Rodrigues et al., 2009; Sodhi and Tang, 2012). The courier industry has unique characteristics, including fast door-to-door delivery and customers who may be directly involved in the delivery process; this is different from traditional logistics and transport businesses. Therefore, it is important to investigate the impacts of supply chain uncertainty and risk in the courier industry. The results confirm that supply chain uncertainty and risk consists of three main dimensions - company-side uncertainty and risk, customer-side uncertainty and risk and environment uncertainty and risk - in the Australian courier industry, and the results indicate that the supply chain uncertainty and risk scales have high reliability and validity. According to the empirical analysis, external supply chain uncertainty and risk, including customer-side and environment uncertainty and risk, have a higher severity than company-side uncertainty and risk in Australian courier companies. This directs managers to focus on external supply chain uncertainty and risk. For example, customers' mistakes are one of the top supply chain uncertainties and risks under customer-side uncertainty and risk; Rangel et al. (2015) identified customer risk in the delivery process. Management may consider how to improve communication with customers and provide additional instructions to guide customers in using the services.
For company-side uncertainty and risk, managers may improve service flexibility, customer service and quick response to unexpected events and/or problems; this covers risks internal to the organisation and supply chain (Christopher and Peck, 2004; Rangel et al., 2015). For environment uncertainty and risk, the survey results show that road congestion is the top supply chain uncertainty and risk in the environment category, and it has become a major challenge in the Australian courier industry. The results are in line with a recent report in the Australian newspaper The Age, according to which the cost of road congestion is set to triple to more than $9 billion a year by 2031 in Victoria, Australia (Carey, 2015). Environment uncertainty and risk covers risks external to the supply chain (Christopher and Peck, 2004; Manuj and Mentzer, 2008). Moreover, courier firms rely heavily on road transport. Therefore, it is important to pay attention to external supply chain uncertainty and risk in the courier industry. There is a close relationship between supply chain risks and performance in the transport industry (Naim et al., 2010; Sanchez-Rodrigues et al., 2009). In addition, logistics performance plays a vital role in courier service performance; for example, delivery performance may directly influence customer satisfaction and quality of service (Pichet and Shinya, 2008). Moreover, measuring logistics performance has become increasingly popular in logistics and supply chain management. In particular, the logistics performance of courier firms may provide valuable insights into 3PL management (Bolumole, 2003). In this study, the logistics performance assessment is based on the actual operations of courier firms and considers factors including customer, freight, information and delivery performance (Fawcett and Cooper, 1998; Jayaram and Tan, 2010; Lai, 2004; Pichet and Shinya, 2008).
This may provide a way to assess 3PL providers' performance and help management monitor and control courier service performance (Pichet and Shinya, 2008). According to the data analysis, the results have high reliability and validity. We find that supply chain uncertainty and risk has a statistically significant negative relationship with logistics performance, implying negative impacts on customer satisfaction, operating costs, on-time delivery, freight security and information accuracy. The findings provide directions for implementing strategies to manage supply chain uncertainty and risk and improve logistics performance. Further research should consider alternative configurations and solutions for managing supply chain uncertainty and risk. The study focuses on the Australian courier industry, which may limit the implications of the findings for other sectors. However, the research models may be further examined and validated in different contexts; this would help to refine the study and enrich the supply chain literature. In addition, it may encourage both academics and practitioners to understand and pay attention to supply chain uncertainty and risk management in the courier industry.
This study provides an in-depth analysis of the impacts of supply chain uncertainty and risk on logistics performance. The structural equation modelling approach is applied to examine the relationship between supply chain uncertainty and risk and logistics performance. Company-side uncertainty and risk, customer-side uncertainty and risk and environment uncertainty and risk are used to measure the impacts of supply chain uncertainty and risk on the industry. This paper draws attention to supply chain uncertainty and risk in the industry.
[SECTION: Findings] Supply chain uncertainty is becoming an increasingly popular topic in business management; however, few studies have so far provided an in-depth discussion of uncertainty. By its nature, uncertainty cannot be forecast or expected beforehand (Knight, 1921). Most business decisions include an element of uncertainty, but there is no formal method for dealing with it (Erasmus et al., 2013). Many business studies have relied heavily on risk management theory and crisis management theory to discuss uncertainty in business management. Most managers treat uncertainty as an important aspect of risk, and about half (54 per cent) of the managers interviewed by Shapira considered uncertainty a factor in risk (March and Shapira, 1987). Giddens (2002) asserted that traditional cultures did not have a concept of risk because they did not need one: danger and hazard were associated with the past and the loss of faith, whereas risk is linked to modernisation and the desire to control the future. Following this logic, uncertainty is strongly associated with the future. Erasmus et al. (2013) described uncertainty in business as a situation in which managers are simply unable to identify the various deviations and unable to assess the likelihood of their occurrence. Typically, risks lie somewhere in the middle of the risk-uncertainty spectrum (Juttner et al., 2003; Peck, 2006; Ritchie and Brindley, 2007), and most risks have an element of uncertainty (Erasmus et al., 2013). Risks occur because people never know exactly what will happen in the future; people can use the best forecasts and do every possible analysis, but there is always uncertainty about future events, and it is this uncertainty that brings risk (Waters, 2011). Aven (2011) suggested that risk can be defined through probabilities and uncertainties.
In economics, risk arises when decision makers are unable or unwilling to define their utility function (Aven, 2011, p. 21). Uncertainty increases the possibility of risk occurrence, and risk is a consequence of uncertainty. In other words, risk occurs because of uncertainty about the future; this uncertainty means that unexpected events may occur, and when they do, they cause some kind of damage. Both the terms uncertainty and risk may encompass sources, events and impacts, and they can be used to indicate concepts and/or objects (Saminian-Darash and Rabinow, 2015). Therefore, the term uncertainty is sometimes confused with risk (Sanchez-Rodrigues et al., 2008). In addition, uncertainty and risk are terms that in practice are often used interchangeably (Peck, 2006). The impacts of supply chain uncertainty and risk on logistics performance are often similar. Tang (2006) suggested that risks in the supply chain are inherent uncertainties; in other words, managers have to face and manage them. In addition, they may influence logistics performance through on-time delivery, freight safety, information and customers. This study examines the impacts of both supply chain uncertainty and risk simultaneously on logistics performance. A courier company is a less-than-truckload third-party logistics (3PL) carrier. 3PLs are classified into different types, including freight forwarders, courier companies and other companies that integrate and offer subcontracted logistics and transportation services; courier companies are one of the most significant types of 3PL (Wang et al., 2015b). In addition, today's courier delivery differs from traditional rail, sea, road or air transport: it offers fast door-to-door delivery, and customers may be directly involved in the delivery processes. Courier delivery may therefore face more uncertainties and risks than other delivery methods, including air, sea and road.
Moreover, with the rapid development of online business, courier delivery has become a very important method for express small and medium package delivery internationally. However, there are very few studies of the courier business. To provide an intuitive understanding of the impacts of supply chain uncertainty and risk on logistics performance in the courier industry, we surveyed 98 courier companies in Australia; this paper presents an empirical study of supply chain uncertainty and risk in the Australian courier industry. Quantitative methods are deployed to analyse the impacts of supply chain uncertainty and risk on logistics performance. Risk and uncertainty are becoming increasingly popular topics in supply chain research (Davis, 1993; Prater, 2005; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). The definition of supply chain uncertainty was given by Vorst and Beulens (2002) as follows: Decision-making situations in the supply-chain in which the decision-maker does not know definitely what to decide as he/she is indistinct about the objectives; lacks information about (or understanding of) the supply-chain or its environment; lacks information processing capacities; is unable to accurately predict the impact of possible control actions on supply-chain behavior; or lacks effective control actions (non-controllability) (Vorst and Beulens, 2002, p. 413). In an early study on the emergence of risk, Miller (1992) distinguished risk from uncertainty: risks in business refer to unanticipated or negative variation that may influence business performance, such as revenues, costs, profit and market share, whereas uncertainty refers to the unpredictability of environmental or organisational variables that affect business performance, or insufficient information about those variables. Risk is when we do not know what will happen next, but we do know what the distribution looks like.
Uncertainty is when we do not know what will happen next, and we do not even know what the possible distribution looks like (Ritholtz, 2012). In February 2002, Donald Rumsfeld, the then US Secretary of Defense, stated at a Defense Department briefing: "[Reports that say that] something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don't know. But there are also unknown unknowns - there are things we do not know we don't know" (Logan, 2009, p. 712). Moreover, Simchi-Levi et al. (2008) regarded the unknown-unknown as a type of risk associated with scenarios whose likelihood of occurrence cannot be identified; due to their nature, unknown-unknowns are difficult to control, while known-unknowns are more controllable. March and Shapira (1987, p. 1404) defined "risk as the variation in the distribution of possible supply chain outcomes, their likelihood, and their subjective values". The supply chain risks comprise "any risks for the information, material and product flows from original supplier to the delivery of the final product for the end user" (Juttner et al., 2003, p. 200). In the supply chain risk management literature, "risk is unreliable and uncertain resources creating supply chain interruption, whereas uncertainty is matching risk between supply and demand in supply chain processes" (Tang and Nurmaya Musa, 2011, p. 26). Sanchez-Rodrigues et al. (2008) clarify how these two concepts differ in the supply chain: risk is a function of outcome and probability and hence is something that can be estimated. If the probability that an event could occur is low, but the outcome of that event can have a highly detrimental impact on the supply chain, the occurrence of that event represents a considerable risk for the chain.
Uncertainty occurs when decision makers cannot estimate the outcome of an event or the probability of its occurrence (Sanchez-Rodrigues et al., 2008, p. 390). Supply chain uncertainty and risk are usually used interchangeably in practice (Juttner et al., 2003; Peck, 2006; Ritchie and Brindley, 2007). Juttner et al. (2003), Peck (2006) and Prater (2005) suggested that the difference between supply chain uncertainty and risk is blurred to the extent that it is not important to distinguish between them. Many supply chain risks are related to uncertainty, and the two are inseparable (McManus and Hastings, 2006; Prater, 2005; Rodrigues et al., 2010; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). However, some authors (Miller, 1992; Peck, 2006; Wagner and Bode, 2008) suggest that risk is only associated with issues that may lead to negative outcomes. Traditionally, risk and supply chain risk refer to two attributes: first, the expected value does not adequately capture events with low probability but high consequences; and second, rare and extreme events cause substantial negative consequences (Aven, 2011; Tang and Nurmaya Musa, 2011). This study does not focus only on these two extreme situations but is also concerned with the day-to-day operational risks and uncertainties faced by logistics and transport service providers - the supply chain risks and uncertainties that most managers want to deal with immediately in a real-world environment (Wang et al., 2014a). Supply chain uncertainty and risk are complex notions that come in many different forms and may include supply chain uncertainty and risk sources, risk consequences and risk drivers (Christopher and Lee, 2004; Juttner et al., 2003; Manuj and Mentzer, 2008; Rodrigues et al., 2008). Several different supply chain uncertainties and risks have been identified in previous research.
Focusing on narrower aspects of supply chain uncertainty, Lee (2002) identified two types - supply and demand uncertainty. Davis (1993) showed that sources of supply-chain uncertainty relate to internal manufacturing processes, supply-side processes or demand-side issues (usually end-customer demand). Mason-Jones and Towill (1998) added a further source, control uncertainty, which concerns the capability of an organisation, in a supply chain uncertainty circle; the model comprises four quadrants - demand side, supply side, manufacturing process and control systems - and suggests that reducing these uncertainties would reduce cost. Wilding (1998) proposed a supply chain complexity triangle, which introduces a new source of uncertainty: parallel interaction. Juttner et al. (2003) suggested three categories: environmental risk, network-related risk and organisational risk. Prater (2005) divided supply chain uncertainty into two separate levels: macro-level uncertainty, a higher-level category referring to risks due to disruptions, and micro-level uncertainty, which relates to more specific sources of uncertainty. Sanchez-Rodrigues et al. (2010) investigated the main causes of contingent uncertainty in transport operations. Murugesan et al. (2013) stated six categories of supply chain risk: supply-side risk, manufacturing-side risk, demand-side risk, information risk, logistics risk and environment risk. Wang et al. (2014b) identified three types of supply chain uncertainty and risk: company-side uncertainty and risk, customer-side uncertainty and risk and environment uncertainty and risk. Rangel et al. (2015) identified 16 supply chain risk classifications, in which 56 risk types were grouped in terms of their conceptual similarities.
Uncertainty increases the risk within supply chains, and risk is a consequence of the external and internal uncertainties that affect a supply chain (Rodrigues et al., 2008). Many authors have recognised that uncertainty is an issue in the supply chain and logistics industry (Davis, 1993; de Leeuw and van den Berg, 2011; Hult et al., 2010; Joseph, 2004; Lee, 2002; Prater, 2005; Rahman, 2011; Rodrigues et al., 2010; Rodrigues et al., 2008; Sanchez-Rodrigues et al., 2010; Simangunsong et al., 2012; Vorst and Beulens, 2002). Hult et al. (2010) illustrated that the uncertainty inherent in the supply chain has an exogenous element for any given participant. For managers, risk is a threat that something might happen to disrupt normal activities or stop things happening as planned (Waters, 2011, p. 12). Prater (2005) demonstrated that many other distinct sources of uncertainty had received insufficient attention in the supply chain literature. Based on the literature review, supply chain uncertainty and risk is defined in this paper as the potential disturbances to the flow of goods, information and money (Ellegaard, 2008; McKinnon and Ge, 2004; Murugesan et al., 2013; Sanchez-Rodrigues et al., 2010; Simangunsong et al., 2012). Vorst and Beulens (2002) identified sources of uncertainty for supply chain redesign strategies. Rodrigues et al. (2008) developed a logistics-oriented uncertainty model, the logistics uncertainty pyramid model, which includes five sources of uncertainty related to suppliers, customers, carriers, control systems and the external environment. Sanchez-Rodrigues et al. (2010) evaluated the causes of uncertainty in logistics operations. McManus and Hastings (2006) illustrated that risks and opportunities are the consequences of the uncertainties in a programme or system.
Supply chain risk and uncertainty has become a major topic in the supply chain literature (Davis, 1993; Prater, 2005; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). Supply chain uncertainty and risk can be categorised in many different ways and from different perspectives (Christopher and Peck, 2004; Juttner et al., 2003). Previous studies of supply chain uncertainty and risk are summarised in Table I. Supply chain risk mainly has negative impacts on logistics performance, such as delays, damage and loss (Sanchez-Rodrigues et al., 2010); several studies (Hoffman, 2006; McKinnon and Ge, 2004; Rodrigues et al., 2008; Simangunsong et al., 2012; Juttner et al., 2003) argued that supply chain uncertainty and risk have negative impacts on logistics performance. Others, like Merschmann and Thonemann (2011), found no significant relationship between uncertainty and performance, while Saminian-Darash and Rabinow (2015) argued that present uncertainty may have positive impacts in the future. This empirical study focuses on the relationship between supply chain uncertainty and risk and logistics performance in the courier business, and its findings provide a starting point for understanding supply chain uncertainty and risk in the courier industry. The quantitative study uses the structural equation modelling approach to examine the relationship between supply chain uncertainty and risk and logistics performance in the Australian courier industry. According to an extensive literature review and previous studies, supply chain uncertainty and risk is associated with logistics performance (Prater, 2005; Sanchez-Rodrigues et al., 2010; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). Risk has been widely discussed in the financial industry, where it is calculable; in practice, statistical risk measures often depend on the probability of loss and the size of loss (Beneplanc and Rochet, 2011).
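The probability-and-size-of-loss form of these statistical risk measures can be illustrated with a tiny expected-loss calculation. The events and numbers below are hypothetical, chosen only to echo the incident types discussed in this paper:

```python
# Hypothetical risk register: (probability of loss, size of loss in $)
# per event, illustrating probability x severity risk scoring.
risks = {
    "delay in pickup/delivery": (0.30, 2000.0),
    "road congestion/closure":  (0.45, 1500.0),
    "damaged/lost freight":     (0.05, 8000.0),
}

# Expected loss ranks events by probability-weighted severity.
expected_loss = {name: p * size for name, (p, size) in risks.items()}
ranked = sorted(expected_loss, key=expected_loss.get, reverse=True)
print(ranked[0])  # → road congestion/closure (0.45 * 1500 = 675)
```

Note how a high-severity but rare event (damaged/lost freight) can rank below frequent moderate ones under this measure, which is exactly the limitation of expectation-based assessment raised earlier for low-probability, high-consequence events.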
Traditional quantitative risk assessment is based on probability and severity of the risk (Aven, 2011; Juttner et al., 2003). The other way to assess risk and uncertainty is focusing on the consequences, the different risks and uncertainties can be categorised along those focused the consequences (Juttner et al., 2003; Simangunsong et al., 2012). In order to measure the supply chain uncertainty and risk accurately, the scale of supply chain uncertainties and risks was adopted and developed from studies (Murugesan et al., 2013; Rodrigues et al., 2008; Simangunsong et al., 2012). Participants are invited to rate the impacts of both supply chain uncertainty and risk in terms of the severity in companies. The study demonstrated and tested the research framework in Figure 1. The exogenous variable is supply chain uncertainty and risk, the measurement scale is developed from previous study (Murugesan et al., 2013; Simangunsong et al., 2012; Wang et al., 2014b). It consists of three-dimensional structures including company-side uncertainty and risk, customer-side uncertainty and risk and environment uncertainty and risk to measure the supply chain uncertainty and risk in the Australian courier industry. The endogenous variable is logistics performance. The measurement scale of logistics performance is empirically validated in the Australian courier firms before we use it in this study (Wang et al., 2015a). According to the literature review and actual courier operations, the logistics performance is measured by the delivery performance, information accuracy, customer satisfaction and freight safety (Holmberg, 2000; Lai, 2004). There is an argument about the impacts of supply chain uncertainty and risk. Saminian-Darash and Rabinow (2015) argued that uncertainty may bring positive impacts in the future. Supply chain uncertainty and risk is an issue in the logistics and supply chain industry (Prater, 2005; Lee, 2002; Sanchez-Rodrigues et al., 2010). 
Uncertainty and risk in the supply chain may lead to poor logistics performance (Christopher and Lee, 2004; Simangunsong et al., 2012). For example, Sanchez-Rodrigues et al. (2010) examined transport-related uncertainty and found that the main drivers impacting sustainability and transport operations are delays, variable demand/poor information, delivery constraints and insufficient supply chain integration. Therefore, we propose that there is a significant relationship between supply chain uncertainty and risk and logistics performance, and the conceptual model is then tested in the Australian courier industry. The partial least squares approach to structural equation modelling is used for the empirical data analysis. We follow the procedures of Hair (2010) to validate the measurement models and the structural model: the measurement models are used to assess the reliability and validity of the scale items, and the proposed hypothesis is tested in the structural model. Confirmatory factor analysis (CFA) is conducted to validate the measurement models, and path analysis is used to validate the relationship between supply chain uncertainty and risk and logistics performance. Path analysis, the basis of the structural equation model, is a technique for estimating the unknown parameters of a system of simultaneous equations (Lowery and Gaskin, 2014). The measurement models of supply chain uncertainty and risk and logistics performance are drawn from previous studies (Fawcett and Cooper, 1998; Murugesan et al., 2013; Pichet and Shinya, 2008; Simangunsong et al., 2012; Wang et al., 2014b, 2015a).

The instrument

To ensure the reliability and validity of the instruments, the instrument development focuses on the courier industry. An extensive literature review is conducted to identify supply chain uncertainty and risk variables.
In addition, a pilot study is undertaken with supply chain and logistics academics and managers who have extensive experience in the transport and logistics industry, to refine the variables and the questionnaire. Previous studies suggest that supply chain uncertainty and risk have very similar impacts on logistics performance; moreover, they are inseparable, and managers often identify and manage them simultaneously in real-world environments. A seven-point Likert scale is used to assess the impacts of supply chain uncertainty and risk in terms of their severity in the company, where "1" represents "no problem" and "7" represents "very severe problem". The perception of logistics performance is measured on a Likert scale ranging from 1 ("strongly disagree") to 7 ("strongly agree").

Data

According to the latest business transport report of the Australian Bureau of Statistics, businesses employed 80,000 persons in the postal and courier pickup and delivery services industry subdivision at end June 2011 (Australian Bureau of Statistics, 2012), making the Australian courier industry relatively small compared with other traditional industries. We surveyed 98 Australian courier companies, using an online survey for data collection. A total of 229 responses were recorded on the website; 162 surveys were fully completed, and 67 incomplete surveys were deleted from the data set. In the sample, 80 respondents (49 per cent) are general/branch/operations managers, 27 (17 per cent) are sales/customer service/other managers and 14 (9 per cent) are supervisors/team leaders; in total, 121 respondents (75 per cent) hold management or supervisory roles in the Australian courier industry. Participants are from all over Australia; the top three states are Victoria (66 responses, 41 per cent), New South Wales (40 responses, 25 per cent) and Queensland (14 responses, 9 per cent).
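The response accounting above can be double-checked in a few lines. The counts come from the text; this sketch simply reproduces the arithmetic, with percentages rounded to whole numbers as in the paper.

```python
# Survey response accounting, using the counts reported in the text.
recorded, complete = 229, 162
deleted = recorded - complete
print(deleted)  # 67 incomplete surveys removed

# Respondent roles (counts from the text).
roles = {
    "general/branch/operations manager": 80,
    "sales/customer service/other manager": 27,
    "supervisor/team leader": 14,
}
management = sum(roles.values())
print(management, round(100 * management / complete))  # 121 responses, 75 per cent
```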
In total, 107 participants (66 per cent) have more than five years of experience in the transport/supply chain/logistics industry. Data examination is the initial stage of data analysis: we screen the data, evaluate the impact of different types of data and test the assumptions underlying multivariate techniques (Hair, 2010). Analysing empirical data using standardised procedures is important for successful quantitative research (Dornyei, 2007). This section presents findings from the survey and the research models. The questionnaire measures supply chain uncertainty and risk under the different categories, and the top five supply chain uncertainties and risks are identified in each category; the results are shown in Table II. These top uncertainties and risks may indicate potential problems in the Australian courier industry. Table III provides the descriptive statistics, Cronbach's alpha values, composite reliability and average variance extracted (AVE) values. The validity of the constructs is further tested by a CFA in a path model.

Top supply chain uncertainties and risks

The descriptive findings identify the top five supply chain uncertainties and risks in each category: company-side, customer-side and environment uncertainty and risk. For example, delay in pickup and delivery is the top company-side uncertainty and risk in the Australian courier industry. Overall, the top supply chain uncertainties and risks in terms of their impacts on logistics performance in the courier industry are delays due to customers' mistakes, road congestion/closures, higher customer expectations, unstable fuel prices, delays in pickup/delivery, labour/driver shortages, incorrect delivery information and delay or unavailability of delivery information. These results are in line with previous studies (Sanchez-Rodrigues et al., 2009).
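The Table II ranking can be reproduced mechanically: score each item by its mean severity rating and take the top k per category. A minimal sketch follows; the item names and ratings here are illustrative placeholders, not the actual survey data.

```python
# Hypothetical severity ratings (1 = no problem ... 7 = very severe problem)
# for a few company-side items; values are illustrative only.
ratings = {
    "Delays in pickup/delivery":      [4, 3, 5, 4, 3],
    "Labour/driver shortage":         [3, 3, 4, 2, 3],
    "Damaged/lost freight":           [2, 2, 3, 2, 1],
    "Incorrect delivery information": [3, 4, 3, 3, 2],
}

def top_items(item_ratings, k=5):
    """Rank items by mean severity, as used to build a Table II-style list."""
    means = {item: sum(vals) / len(vals) for item, vals in item_ratings.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)[:k]

for item, mean in top_items(ratings):
    print(f"{item}: {mean:.2f}")
```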
Overall, environment uncertainty and risk has the highest average mean value (2.472), followed by customer-side uncertainty and risk (2.406); the greatest impact of supply chain uncertainty and risk therefore comes from outside the company in the Australian courier industry. According to the survey, most Australian courier firms understand the impact of company-side risk, and managers try to deal with the internal aspects, so that these, including logistics and information risk, have less impact than external aspects such as customer and environment risk. Operating cost could be a challenge in the Australian courier industry. In addition, managers may need to focus on incidents such as delays in pickup/delivery, customer complaints and damaged/lost freight within the company.

Reliability and validity

Table III presents the descriptive statistics, reliability and validity: factor loadings, t-values, means and standard deviations for the items, and Cronbach's alpha, composite reliability and average variance extracted (AVE) values for the constructs. Reliability is an assessment of the degree of consistency between multiple measurements of a variable (Hair, 2010). There are two reliability test methods: test-retest and the reliability coefficient. This study applies the reliability coefficient, Cronbach's alpha, to test the reliability of the scale. Reliability is demonstrated by composite reliability greater than 0.700; in this study, all composite reliability values are greater than 0.900. Validity indicates the degree of accuracy of measurements. Convergent validity assesses the degree to which two measures of the same concept are correlated (Hair, 2010); high correlations are required to ensure convergent validity, and greater than 0.700 is considered satisfactory. In contrast, discriminant validity is the degree to which two conceptually similar concepts are distinct (Hair, 2010).
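The reliability and validity statistics named above have standard closed forms. The sketch below computes Cronbach's alpha from raw item responses, and composite reliability (CR) and AVE from standardized factor loadings; the loading values are illustrative, not those in Table III.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings):
    """CR from standardized loadings; error variances are 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1 - lam ** 2
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum()))

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

loadings = [0.85, 0.88, 0.82, 0.90]               # illustrative values only
print(round(composite_reliability(loadings), 3))  # > 0.700 -> reliable
print(round(ave(loadings), 3))                    # > 0.500 -> convergent validity
```

With these illustrative loadings, CR exceeds the 0.700 threshold and AVE exceeds 0.500, mirroring the paper's reported pattern.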
Discriminant validity indicates that a scale is sufficiently different from other, similar concepts; normally, an inter-construct correlation below 0.700 is considered satisfactory (Hair, 2010). In this study, the AVE values are greater than 0.500 and the communalities are greater than 0.500. Discriminant validity is demonstrated by the square root of the AVE being greater than any of the inter-construct correlations (Hair et al., 2012).

Path model

The path model results are presented in Figure 2. A confidence interval indicates how reliable survey results are; in applied practice, confidence intervals are typically stated at the 95 per cent confidence level (p<0.05 for t>1.96) (Zar, 1984). The structural relationships in the model are estimated using a bootstrap routine with 1,000 iterations. The bootstrapping sample relates to significance levels of p<0.1 for t>1.65, p<0.05 for t>1.96 and p<0.01 for t>2.58 (Hair and Anderson, 2010). Path coefficients are used to test the hypothesis. The standardised path estimate (beta) represents the strength, direction and significance of the relationship between constructs; beta is considered large, medium and small for values greater than 0.37, 0.24 and 0.10, respectively. The absolute value of a path coefficient should not be greater than 1; a negative value indicates a negative relationship between two concepts and a positive value a positive relationship. With a standardised coefficient (beta) of -0.43 and a t-value of 5.48, the hypothesis is supported. The significant path coefficient shows that logistics performance is influenced by supply chain uncertainty and risk: there is a significant negative relationship between supply chain uncertainty and risk and logistics performance in the Australian courier industry. The paper presents a study of supply chain uncertainty and risk in the Australian courier industry.
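The bootstrap procedure described above can be sketched in a few lines. With one exogenous and one endogenous construct, the standardized path estimate reduces to the Pearson correlation of the construct scores; the t-value is the estimate divided by its bootstrap standard error. The data here are synthetic, generated to mimic the reported beta of about -0.43, not the survey responses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic construct scores for n = 162 respondents (illustrative only):
# logistics performance declines with uncertainty severity, plus noise.
n = 162
x = rng.normal(size=n)                       # supply chain uncertainty and risk
y = -0.43 * x + rng.normal(scale=0.9, size=n)  # logistics performance

def path_coefficient(x, y):
    # Standardized beta for a single-predictor path = Pearson correlation.
    return float(np.corrcoef(x, y)[0, 1])

# Bootstrap with 1,000 resamples, as in the reported analysis.
boot = np.empty(1000)
for b in range(1000):
    idx = rng.integers(0, n, size=n)         # resample respondents with replacement
    boot[b] = path_coefficient(x[idx], y[idx])

beta = path_coefficient(x, y)
t_value = abs(beta) / boot.std(ddof=1)       # t = estimate / bootstrap SE
print(f"beta={beta:.2f}, t={t_value:.2f}")   # t > 1.96 -> significant at p < 0.05
```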
Although couriering has become a fast-growing segment of logistics and transport, there are very few studies of the courier industry. This study clarifies the impact of supply chain uncertainty and risk on logistics performance and assists both researchers and managers in understanding and managing supply chain uncertainty and risk in the courier industry. In addition, managers may use the logistics performance measures to evaluate and monitor their performance as service providers. The study focuses on supply chain uncertainty and risk and logistics performance, and we found empirical evidence supporting the negative impacts of supply chain uncertainty and risk on logistics performance in the Australian courier industry. Merschmann and Thonemann (2011) could not find a significant relationship between supply chain uncertainty and performance, Saminian-Darash and Rabinow (2015) argued that uncertainty may have positive impacts, and Simangunsong et al. (2012) argued that supply chain uncertainty and risk may have negative impacts. In addition, the courier industry may need more attention to improve its efficiency (Chang and Yen, 2012). Therefore, it is important to examine and clarify the impacts of supply chain uncertainty and risk in the courier industry. The empirical results provide insights into supply chain risk management in the courier industry; moreover, management may use the scales of supply chain uncertainty and risk and logistics performance to improve internal operations, which may enhance the overall efficiency and effectiveness of logistics performance in the courier industry. According to the mean values, the supply chain uncertainties and risks have been identified in the different categories; the top ones include delays due to customers' mistakes, road congestion/closures, higher customer expectations, unstable fuel prices and delays in pickup/delivery.
The results are consistent with Simangunsong et al. (2012) in that supply chain uncertainty influences logistics performance. Similar results were found by Aven (2012) and Beneplanc and Rochet (2011), in that supply chain risk may increase operating costs and affect performance. The study focuses on the Australian courier industry, and supply chain uncertainty and risk are categorised in terms of actual courier operations and the literature review (Joseph, 2004; Manuj and Mentzer, 2008; Sanchez-Rodrigues et al., 2009; Sodhi and Tang, 2012). The courier industry has unique characteristics, including fast door-to-door delivery and direct customer involvement in the delivery process, which differentiate it from traditional logistics and transport businesses; it is therefore important to investigate the impacts of supply chain uncertainty and risk in this industry. The results confirm that supply chain uncertainty and risk consists of three main dimensions - company-side uncertainty and risk, customer-side uncertainty and risk and environment uncertainty and risk - in the Australian courier industry, and that the measures have high reliability and validity. According to the empirical analysis, external supply chain uncertainty and risk, comprising customer-side and environment uncertainty and risk, has a higher severity than company-side uncertainty and risk in Australian courier companies. This directs managers to focus on external supply chain uncertainty and risk. For example, customers' mistakes are among the top supply chain uncertainties and risks under customer-side uncertainty and risk, and Rangel et al. (2015) identified customer risk in the delivery process. Management may consider how to improve communication with customers and provide additional instructions to guide customers in using the services.
For company-side uncertainty and risk, managers may improve service flexibility, customer service and quick response to unexpected events and/or problems; this dimension covers risks internal to the organisation and its supply chain (Christopher and Peck, 2004; Rangel et al., 2015). For environment uncertainty and risk, the survey results show that road congestion is the top supply chain uncertainty and risk in this category, and it has become a major challenge in the Australian courier industry. This is in line with a recent report in the Australian newspaper The Age, which noted that the cost of road congestion is set to more than triple to over $9 billion a year by 2031 in Victoria, Australia (Carey, 2015). Environment uncertainty and risk covers risks external to the supply chain (Christopher and Peck, 2004; Manuj and Mentzer, 2008), and courier firms rely heavily on road transport; it is therefore important to pay attention to external supply chain uncertainty and risk in the courier industry. There is a close relationship between supply chain risks and performance in the transport industry (Naim et al., 2010; Sanchez-Rodrigues et al., 2009). In addition, logistics performance plays a vital role in courier service performance; for example, delivery performance may directly influence customer satisfaction and quality of service (Pichet and Shinya, 2008). Moreover, logistics performance measurement has become increasingly popular in logistics and supply chain management, and the logistics performance of courier firms may provide valuable insights into 3PL management (Bolumole, 2003). In this study, the logistics performance assessment is based on the actual operations of courier firms, considering customer, freight, information and delivery performance factors (Fawcett and Cooper, 1998; Jayaram and Tan, 2010; Lai, 2004; Pichet and Shinya, 2008).
This may provide a way to assess 3PL providers' performance and help management monitor and control courier service performance (Pichet and Shinya, 2008). According to the data analysis, the results have high reliability and validity, and supply chain uncertainty and risk has a statistically significant negative relationship with logistics performance. This implies negative impacts on customer satisfaction, operating costs, on-time delivery, freight security and information accuracy. The findings provide directions for implementing strategies to manage supply chain uncertainty and risk and improve logistics performance. Further research should consider alternative configurations and solutions for managing supply chain uncertainty and risk. The study focuses on the Australian courier industry, which may limit the applicability of the findings to other sectors; however, the research models may be examined and validated in different contexts, which would help refine the study and enrich the supply chain literature. In addition, it may encourage both academics and practitioners to understand and pay attention to supply chain uncertainty and risk management in the courier industry.
|
The results indicate that supply chain uncertainty and risk have negative impacts on logistics performance. Moreover, the greatest impact of supply chain uncertainty and risk came from outside the company in the Australian courier industry.
|
[SECTION: Value] Supply chain uncertainty has become an increasingly popular topic in business management, yet few studies have provided an in-depth discussion of uncertainty so far. By its nature, uncertainty cannot be forecast or anticipated beforehand (Knight, 1921). Most business decisions include an element of uncertainty, but there is no formal method for dealing with it (Erasmus et al., 2013), so many business studies have relied heavily on risk management theory and crisis management theory to discuss uncertainty in business management. Most managers treat uncertainty as an important aspect of risk, and about half (54 per cent) of the managers interviewed by Shapira considered uncertainty a factor in risk (March and Shapira, 1987). Giddens (2002) asserted that traditional cultures did not have a concept of risk because they did not need one: danger and hazard were associated with the past and the loss of faith, whereas risk is linked to modernisation and the desire to control the future. Following this logic, uncertainty is strongly associated with the future. Erasmus et al. (2013) described uncertainty in business as a situation where managers are simply unable to identify the various deviations and unable to assess the likelihood of their occurrence. Typically, risks lie somewhere in the middle of the risk-uncertainty spectrum (Juttner et al., 2003; Peck, 2006; Ritchie and Brindley, 2007), and most risks have an element of uncertainty (Erasmus et al., 2013). Risks occur because people never know exactly what will happen in the future; people can use the best forecasts and do every possible analysis, but there is always uncertainty about future events, and it is this uncertainty that brings risks (Waters, 2011). Aven (2011) suggested that risk could be defined through probabilities and uncertainties.
In economics, risk arises when decision makers are not able or willing to define their utility function (Aven, 2011, p. 21). Uncertainty increases the possibility of risk occurrence, and risk is a consequence of uncertainty: risk occurs because of uncertainty about the future, this uncertainty means that unexpected events may occur, and when they do, they cause some kind of damage. Both the terms uncertainty and risk may include sources, events and impacts, and both can be used to indicate concepts and/or objects (Saminian-Darash and Rabinow, 2015). Therefore, the term uncertainty is sometimes confused with risk (Sanchez-Rodrigues et al., 2008), and in practice the two terms are often used interchangeably (Peck, 2006). The impacts of supply chain uncertainty and risk on logistics performance are often similar. Tang (2006) suggested that risks in the supply chain are inherent uncertainties; in other words, managers have to face and manage them. In addition, they may influence logistics performance through on-time delivery, freight safety, information and customers. The study examines the impacts of both supply chain uncertainty and risk simultaneously on logistics performance. A courier company is a less-than-truckload third-party logistics (3PL) carrier. 3PLs are sorted into different types, including freight forwarders, courier companies and other companies that integrate and offer subcontracted logistics and transportation services; the courier company is one of the most significant 3PL types (Wang et al., 2015b). In addition, today's courier delivery differs from traditional rail, sea, road or air transport: it offers fast door-to-door delivery, and customers may be directly involved in the delivery processes. Courier delivery may therefore need to face more uncertainties and risks than other delivery methods, including air, sea and road.
Moreover, with the rapid development of online business, courier delivery has become a very important delivery method for express small and medium package delivery internationally; however, there are very few studies on the courier business. To provide an intuitive understanding of the impacts of supply chain uncertainty and risk on logistics performance in the courier industry, we surveyed 98 courier companies in Australia, and this paper presents an empirical study of supply chain uncertainty and risk in the Australian courier industry. Quantitative methods are deployed to analyse the impacts of supply chain uncertainty and risk on logistics performance. Risk and uncertainty have become increasingly prominent topics in the supply chain literature (Davis, 1993; Prater, 2005; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). The definition of supply chain uncertainty was given by Vorst and Beulens (2002) as follows: "Decision-making situations in the supply-chain in which the decision-maker does not know definitely what to decide as he/she is indistinct about the objectives; lacks information about (or understanding of) the supply-chain or its environment; lacks information processing capacities; is unable to accurately predict the impact of possible control actions on supply-chain behavior; or lacks effective control actions (non-controllability)" (Vorst and Beulens, 2002, p. 413). In an early study of the emergence of risk, Miller (1992) distinguished risk from uncertainty: risks in business refer to unanticipated or negative variation that may influence business performance, such as revenues, costs, profit and market share, whereas uncertainty refers to the unpredictability of environmental or organisational variables that impact business performance, or to insufficient information about those variables. Risk is when we do not know what will happen next but we do know what the distribution looks like.
Uncertainty is when we do not know what will happen next and do not even know what the possible distribution looks like (Ritholtz, 2012). In February 2002, Donald Rumsfeld, the then US Secretary of Defense, stated at a Defense Department briefing: "[Reports that say that] something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don't know. But there are also unknown unknowns - there are things we do not know we don't know" (Logan, 2009, p. 712). Moreover, Simchi-Levi et al. (2008) regarded the unknown-unknown as a type of risk associated with scenarios where one cannot identify the likelihood of occurrence; by their nature, unknown-unknowns are difficult to control, whereas known-unknowns are more controllable. March and Shapira (1987, p. 1404) defined "risk as the variation in the distribution of possible supply chain outcomes, their likelihood, and their subjective values". Supply chain risks comprise "any risks for the information, material and product flows from original supplier to the delivery of the final product for the end user" (Juttner et al., 2003, p. 200). In the supply chain risk management literature, "risk is unreliable and uncertain resources creating supply chain interruption, whereas uncertainty is matching risk between supply and demand in supply chain processes" (Tang and Nurmaya Musa, 2011, p. 26). Sanchez-Rodrigues et al. (2008) clarify how the two concepts differ in the supply chain: risk is a function of outcome and probability and hence something that can be estimated. If the probability that an event could occur is low but its outcome can have a highly detrimental impact on the supply chain, the occurrence of that event represents a considerable risk for the chain.
Uncertainty occurs when decision makers cannot estimate the outcome of an event or the probability of its occurrence (Sanchez-Rodrigues et al., 2008, p. 390). In practice, supply chain uncertainty and risk are usually interchangeable (Juttner et al., 2003; Peck, 2006; Ritchie and Brindley, 2007). Juttner et al. (2003), Peck (2006) and Prater (2005) suggested that the difference between supply chain uncertainty and risk is blurred to the extent that it is not important to distinguish them. Many supply chain risks are related to uncertainty, and the two are inseparable (McManus and Hastings, 2006; Prater, 2005; Rodrigues et al., 2010; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). However, some authors (Miller, 1992; Peck, 2006; Wagner and Bode, 2008) suggest that risk is only associated with issues that may lead to negative outcomes. Traditionally, risk and supply chain risk refer to two attributes: first, the expected value does not adequately capture events with low probability but high consequences; and second, rare and extreme events cause substantial negative consequences (Aven, 2011; Tang and Nurmaya Musa, 2011). This study does not focus only on these extreme situations but also concerns the day-to-day operational risks and uncertainties in logistics and transport service providers - the supply chain risks and uncertainties that most managers want to deal with imminently in a real-world environment (Wang et al., 2014a). Supply chain uncertainty and risk are complex notions that come in many different forms and may include uncertainty and risk sources, risk consequences and risk drivers (Christopher and Lee, 2004; Juttner et al., 2003; Manuj and Mentzer, 2008; Rodrigues et al., 2008). Several different supply chain uncertainties and risks have been identified in previous research.
Focusing on narrower aspects of supply chain uncertainty, Lee (2002) identified two types of uncertainty: supply and demand. Davis (1993) illustrated that the sources of supply-chain uncertainty were relevant to internal manufacturing processes, supply-side processes or demand-side issues (usually end-customer demand). Mason-Jones and Towill (1998) added a further source, control uncertainty, concerned with the capability of an organisation, in a supply chain uncertainty circle; the model comprises four quadrants - demand side, supply side, manufacturing process and control systems - and suggests that reducing these uncertainties would reduce cost. Wilding (1998) proposed a supply chain complexity triangle, which introduces a new source of uncertainty: parallel interaction. Juttner et al. (2003) suggested three categories: environmental risk, network-related risk and organisational risk. Prater (2005) divided supply chain uncertainty into two separate levels: macro-level uncertainty, a higher-level category referring to risks due to disruptions, and micro-level uncertainty, which relates to more specific sources of uncertainty. Sanchez-Rodrigues et al. (2010) investigated the main causes of contingent uncertainty in transport operations. Murugesan et al. (2013) stated six categories of supply chain risk: supply-side risk, manufacturing-side risk, demand-side risk, information risk, logistics risk and environment risk. Wang et al. (2014b) identified three types of supply chain uncertainty and risk: company-side, customer-side and environment uncertainty and risk. Rangel et al. (2015) identified 16 supply chain risk classifications, into which 56 risk types were sorted in terms of their conceptual similarities.
Uncertainty increases the risk within supply chains, and risk is a consequence of the external and internal uncertainties that affect a supply chain (Rodrigues et al., 2008). Many authors have recognised that uncertainty is an issue in the supply chain and logistics industry (Davis, 1993; de Leeuw and van den Berg, 2011; Hult et al., 2010; Joseph, 2004; Lee, 2002; Prater, 2005; Rahman, 2011; Rodrigues et al., 2010; Rodrigues et al., 2008; Sanchez-Rodrigues et al., 2010; Simangunsong et al., 2012; Vorst and Beulens, 2002). Hult et al. (2010) illustrated that the uncertainty inherent in the supply chain has an exogenous element for any given participant. For managers, risk is a threat that something might happen to disrupt normal activities or stop things happening as planned (Waters, 2011, p. 12). Prater (2005) demonstrated that many other distinct sources of uncertainty had received insufficient attention in the supply chain literature. Based on the literature review, supply chain uncertainty and risk is defined in this paper as the potential disturbances to the flow of goods, information and money (Ellegaard, 2008; McKinnon and Ge, 2004; Murugesan et al., 2013; Sanchez-Rodrigues et al., 2010; Simangunsong et al., 2012). Vorst and Beulens (2002) identified sources of uncertainty for supply chain redesign strategies. Rodrigues et al. (2008) developed a logistics-oriented uncertainty model - the logistics uncertainty pyramid model - which includes five sources of uncertainty related to suppliers, customers, carriers, control systems and the external environment. Sanchez-Rodrigues et al. (2010) evaluated the causes of uncertainty in logistics operations, and McManus and Hastings (2006) illustrated that risks and opportunities are the consequences of the uncertainties in a programme or system.
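The recurring definition in this review - risk as an estimable function of probability and outcome, uncertainty as the case where neither can be estimated - can be illustrated with a simple expected-loss calculation. The event names, probabilities and dollar impacts below are hypothetical, not taken from the survey.

```python
# Hypothetical disruption events: (name, probability per delivery run, impact in dollars).
events = [
    ("Road congestion delay", 0.30, 200.0),
    ("Incorrect delivery information", 0.10, 350.0),
    ("Damaged freight", 0.02, 1500.0),
]

def risk_score(probability, impact):
    """Classic estimable risk: expected loss = probability x impact.
    (Under uncertainty, by contrast, neither input can be estimated.)"""
    return probability * impact

# Rank events by expected loss, highest first.
ranked = sorted(events, key=lambda e: risk_score(e[1], e[2]), reverse=True)
for name, p, loss in ranked:
    print(f"{name}: expected loss = {risk_score(p, loss):.0f}")
```

Note that a frequent low-cost event (congestion) can outrank a rare high-cost one (damaged freight), which is why expected value alone can understate low-probability, high-consequence events, as the text observes.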
Supply chain risk and uncertainty has become a major topic in the supply chain literature (Davis, 1993; Prater, 2005; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). Supply chain uncertainty and risk can be categorised in many different ways and from different perspectives (Christopher and Peck, 2004; Juttner et al., 2003). The previous studies in supply chain uncertainty and risk are summarised in Table I. Supply chain risk mainly reflects negative impacts on logistics performance, such as delays, damage and loss (Sanchez-Rodrigues et al., 2010); some studies (Hoffman, 2006; McKinnon and Ge, 2004; Rodrigues et al., 2008; Simangunsong et al., 2012; Juttner et al., 2003) urged that the supply chain uncertainty and risk have negative impacts on the logistics performance. Others like Merschmann and Thonemann (2011) found no significant relationship between uncertainty and performance, while Saminian-Darash and Rabinow (2015) argued that present uncertainty may have positive impacts in the future. The empirical study focuses on the relationship between supply chain uncertainty and risk and the logistics performance in the courier business. Findings from the study provide a starting point for understanding of supply chain uncertainty and risk in the courier industry. The quantitative study attempts to use the structure equation modelling approach to examine the relationship between supply chain uncertainty and risk and logistics performance in the Australian courier industry. According to an extensive literature review and previous studies, supply chain uncertainty and risk is associated with logistics performance (Prater, 2005; Sanchez-Rodrigues et al., 2010; Sanchez-Rodrigues et al., 2008; Simangunsong et al., 2012). Risk has been discussed in the financial industry. It is calculable and in practice, statistical risk measures often depend on the probability of loss and size of loss (Beneplanc and Rochet, 2011). 
Traditional quantitative risk assessment is based on the probability and severity of the risk (Aven, 2011; Juttner et al., 2003). An alternative is to focus on consequences, so that different risks and uncertainties are categorised by the consequences they produce (Juttner et al., 2003; Simangunsong et al., 2012). To measure supply chain uncertainty and risk accurately, the scale of supply chain uncertainties and risks was adopted and developed from previous studies (Murugesan et al., 2013; Rodrigues et al., 2008; Simangunsong et al., 2012). Participants were invited to rate the impacts of supply chain uncertainty and risk in terms of their severity in their companies. The study tests the research framework shown in Figure 1. The exogenous variable is supply chain uncertainty and risk; its measurement scale is developed from previous studies (Murugesan et al., 2013; Simangunsong et al., 2012; Wang et al., 2014b) and consists of three dimensions - company-side uncertainty and risk, customer-side uncertainty and risk and environment uncertainty and risk - to measure supply chain uncertainty and risk in the Australian courier industry. The endogenous variable is logistics performance, whose measurement scale was empirically validated in Australian courier firms before its use in this study (Wang et al., 2015a). Based on the literature review and actual courier operations, logistics performance is measured by delivery performance, information accuracy, customer satisfaction and freight safety (Holmberg, 2000; Lai, 2004). There is debate about the impacts of supply chain uncertainty and risk: Saminian-Darash and Rabinow (2015) argued that uncertainty may bring positive impacts in the future, yet supply chain uncertainty and risk remains an issue in the logistics and supply chain industry (Prater, 2005; Lee, 2002; Sanchez-Rodrigues et al., 2010).
Uncertainty and risk in the supply chain may lead to poor logistics performance (Christopher and Lee, 2004; Simangunsong et al., 2012). For example, Sanchez-Rodrigues et al. (2010) examined transport-related uncertainty and found that the main drivers impacting sustainability and transport operations are delays, variable demand/poor information, delivery constraints and insufficient supply chain integration. We therefore propose that there is a significant relationship between supply chain uncertainty and risk and logistics performance, and we test the conceptual model in the Australian courier industry. The partial least squares approach to structural equation modelling is used for the empirical data analysis. We follow the procedures of Hair (2010) to validate the measurement models and the structural model. The measurement models are used to assess the reliability and validity of the scale items, and the proposed hypotheses are tested in the structural model. Confirmatory factor analysis (CFA) is conducted to validate the measurement models. Path analysis is used to validate the relationship between supply chain uncertainty and risk and logistics performance; it is the basis of structural equation modelling and is a technique for estimating the unknown parameters of a system of simultaneous equations (Lowery and Gaskin, 2014). The measurement models of supply chain uncertainty and risk and logistics performance are drawn from previous studies (Fawcett and Cooper, 1998; Murugesan et al., 2013; Pichet and Shinya, 2008; Simangunsong et al., 2012; Wang et al., 2014b, 2015a).

The instrument

To ensure the reliability and validity of the instruments, instrument development focused on the courier industry. An extensive literature review was conducted to identify supply chain uncertainty and risk variables.
In addition, a pilot study was undertaken to refine the variables and questionnaire with supply chain and logistics academics and managers who have extensive experience in the transport and logistics industry. Based on previous studies, supply chain uncertainty and risk have very similar impacts on logistics performance; moreover, they are inseparable, and managers often notice and manage them simultaneously in a real-world environment. A seven-point Likert scale was used to assess the impacts of supply chain uncertainty and risk in terms of their severity in the company, where "1" represented "No problem" and "7" represented "Very severe problem". The perception of logistics performance was measured on a Likert scale ranging from 1 "strongly disagree" to 7 "strongly agree".

Data

According to the latest business transport report of the Australian Bureau of Statistics, businesses employed 80,000 persons in the postal and courier pickup and delivery services industry subdivision at end June 2011 (Australian Bureau of Statistics, 2012). The Australian courier industry is thus relatively small compared to other traditional industries. We surveyed 98 Australian courier companies, using an online survey for data collection. A total of 229 responses were recorded on the website, of which 162 surveys were fully completed; the 67 incomplete surveys were deleted from the data set. In the sample, 80 respondents (49 per cent) are general/branch/operations managers, 27 (17 per cent) are sales/customer service/other managers and 14 (9 per cent) are supervisors/team leaders, so 121 responses (75 per cent) come from management or supervisory roles in the Australian courier industry. The participants come from all over Australia; the top three states are Victoria (66 responses, 41 per cent), New South Wales (40 responses, 25 per cent) and Queensland (14 responses, 9 per cent).
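The sample figures reported above can be cross-checked with a few lines of arithmetic; the counts below are the ones given in the text, and the role labels are shortened for readability.

```python
# Cross-checking the sample counts reported in the text.
recorded, completed = 229, 162
incomplete = recorded - completed
print(incomplete)  # 67 incomplete surveys deleted from the data set

roles = {
    "general/branch/operations manager": 80,    # 49 per cent
    "sales/customer service/other manager": 27,  # 17 per cent
    "supervisor/team leader": 14,                # 9 per cent
}
managerial = sum(roles.values())
share = round(100 * managerial / completed)
print(managerial, share)  # 121 respondents, 75 per cent
```

The per-role percentages (49, 17 and 9 per cent) likewise follow from dividing each count by the 162 completed surveys.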
In total, 107 participants (66 per cent) in this survey have more than five years of experience in the transport/supply chain/logistics industry. Data examination is an initial stage of data analysis: we screen the data, evaluate the impact of different types of data and test the assumptions underlying multivariate techniques (Hair, 2010). Analysing empirical data using standardised procedures is important for successful quantitative research (Dornyei, 2007). This section presents findings from the survey and research models. The questionnaire measures supply chain uncertainty and risk under the different categories, and the top five supply chain uncertainties and risks are identified in each category; the results are shown in Table II. The top supply chain uncertainties and risks may point to potential problems in the Australian courier industry. Table III provides the descriptive statistics, Cronbach's α values, composite reliability and average variance extracted values. The validity of the constructs is further tested by a CFA in a path model.

Top supply chain uncertainties and risks

The descriptive findings identify the top five supply chain uncertainties and risks in each category: company-side, customer-side and environment uncertainty and risk. For example, delay in pickup and delivery is the top company-side uncertainty and risk in the Australian courier industry. Overall, the top supply chain uncertainties and risks in terms of their impact on logistics performance in the courier industry are delays due to customers' mistakes, road congestion/closures, higher customer expectations, unstable fuel prices, delays in pickup/delivery, labour/driver shortage, incorrect delivery information and delay or unavailability of delivery information. The results are in line with previous studies (Sanchez-Rodrigues et al., 2009).
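As a sketch of how the statistics reported in Table III are computed, the following uses hypothetical standardized factor loadings (not the paper's values) to derive composite reliability (CR) and average variance extracted (AVE), and simulated item scores consistent with those loadings to derive Cronbach's α; the cut-offs follow the conventions cited in the text (CR > 0.7, AVE > 0.5).

```python
import numpy as np

# Hypothetical standardized loadings for a four-item construct.
loadings = np.array([0.82, 0.85, 0.79, 0.88])
errors = 1 - loadings**2  # error variance of each standardized item

# Composite reliability and AVE from the loadings.
cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())
ave = (loadings**2).mean()

# Cronbach's alpha from simulated item scores generated by the same loadings.
rng = np.random.default_rng(1)
factor = rng.normal(size=500)
items = loadings[:, None] * factor + np.sqrt(errors)[:, None] * rng.normal(size=(4, 500))
k = len(loadings)
alpha = k / (k - 1) * (1 - items.var(axis=1).sum() / items.sum(axis=0).var())

print(round(cr, 3), round(ave, 3))  # CR above 0.7 and AVE above 0.5 pass the cut-offs
```

With weaker loadings, all three statistics fall, which is why low values flag unreliable scale items.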
Overall, environment uncertainty and risk had the highest average mean value (2.472), followed by customer-related risk (2.406); the greatest impact of supply chain uncertainty and risk thus came from outside the company in the Australian courier industry. According to the survey, most Australian courier firms understand the impact of company-side risk, and managers try to deal with the internal aspects, so that these, including logistics and information risk, have less impact than external aspects such as customer and environment risk. Operating cost could be a challenge in the Australian courier industry. In addition, managers may need to focus on incidents including delays in pickup/delivery, customer complaints and damaged/lost freight in the company.

Reliability and validity

Table III presents the descriptive statistics, reliability and validity: factor loadings, t-values, means and standard deviations for the items, and Cronbach's α, composite reliability and average variance extracted values for the constructs. Reliability is an assessment of the degree of consistency between multiple measurements of a variable (Hair, 2010). Of the two reliability test methods, test-retest and reliability coefficient, this study applies the reliability coefficient, Cronbach's α, to test the reliability of the scale. Reliability is demonstrated by composite reliability greater than 0.700; in this study, all CR values are greater than 0.9. Validity indicates the degree of accuracy of measurements. Convergent validity assesses the degree to which two measures of the same concept are correlated (Hair, 2010); high correlations are required to ensure convergent validity, with values greater than 0.7 considered satisfactory. In contrast, discriminant validity is the degree to which two conceptually similar concepts are distinct (Hair, 2010).
This indicates that the scale is sufficiently different from other, similar concepts; normally a correlation below 0.7 is considered satisfactory for discriminant validity (Hair, 2010). In this study, AVE values are greater than 0.500 and communalities are greater than 0.500. Discriminant validity is demonstrated by the square root of the AVE being greater than any of the inter-construct correlations (Hair et al., 2012).

Path model

The path model results are presented in Figure 2. A confidence interval indicates how reliable survey results are; in applied practice, confidence intervals are typically stated at the 95% confidence level (p<0.05 for t>1.96) (Zar, 1984). The structural relationships in the model were estimated using a bootstrap routine with 1,000 iterations. The bootstrapping significance thresholds are p<0.1 for t>1.65, p<0.05 for t>1.96 and p<0.01 for t>2.58 (Hair and Anderson, 2010). The path coefficient is used for testing the hypothesis. The standardised path estimate (β) represents the strength, direction and significance of the relationship between constructs; β is considered large, medium and small for values greater than 0.37, 0.24 and 0.1, respectively. The absolute value of a path coefficient should not be greater than 1; a negative value indicates a negative relationship between two concepts and a positive value a positive relationship. With a standardised coefficient (β) of -0.43 and a t-value of 5.48, the hypothesis is supported. The significant path coefficient shows that logistics performance is influenced by supply chain uncertainty and risk: there is a significant negative relationship between supply chain uncertainty and risk and logistics performance in the Australian courier industry. The paper presents a study of supply chain uncertainty and risk in the Australian courier industry.
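The two checks just described, the square-root-of-AVE comparison against inter-construct correlations and the bootstrap significance test for the path coefficient, can be sketched as follows. All numbers here are illustrative assumptions rather than the paper's data; only the sample size (162) and the built-in effect size (-0.43) are borrowed from the text, and with a single predictor the standardized path coefficient reduces to the correlation.

```python
import numpy as np

# 1) Fornell-Larcker check: sqrt(AVE) of each construct must exceed its
#    correlations with the other constructs (hypothetical values).
ave = {"uncertainty_risk": 0.62, "logistics_performance": 0.58}
inter_construct_corr = 0.43
discriminant_ok = all(np.sqrt(v) > inter_construct_corr for v in ave.values())

# 2) Bootstrap significance of the standardized path coefficient.
rng = np.random.default_rng(0)
n = 162                                        # the study's completed-sample size
x = rng.normal(size=n)                         # stand-in uncertainty/risk score
y = -0.43 * x + rng.normal(scale=0.9, size=n)  # simulated negative effect on performance

def path_beta(xs, ys):
    # Standardized path coefficient; equals the correlation for one predictor.
    return np.corrcoef(xs, ys)[0, 1]

estimate = path_beta(x, y)
boot = [path_beta(x[idx], y[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(1000))]
t_value = abs(estimate) / np.std(boot)

print(discriminant_ok, t_value > 1.96)  # validity holds; path significant at p < 0.05
```

The t-value is the estimate divided by its bootstrap standard error, which is how the thresholds t > 1.65, 1.96 and 2.58 cited above are applied.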
Although courier services have become a fast-growing segment of logistics and transport, there are very few studies of the courier industry. This study clarifies the impact of supply chain uncertainty and risk on logistics performance, and assists both researchers and managers in understanding and managing supply chain uncertainty and risk in the courier industry. In addition, managers may use logistics performance to evaluate and monitor their performance as service providers. The study focuses on supply chain uncertainty and risk and logistics performance, and we found empirical evidence supporting the negative impact of supply chain uncertainty and risk on logistics performance in the Australian courier industry. Merschmann and Thonemann (2011) could not find a significant relationship between supply chain uncertainty and performance in one industry; Saminian-Darash and Rabinow (2015) argued that uncertainty may have a positive impact in the future; and Simangunsong et al. (2012) argued that supply chain uncertainty and risk may have negative impacts. In addition, the courier industry may need more attention to improve its efficiency (Chang and Yen, 2012). Therefore, it is important to examine and clarify the impacts of supply chain uncertainty and risk in the courier industry. The empirical results provide insights into supply chain risk management in the courier industry, and management may use the scales of supply chain uncertainty and risk and logistics performance to improve internal operations. This may enhance the overall efficiency and effectiveness of logistics performance in the courier industry. Based on the mean values, supply chain uncertainties and risks have been identified in the different categories. The top supply chain uncertainties and risks include delays due to customers' mistakes, road congestion/closures, higher customer expectations, unstable fuel prices and delays in pickup/delivery.
The results are consistent with Simangunsong et al. (2012) in that supply chain uncertainty influences logistics performance. Similar results were found by Aven (2012) and Beneplanc and Rochet (2011), in that supply chain risk may increase operating costs and affect performance. The study focuses on the Australian courier industry; the supply chain uncertainty and risk are categorised in terms of actual courier operations and the literature (Joseph, 2004; Manuj and Mentzer, 2008; Sanchez-Rodrigues et al., 2009; Sodhi and Tang, 2012). The courier industry has unique characteristics, including fast door-to-door delivery and customers being directly involved in the delivery process, which distinguish it from traditional logistics and transport businesses. Therefore, it is important to investigate the impacts of supply chain uncertainty and risk in the courier industry. The results confirmed that supply chain uncertainty and risk consists of three main dimensions - company-side uncertainty and risk, customer-side uncertainty and risk and environment uncertainty and risk - in the Australian courier industry, and that the measures have high reliability and validity. According to the empirical analysis, external supply chain uncertainty and risk, including customer-side and environment uncertainty and risk, have a higher severity than company-side uncertainty and risk in Australian courier companies. This directs managers to focus on external supply chain uncertainty and risk. For example, customers' mistakes are among the top supply chain uncertainties and risks on the customer side. Rangel et al. (2015) identified customer risk in the delivery process. Management may consider how to improve communication with customers and provide additional instructions to guide customers in using the services.
For company-side uncertainty and risk, managers may improve service flexibility, customer service and quick response to unexpected events and/or problems. This category includes risks internal to the organisation and supply chain (Christopher and Peck, 2004; Rangel et al., 2015). For environment uncertainty and risk, the survey results show that road congestion is the top supply chain uncertainty and risk in this category; it has become a major challenge in the Australian courier industry. The results are in line with a recent report in THE AGE: the cost of road congestion is set to triple to more than $9 billion a year by 2031 in Victoria, Australia (Carey, 2015). Environment uncertainty and risk covers risks external to the supply chain (Christopher and Peck, 2004; Manuj and Mentzer, 2008). Moreover, courier firms rely heavily on road transport. Therefore, it is important to pay attention to external supply chain uncertainty and risk in the courier industry. There is a close relationship between supply chain risks and performance in the transport industry (Naim et al., 2010; Sanchez-Rodrigues et al., 2009). In addition, logistics performance plays a vital role in courier service performance; for example, delivery performance may directly influence customer satisfaction and quality of service (Pichet and Shinya, 2008). Moreover, logistics performance measurement has grown in popularity in logistics and supply chain management. In particular, the logistics performance of courier firms may provide valuable insights into 3PL management (Bolumole, 2003). In this study, the logistics performance assessment is based on the actual operations of courier firms; we consider factors including customer, freight, information and delivery performance (Fawcett and Cooper, 1998; Jayaram and Tan, 2010; Lai, 2004; Pichet and Shinya, 2008).
This may provide a way to assess 3PL providers' performance and help management to monitor and control courier service performance (Pichet and Shinya, 2008). According to the data analysis, the results have high reliability and validity. We find that supply chain uncertainty and risk has a statistically significant negative relationship with logistics performance, implying negative impacts on customer satisfaction, operating costs, on-time delivery, freight security and information accuracy. The findings provide directions for implementing strategies to manage supply chain uncertainty and risk and improve logistics performance. Further research should seek alternative configurations and solutions for managing supply chain uncertainty and risk. Because the study focuses on the Australian courier industry, the implications of the findings for other sectors may be limited. However, the research models may be examined and validated in different contexts, which would help refine the study and enrich the supply chain literature. In addition, it may encourage both academics and practitioners to understand and pay attention to supply chain uncertainty and risk management in the courier industry.
[SECTION: Purpose] "Work life has undergone tremendous changes within the last 100 years" (Frese, 2008, p. 397). The increase in work complexity, global competition and the dissolution of the unity of work in time and space linked to rapid innovation mean that employees deal with an increasingly demanding work environment (Frese, 2008). Additionally, advances in technology allow employees to use cellphones or laptops with high-speed data connections at any place, making it possible for them to work at any time (e.g. van Beek et al., 2012). Overall, these changes together with technological development might encourage employees to work harder and for longer hours (e.g. van Wijhe et al., 2011). In the scientific literature, two different types of working hard have been distinguished: an intrinsically negative form named workaholism and an intrinsically positive form named work engagement (e.g. Schaufeli et al., 2008b). A large body of research has shown that work engagement is positively related to various work outcomes and indicators of employees' well-being, whereas workaholism generally displays negative relationships with the same variables (e.g. Schaufeli and Bakker, 2004; Taris et al., 2010). More recently, some research has begun to investigate the concomitant effects of these two types of working hard on these outcomes (e.g. Del Libano et al., 2012; Schaufeli et al., 2008b). In line with this perspective, the first aim of our research is to analyze the relationships of work engagement and workaholism altogether with different indicators of well-being (i.e. job satisfaction, perceived stress and sleep problems). Specifically, we expected that workaholism would be related to low well-being, as indicated by lower levels of job satisfaction and higher levels of sleep problems and perceived stress, whereas work engagement would be associated with high well-being (i.e.
higher levels of job satisfaction, lower levels of perceived stress and sleep problems). Second, the present study examined the influence of perceived organizational support, perceived supervisor support and perceived coworker support on these relationships. While prior work has indicated that the organization, supervisor, and coworkers represent valuable sources of support that have a positive influence on employees' well-being (Ng and Sorensen, 2008), a rather unexplored issue is how these three types of work-related social support might have a concomitant impact on the two forms of working hard, which, in turn, will influence employees' well-being. Indeed, according to Ng and Sorensen (2008), "it may be unwarranted for researchers to assume the effects of perceptions of different sources of support on employees are similar" (p. 259). These authors stressed the importance of this issue to scholars and recommended examining the specific effect of each source of social support, thus including more than one source of work-related social support in future studies. In line with this suggestion, we aimed to investigate whether the effects of three different types of work-related social support on employees' well-being are mediated by work engagement and workaholism. In doing so, our research contributes to the work engagement and workaholism literature by examining a more comprehensive model including both antecedents and consequences of these two constructs. Additionally, the present study helps to identify which source of support is the most effective in increasing employees' well-being through the two different types of working hard. Addressing this issue may provide a better understanding of the specific effect of each type of work-related social support, which might help lead to theory development (Whetten, 1989).
Furthermore, at the practical level, identifying differences in their effects is of utmost importance as it will help practitioners to implement more appropriate interventions in order to enhance employees' well-being. Nowadays, there is still debate regarding the definition of workaholism. However, in our research, we refer to Schaufeli et al. (2009a) who define workaholism as "the tendency to work excessively hard and being obsessed with work, which manifests itself in working compulsively" (p. 322). We adopted this definition because it comprises the two characteristics of workaholism (i.e. working excessively and having an obsessive inner drive) that scholars identified as key and recurrent elements in the various definitions of this construct (e.g. Guglielmi et al., 2012; McMillan and O'Driscoll, 2006). Typical workaholic employees spend a great amount of their time working (van Beek et al., 2011). They experience a strong and uncontrollable inner drive, need, or compulsion to work hard which is not due to external factors such as financial factors and career perspectives (Schaufeli et al., 2006). More precisely, building on the self-determination theory (Deci and Ryan, 1985), van Beek et al. (2012) showed that workaholic employees are driven by an introjected regulation (i.e. a form of extrinsic motivation). Introjected regulation is described as "a product of an internalization process in which individuals rigidly adopt external standards of self-worth and social approval without fully identifying with them" (van Beek et al., 2012, p. 33). Recently, based on Higgins's regulatory focus theory (RFT; Higgins, 1997), van Beek et al. (2014) also demonstrated that workaholic employees have higher levels of prevention focus, meaning that they are sensitive to the absence or presence of negative outcomes and use avoidance strategies.
Taken together, these results support the idea that workaholic employees work hard to avoid negative feelings such as guilt, shame, irritability, and anxiety or to increase feelings of pride (e.g. van Beek et al., 2012, 2014). Workaholic employees by definition work hard and during long and excessive hours (van Beek et al., 2011). Furthermore, these employees are unable to disengage from their work and think about it continually, even when they are not working (van Beek et al., 2011). Consequently, they have less opportunity to recover from their work, such as by relaxing, and therefore might have a higher tendency to deplete their resources (Van Wijhe et al., 2014). In line with this view, prior empirical studies have shown that workaholism is related to negative outcomes for employees, such as lower job satisfaction (e.g. Del Libano et al., 2012; van Beek et al., 2014), lower life satisfaction (Bonebright et al., 2000), and poorer social relationships outside their work (Schaufeli et al., 2008b). Workaholic employees have also been found to be less happy (Schaufeli et al., 2009b), to suffer more from health complaints, and to report lower levels of self-perceived health (e.g. Schaufeli et al., 2006), and higher levels of exhaustion (e.g. Taris et al., 2005) and sleep problems (e.g. Kubota et al., 2010, 2012). Conversely, an enthusiastic involvement in the job, called work engagement, might also explain the employees' propensity to work hard. Work engagement is defined as "a positive and fulfilling work-related state that is characterized by vigor, dedication and absorption" (Schaufeli et al., 2002a, p. 72). Among these three dimensions, vigor consists of high levels of energy, mental resilience while working, and persistence when facing difficulties (Schaufeli et al., 2002a). Dedication refers to being involved in one's work and experiencing a sense of significance, inspiration, pride, and challenge at work (Schaufeli et al., 2002a). 
Absorption is characterized by being fully concentrated and engrossed in one's work, whereby time passes quickly and people have difficulty detaching from their job (Schaufeli et al., 2002a). Work engagement (Schaufeli et al., 2002a) has been shown to be driven by intrinsic work motivation. Work engaged employees thus consider their work as interesting, enjoyable, and satisfying (van Beek et al., 2012). Recently, based on RFT (Higgins, 1997), work engagement has also been positively related to having a promotion focus (van Beek et al., 2014), meaning that work engaged employees are sensitive to the absence or presence of positive outcomes. This finding also indicates that work engaged employees use approach strategies and therefore are likely to use an approach that "matches to their work goals that represent their hopes, wishes, and aspirations" (van Beek et al., 2014, p. 56). In sum, engaged employees have a sense of energetic connection with their work, are happily engrossed in their job, and do not feel guilty when they are not working (Schaufeli et al., 2008b). In line with this perspective, several studies have indicated that work engagement is associated with various positive outcomes for both organizations and employees. For example, engaged employees have been shown to be more satisfied with their job (e.g. Del Libano et al., 2012; van Beek et al., 2014), to demonstrate more personal initiative (Sonnentag, 2003), to have less intention to quit the organization (Schaufeli and Bakker, 2004; van Beek et al., 2014), and to perform better than non-engaged employees (e.g. Salanova et al., 2005). Work engagement has also been found to be related to higher life satisfaction and better mental and physical health (Schaufeli and Salanova, 2007; Schaufeli et al., 2008b). Furthermore, results of prior studies showed that work engagement is negatively associated with various indicators of low well-being such as suffering from psychosomatic symptoms (e.g.
headaches, cardiovascular problems; Koyuncu et al., 2006; Schaufeli et al., 2008b), exhaustion from work (e.g. Koyuncu et al., 2006), and sleep problems (Hallberg and Schaufeli, 2006). Thus, in short, work engagement and workaholism characterize two different forms of psychological states and have various associations with different work attitudes and indicators of well-being. While the former is related to positive outcomes, the latter is generally associated with negative ones. In line with these previous empirical findings and arguments, we posited that:

H1. Work engagement is positively related to (a) job satisfaction and negatively related to (b) perceived stress and (c) sleep problems.

H2. Workaholism is negatively related to (a) job satisfaction and positively related to (b) perceived stress and (c) sleep problems.

Social support

According to the job demands-resources model (JD-R) (Demerouti et al., 2001; Schaufeli and Bakker, 2004), two different types of work conditions, namely job demands and job resources, influence employees' well-being via a dual process, i.e. a health impairment process (linking job demands to negative outcomes through burnout) and a motivational process (linking job resources to positive outcomes through work engagement). Job demands refer to physical, psychological, social, or organizational aspects of the job that require sustained physical and/or psychological effort (e.g. time pressure, emotional demands, physical demands). Job resources are defined as the physical, psychological, social or organizational aspects of the job that reduce job demands, are functional for achieving work goals or stimulate personal growth, learning and development (e.g. supportive work environment, supervisor support, coworker support and feedback; Demerouti et al., 2001).
More precisely, the JD-R model describes a positive motivational process in which job resources such as social support are able to enhance work engagement which, in turn, has positive consequences for employees and organizations. In line with this perspective, Schaufeli and Bakker (2004) have suggested that social support is able to drive an intrinsic motivational process by satisfying employees' needs for autonomy and belonging, as well as an extrinsic motivational process by increasing the probability of reaching work goals. Supervisor and coworker support, for instance, might play an intrinsically motivating role by fulfilling employees' need to belong (Xanthopoulou et al., 2008). Furthermore, coworker support might create among employees the conviction that they will receive help from their colleagues when needed, which might increase their confidence that they will achieve their work goals (Xanthopoulou et al., 2008); in doing so, coworker support might also play an extrinsically motivating role. Empirical studies investigating the positive influence of social support on work engagement have focused mostly on supervisor and coworker support (e.g. Korunka et al., 2009). Accordingly, work engagement has been found to be positively predicted by both perceived supervisor support (e.g. Gillet et al., 2013) and perceived coworker support (e.g. Schaufeli and Bakker, 2004; Xanthopoulou et al., 2008) in several studies. In contrast, the influence of perceived organizational support, defined as employees' global beliefs that the organization cares about their well-being and values their contributions (Eisenberger et al., 1986), has been less investigated. Yet, numerous studies have demonstrated the positive influence of perceived organizational support on employees' well-being. Perceived organizational support has, for example, been shown to increase employees' job satisfaction and to reduce their stress (e.g.
Eisenberger and Stinglhamber, 2011; Rhoades and Eisenberger, 2002). Furthermore, perceived organizational support has been positively associated with work engagement in some prior studies (e.g. Caesens and Stinglhamber, 2014; Kinnunen et al., 2008; Sulea et al., 2012). Given this empirical evidence, it seems reasonable to suggest that perceived organizational support, perceived supervisor support, and perceived coworker support can positively influence work engagement. However, to the best of our knowledge, no study has examined the positive effects of these three forms of support altogether on work engagement. On the other hand, very little research has examined the relationship between social support and the negative type of working hard, i.e. workaholism. This literature suggests that social support, as a general resource, is negatively linked to workaholism (Schaufeli et al., 2008a). The conservation of resources theory (COR; Hobfoll, 1985, 2002) helps to explain how work-related social support may be negatively related to workaholism. A central tenet of COR theory is that people "with greater resources are less vulnerable to resource loss and more capable of resource gain" (Hakanen and Roodt, 2010, p. 89). In line with this principle, social support might both help employees cope with stressful events such as juggling multiple roles (Nicklin and McNall, 2013) and prevent them from resource depletion (Somech and Drach-Zahavy, 2013). Therefore, as an energizing resource, social support might help employees temper their tendency to work excessively hard. In line with this view, previous authors have suggested that supervisor support and coworker cohesion are related to lower levels of compulsion to work (Johnstone and Johnston, 2005). In the same vein, Taris et al. (2010) have also argued that providing supervisors with effective training might help to raise employees' awareness of the meaning, aim, and relevance of their work.
This might therefore help to reduce employees' compulsion to work hard (Taris et al., 2010). Furthermore, according to the literature on perceived organizational support (Eisenberger and Stinglhamber, 2011), high levels of perceived organizational support indicate that the organization cares about employees' well-being and is willing to extend itself to provide help for employees when they need it (George et al., 1993). Therefore, supportive organizations might be more prone to offer assistance programs to workaholic employees. It is also reasonable to think that organizations that highly value their human capital would be more inclined to implement individual-level interventions in order to help workaholic employees (Taris et al., 2010). In short, while the organization (e.g. Eisenberger et al., 1986), supervisor (e.g. Eisenberger et al., 2002), and coworkers (e.g. Bishop et al., 2000) have been shown to represent valuable sources of support, to the best of our knowledge, no previous study has included these three foci of support at once in order to investigate their specific impact either on work engagement or on workaholism. Nevertheless, Ng and Sorensen (2008) have stressed, in their meta-analysis, that the effects on employees of different sources of social support (e.g. perceived organizational support, perceived supervisor support, and perceived coworker support) are very dissimilar. For instance, these authors have shown that perceived supervisor support is more strongly related to several work-related outcomes (i.e. job satisfaction, affective commitment and turnover intentions) than perceived colleague support. According to Ng and Sorensen (2008), each source of social support does not necessarily have the same consequences and differs in terms of strength of associations with its outcomes. Therefore, these authors recommended that researchers carefully examine the effects of each source of support in their studies.
In line with these recommendations and our theoretical model presented in Figure 1, our study aims to explore the impact of perceived organizational support, perceived supervisor support, and perceived coworker support on both work engagement and workaholism which, in turn, will influence various indicators of well-being (i.e. job satisfaction, perceived stress and sleep problems). Based on Ng and Sorensen's (2008) recommendations and previous empirical findings, we posited the following hypotheses:

H3. Work engagement mediates the relationships between work-related social support and (a) job satisfaction, (b) perceived stress, and (c) sleep problems.

H4. Workaholism mediates the relationships between work-related social support and (a) job satisfaction, (b) perceived stress, and (c) sleep problems.

Sample and procedure

A total of 425 PhD students of a Belgian university responded to an online questionnaire related to well-being at work (a response rate of approximately 21.25 percent). Due to missing data, only 343 of these 425 questionnaires were usable and thus retained in the final sample. The external link to the online questionnaire was sent in an e-mail describing the aim of the questionnaire, and PhD students were assured of the anonymity and confidentiality of their responses. This specific population seemed particularly relevant for assessing the two forms of working hard, i.e. work engagement and workaholism. Indeed, PhD students' work is characterized by long weekly work hours and sustained concentration and cognitive effort. Furthermore, this population seems to be exposed to multiple demands resulting from their job such as research, academic coursework, competition and institutional demands (e.g. Myers et al., 2012). Of this sample, 42.86 percent were males and 57.14 percent were females.
On average, participants were 28.27 years of age (SD=4.43), had been employed by the university for 3.02 years (SD=2.07) and had been working with their advisor for 3.30 years (SD=2.20).

Measures

Because our participants spoke French, scales used in the questionnaire were translated from English to French using the translation-back-translation procedure recommended by Brislin (1980). However, when available, we used validated French versions of the scales.

Work-related social support

Perceived organizational support was measured using a short four-item version of the Survey of Perceived Organizational Support (SPOS) (Eisenberger et al., 1986). These four items adequately covered the two fundamental aspects of perceived organizational support, namely "valorization of employees' contributions" and "being concerned about employees' well-being". According to Rhoades and Eisenberger (2002), because of the high internal consistencies and the unidimensionality of the SPOS, using a short version is not problematic. A sample item is: "[Name of the organization/university] really cares about my well-being". Perceived supervisor support was measured using a four-item adapted version of the SPOS, following Rhoades et al. (2001) and Eisenberger et al. (2002), replacing the word "organization" with the term "advisor". A sample item is "Even if I did the best job possible, my advisor would fail to notice" (reverse item). Prior empirical research indicated good psychometric properties of this perceived supervisor support scale (e.g. Rhoades et al., 2001). Perceived coworker support was operationalized using a four-item adapted version of the SPOS, following Bishop et al. (2000) and Ladd and Henry (2000). A sample item is "My coworkers show very little concern for me" (reverse item). Prior studies using this perceived coworker support scale showed good psychometric properties (e.g. Ladd and Henry, 2000).
Participants responded on a seven-point Likert-type scale ranging from 1 ("Strongly disagree") to 7 ("Strongly agree").

Work engagement

We used the nine-item short version of the Utrecht Work Engagement Scale (UWES) (Schaufeli et al., 2002a) to assess work engagement. The scale includes three dimensions: vigor (three items; e.g. "At my work, I feel bursting with energy"), dedication (three items; e.g. "I am enthusiastic about my job"), and absorption (three items; e.g. "I feel happy when I am working intensely"). The response scale ranged from 1 ("Never") to 7 ("Always").

Workaholism

We measured workaholism using the validated ten-item short version (Del Libano et al., 2010) of the Dutch Work Addiction Scale (DUWAS; Schaufeli et al., 2006), which includes the two dimensions of the construct, i.e. working excessively and working compulsively. Sample items are: "I find myself continuing work after my co-workers have called it quits" (working excessively; five items) and "I often feel that there's something inside me that drives me to work hard" (working compulsively; five items). The response scale ranged from 1 ("Never") to 4 ("Always").

Job satisfaction

Job satisfaction was measured with four items from Eisenberger et al. (1997). A sample item is: "All in all, I am very satisfied with my current job". The response scale ranged from 1 ("Strongly disagree") to 7 ("Strongly agree").

Perceived stress

We measured perceived stress with four items from the Perceived Stress Scale (PSS) (Cohen et al., 1983). A sample item is: "In the last month, how often have you felt difficulties were piling up so high that you could not overcome them?". The response scale ranged from 1 ("Never") to 5 ("Very often").

Sleep problems

We measured sleep problems with four items from the Jenkins Sleep Quality Index (JSQ) (Jenkins et al., 1988) assessing the most common sleep problems (i.e.
difficulties falling asleep, waking up during the night, waking up and having difficulties falling asleep again, and waking up tired). The response scale, indicating how often the stated condition occurred during an average month, ranged from 1 ("Not at all") to 6 ("22 to 31 days/month"). A sample item is: "I have had difficulties to fall asleep".

Control variables

Gender, age, tenure in the university and tenure with the advisor were measured.

Discriminant validity

In order to evaluate the distinctiveness of the eight concepts included in our study (i.e. perceived organizational support, perceived supervisor support, perceived coworker support, work engagement, workaholism, job satisfaction, perceived stress, and sleep problems), we conducted confirmatory factor analyses using Mplus 6.12 (Muthen and Muthen, 1998-2011). Because we used the same items to measure each type of work-related social support (i.e. perceived organizational support, perceived supervisor support and perceived coworker support), we allowed the error covariances of these content-equivalent items to correlate freely. Additionally, due to a considerable content overlap among some items included either in the work engagement scale or the workaholism one, we allowed the error covariances of some of the paired items to correlate freely, as had been previously done in the validation studies of these scales (Del Libano et al., 2010; Schaufeli et al., 2002b). Based on the χ² difference test (Bentler and Bonett, 1980), results of the CFA indicated that the hypothesized measurement model fitted the data well and was superior to all more constrained models. Indeed, as displayed in Table I, the hypothesized model had a better fit than the alternative measurement models. Because our data were self-reported, we also conducted the Harman single-factor test (Podsakoff et al., 2003) by constraining all items to load on a single factor model.
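The χ² difference test used for these model comparisons works on the difference between the χ² values of two nested models, which itself follows a χ² distribution with degrees of freedom equal to the difference in model degrees of freedom. As an illustration only (this is not the authors' Mplus code), the test can be sketched in plain Python; the survival function below uses the closed form that is valid for an even number of degrees of freedom, which suffices for the Δdf=4 comparison the article reports:

```python
import math

def chi2_sf_even(x, df):
    """Tail probability P(X > x) for a chi-square variable with an
    EVEN number of degrees of freedom (exact closed form)."""
    assert df % 2 == 0 and df > 0
    half = x / 2.0
    # P(X > x) = exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(df // 2))

def chi2_difference_test(chi2_less_constrained, df_less_constrained,
                         chi2_nested, df_nested):
    """Compare two nested models; returns (delta chi2, delta df, p-value)."""
    d_chi2 = chi2_nested - chi2_less_constrained   # nested model fits worse
    d_df = df_nested - df_less_constrained
    return d_chi2, d_df, chi2_sf_even(d_chi2, d_df)

# Comparison reported for the structural models:
# hypothesized chi2(950)=1730.55 vs alternative model 4 chi2(946)=1658.43
d_chi2, d_df, p = chi2_difference_test(1658.43, 946, 1730.55, 950)
```

Running this gives Δχ²(4)=72.12 with a p-value far below 0.001, matching the conclusion that the less constrained model fits significantly better.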
Results indicated that the fit of the one-factor model was very poor. Furthermore, in line with Podsakoff et al.'s (2003) and Richardson et al.'s (2009) recommendations, we also tested a model wherein the items loaded both on their respective hypothesized latent constructs and on a common method factor. The results of this additional analysis indicated that the average variance explained by the common method factor was 10.89 percent. This value is less than half of the amount of method variance (25 percent) that Williams et al. (1989) refer to for self-reported studies. Furthermore, all items in the hypothesized eight-factor model displayed acceptable loadings. Indeed, items loaded on their respective factors with loadings ranging from 0.58 to 0.87 for perceived organizational support, from 0.67 to 0.93 for perceived supervisor support, from 0.67 to 0.92 for perceived coworker support, from 0.50 to 0.87 for work engagement, from 0.47 to 0.69 for workaholism, from 0.84 to 0.94 for job satisfaction, from 0.69 to 0.78 for perceived stress, and from 0.55 to 0.87 for sleep problems. Based on all of this evidence, the eight variables of our model were treated as separate constructs in our subsequent analyses in order to test our hypotheses.

Relationships among variables

Means, standard deviations, internal reliabilities, and correlations among our variables are displayed in Table II. All Cronbach's α's were above the 0.70 criterion established by Nunnally (1978).

Test of hypotheses

Following Becker's (2005) recommendations, we only statistically controlled for socio-demographic variables with a significant correlation with the dependent variables in our model (i.e. mediators and outcomes). Therefore, we introduced organizational tenure and tenure with the supervisor as additional exogenous variables predicting workaholism and job satisfaction, respectively.
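The Cronbach's α reliabilities reported above can be computed directly from raw item scores: α = k/(k-1) × (1 - Σ item variances / variance of the total score). A minimal sketch on made-up Likert responses (the data below are illustrative, not from the study):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha; scores is a list of rows, one row of item
    responses per respondent."""
    k = len(scores[0])                                   # number of items
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical responses of five people to a three-item 1-7 Likert scale
responses = [[6, 5, 6], [4, 4, 5], [7, 6, 7], [3, 2, 3], [5, 5, 6]]
alpha = cronbach_alpha(responses)
```

With perfectly redundant items α equals 1; values above the 0.70 criterion cited in the text indicate acceptable internal consistency.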
Furthermore, tenure with the supervisor and gender were controlled for perceived stress, and gender was controlled for sleep problems. It is all the more important to control for these variables given that past research also indicated that they have an impact on our dependent variables. More precisely, tenure in the organization has been found to be negatively associated with workaholism (Taris et al., 2005) and job satisfaction (Duffy et al., 1998). Scholars also suggested that it is important to control for tenure with the supervisor given that possible temporal effects might explain the influence of the relationship with the supervisor on outcomes (e.g. Wang et al., 2013). Finally, prior research indicated that women generally report higher levels of stress (e.g. Gyllensten and Palmer, 2005) and sleep problems than men (e.g. Ohayon, 1996). Therefore, as recommended by Spector and Brannick (2011), we included these control variables in the subsequent analyses, based on reasonable theoretical or empirical evidence that these socio-demographic variables are linked to variables included in our research model. Using Mplus 6.12 (Muthen and Muthen, 1998-2011), we conducted SEM analyses in order to test our hypotheses. Because of the different response scales, all item responses were standardized prior to these analyses. Then, we compared the fit of our hypothesized model with nine alternative models. Table III displays the fit indices of all these models. As shown in this table, results indicated that the hypothesized model has a good fit to the data, as indicated by χ²(950)=1730.55, a CFI of 0.90, an SRMR of 0.10 and an RMSEA of 0.05.
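The standardization of item responses mentioned above, applied because the scales use different response formats (1-4, 1-5, 1-6 and 1-7), amounts to z-scoring each item so that all items share a mean of 0 and a standard deviation of 1. A minimal sketch:

```python
from statistics import fmean, stdev

def standardize(values):
    """Return z-scores: mean 0, (sample) standard deviation 1."""
    m, s = fmean(values), stdev(values)
    return [(v - m) / s for v in values]

# A 1-4 workaholism item and a 1-7 engagement item end up on a common metric
z = standardize([1, 2, 2, 3, 4, 4, 3, 2])
```

After this transformation, parameter estimates are no longer distorted by the arbitrary width of each response scale.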
However, based on the χ² difference test (Bentler and Bonett, 1980), some χ² changes were significant and revealed that the alternative model 4, which adds paths between perceived organizational support and job satisfaction, between perceived stress and sleep problems, and between perceived supervisor support and job satisfaction, was superior to the hypothesized model and the alternative models 1, 2 and 3 (for more details, see Table III). Therefore, this alternative model 4 was retained as the best fitting model (χ²(946)=1658.43, RMSEA=0.05, SRMR=0.09 and CFI=0.91). Standardized parameter estimates of this alternative model 4 are presented in Figure 2. For the sake of clarity, the effects of the control variables are detailed in the text. Organizational tenure was related to job satisfaction (γ=-0.12, p<0.05) but not to workaholism (γ=0.14, ns). Tenure with the supervisor was related to job satisfaction (γ=0.10, p<0.05) but not to workaholism and perceived stress (γ=-0.05, ns; γ=0.01, ns). Gender had a significant impact on perceived stress and sleep problems (γ=-0.10, p<0.05; γ=-0.14, p<0.01, respectively), indicating that men perceive less stress and suffer less from sleep problems than women. Controlling for these variables, results indicated that work engagement is positively associated with job satisfaction (β=0.54, p<0.001) and negatively with perceived stress (β=-0.29, p<0.001), but not with sleep problems (β=-0.11, ns), supporting H1(a) and H1(b). In contrast, results indicated that workaholism is negatively related to job satisfaction (β=-0.19, p<0.001), and positively related to perceived stress (β=0.36, p<0.001) and sleep problems (β=0.37, p<0.001), providing support for H2(a), H2(b) and H2(c). Furthermore, perceived organizational support is positively related to work engagement (γ=0.13, p<0.05) but is not related to workaholism (γ=-0.10, ns).
However, perceived organizational support has direct effects on job satisfaction (γ=0.19, p<0.001), perceived stress (γ=-0.21, p<0.01) and sleep problems (γ=-0.17, p<0.01). Perceived supervisor support is also positively linked to work engagement (γ=0.35, p<0.001) but not to workaholism (γ=-0.05, ns) and has a direct positive impact on job satisfaction (γ=0.25, p<0.001). Additionally, results indicated that perceived coworker support is negatively related to workaholism (γ=-0.15, p<0.05), but not to work engagement (γ=0.01, ns). A bootstrapping analysis was performed on the final model (alternative 4; Preacher and Hayes, 2004) in order to test the unstandardized indirect effects. The results of this analysis indicated that the indirect effects of perceived organizational support on job satisfaction and perceived stress through work engagement are significant (indirect effect=0.07; BCa 95 percent CI=[0.006; 0.140] and indirect effect=-0.03; BCa 95 percent CI=[-0.078; -0.005], respectively), supporting H3(a) and H3(b). Furthermore, the indirect effects of perceived supervisor support on job satisfaction and perceived stress through work engagement are also significant (indirect effect=0.17; BCa 95 percent CI=[0.111; 0.237] and indirect effect=-0.09; BCa 95 percent CI=[-0.137; -0.049], respectively), supporting H3(a) and H3(b). Finally, results showed that the indirect effects of perceived coworker support on job satisfaction, perceived stress, and sleep problems through workaholism are significant (indirect effect=0.03; BCa 95 percent CI=[0.001; 0.066]; indirect effect=-0.04; BCa 95 percent CI=[-0.105; -0.003] and indirect effect=-0.04; BCa 95 percent CI=[-0.098; -0.002], respectively), providing support for H4(a), H4(b) and H4(c). The purpose of this study was to examine the relationships of workaholism and work engagement with various indicators of well-being (i.e. job satisfaction, perceived stress, and sleep problems).
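The bootstrapped indirect effects above follow the Preacher and Hayes (2004) logic: resample the data with replacement, re-estimate the a path (support → mediator) and the b path (mediator → outcome, controlling for support), and take percentiles of the resulting a×b products. The sketch below is a simplified percentile (not bias-corrected BCa) version on simulated data, with hypothetical variable names; it is not the model actually fitted in Mplus:

```python
import random
from statistics import fmean

def cov(u, v):
    """Population covariance of two equal-length sequences."""
    mu, mv = fmean(u), fmean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

def ab_paths(x, m, y):
    """a: OLS slope of M on X; b: slope of M in Y = b0 + b*M + c'*X."""
    a = cov(m, x) / cov(x, x)
    det = cov(m, m) * cov(x, x) - cov(m, x) ** 2
    b = (cov(y, m) * cov(x, x) - cov(y, x) * cov(m, x)) / det
    return a, b

def percentile_ci_indirect(x, m, y, n_boot=500, seed=7):
    """95 percent percentile bootstrap CI for the indirect effect a*b."""
    rng, n, est = random.Random(seed), len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        a, b = ab_paths([x[i] for i in idx],
                        [m[i] for i in idx],
                        [y[i] for i in idx])
        est.append(a * b)
    est.sort()
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# Simulated data: support -> engagement -> satisfaction, true indirect = 0.5*0.6
rng = random.Random(1)
support = [rng.gauss(0, 1) for _ in range(300)]
engagement = [0.5 * s + rng.gauss(0, 0.5) for s in support]
satisfaction = [0.6 * e + 0.2 * s + rng.gauss(0, 0.5)
                for s, e in zip(support, engagement)]
lo, hi = percentile_ci_indirect(support, engagement, satisfaction)
```

A confidence interval that excludes zero, as in the results above, is taken as evidence of a significant indirect (mediated) effect.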
Furthermore, the study was designed to explore the potential influence of various sources of support (perceived organizational support, perceived supervisor support, and perceived coworker support) on these relationships. To the best of our knowledge, this is the first study that tests the joint influence of work engagement and workaholism on perceived stress and sleep problems. Furthermore, this is the first research that investigates the effects of three different forms of work-related social support on work engagement and workaholism, which in turn influence employees' well-being. Our findings indicated that the associations of workaholism and work engagement with indicators of well-being run in opposite directions. More precisely, workaholism relates to negative indicators of well-being (i.e. lower levels of job satisfaction and higher levels of stress and sleep problems), whereas work engagement is associated with positive outcomes (i.e. higher levels of job satisfaction and lower levels of perceived stress). Our results corroborate prior research which found that workaholism is related to lower levels of job satisfaction (e.g. Del Libano et al., 2012), more health complaints (e.g. Schaufeli et al., 2006), and higher levels of sleep problems (e.g. Kubota et al., 2010). Furthermore, our findings are in line with prior studies which showed that work engagement is positively associated with higher levels of job satisfaction (e.g. van Beek et al., 2014) and better mental and physical health (e.g. Schaufeli et al., 2008b). Interestingly, our results indicated no significant impact of work engagement on sleep problems. Nevertheless, our findings replicate the significant negative correlation found by Hallberg and Schaufeli (2006) between work engagement and sleep disturbances, but fail to demonstrate that work engagement actually reduces sleep problems. This absence of effect can be explained in several ways.
Contrary to Hallberg and Schaufeli's (2006) study, our research considered the effects of workaholism and work engagement altogether on sleep problems, so that workaholism may account for the majority of the variance in sleep problems. Furthermore, we measured sleep problems with a different scale from that used by Hallberg and Schaufeli (2006). Concerning the question of which source of support influences work engagement and workaholism to eventually predict well-being, our results indicated that perceived organizational support, perceived supervisor support, and perceived coworker support are empirically distinct constructs that have different effects on work engagement and workaholism. More precisely, our results showed that work engagement partially mediated the relationship between perceived organizational support and both job satisfaction and perceived stress. Perceived organizational support has indeed a direct positive impact on job satisfaction and a direct negative impact on perceived stress and sleep problems. Work engagement was also found to mediate the influence of perceived supervisor support on job satisfaction (partially) and perceived stress (totally). Finally, workaholism was found to fully mediate the relationship between perceived coworker support and job satisfaction, perceived stress, and sleep problems. In short, our results indicated that perceived organizational support and perceived supervisor support are able to foster work engagement whereas perceived coworker support is negatively associated with workaholism. To be more precise, perceived supervisor support has a stronger impact on work engagement than perceived organizational support (Δχ²(1)=3.14, p<0.10). Overall, these findings thus supported Ng and Sorensen's (2008) suggestion that different sources of social support have different effects and vary in terms of strength of associations with employees' outcomes.
Furthermore, our results are in line with the multi-foci perspective in the social exchange literature (Cropanzano et al., 2004; Lavelle et al., 2007), which suggests that people can develop multiple relationships at work and have distinct social exchange relationships with diverse organizational entities, such as the organization as a whole, and with specific entities within the organization such as supervisors, coworkers, or work groups (Cropanzano et al., 2004; Lavelle et al., 2007). In this multi-foci view, employees' proximity and high frequency of interaction with local organizational representatives and constituencies provide an advantage over more encompassing organizational units, including the entire organization, for developing strong exchange relationships (e.g. Becker, 1992; Mueller and Lawler, 1999). Accordingly, studies based on this multi-foci perspective of social exchange showed, for instance, that more proximal social exchange targets (i.e. supervisors or team) are stronger predictors of employees' performance than more distal targets (e.g. the organization; Lavelle et al., 2007). According to Mueller and Lawler (1999), this phenomenon can be explained by the fact that more proximal targets provide employees with a greater sense of control over their work. Our results are therefore consistent with these studies, in showing that more proximal units of social support (i.e. supervisor support and coworker support) have stronger associations with work engagement for the former (i.e. supervisor support) and workaholism for the latter (i.e. coworker support) than a more distal unit, i.e. organizational support. By doing so, we extend previous knowledge by showing which source of support is the most effective in influencing each of these two forms of working hard.
More precisely, the finding that perceived supervisor support contributes more strongly to work engagement than other sources of work-related social support is consistent with suggestions made by previous authors that high frequency of interaction with supervisors helps to create strong relationships with these entities (e.g. Becker, 1992). Supervisors also play an important role in employees' everyday work life (Liden et al., 1997) and are a critical resource in their daily work. Furthermore, our findings also indicate that perceived coworker support is the only source of work-related support in our study able to reduce employees' workaholism. Perceived support from coworkers might help workaholic employees to detach from their job, for instance by inciting them to engage in off-job activities (e.g. sports), by distracting them from their work, or by boosting their social life outside their work. Finally, contrary to past research (e.g. Gillet et al., 2013), our results showed that perceived coworker support does not predict work engagement. This divergence of results may be due to the fact that, in the current study, we took into account and controlled for the effects of the three sources of work-related social support altogether on work engagement and workaholism. Therefore, perceived supervisor support and, to a lesser extent, perceived organizational support account for the majority of variance in work engagement. Supporting this view, this particular finding is consistent with some prior studies (e.g. Othman and Nasurdin, 2013) that found that coworker support was not related to work engagement when the influence of supervisor support was taken into account.

Limitations and perspectives for future research

Despite its contributions, several limitations of this research should be mentioned. First, the cross-sectional design of the study prevents us from making any inference of causality among the variables included in our model.
For instance, our results indicated that perceived coworker support is negatively related to workaholism. However, we cannot exclude the possibility that workaholic employees might perceive less support from their coworkers than non-workaholic employees. Therefore, longitudinal research with repeated measures is needed in order to investigate causal relationships with more acuity. Second, the data were exclusively based on self-reported measurements, which exposed our study to common method variance. Nevertheless, our study was primarily intended to assess employees' perceptions at work and we therefore needed to measure self-perceptions of these constructs. As recommended by Podsakoff et al. (2003), we assured respondents of the anonymity of their responses in order to reduce this common method bias. Even with these precautions, we cannot totally exclude the possibility that common method bias may have influenced our results. Therefore, as indicated above, we also conducted Harman's single-factor test (Podsakoff et al., 2003) in our sample and the results showed a very poor fit of a one-factor model. Furthermore, as recommended by Podsakoff et al. (2003), we also tested a model wherein the items loaded both on their respective hypothesized latent constructs and on a common method factor. The results indicated that the average variance explained in the items by the common method factor was only 10.89 percent. This evidence considerably reduces our concerns regarding this potential threat. Third, the results of this study are specific to a PhD student population and based on a very homogeneous sample. In order to increase the generalizability of our findings, future research should thus replicate these results across various organizational and industrial settings. Fourth, given the specificity of our sample, it would have been very interesting to examine the influence of other sources of social support in the university on employees' well-being.
In particular, future research should consider the influence of sources of support situated between the organizational and the supervisor levels, i.e. perceived support from the faculty or from the research department. Fifth, we examined the effects of three forms of work-related social support on employees' well-being, through work engagement and workaholism, without including any job demands in our research model. However, prior studies have reported a strong positive relationship between employees' workaholism and job demands (e.g. Schaufeli et al., 2008b). Indeed, workaholic employees tend to create their own job demands (Guglielmi et al., 2012), such as making their work more complicated by accepting new tasks (e.g. Machlowitz, 1980). In line with this, Taris et al. (2005) found that the positive relationship between workaholism and employees' exhaustion is partially mediated by job demands (i.e. work overload). In a similar vein, Schaufeli et al. (2009b) showed that role conflict was a mediator of the relationships between workaholism and employees' well-being (i.e. burnout, job satisfaction, happiness, and perceived health). Furthermore, other scholars have argued, based on COR theory, that job resources might become more salient in influencing employees' work engagement when employees face high levels of job demands (Bakker and Demerouti, 2007). In line with this view, Hakanen et al. (2005) found that when resources at work were high (i.e. positive contacts with patients, peer contacts, variability in professional skills), these resources were able to attenuate the negative effects of job demands on work engagement. Given these empirical studies, we think that future research should replicate our study by taking into account the influence of job demands in the investigated relationships.
Based on the evidence above, job demands might be hypothesized as interacting with social support in predicting work engagement, whereas they might also be considered as a mediator in the relationships between workaholism and well-being (i.e. job satisfaction, perceived stress, and sleep problems). Future research should thus examine the precise role played by job demands in the theoretical model that we tested. Future research should also envisage the possibility that work-related social support might have a dark side in certain cases. In line with this idea, Beehr et al. (2010) suggested that social interactions in the workplace such as supervisor or colleague support might be harmful for employees' psychological and physical health under certain circumstances. Results of their study showed, for instance, that social interactions with the supervisor or with colleagues might increase rather than reduce employees' strains when these interactions serve to underline how stressful the situation is. Therefore, it might be possible that the positive influence of perceived supervisor support or perceived coworker support on employees' well-being found in this study is canceled or reversed under specific circumstances or for specific individuals (e.g. when employees are not seeking social support). Future research is therefore needed to address this specific and interesting issue. Finally, because we were interested in the relative impact of each source of work-related social support, we examined the influence of perceived organizational support, perceived supervisor support and perceived coworker support independently of their influence on each other.
However, we should also note that authors have underlined that these three sources of work-related support are important entities in the work environment and that "perceptions of support given by any one of these sources is likely to influence perceptions of support given by the others" (Ng and Sorensen, 2008, p. 262). In line with this view, a large body of research has, for instance, reported a positive relationship between perceived supervisor support and perceived organizational support (e.g. Eisenberger et al., 2002; Rhoades and Eisenberger, 2002; Rhoades et al., 2001). Therefore, future research may, for instance, investigate whether perceived supervisor support influences perceived organizational support which, in turn, impacts work engagement.

Practical implications
Although the current findings rely on a sample of PhD students, who represent a specific category of workers, this study has valuable practical implications for managers and practitioners because it provides new understanding of the consequences of work engagement and workaholism for employees' well-being. Because work engagement is associated with positive indicators of well-being (increased job satisfaction and reduced perceived stress), whereas workaholism is linked to negative ones (reduced job satisfaction, increased perceived stress and sleep problems), managers should promote practices that foster work engagement and prevent workaholism. In line with this point of view, our findings indicated that the most powerful source of support for fostering work engagement is perceived supervisor support. Therefore, managers should encourage supervisors to be supportive and, more precisely, to take a more active part in this supportive role (Newman et al., 2012).
Perceived supervisor support can also be fostered by encouraging supervisors to have regular meetings with their subordinates (Newman et al., 2012) or by training them to be supportive in their role of directing, evaluating and coaching their subordinates (Eisenberger and Stinglhamber, 2011). Another possible way to foster supervisor support is to promote two-way communication that helps to create a climate of trust between employees and supervisors (Ng and Sorensen, 2008). If supervisors are present for their subordinates when needed and help them both instrumentally and emotionally, this might also increase levels of perceived supervisor support (Somech and Drach-Zahavy, 2013). Furthermore, our results indicated that perceived organizational support enhances work engagement, albeit to a lesser extent than perceived supervisor support. Perceived organizational support also increases job satisfaction and decreases perceived stress and sleep problems. Practically, perceived organizational support can be promoted, for instance, by maintaining open channels of communication, by providing useful resources that help employees do their job adequately when they are in need, or by providing job security through an explicit commitment to avoiding layoffs as much as possible (Eisenberger and Stinglhamber, 2011). In addition, previous studies indicated that perceived organizational support can be fostered by providing effective training for employees, by enhancing employees' autonomy to fulfill their job responsibilities and by increasing procedural fairness regarding rewards and positive job conditions (Eisenberger and Stinglhamber, 2011). Finally, our results showed that perceived coworker support has a negative influence on workaholism. Therefore, managers should enhance support among coworkers in order to reduce workaholism.
For example, managers can encourage informal mentoring among employees in order to build a strong social network or organize social events outside of work where employees will be invited to freely interact with coworkers (Newman et al., 2012). Managers can also help to create an organizational culture where interactions between colleagues from different departments or units are a common practice (Newman et al., 2012).
Figure 1 Conceptual model
Figure 2 Completely standardized path coefficients for the alternative model 4
Table I Confirmatory factor analyses fit indices for measurement models
Table II Descriptive statistics and intercorrelations among variables
Table III Study 1: fit indices for structural models
The purpose of this paper is twofold. First, the authors examined the effects of two types of working hard (i.e. work engagement, workaholism) on employees' well-being (i.e. job satisfaction, perceived stress, and sleep problems). Second, the authors tested the extent to which both types of working hard mediate the relationship between three types of work-related social support (i.e. perceived organizational support, perceived supervisor support, and perceived coworker support) and employees' well-being.
[SECTION: Method] "Work life has undergone tremendous changes within the last 100 years" (Frese, 2008, p. 397). The increase in work complexity, global competition and the dissolution of the unity of work in time and space linked to rapid innovation lead employees to deal with an increasingly demanding work environment (Frese, 2008). Additionally, advances in technology allow employees to use cellphones or laptops with high-speed data connections at any place, making it possible for them to work at any time (e.g. van Beek et al., 2012). Overall, these changes, together with technological development, might encourage employees to work harder and for longer hours (e.g. van Wijhe et al., 2011). In the scientific literature, two different types of working hard have been distinguished: an intrinsically negative form named workaholism and an intrinsically positive form named work engagement (e.g. Schaufeli et al., 2008b). A large body of research has shown that work engagement is positively related to various work outcomes and indicators of employees' well-being, whereas workaholism generally displays negative relationships with the same variables (e.g. Schaufeli and Bakker, 2004; Taris et al., 2010). More recently, some research has begun to investigate the concomitant effects of these two types of working hard on these outcomes (e.g. Del Libano et al., 2012; Schaufeli et al., 2008b). In line with this perspective, the first aim of our research is to analyze the joint relationships of work engagement and workaholism with different indicators of well-being (i.e. job satisfaction, perceived stress and sleep problems). Specifically, we expected that workaholism would be related to low well-being, as indicated by lower levels of job satisfaction and higher levels of sleep problems and perceived stress, whereas work engagement would be associated with high well-being (i.e.
higher levels of job satisfaction, lower levels of perceived stress and sleep problems). Second, the present study examined the influence of perceived organizational support, perceived supervisor support and perceived coworker support on these relationships. While prior work has indicated that the organization, supervisor, and coworkers represent valuable sources of support that have a positive influence on employees' well-being (Ng and Sorensen, 2008), a rather unexplored issue is how these three types of work-related social support might have a concomitant impact on the two forms of working hard, which, in turn, influence employees' well-being. Indeed, according to Ng and Sorensen (2008), "it may be unwarranted for researchers to assume the effects of perceptions of different sources of support on employees are similar" (p. 259). On the contrary, these authors stressed the importance of this issue and recommended that scholars examine the specific effect of each source of social support by including more than one source of work-related social support in their studies. In line with this suggestion, we aimed to investigate whether the effects of three different types of work-related social support on employees' well-being are mediated by work engagement and workaholism. In doing so, our research contributes to the work engagement and workaholism literature by examining a more comprehensive model including both antecedents and consequences of these two constructs. Additionally, the present study helps to identify which source of support is the most effective at increasing employees' well-being through the two different types of working hard. Addressing this issue may provide a better understanding of the specific effect of each type of work-related social support, which might help lead to theory development (Whetten, 1989).
Furthermore, at the practical level, identifying differences in their effects is of utmost importance as it will help practitioners to implement more appropriate interventions in order to enhance employees' well-being. Nowadays, there is still debate regarding the definition of workaholism. However, in our research, we refer to Schaufeli et al. (2009a) who define workaholism as "the tendency to work excessively hard and being obsessed with work, which manifests itself in working compulsively" (p. 322). We adopted this definition because it comprises the two characteristics of workaholism (i.e. working excessively and having an obsessive inner drive) that scholars have identified as key and recurrent elements in the various definitions of this construct (e.g. Guglielmi et al., 2012; McMillan and O'Driscoll, 2006). Typical workaholic employees spend a great amount of their time working (van Beek et al., 2011). They experience a strong and uncontrollable inner drive, need, or compulsion to work hard which is not due to external factors such as financial incentives or career prospects (Schaufeli et al., 2006). More precisely, building on self-determination theory (Deci and Ryan, 1985), van Beek et al. (2012) showed that workaholic employees are driven by introjected regulation (i.e. a form of extrinsic motivation). Introjected regulation is described as "a product of an internalization process in which individuals rigidly adopt external standards of self-worth and social approval without fully identifying with them" (van Beek et al., 2012, p. 33). Recently, based on Higgins's regulatory focus theory (RFT; Higgins, 1997), van Beek et al. (2014) also demonstrated that workaholic employees have higher levels of prevention focus, meaning that they are sensitive to the absence or presence of negative outcomes and use avoidance strategies.
Taken together, these results support the idea that workaholic employees work hard to avoid negative feelings such as guilt, shame, irritability, and anxiety or to increase feelings of pride (e.g. van Beek et al., 2012, 2014). By definition, workaholic employees work hard, putting in long and excessive hours (van Beek et al., 2011). Furthermore, these employees are unable to disengage from their work and think about it continually, even when they are not working (van Beek et al., 2011). Consequently, they have fewer opportunities to recover from their work, for instance by relaxing, and therefore might have a higher tendency to deplete their resources (Van Wijhe et al., 2014). In line with this view, prior empirical studies have shown that workaholism is related to negative outcomes for employees, such as lower job satisfaction (e.g. Del Libano et al., 2012; van Beek et al., 2014), lower life satisfaction (Bonebright et al., 2000), and poorer social relationships outside work (Schaufeli et al., 2008b). Workaholic employees have also been found to be less happy (Schaufeli et al., 2009b), to suffer more from health complaints, to report lower levels of self-perceived health (e.g. Schaufeli et al., 2006), and to report higher levels of exhaustion (e.g. Taris et al., 2005) and sleep problems (e.g. Kubota et al., 2010, 2012). Conversely, an enthusiastic involvement in the job, called work engagement, might also explain employees' propensity to work hard. Work engagement is defined as "a positive and fulfilling work-related state that is characterized by vigor, dedication and absorption" (Schaufeli et al., 2002a, p. 72). Among these three dimensions, vigor consists of high levels of energy, mental resilience while working, and persistence when facing difficulties (Schaufeli et al., 2002a). Dedication refers to being involved in one's work and experiencing a sense of significance, inspiration, pride, and challenge at work (Schaufeli et al., 2002a).
Absorption is characterized by being fully concentrated and engrossed in one's work, whereby time passes quickly and people have difficulty detaching from their job (Schaufeli et al., 2002a). Work engagement (Schaufeli et al., 2002a) has been shown to be driven by intrinsic work motivation. Work engaged employees thus consider their work as interesting, enjoyable, and satisfying (van Beek et al., 2012). Recently, based on RFT (Higgins, 1997), work engagement has also been positively related to having a promotion focus (van Beek et al., 2014), meaning that work engaged employees are sensitive to the absence or presence of positive outcomes. This finding also indicates that work engaged employees use approach strategies and therefore are likely to use an approach that "matches to their work goals that represent their hopes, wishes, and aspirations" (van Beek et al., 2014, p. 56). In sum, engaged employees have a sense of energetic connection with their work, are happily engrossed in their job, and do not feel guilty when they are not working (Schaufeli et al., 2008b). In line with this perspective, several studies have indicated that work engagement is associated with various positive outcomes for both organizations and employees. For example, engaged employees have been shown to be more satisfied with their job (e.g. Del Libano et al., 2012; van Beek et al., 2014), to demonstrate more personal initiative (Sonnentag, 2003), to have less intention to quit the organization (Schaufeli and Bakker, 2004; van Beek et al., 2014), and to perform better than non-engaged employees (e.g. Salanova et al., 2005). Work engagement has also been found to be related to higher life satisfaction and better mental and physical health (Schaufeli and Salanova, 2007; Schaufeli et al., 2008b). Furthermore, results of prior studies showed that work engagement is negatively associated with various indicators of low well-being such as suffering from psychosomatic symptoms (e.g.
headaches, cardiovascular problems; Koyuncu et al., 2006; Schaufeli et al., 2008b), exhaustion from work (e.g. Koyuncu et al., 2006), and sleep problems (Hallberg and Schaufeli, 2006). In short, work engagement and workaholism characterize two different psychological states and show different associations with work attitudes and indicators of well-being: the former is related to positive outcomes, whereas the latter is generally associated with negative ones. In line with these previous empirical findings and arguments, we posited that:

H1. Work engagement is positively related to (a) job satisfaction and negatively related to (b) perceived stress and (c) sleep problems.

H2. Workaholism is negatively related to (a) job satisfaction and positively related to (b) perceived stress and (c) sleep problems.

Social support
According to the job demands-resources (JD-R) model (Demerouti et al., 2001; Schaufeli and Bakker, 2004), two different types of work conditions, namely job demands and job resources, influence employees' well-being via a dual process: a health impairment process (linking job demands to negative outcomes through burnout) and a motivational process (linking job resources to positive outcomes through work engagement). Job demands refer to physical, psychological, social, or organizational aspects of the job that require sustained physical and/or psychological effort (e.g. time pressure, emotional demands, physical demands). Job resources are defined as the physical, psychological, social or organizational aspects of the job that reduce job demands, are functional for achieving work goals or stimulate personal growth, learning and development (e.g. a supportive work environment, supervisor support, coworker support and feedback; Demerouti et al., 2001).
More precisely, the JD-R model describes a positive motivational process in which job resources such as social support enhance work engagement which, in turn, has positive consequences for employees and organizations. In line with this perspective, Schaufeli and Bakker (2004) have suggested that social support can drive an intrinsic motivational process by satisfying employees' needs for autonomy and belongingness, as well as an extrinsic motivational process by increasing the probability of reaching work goals. Supervisor and coworker support, for instance, might play an intrinsically motivating role by fulfilling employees' need to belong (Xanthopoulou et al., 2008). Furthermore, coworker support might create among employees the conviction that they will receive help from their colleagues when needed, which might increase their confidence that they will achieve their work goals (Xanthopoulou et al., 2008). In doing so, coworker support might also play an extrinsically motivating role. Empirical studies investigating the positive influence of social support on work engagement have typically focused on supervisor and coworker support (e.g. Korunka et al., 2009). Accordingly, work engagement has been found to be positively predicted by both perceived supervisor support (e.g. Gillet et al., 2013) and perceived coworker support (e.g. Schaufeli and Bakker, 2004; Xanthopoulou et al., 2008) in several studies. In contrast, the influence of perceived organizational support, defined as employees' global beliefs that the organization cares about their well-being and values their contributions (Eisenberger et al., 1986), has been less investigated. Yet, numerous studies have demonstrated the positive influence of perceived organizational support on employees' well-being. Perceived organizational support has, for example, been shown to increase employees' job satisfaction and to reduce their stress (e.g.
Eisenberger and Stinglhamber, 2011; Rhoades and Eisenberger, 2002). Furthermore, perceived organizational support has been positively associated with work engagement in some prior studies (e.g. Caesens and Stinglhamber, 2014; Kinnunen et al., 2008; Sulea et al., 2012). Given this empirical evidence, it seems reasonable to suggest that perceived organizational support, perceived supervisor support, and perceived coworker support are all able to positively influence work engagement. However, to the best of our knowledge, no study has examined the positive effects of these three forms of support altogether on work engagement. On the other hand, only a scant literature has examined the relationship between social support and the negative type of working hard, i.e. workaholism. This literature indicates that social support as a general resource is negatively linked to workaholism (Schaufeli et al., 2008a). The conservation of resources (COR) theory (Hobfoll, 1985, 2002) helps to better understand how work-related social support may be negatively related to workaholism. A central tenet of COR theory is that people "with greater resources are less vulnerable to resource loss and more capable of resource gain" (Hakanen and Roodt, 2010, p. 89). In line with this principle, social support might both help employees to cope with stressful events such as juggling multiple roles (Nicklin and McNall, 2013), and prevent them from resource depletion (Somech and Drach-Zahavy, 2013). Therefore, as an energizing resource, social support might help employees cope with their tendency to work hard. In line with this view, previous authors have suggested that supervisor support and coworker cohesion are related to lower levels of compulsion to work (Johnstone and Johnston, 2005). In the same vein, Taris et al. (2010) have also argued that providing supervisors with effective training might help to raise employees' awareness of the meaning, aim, and relevance of their work.
This might therefore help to reduce employees' compulsion to work hard (Taris et al., 2010). Furthermore, according to the literature on perceived organizational support (Eisenberger and Stinglhamber, 2011), high levels of perceived organizational support indicate that the organization cares about employees' well-being and is willing to extend itself to provide help for employees when they need it (George et al., 1993). Therefore, supportive organizations might be more prone to offer assistance programs to workaholic employees. It is also reasonable to think that organizations that place a high value on their human capital would be more inclined to implement individual-level interventions in order to help workaholic employees (Taris et al., 2010). In short, while the organization (e.g. Eisenberger et al., 1986), supervisor (e.g. Eisenberger et al., 2002), and coworkers (e.g. Bishop et al., 2000) have been shown to represent valuable sources of support, to the best of our knowledge, no previous study has included these three foci of support at once in order to investigate their specific impact either on work engagement or on workaholism. Nevertheless, Ng and Sorensen (2008) have stressed, in their meta-analysis, that the effects on employees of different sources of social support (e.g. perceived organizational support, perceived supervisor support, and perceived coworker support) are very dissimilar. For instance, these authors have shown that perceived supervisor support is more strongly related to several work-related outcomes (i.e. job satisfaction, affective commitment and turnover intentions) than perceived colleague support. According to Ng and Sorensen (2008), each source of social support does not necessarily have the same consequences and differs in the strength of its associations with outcomes. Therefore, these authors recommended that researchers carefully examine the effects of each source of support in their studies.
In line with these recommendations and our theoretical model presented in Figure 1, our study aims to explore the impact of perceived organizational support, perceived supervisor support, and perceived coworker support on both work engagement and workaholism which, in turn, will influence various indicators of well-being (i.e. job satisfaction, perceived stress and sleep problems). Based on Ng and Sorensen's (2008) recommendations and previous empirical findings, we posited the following hypotheses:

H3. Work engagement mediates the relationships between work-related social support and (a) job satisfaction, (b) perceived stress, and (c) sleep problems.

H4. Workaholism mediates the relationships between work-related social support and (a) job satisfaction, (b) perceived stress, and (c) sleep problems.

Sample and procedure
A total of 425 PhD students of a Belgian university responded to an online questionnaire related to well-being at work (a response rate of approximately 21.25 percent). Due to missing data, only 343 of these 425 questionnaires were usable and thus retained in the final sample. The external link to the online questionnaire was sent in an e-mail describing the aim of the questionnaire, and PhD students were assured of the anonymity and confidentiality of their responses. This specific population seemed particularly relevant for assessing the two forms of working hard, i.e. work engagement and workaholism. Indeed, PhD students' work is characterized by long weekly work hours and extended concentration and cognitive effort. Furthermore, this population is exposed to multiple job demands such as research, academic coursework, competition and institutional demands (e.g. Myers et al., 2012). Of this sample, 42.86 percent were males and 57.14 percent were females.
On average, participants were 28.27 years of age (SD=4.43), had been employed by the university for 3.02 years (SD=2.07) and had been working with their advisor for 3.30 years (SD=2.20).

Measures
Because our participants spoke French, the scales used in the questionnaire were translated from English to French using the translation-back-translation procedure recommended by Brislin (1980). However, when available, we used validated French versions of the scales.

Work-related social support
Perceived organizational support was measured using a short four-item version of the Survey of Perceived Organizational Support (SPOS) (Eisenberger et al., 1986). These four items cover well the two fundamental aspects of perceived organizational support, namely "valorization of employees' contributions" and "being concerned about employees' well-being". According to Rhoades and Eisenberger (2002), because of the high internal consistencies and the unidimensionality of the SPOS, using a short version is not problematic. A sample item is: "[Name of the organization/university] really cares about my well-being". Perceived supervisor support was measured using an adapted four-item version of the SPOS, following Rhoades et al. (2001) and Eisenberger et al. (2002), replacing the word "organization" with the term "advisor". A sample item is "Even if I did the best job possible, my advisor would fail to notice" (reverse-scored item). Prior empirical research indicated good psychometric properties of this perceived supervisor support scale (e.g. Rhoades et al., 2001). Perceived coworker support was operationalized using an adapted four-item version of the SPOS, following Bishop et al. (2000) and Ladd and Henry (2000). A sample item is "My coworkers show very little concern for me" (reverse-scored item). Prior studies using this perceived coworker support scale showed good psychometric properties (e.g. Ladd and Henry, 2000).
Participants responded on a seven-point Likert-type scale ranging from 1 ("Strongly disagree") to 7 ("Strongly agree").

Work engagement
We used the nine-item short version of the Utrecht Work Engagement Scale (UWES) (Schaufeli et al., 2002a) to assess work engagement. The scale includes three dimensions: vigor (three items; e.g. "At my work, I feel bursting with energy"), dedication (three items; e.g. "I am enthusiastic about my job"), and absorption (three items; e.g. "I feel happy when I am working intensely"). The response scale ranged from 1 ("Never") to 7 ("Always").

Workaholism
We measured workaholism using the validated ten-item short version (Del Libano et al., 2010) of the Dutch Work Addiction Scale (DUWAS; Schaufeli et al., 2006), which includes the two dimensions of the construct, i.e. working excessively and working compulsively. Sample items are: "I find myself continuing work after my co-workers have called it quits" (working excessively; five items) and "I often feel that there's something inside me that drives me to work hard" (working compulsively; five items). The response scale ranged from 1 ("Never") to 4 ("Always").

Job satisfaction
Job satisfaction was measured with four items from Eisenberger et al. (1997). A sample item is: "All in all, I am very satisfied with my current job". The response scale ranged from 1 ("Strongly disagree") to 7 ("Strongly agree").

Perceived stress
We measured perceived stress with four items from the Perceived Stress Scale (PSS) (Cohen et al., 1983). A sample item is: "In the last month, how often have you felt difficulties were piling up so high that you could not overcome them?". The response scale ranged from 1 ("Never") to 5 ("Very often").

Sleep problems
We measured sleep problems with four items from the Jenkins Sleep Quality Index (JSQ) (Jenkins et al., 1988) assessing the most common sleep problems (i.e.
difficulties falling asleep, waking up during the night, waking up and having difficulties falling asleep again, and waking up tired). The response scale, indicating how often the stated condition occurred during an average month, ranged from 1 ("Not at all") to 6 ("22 to 31 days/month"). A sample item is: "I have had difficulties falling asleep".

Control variables
Gender, age, tenure in the university and tenure with the advisor were measured.

Discriminant validity
In order to evaluate the distinctiveness of the eight concepts included in our study (i.e. perceived organizational support, perceived supervisor support, perceived coworker support, work engagement, workaholism, job satisfaction, perceived stress, and sleep problems), we conducted confirmatory factor analyses using Mplus 6.12 (Muthen and Muthen, 1998-2011). Because we used the same items to measure each type of work-related social support (i.e. perceived organizational support, perceived supervisor support and perceived coworker support), we allowed the error covariances of these content-equivalent items to correlate freely. Additionally, due to a considerable content overlap among some items included either in the work engagement scale or in the workaholism scale, we allowed the error covariances of some of the paired items to correlate freely, as was previously done in the validation studies of these scales (Del Libano et al., 2010; Schaufeli et al., 2002b). Based on the χ2 difference test (Bentler and Bonett, 1980), results of the CFA indicated that the hypothesized measurement model fitted the data well and was superior to all more constrained models. Indeed, as displayed in Table I, the hypothesized model had a better fit than the alternative measurement models. Because our data were self-reported, we also conducted the Harman single-factor test (Podsakoff et al., 2003) by constraining all items to load on a single factor.
Results indicated that the fit of the one-factor model was very poor. Furthermore, in line with Podsakoff et al.'s (2003) and Richardson et al.'s (2009) recommendations, we also tested a model wherein the items loaded both on their respective hypothesized latent constructs and on a common method factor. The results of this additional analysis indicated that the average variance explained by the common method factor was 10.89 percent. This value is less than half of the amount of method variance (25 percent) that Williams et al. (1989) report for self-reported studies. Furthermore, all items in the hypothesized eight-factor model displayed acceptable loadings. Indeed, items loaded on their respective factors with loadings ranging from 0.58 to 0.87 for perceived organizational support, from 0.67 to 0.93 for perceived supervisor support, from 0.67 to 0.92 for perceived coworker support, from 0.50 to 0.87 for work engagement, from 0.47 to 0.69 for workaholism, from 0.84 to 0.94 for job satisfaction, from 0.69 to 0.78 for perceived stress, and from 0.55 to 0.87 for sleep problems. Based on all of this evidence, the eight variables of our model were treated as separate constructs in the subsequent analyses testing our hypotheses.

Relationships among variables
Means, standard deviations, internal reliabilities, and correlations among our variables are displayed in Table II. All Cronbach's α values were above the 0.70 criterion established by Nunnally (1978).

Test of hypotheses
Following Becker's (2005) recommendations, we only statistically controlled for socio-demographic variables with a significant correlation with the dependent variables in our model (i.e. mediators and outcomes). Therefore, we introduced organizational tenure and tenure with the supervisor as additional exogenous variables predicting workaholism and job satisfaction, respectively.
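The internal reliabilities reported here are Cronbach's α coefficients. As an illustration of how that criterion is computed (a minimal sketch from an items-by-respondents score matrix, not the authors' actual code):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                                 # number of items
    sum_item_variances = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = scores.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1.0 - sum_item_variances / total_variance)
```

With perfectly parallel items α approaches 1; values above 0.70 meet the Nunnally (1978) criterion cited in the text.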
Furthermore, tenure with the supervisor and gender were controlled for perceived stress, and gender was controlled for sleep problems. Controlling for these variables is all the more important given that past research also indicated that they have an impact on our dependent variables. More precisely, tenure in the organization has been found to be negatively associated with workaholism (Taris et al., 2005) and job satisfaction (Duffy et al., 1998). Scholars also suggested that it is important to control for tenure with the supervisor, since possible temporal effects might explain the influence of the relationship with the supervisor on outcomes (e.g. Wang et al., 2013). Finally, prior research indicated that women generally report higher levels of stress (e.g. Gyllensten and Palmer, 2005) and sleep problems than men (e.g. Ohayon, 1996). Therefore, as recommended by Spector and Brannick (2011), we included these control variables in the subsequent analyses, based on reasonable theoretical or empirical evidence that these socio-demographic variables are linked to variables included in our research model. Using Mplus 6.12 (Muthen and Muthen, 1998-2011), we conducted SEM analyses in order to test our hypotheses. Because of the different response scales, all item responses were standardized prior to these analyses. We then compared the fit of our hypothesized model with nine alternative models. Table III displays the fit indices of all these models. As shown in this table, results indicated that the hypothesized model has a good fit to the data, as indicated by a χ2(950)=1730.55, a CFI of 0.90, an SRMR of 0.10 and an RMSEA of 0.05.
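The nested-model comparisons that follow rest on the χ2 difference (likelihood-ratio) test (Bentler and Bonett, 1980). A minimal sketch of the test, assuming SciPy is available (an illustration, not the software the authors used):

```python
from scipy.stats import chi2

def chi2_difference_test(chi2_constrained, df_constrained, chi2_free, df_free):
    """Compare two nested SEM models: the freer (less constrained) model is
    preferred if the drop in chi-square is large relative to the df spent."""
    delta_chi2 = chi2_constrained - chi2_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chi2, delta_df)  # upper-tail probability
    return delta_chi2, delta_df, p_value
```

For the fit values reported here, χ2(950)=1730.55 against χ2(946)=1658.43 gives Δχ2=72.12 with Δdf=4, significant well beyond p<0.001, which favors the freer model.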
However, based on the χ² difference test (Bentler and Bonett, 1980), some χ² changes were significant and revealed that the alternative model 4, which adds paths between perceived organizational support and job satisfaction, between perceived stress and sleep problems, and between perceived supervisor support and job satisfaction, was superior to the hypothesized model and to the alternative models 1, 2 and 3 (for more details, see Table III). Therefore, this alternative model 4 was retained as the best fitting model (χ²(946)=1658.43, RMSEA=0.05, SRMR=0.09 and CFI=0.91). Standardized parameter estimates of this alternative model 4 are presented in Figure 2. For the sake of clarity, the effects of the control variables are detailed in the text. Organizational tenure was related to job satisfaction (γ=-0.12, p<0.05) but not to workaholism (γ=0.14, ns). Tenure with the supervisor was related to job satisfaction (γ=0.10, p<0.05) but not to workaholism and perceived stress (γ=-0.05, ns; γ=0.01, ns). Gender had a significant impact on perceived stress and sleep problems (γ=-0.10, p<0.05; γ=-0.14, p<0.01, respectively), indicating that men perceive less stress and suffer less from sleep problems than women. Controlling for these variables, results indicated that work engagement is positively associated with job satisfaction (β=0.54, p<0.001) and negatively with perceived stress (β=-0.29, p<0.001), but not with sleep problems (β=-0.11, ns), supporting H1(a) and H1(b). In contrast, results indicated that workaholism is negatively related to job satisfaction (β=-0.19, p<0.001), and positively related to perceived stress (β=0.36, p<0.001) and sleep problems (β=0.37, p<0.001), providing support for H2(a), H2(b) and H2(c). Furthermore, perceived organizational support is positively related to work engagement (γ=0.13, p<0.05) but is not related to workaholism (γ=-0.10, ns).
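The nested-model comparison above rests on the standard χ² difference test: the drop in χ² between two nested models is itself χ²-distributed under the null, with degrees of freedom equal to the difference in model df. A sketch of the mechanics in plain Python, plugging in the χ² and df values reported in the text (the closed-form survival function below holds for even df):

```python
import math

def chi2_sf_even_df(x: float, df: int) -> float:
    """Survival function P(X > x) of a chi-square variable with EVEN df:
    exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!  (Poisson/Erlang identity)."""
    assert df % 2 == 0, "closed form shown here requires even df"
    half = x / 2.0
    return math.exp(-half) * sum(half**k / math.factorial(k) for k in range(df // 2))

# Fit statistics reported in the text for the two nested models
chisq_hypothesized, df_hypothesized = 1730.55, 950
chisq_alternative, df_alternative = 1658.43, 946  # alternative model 4

delta_chisq = chisq_hypothesized - chisq_alternative  # improvement in fit
delta_df = df_hypothesized - df_alternative           # df freed by the extra paths
p_value = chi2_sf_even_df(delta_chisq, delta_df)
print(delta_chisq, delta_df, p_value)  # p is far below 0.001
```

A p-value this small indicates that the less constrained alternative model 4 fits significantly better, matching the decision to retain it.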
However, perceived organizational support has direct effects on job satisfaction (γ=0.19, p<0.001), perceived stress (γ=-0.21, p<0.01) and sleep problems (γ=-0.17, p<0.01). Perceived supervisor support is also positively linked to work engagement (γ=0.35, p<0.001) but not to workaholism (γ=-0.05, ns), and has a direct positive impact on job satisfaction (γ=0.25, p<0.001). Additionally, results indicated that perceived coworker support is negatively related to workaholism (γ=-0.15, p<0.05), but not to work engagement (γ=0.01, ns). A bootstrapping analysis was performed on the final model (alternative model 4; Preacher and Hayes, 2004) in order to test the unstandardized indirect effects. The results of this analysis indicated that the indirect effects of perceived organizational support on job satisfaction and perceived stress through work engagement are significant (indirect effect=0.07; BCa 95 percent CI=[0.006; 0.140] and indirect effect=-0.03; BCa 95 percent CI=[-0.078; -0.005], respectively), supporting H3(a) and H3(b). Furthermore, the indirect effects of perceived supervisor support on job satisfaction and perceived stress through work engagement are also significant (indirect effect=0.17; BCa 95 percent CI=[0.111; 0.237] and indirect effect=-0.09; BCa 95 percent CI=[-0.137; -0.049], respectively), supporting H3(a) and H3(b). Finally, results showed that the indirect effects of perceived coworker support on job satisfaction, perceived stress, and sleep problems through workaholism are significant (indirect effect=0.03; BCa 95 percent CI=[0.001; 0.066]; indirect effect=-0.04; BCa 95 percent CI=[-0.105; -0.003] and indirect effect=-0.04; BCa 95 percent CI=[-0.098; -0.002], respectively), providing support for H4(a), H4(b) and H4(c). The purpose of this study was to examine the relationships of workaholism and work engagement with various indicators of well-being (i.e. job satisfaction, perceived stress, and sleep problems).
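The bootstrapped indirect effects above follow the Preacher and Hayes (2004) logic: resample the data with replacement, re-estimate the a (support → mediator) and b (mediator → outcome) paths, and take confidence bounds over the a×b products. A simplified percentile-bootstrap sketch on simulated data (the study uses bias-corrected and accelerated, BCa, intervals in Mplus; the variable names and effect sizes here are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Simulated data with a true indirect path: support -> engagement -> satisfaction
support = rng.normal(size=n)
engagement = 0.4 * support + rng.normal(size=n)       # 'a' path ~ 0.4
satisfaction = 0.5 * engagement + rng.normal(size=n)  # 'b' path ~ 0.5

def slope(x: np.ndarray, y: np.ndarray) -> float:
    """OLS slope of y on x (single predictor)."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Bootstrap the product of the a and b paths
indirect = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)  # resample rows with replacement
    a = slope(support[idx], engagement[idx])
    b = slope(engagement[idx], satisfaction[idx])
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"95% CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# A CI that excludes zero is the criterion for a significant indirect effect.
```

The percentile interval shown is the simplest bootstrap CI; BCa intervals additionally correct for bias and skew in the bootstrap distribution, which is why they are preferred in the mediation literature.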
Furthermore, the study was designed to explore the potential influence of various sources of support (perceived organizational support, perceived supervisor support, and perceived coworker support) on these relationships. To the best of our knowledge, this is the first study that tests the joint influence of work engagement and workaholism on perceived stress and sleep problems. Furthermore, this is the first research that investigates the effects of three different forms of work-related social support on work engagement and workaholism, which in turn influence employees' well-being. Our findings indicated that the associations of workaholism and work engagement with indicators of well-being run in opposite directions. More precisely, workaholism relates to negative indicators of well-being (i.e. lower levels of job satisfaction and higher levels of stress and sleep problems), whereas work engagement is associated with positive outcomes (i.e. higher levels of job satisfaction and lower levels of perceived stress). Our results corroborate prior research which found that workaholism is related to lower levels of job satisfaction (e.g. Del Libano et al., 2012), more health complaints (e.g. Schaufeli et al., 2006), and higher levels of sleep problems (e.g. Kubota et al., 2010). Furthermore, our findings are in line with prior studies which showed that work engagement is positively associated with higher levels of job satisfaction (e.g. van Beek et al., 2014) and better mental and physical health (e.g. Schaufeli et al., 2008b). Interestingly, our results indicated no significant impact of work engagement on sleep problems. Nevertheless, our findings replicate the significant negative correlation found by Hallberg and Schaufeli (2006) between work engagement and sleep disturbances, but fail to demonstrate that work engagement actually reduces sleep problems. This null finding can be explained in several ways.
Contrary to Hallberg and Schaufeli's (2006) study, our research considered the joint effects of workaholism and work engagement on sleep problems, such that workaholism may account for most of the variance in sleep problems. Furthermore, we measured sleep problems with a different scale from the one used by Hallberg and Schaufeli (2006). Concerning the question of which source of support influences work engagement and workaholism to eventually predict well-being, our results indicated that perceived organizational support, perceived supervisor support, and perceived coworker support are empirically distinct constructs that have different effects on work engagement and workaholism. More precisely, our results showed that work engagement partially mediated the relationship between perceived organizational support and both job satisfaction and perceived stress. Perceived organizational support indeed has a direct positive impact on job satisfaction and a direct negative impact on perceived stress and sleep problems. Work engagement was also found to mediate the influence of perceived supervisor support on job satisfaction (partially) and perceived stress (totally). Finally, workaholism was found to fully mediate the relationship between perceived coworker support and job satisfaction, perceived stress, and sleep problems. In short, our results indicated that perceived organizational support and perceived supervisor support are able to foster work engagement, whereas perceived coworker support is negatively associated with workaholism. To be more precise, perceived supervisor support has a stronger impact on work engagement than perceived organizational support (Δχ²(1)=3.14, p<0.10). Overall, these findings thus supported Ng and Sorensen's (2008) suggestion that different sources of social support have different effects and vary in the strength of their associations with employees' outcomes.
Furthermore, our results are in line with the multi-foci perspective in the social exchange literature (Cropanzano et al., 2004; Lavelle et al., 2007), which suggests that people can develop multiple relationships at work and have distinct social exchange relationships with diverse organizational entities, such as the organization as a whole and specific entities within the organization such as supervisors, coworkers, or work groups. In this multi-foci view, employees' proximity and high frequency of interaction with local organizational representatives and constituencies provide an advantage over more encompassing organizational units, including the entire organization, for developing strong exchange relationships (e.g. Becker, 1992; Mueller and Lawler, 1999). Accordingly, studies based on this multi-foci perspective of social exchange showed, for instance, that more proximal social exchange targets (i.e. supervisors or team) are stronger predictors of employees' performance than more distal targets (e.g. the organization; Lavelle et al., 2007). According to Mueller and Lawler (1999), this phenomenon can be explained by the fact that more proximal targets provide employees with a greater sense of control over their work. Our results are therefore consistent with these studies, in showing that more proximal units of social support (i.e. supervisor support and coworker support) have stronger associations with work engagement and workaholism, respectively, than a more distal unit, i.e. organizational support. In doing so, we extend previous knowledge by showing which source of support most effectively influences each of these two forms of working hard.
More precisely, the finding that perceived supervisor support contributes more strongly to work engagement than other sources of work-related social support is consistent with previous authors' suggestions that a high frequency of interaction with supervisors helps to create strong relationships with these entities (e.g. Becker, 1992). Supervisors also play a central role in employees' everyday work life (Liden et al., 1997) and are a critical resource in their daily work. Furthermore, our findings also indicate that perceived coworker support is the only work-related source of support in our study able to reduce employees' workaholism. Perceived support from coworkers might help workaholic employees to detach from their job, for instance by inciting them to engage in off-job activities (e.g. sports), by distracting them from their work, or by boosting their social life outside work. Finally, contrary to past research (e.g. Gillet et al., 2013), our results showed that perceived coworker support does not predict work engagement. This divergence of results may be due to the fact that, in the current study, we took into account and controlled for the effects of the three sources of work-related social support jointly on work engagement and workaholism. Therefore, perceived supervisor support and, to a lesser extent, perceived organizational support account for the majority of variance in work engagement. Supporting this view, this particular finding is consistent with some prior studies (e.g. Othman and Nasurdin, 2013) that found that coworker support was not related to work engagement when the influence of supervisor support was taken into account.

Limitations and perspectives for future research
Despite its contributions, several limitations of this research should be mentioned. First, the cross-sectional design of the study prevents us from making any inference of causality among the variables included in our model.
For instance, our results indicated that perceived coworker support is negatively related to workaholism. However, we cannot exclude the possibility that workaholic employees might perceive less support from their coworkers than non-workaholic employees. Therefore, longitudinal research with repeated measures is needed in order to investigate causal relationships with greater accuracy. Second, the data were exclusively based on self-reported measurements, which exposed our study to common method variance. Nevertheless, our study was primarily intended to assess employees' perceptions at work and we therefore needed to measure self-perceptions of these constructs. As recommended by Podsakoff et al. (2003), we assured respondents of the anonymity of their responses in order to reduce this common method bias. Even with these precautions, we cannot totally exclude the possibility that common method bias may have influenced our results. Therefore, as indicated above, we also conducted Harman's one-factor test (Podsakoff et al., 2003) in our sample and the results showed a very poor fit of a one-factor model. Furthermore, as recommended by Podsakoff et al. (2003), we also tested a model wherein the items loaded both on their respective hypothesized latent constructs and on a common method factor. The results indicated that the average variance explained in the items by the common method factor was only 10.89 percent. This evidence considerably reduces our concerns regarding this potential threat. Third, the results of this study are specific to a PhD student population and based on a very homogeneous sample. In order to increase the generalizability of our findings, future research should thus replicate these results across various organizational and industrial settings. Fourth, given this specificity of our sample, it would have been very interesting to examine the influence of other sources of social support in the university on employees' well-being.
In particular, future research should consider the influence of sources of support situated between the organizational and supervisor levels, i.e. perceived support from the faculty or from the research department. Fifth, we examined the effects of three forms of work-related social support on employees' well-being, through work engagement and workaholism, without including any job demands in our research model. However, prior studies have reported a strong positive relationship between employees' workaholism and job demands (e.g. Schaufeli et al., 2008b). Indeed, workaholic employees tend to create their own job demands (Guglielmi et al., 2012), such as making their work more complicated by accepting new tasks (e.g. Machlowitz, 1980). In line with this, Taris et al. (2005) found that the positive relationship between workaholism and employees' exhaustion is partially mediated by job demands (i.e. work overload). In a similar vein, Schaufeli et al. (2009b) showed that role conflict mediated the relationships between workaholism and employees' well-being (i.e. burnout, job satisfaction, happiness, and perceived health). Furthermore, other scholars have argued, based on COR theory, that job resources might become more salient in influencing employees' work engagement when employees face high levels of job demands (Bakker and Demerouti, 2007). In line with this view, Hakanen et al. (2005) found that when resources at work were high (i.e. positive contacts with patients, peer contacts, variability in professional skills), these resources attenuated the negative effects of job demands on work engagement. Given these empirical studies, we think that future research should replicate our study by taking into account the influence of job demands on the investigated relationships.
Based on the evidence above, job demands might be hypothesized as interacting with social support in predicting work engagement, whereas they might also be considered as a mediator in the relationships between workaholism and well-being (i.e. job satisfaction, perceived stress, and sleep problems). Future research should thus examine the precise role played by job demands in the theoretical model that we tested. Future research should also envisage the possibility that work-related social support might have a dark side in certain cases. In line with this idea, Beehr et al. (2010) suggested that social interactions in the workplace, such as supervisor or colleague support, might be harmful for employees' psychological and physical health under certain circumstances. Results of their study showed, for instance, that social interactions with the supervisor or with colleagues might increase rather than reduce employees' strains when these interactions serve to underline how stressful the situation is. Therefore, it might be possible that the positive influence of perceived supervisor support or perceived coworker support on employees' well-being found in this study is canceled or reversed under specific circumstances or for specific individuals (e.g. when employees are not in need of social support). Future research is therefore needed to address this specific and interesting issue. Finally, because we were interested in the relative impact of each source of work-related social support, we examined the influence of perceived organizational support, perceived supervisor support and perceived coworker support independently of their influence on each other.
However, we should also note that authors have underlined that these three sources of work-related support are important entities in the work environment and that "perceptions of support given by any one of these sources is likely to influence perceptions of support given by the others" (Ng and Sorensen, 2008, p. 262). In line with this view, a large body of research has, for instance, reported a positive relationship between perceived supervisor support and perceived organizational support (e.g. Eisenberger et al., 2002; Rhoades and Eisenberger, 2002; Rhoades et al., 2001). Therefore, future research may, for instance, investigate whether perceived supervisor support influences perceived organizational support which, in turn, impacts work engagement.

Practical implications
Although the current findings rely on a sample of PhD students, who represent a specific category of workers, this study has valuable potential practical implications for managers and practitioners because it provides new understanding of the consequences of work engagement and workaholism for employees' well-being. Because work engagement is associated with positive indicators of well-being (increased job satisfaction and reduced perceived stress), whereas workaholism is linked to negative ones (reduced job satisfaction, increased perceived stress and sleep problems), managers should promote practices that foster work engagement and prevent workaholism. In line with this point of view, our findings indicated that the most powerful source of support for fostering work engagement is perceived supervisor support. Therefore, managers should encourage supervisors to be supportive. More precisely, they should inspire supervisors to be more active in this supportive role (Newman et al., 2012).
Perceived supervisor support can also be fostered by encouraging supervisors to have regular meetings with their subordinates (Newman et al., 2012) or by training them to be supportive in their role of directing, evaluating and coaching their subordinates (Eisenberger and Stinglhamber, 2011). Another possible way to foster supervisor support is to promote two-way communication that helps to create a climate of trust between employees and supervisors (Ng and Sorensen, 2008). If supervisors are present for their subordinates when needed and help them both instrumentally and emotionally, this might also increase levels of perceived supervisor support (Somech and Drach-Zahavy, 2013). Furthermore, our results indicated that perceived organizational support enhances work engagement, albeit to a lesser extent than perceived supervisor support. Perceived organizational support also increases job satisfaction and decreases perceived stress and sleep problems. Practically, perceived organizational support can be promoted, for instance, by maintaining open channels of communication, by providing useful resources for employees in need so as to help them do their job adequately, or by providing job security through a stated aim of avoiding layoffs as much as possible (Eisenberger and Stinglhamber, 2011). In addition, previous studies indicated that perceived organizational support can be fostered by providing effective training for employees, by enhancing employees' autonomy to fulfill their job responsibilities and by increasing procedural fairness regarding rewards and positive job conditions (Eisenberger and Stinglhamber, 2011). Finally, our results showed that perceived coworker support has a negative influence on workaholism. Therefore, managers should enhance support among coworkers in order to reduce workaholism.
For example, managers can encourage informal mentoring among employees in order to build a strong social network, or organize social events outside of work where employees are invited to interact freely with coworkers (Newman et al., 2012). Managers can also help to create an organizational culture where interactions between colleagues from different departments or units are a common practice (Newman et al., 2012).
Figure 1: Conceptual model
Figure 2: Completely standardized path coefficients for the alternative model 4
Table I: Confirmatory factor analyses fit indices for measurement models
Table II: Descriptive statistics and intercorrelations among variables
Table III: Study 1: fit indices for structural models
|
An online questionnaire was administered to 343 PhD students.
|
[SECTION: Findings] "Work life has undergone tremendous changes within the last 100 years" (Frese, 2008, p. 397). The increase in work complexity, global competition and the dissolution of the unity of work in time and space linked to rapid innovation mean that employees deal with a work environment that is more and more demanding (Frese, 2008). Additionally, advances in technology allow employees to use cellphones or laptops with high-speed data connections in any place, making it possible for them to work at any time (e.g. van Beek et al., 2012). Overall, these changes, together with technological development, might encourage employees to work harder and for longer hours (e.g. van Wijhe et al., 2011). In the scientific literature, two different types of working hard have been distinguished: an intrinsically negative form named workaholism and an intrinsically positive form named work engagement (e.g. Schaufeli et al., 2008b). A large body of research has shown that work engagement is positively related to various work outcomes and indicators of employees' well-being, whereas workaholism generally displays negative relationships with the same variables (e.g. Schaufeli and Bakker, 2004; Taris et al., 2010). More recently, some research has begun to investigate the concomitant effects of these two types of working hard on these outcomes (e.g. Del Libano et al., 2012; Schaufeli et al., 2008b). In line with this perspective, the first aim of our research is to analyze the relationships of work engagement and workaholism, taken together, with different indicators of well-being (i.e. job satisfaction, perceived stress and sleep problems). Specifically, we expected that workaholism would be related to low well-being, as indicated by lower levels of job satisfaction and higher levels of sleep problems and perceived stress, whereas work engagement would be associated positively with high well-being (i.e.
higher levels of job satisfaction, lower levels of perceived stress and sleep problems). Second, the present study examined the influence of perceived organizational support, perceived supervisor support and perceived coworker support on these relationships. While prior work has indicated that the organization, supervisor, and coworkers represent valuable sources of support that have a positive influence on employees' well-being (Ng and Sorensen, 2008), a rather unexplored issue is how these three types of work-related social support might have a concomitant impact on the two forms of working hard, which, in turn, will influence employees' well-being. Yet, according to Ng and Sorensen (2008), "it may be unwarranted for researchers to assume the effects of perceptions of different sources of support on employees are similar" (p. 259). On the contrary, these authors stressed the importance of this issue and recommended that scholars consider this matter by examining the specific effect of each source of social support, thus including more than one source of work-related social support in their studies. In line with this suggestion, we aimed to investigate whether the effects of three different types of work-related social support on employees' well-being are mediated by work engagement and workaholism. In doing so, our research contributes to the work engagement and workaholism literature by examining a more comprehensive model including both antecedents and consequences of these two constructs. Additionally, the present study also helps to identify which source of support is the most effective at increasing employees' well-being through the two different types of working hard. Addressing this issue may provide a better understanding of the specific effect of each type of work-related social support, which might help lead to theory development (Whetten, 1989).
Furthermore, at the practical level, identifying differences in their effects is of utmost importance as it will help practitioners to implement more appropriate interventions to enhance employees' well-being. Nowadays, there is still debate regarding the definition of workaholism. However, in our research, we refer to Schaufeli et al. (2009a), who define workaholism as "the tendency to work excessively hard and being obsessed with work, which manifests itself in working compulsively" (p. 322). We adopted this definition because it comprises the two characteristics of workaholism (i.e. working excessively and having an obsessive inner drive) that scholars identified as key and recurrent elements in the various definitions of this construct (e.g. Guglielmi et al., 2012; McMillan and O'Driscoll, 2006). Typical workaholic employees spend a great amount of their time working (van Beek et al., 2011). They experience a strong and uncontrollable inner drive, need, or compulsion to work hard which is not due to external factors such as financial incentives or career prospects (Schaufeli et al., 2006). More precisely, building on self-determination theory (Deci and Ryan, 1985), van Beek et al. (2012) showed that workaholic employees are driven by introjected regulation (i.e. a form of extrinsic motivation). Introjected regulation is described as "a product of an internalization process in which individuals rigidly adopt external standards of self-worth and social approval without fully identifying with them" (van Beek et al., 2012, p. 33). Recently, based on Higgins's regulatory focus theory (RFT; Higgins, 1997), van Beek et al. (2014) also demonstrated that workaholic employees have higher levels of prevention focus, meaning that they are sensitive to the absence or presence of negative outcomes and use avoidance strategies.
Taken together, these results support the idea that workaholic employees work hard to avoid negative feelings such as guilt, shame, irritability, and anxiety, or to increase feelings of pride (e.g. van Beek et al., 2012, 2014). Workaholic employees by definition work hard and for long, excessive hours (van Beek et al., 2011). Furthermore, these employees are unable to disengage from their work and think about it continually, even when they are not working (van Beek et al., 2011). Consequently, they have fewer opportunities to recover from their work, for instance by relaxing, and therefore might have a higher tendency to deplete their resources (van Wijhe et al., 2014). In line with this view, prior empirical studies have shown that workaholism is related to negative outcomes for employees, such as lower job satisfaction (e.g. Del Libano et al., 2012; van Beek et al., 2014), lower life satisfaction (Bonebright et al., 2000), and poorer social relationships outside work (Schaufeli et al., 2008b). Workaholic employees have also been found to be less happy (Schaufeli et al., 2009b), to suffer more from health complaints, to report lower levels of self-perceived health (e.g. Schaufeli et al., 2006), and to report higher levels of exhaustion (e.g. Taris et al., 2005) and sleep problems (e.g. Kubota et al., 2010, 2012). Conversely, an enthusiastic involvement in the job, called work engagement, might also explain employees' propensity to work hard. Work engagement is defined as "a positive and fulfilling work-related state that is characterized by vigor, dedication and absorption" (Schaufeli et al., 2002a, p. 72). Among these three dimensions, vigor consists of high levels of energy, mental resilience while working, and persistence when facing difficulties (Schaufeli et al., 2002a). Dedication refers to being involved in one's work and experiencing a sense of significance, inspiration, pride, and challenge at work (Schaufeli et al., 2002a).
Absorption is characterized by being fully concentrated and engrossed in one's work, whereby time passes quickly and people have difficulty detaching from their job (Schaufeli et al., 2002a). Work engagement (Schaufeli et al., 2002a) has been shown to be driven by intrinsic work motivation. Work-engaged employees thus consider their work as interesting, enjoyable, and satisfying (van Beek et al., 2012). Recently, based on RFT (Higgins, 1997), work engagement has also been positively related to having a promotion focus (van Beek et al., 2014), meaning that work-engaged employees are sensitive to the absence or presence of positive outcomes. This finding also indicates that work-engaged employees use approach strategies and are therefore likely to use an approach that "matches to their work goals that represent their hopes, wishes, and aspirations" (van Beek et al., 2014, p. 56). In sum, engaged employees have a sense of energetic connection with their work, are happily engrossed in their job, and do not feel guilty when they are not working (Schaufeli et al., 2008b). In line with this perspective, several studies have indicated that work engagement is associated with various positive outcomes for both organizations and employees. For example, engaged employees have been shown to be more satisfied with their job (e.g. Del Libano et al., 2012; van Beek et al., 2014), to demonstrate more personal initiative (Sonnentag, 2003), to have less intention to quit the organization (Schaufeli and Bakker, 2004; van Beek et al., 2014), and to perform better than non-engaged employees (e.g. Salanova et al., 2005). Work engagement has also been found to be related to higher life satisfaction and better mental and physical health (Schaufeli and Salanova, 2007; Schaufeli et al., 2008b). Furthermore, results of prior studies showed that work engagement is negatively associated with various indicators of low well-being such as suffering from psychosomatic symptoms (e.g.
headaches, cardiovascular problems; Koyuncu et al., 2006; Schaufeli et al., 2008b), exhaustion from work (e.g. Koyuncu et al., 2006), and sleep problems (Hallberg and Schaufeli, 2006). Thus, in short, work engagement and workaholism characterize two different psychological states and show different associations with work attitudes and indicators of well-being. While the former is related to positive outcomes, the latter is generally associated with negative ones. In line with these previous empirical findings and arguments, we posited that:

H1. Work engagement is positively related to (a) job satisfaction and negatively related to (b) perceived stress and (c) sleep problems.

H2. Workaholism is negatively related to (a) job satisfaction and positively related to (b) perceived stress and (c) sleep problems.

Social support
According to the job demands-resources (JD-R) model (Demerouti et al., 2001; Schaufeli and Bakker, 2004), two different types of work conditions, namely job demands and job resources, influence employees' well-being via a dual process, i.e. a health impairment process (linking job demands to negative outcomes through burnout) and a motivational process (linking job resources to positive outcomes through work engagement). Job demands refer to physical, psychological, social, or organizational aspects of the job that require sustained physical and/or psychological effort (e.g. time pressure, emotional demands, physical demands). Job resources are defined as the physical, psychological, social or organizational aspects of the job that reduce job demands, are functional for achieving work goals, or stimulate personal growth, learning and development (e.g. a supportive work environment, supervisor support, coworker support and feedback; Demerouti et al., 2001).
More precisely, the JD-R model describes a positive motivational process in which job resources such as social support enhance work engagement which, in turn, has positive consequences for employees and organizations. In line with this perspective, Schaufeli and Bakker (2004) have suggested that social support can drive an intrinsic motivational process by satisfying employees' need for autonomy and need to belong, as well as an extrinsic motivational process by increasing the probability of reaching work goals. Supervisor and coworker support, for instance, might play an intrinsically motivating role by fulfilling employees' need to belong (Xanthopoulou et al., 2008). Furthermore, coworker support might create among employees the conviction that they will receive help from their colleagues when needed, which might increase their confidence that they will achieve their work goals (Xanthopoulou et al., 2008). In doing so, coworker support might also play an extrinsically motivating role. Empirical studies investigating the positive influence of social support on work engagement have typically focused on supervisor and coworker support (e.g. Korunka et al., 2009). Accordingly, work engagement has been found to be positively predicted by both perceived supervisor support (e.g. Gillet et al., 2013) and perceived coworker support (e.g. Schaufeli and Bakker, 2004; Xanthopoulou et al., 2008) in several studies. In contrast, the influence of perceived organizational support, defined as employees' global beliefs that the organization cares about their well-being and values their contributions (Eisenberger et al., 1986), has been less investigated. Yet, numerous studies have demonstrated the positive influence of perceived organizational support on employees' well-being. Perceived organizational support has, for example, been shown to increase employees' job satisfaction and to reduce their stress (e.g.
Eisenberger and Stinglhamber, 2011; Rhoades and Eisenberger, 2002). Furthermore, perceived organizational support has been positively associated with work engagement in some prior studies (e.g. Caesens and Stinglhamber, 2014; Kinnunen et al., 2008; Sulea et al., 2012). Given this empirical evidence, it seems reasonable to suggest that perceived organizational support, perceived supervisor support, and perceived coworker support can all positively influence work engagement. However, to the best of our knowledge, no study has examined the joint effects of these three forms of support on work engagement. On the other hand, only a scarce literature has examined the relationship between social support and the negative form of working hard, i.e. workaholism. In this literature, social support as a general resource appeared to be negatively linked to workaholism (Schaufeli et al., 2008a). Conservation of resources (COR) theory (Hobfoll, 1985, 2002) helps to better understand how work-related social support may be negatively related to workaholism. A central tenet of COR theory is that people "with greater resources are less vulnerable to resource loss and more capable of resource gain" (Hakanen and Roodt, 2010, p. 89). In line with this principle, social support might both help employees cope with stressful events such as juggling multiple roles (Nicklin and McNall, 2013) and protect them from resource depletion (Somech and Drach-Zahavy, 2013). Therefore, as an energizing resource, social support might help employees cope with their tendency to work hard. In line with this view, previous authors have suggested that supervisor support and coworker cohesion are related to lower levels of compulsion to work (Johnstone and Johnston, 2005). In the same vein, Taris et al. (2010) have also argued that providing supervisors with effective training might help to raise employees' awareness of the meaning, aim, and relevance of their work.
This might therefore help to reduce employees' compulsion to work hard (Taris et al., 2010). Furthermore, according to the literature on perceived organizational support (Eisenberger and Stinglhamber, 2011), high levels of perceived organizational support indicate that the organization cares about employees' well-being and is willing to extend itself to provide help for employees when they need it (George et al., 1993). Supportive organizations might therefore be more prone to offer assistance programs to workaholic employees. It is also reasonable to think that organizations that highly value their human capital would be more inclined to implement individual-level interventions in order to help workaholic employees (Taris et al., 2010). In short, while the organization (e.g. Eisenberger et al., 1986), the supervisor (e.g. Eisenberger et al., 2002), and coworkers (e.g. Bishop et al., 2000) have all been shown to represent valuable sources of support, to the best of our knowledge, no previous study has included these three foci of support at once in order to investigate their specific impact on either work engagement or workaholism. Nevertheless, Ng and Sorensen (2008) have stressed, in their meta-analysis, that the effects of different sources of social support (e.g. perceived organizational support, perceived supervisor support, and perceived coworker support) on employees are very dissimilar. For instance, these authors have shown that perceived supervisor support is more strongly related to several work-related outcomes (i.e. job satisfaction, affective commitment, and turnover intentions) than perceived colleague support. According to Ng and Sorensen (2008), each source of social support does not necessarily have the same consequences and differs in the strength of its associations with outcomes. Therefore, these authors recommended that researchers carefully examine the effects of each source of support in their studies.
In line with these recommendations and our theoretical model presented in Figure 1, our study aims to explore the impact of perceived organizational support, perceived supervisor support, and perceived coworker support on both work engagement and workaholism which, in turn, influence various indicators of well-being (i.e. job satisfaction, perceived stress, and sleep problems). Based on Ng and Sorensen's (2008) recommendations and previous empirical findings, we posited the following hypotheses:
H3. Work engagement mediates the relationships between work-related social support and (a) job satisfaction, (b) perceived stress, and (c) sleep problems.
H4. Workaholism mediates the relationships between work-related social support and (a) job satisfaction, (b) perceived stress, and (c) sleep problems.
Sample and procedure
A total of 425 PhD students of a Belgian university responded to an online questionnaire on well-being at work (a response rate of approximately 21.25 percent). Due to missing data, only 343 of these 425 questionnaires were usable and were thus retained in the final sample. The external link to the online questionnaire was sent in an e-mail describing the aim of the questionnaire, and PhD students were assured of the anonymity and confidentiality of their responses. This specific population seemed particularly relevant for assessing the two forms of working hard, i.e. work engagement and workaholism. Indeed, PhD students' work is characterized by long weekly work hours and sustained concentration and cognitive effort. Furthermore, this population seems to be exposed to multiple job demands such as research, academic coursework, competition, and institutional demands (e.g. Myers et al., 2012). Of this sample, 42.86 percent were male and 57.14 percent were female.
On average, participants were 28.27 years of age (SD=4.43), had been employed by the university for 3.02 years (SD=2.07), and had been working with their advisor for 3.30 years (SD=2.20).
Measures
Because our participants spoke French, the scales used in the questionnaire were translated from English to French using the translation-back-translation procedure recommended by Brislin (1980). However, when available, we used validated French versions of the scales.
Work-related social support
Perceived organizational support was measured using a short four-item version of the Survey of Perceived Organizational Support (SPOS) (Eisenberger et al., 1986). These four items covered well the two fundamental aspects of perceived organizational support, namely "valuing employees' contributions" and "caring about employees' well-being". According to Rhoades and Eisenberger (2002), given the high internal consistency and unidimensionality of the SPOS, using a short version is not problematic. A sample item is: "[Name of the organization/university] really cares about my well-being". Perceived supervisor support was measured using a four-item version of the SPOS adapted following Rhoades et al. (2001) and Eisenberger et al. (2002), replacing the word "organization" with the term "advisor". A sample item is "Even if I did the best job possible, my advisor would fail to notice" (reverse-coded item). Prior empirical research indicated good psychometric properties for this perceived supervisor support scale (e.g. Rhoades et al., 2001). Perceived coworker support was operationalized using a four-item version of the SPOS adapted following Bishop et al. (2000) and Ladd and Henry (2000). A sample item is "My coworkers show very little concern for me" (reverse-coded item). Prior studies using this perceived coworker support scale showed good psychometric properties (e.g. Ladd and Henry, 2000).
Participants responded on a seven-point Likert-type scale ranging from 1 ("Strongly disagree") to 7 ("Strongly agree").
Work engagement
We used the nine-item short version of the "Utrecht Work Engagement Scale" (UWES) (Schaufeli et al., 2002a) to assess work engagement. The scale includes three dimensions: vigor (three items; e.g. "At my work, I feel bursting with energy"), dedication (three items; e.g. "I am enthusiastic about my job"), and absorption (three items; e.g. "I feel happy when I am working intensely"). The response scale ranged from 1 ("Never") to 7 ("Always").
Workaholism
We measured workaholism using the validated ten-item short version (Del Libano et al., 2010) of the "Dutch Work Addiction Scale" (DUWAS; Schaufeli et al., 2006), which includes the two dimensions of the construct, i.e. working excessively and working compulsively. Sample items are: "I find myself continuing work after my co-workers have called it quits" (working excessively; five items) and "I often feel that there's something inside me that drives me to work hard" (working compulsively; five items). The response scale ranged from 1 ("Never") to 4 ("Always").
Job satisfaction
Job satisfaction was measured with four items from Eisenberger et al. (1997). A sample item is: "All in all, I am very satisfied with my current job". The response scale ranged from 1 ("Strongly disagree") to 7 ("Strongly agree").
Perceived stress
We measured perceived stress with four items from the Perceived Stress Scale (PSS) (Cohen et al., 1983). A sample item is: "In the last month, how often have you felt difficulties were piling up so high that you could not overcome them?". The response scale ranged from 1 ("Never") to 5 ("Very often").
Sleep problems
We measured sleep problems with four items from the "Jenkins Sleep Quality Index" (JSQ) (Jenkins et al., 1988) assessing the most common sleep problems (i.e.
difficulties falling asleep, waking up during the night, waking up and having difficulties falling asleep again, and waking up tired). The response scale, indicating how often the stated condition occurred during an average month, ranged from 1 ("Not at all") to 6 ("22 to 31 days/month"). A sample item is: "I have had difficulty falling asleep".
Control variables
Gender, age, tenure in the university, and tenure with the advisor were measured.
Discriminant validity
In order to evaluate the distinctiveness of the eight concepts included in our study (i.e. perceived organizational support, perceived supervisor support, perceived coworker support, work engagement, workaholism, job satisfaction, perceived stress, and sleep problems), we conducted confirmatory factor analyses using Mplus 6.12 (Muthen and Muthen, 1998-2011). Because we used the same items to measure each type of work-related social support (i.e. perceived organizational support, perceived supervisor support, and perceived coworker support), we allowed the error covariances of these content-equivalent items to correlate freely. Additionally, due to considerable content overlap among some items included in the work engagement and workaholism scales, we allowed the error covariances of some of the paired items to correlate freely, as was previously done in the validation studies of these scales (Del Libano et al., 2010; Schaufeli et al., 2002b). Based on the χ² difference test (Bentler and Bonett, 1980), the results of the CFAs indicated that the hypothesized measurement model fitted the data well and was superior to all more constrained models. Indeed, as displayed in Table I, the hypothesized model had a better fit than the alternative measurement models. Because our data were self-reported, we also conducted the Harman single-factor test (Podsakoff et al., 2003) by constraining all items to load on a single factor.
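The χ² difference comparisons used for these nested-model tests follow a standard recipe: subtract the χ² values and degrees of freedom of the two models and evaluate the difference against a χ² distribution. The minimal Python sketch below is ours, not part of the Mplus analysis; the example values are the two model χ² statistics reported in the results (χ²(950) = 1730.55 for the hypothesized structural model and χ²(946) = 1658.43 for alternative model 4).

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function P(X > x), closed form valid for even df:
    exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!"""
    assert df > 0 and df % 2 == 0
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k) for k in range(df // 2))

def chi2_difference_test(chi2_a, df_a, chi2_b, df_b):
    """Delta-chi2 test for two nested models; model b is the more constrained one."""
    d_chi2, d_df = chi2_b - chi2_a, df_b - df_a
    return d_chi2, d_df, chi2_sf_even_df(d_chi2, d_df)

# chi2(946) = 1658.43 (alternative model 4) vs chi2(950) = 1730.55 (hypothesized).
d_chi2, d_df, p = chi2_difference_test(1658.43, 946, 1730.55, 950)
print(f"delta chi2 = {d_chi2:.2f}, delta df = {d_df}, p = {p:.2e}")
```

A significant p-value here means the less constrained model fits significantly better, which is the logic behind retaining alternative model 4 later in the text.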
The results indicated that the fit of the one-factor model was very poor. Furthermore, in line with Podsakoff et al.'s (2003) and Richardson et al.'s (2009) recommendations, we also tested a model wherein the items loaded both on their respective hypothesized latent constructs and on a common method factor. The results of this additional analysis indicated that the average variance explained by the common method factor was 10.89 percent. This value is less than half of the amount of method variance (25 percent) that Williams et al. (1989) report for self-reported studies. Furthermore, all items in the hypothesized eight-factor model displayed acceptable loadings. Indeed, items loaded on their respective factors with loadings ranging from 0.58 to 0.87 for perceived organizational support, from 0.67 to 0.93 for perceived supervisor support, from 0.67 to 0.92 for perceived coworker support, from 0.50 to 0.87 for work engagement, from 0.47 to 0.69 for workaholism, from 0.84 to 0.94 for job satisfaction, from 0.69 to 0.78 for perceived stress, and from 0.55 to 0.87 for sleep problems. Based on all of this evidence, the eight variables of our model were treated as separate constructs in our subsequent analyses in order to test our hypotheses.
Relationships among variables
Means, standard deviations, internal reliabilities, and correlations among our variables are displayed in Table II. All Cronbach's α's were above the 0.70 criterion established by Nunnally (1978).
Test of hypotheses
Following Becker's (2005) recommendations, we only statistically controlled for socio-demographic variables with a significant correlation with the dependent variables in our model (i.e. mediators and outcomes). Therefore, we introduced organizational tenure and tenure with the supervisor as additional exogenous variables predicting workaholism and job satisfaction, respectively.
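The 0.70 reliability criterion cited above refers to Cronbach's α, which can be reproduced from raw item scores in a few lines. The sketch below uses synthetic seven-point responses purely for illustration; it is not the study's data, and the helper function is ours.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic 7-point responses to a hypothetical four-item scale: a shared
# latent score plus item-specific noise, clipped to the 1-7 response range.
rng = np.random.default_rng(0)
latent = rng.normal(4, 1, size=200)
scores = np.clip(latent[:, None] + rng.normal(0, 0.8, size=(200, 4)), 1, 7)
print(round(cronbach_alpha(scores), 2))
```

Items driven by a common latent score, as above, yield an α comfortably over the 0.70 threshold; uncorrelated items would push α toward zero.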
Furthermore, tenure with the supervisor and gender were controlled for in predicting perceived stress, and gender was controlled for in predicting sleep problems. It is all the more important to control for these variables given that past research also indicated that they have an impact on our dependent variables. More precisely, tenure in the organization has been found to be negatively associated with workaholism (Taris et al., 2005) and job satisfaction (Duffy et al., 1998). Scholars also suggested that it is important to control for tenure with the supervisor, knowing that possible temporal effects might explain the influence of the relationship with the supervisor on outcomes (e.g. Wang et al., 2013). Finally, prior research indicated that women generally report higher levels of stress (e.g. Gyllensten and Palmer, 2005) and sleep problems than men (e.g. Ohayon, 1996). Therefore, as recommended by Spector and Brannick (2011), we included these control variables in the subsequent analyses based on reasonable theoretical or empirical evidence that these socio-demographic variables are linked to variables included in our research model. Using Mplus 6.12 (Muthen and Muthen, 1998-2011), we conducted SEM analyses in order to test our hypotheses. Because of the different response scales, all item responses were standardized prior to these analyses. We then compared the fit of our hypothesized model with nine alternative models. Table III displays the fit indices of all these models. As shown in this table, the results indicated that the hypothesized model has a good fit to the data, as indicated by χ²(950)=1730.55, a CFI of 0.90, an SRMR of 0.10, and an RMSEA of 0.05.
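As a quick sanity check, the RMSEA value above can be recomputed from the χ² statistic, its degrees of freedom, and the sample size (N = 343) using the standard point-estimate formula; the small helper below is ours, not Mplus output.

```python
import math

def rmsea(chi2_stat, df, n):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

# Hypothesized model: chi2(950) = 1730.55 with N = 343 usable respondents.
print(round(rmsea(1730.55, 950, 343), 3))  # → 0.049, i.e. the reported 0.05
```

Values at or below roughly 0.06 are conventionally read as good fit, consistent with the text's interpretation.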
However, based on the χ² difference test (Bentler and Bonett, 1980), some χ² changes were significant and revealed that alternative model 4, which adds paths between perceived organizational support and job satisfaction, between perceived stress and sleep problems, and between perceived supervisor support and job satisfaction, was superior to the hypothesized model and to alternative models 1, 2, and 3 (for more details, see Table III). Therefore, this alternative model 4 was retained as the best-fitting model (χ²(946)=1658.43, RMSEA=0.05, SRMR=0.09, and CFI=0.91). Standardized parameter estimates of this alternative model 4 are presented in Figure 2. For the sake of clarity, the effects of the control variables are detailed in the text. Organizational tenure was related to job satisfaction (γ=-0.12, p<0.05) but not to workaholism (γ=0.14, ns). Tenure with the supervisor was related to job satisfaction (γ=0.10, p<0.05) but not to workaholism and perceived stress (γ=-0.05, ns; γ=0.01, ns). Gender had a significant impact on perceived stress and sleep problems (γ=-0.10, p<0.05; γ=-0.14, p<0.01, respectively), indicating that men perceive less stress and suffer less from sleep problems than women. Controlling for these variables, the results indicated that work engagement is positively associated with job satisfaction (β=0.54, p<0.001) and negatively with perceived stress (β=-0.29, p<0.001), but not with sleep problems (β=-0.11, ns), supporting H1(a) and H1(b). In contrast, the results indicated that workaholism is negatively related to job satisfaction (β=-0.19, p<0.001), and positively related to perceived stress (β=0.36, p<0.001) and sleep problems (β=0.37, p<0.001), providing support for H2(a), H2(b), and H2(c). Furthermore, perceived organizational support is positively related to work engagement (γ=0.13, p<0.05) but is not related to workaholism (γ=-0.10, ns).
However, perceived organizational support has direct effects on job satisfaction (γ=0.19, p<0.001), perceived stress (γ=-0.21, p<0.01), and sleep problems (γ=-0.17, p<0.01). Perceived supervisor support is also positively linked to work engagement (γ=0.35, p<0.001) but not to workaholism (γ=-0.05, ns), and has a direct positive impact on job satisfaction (γ=0.25, p<0.001). Additionally, the results indicated that perceived coworker support is negatively related to workaholism (γ=-0.15, p<0.05), but not to work engagement (γ=0.01, ns). A bootstrapping analysis was performed on the final model (alternative model 4; Preacher and Hayes, 2004) in order to test the unstandardized indirect effects. The results of this analysis indicated that the indirect effects of perceived organizational support on job satisfaction and perceived stress through work engagement are significant (indirect effect=0.07; BCa 95 percent CI=[0.006; 0.140] and indirect effect=-0.03; BCa 95 percent CI=[-0.078; -0.005], respectively), supporting H3(a) and H3(b). Furthermore, the indirect effects of perceived supervisor support on job satisfaction and perceived stress through work engagement are also significant (indirect effect=0.17; BCa 95 percent CI=[0.111; 0.237] and indirect effect=-0.09; BCa 95 percent CI=[-0.137; -0.049], respectively), supporting H3(a) and H3(b). Finally, the results showed that the indirect effects of perceived coworker support on job satisfaction, perceived stress, and sleep problems through workaholism are significant (indirect effect=0.03; BCa 95 percent CI=[0.001; 0.066]; indirect effect=-0.04; BCa 95 percent CI=[-0.105; -0.003]; and indirect effect=-0.04; BCa 95 percent CI=[-0.098; -0.002], respectively), providing support for H4(a), H4(b), and H4(c). The purpose of this study was to examine the relationships of workaholism and work engagement with various indicators of well-being (i.e. job satisfaction, perceived stress, and sleep problems).
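The bootstrapped indirect effects reported above follow the product-of-coefficients logic (path a times path b). Below is a minimal percentile-bootstrap sketch on synthetic data; the variable names only echo the study's constructs, and the BCa correction used in the actual analysis is replaced by a plain percentile interval for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 343  # same N as the study's final sample

# Synthetic stand-ins: x ~ supervisor support, m ~ work engagement, y ~ job satisfaction.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.6 * m + 0.2 * x + rng.normal(size=n)

def indirect_effect(x, m, y):
    """a*b: slope of x -> m times slope of m -> y (controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]
    return a * b

# Resample respondents with replacement and re-estimate a*b each time.
boot = [indirect_effect(x[idx], m[idx], y[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

An interval excluding zero is what the text reports as a significant indirect effect; the BCa variant additionally corrects the percentile cutoffs for bias and skew in the bootstrap distribution.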
Furthermore, the study was designed to explore the potential influence of various sources of support (perceived organizational support, perceived supervisor support, and perceived coworker support) on these relationships. To the best of our knowledge, this is the first study that tests the joint influence of work engagement and workaholism on perceived stress and sleep problems. Furthermore, this is the first research that investigates the effects of three different forms of work-related social support on work engagement and workaholism which, in turn, influence employees' well-being. Our findings indicated that the associations of workaholism and work engagement with indicators of well-being run in opposite directions. More precisely, workaholism relates to negative indicators of well-being (i.e. lower levels of job satisfaction and higher levels of stress and sleep problems), whereas work engagement is associated with positive outcomes (i.e. higher levels of job satisfaction and lower levels of perceived stress). Our results corroborate prior research which found that workaholism is related to lower levels of job satisfaction (e.g. Del Libano et al., 2012), more health complaints (e.g. Schaufeli et al., 2006), and higher levels of sleep problems (e.g. Kubota et al., 2010). Furthermore, our findings are in line with prior studies which showed that work engagement is associated with higher levels of job satisfaction (e.g. van Beek et al., 2014) and better mental and physical health (e.g. Schaufeli et al., 2008b). Interestingly, our results indicated no significant impact of work engagement on sleep problems. Our findings thus replicate the significant negative correlation found by Hallberg and Schaufeli (2006) between work engagement and sleep disturbances, but fail to demonstrate that work engagement actually reduces sleep problems. This null finding can be explained in several ways.
Contrary to Hallberg and Schaufeli's (2006) study, our research considered the joint effects of workaholism and work engagement on sleep problems, such that workaholism may account for most of the variance in sleep problems. Furthermore, we measured sleep problems with a different scale than Hallberg and Schaufeli (2006). Concerning the question of which source of support influences work engagement and workaholism to eventually predict well-being, our results indicated that perceived organizational support, perceived supervisor support, and perceived coworker support are empirically distinct constructs that have different effects on work engagement and workaholism. More precisely, our results showed that work engagement partially mediated the relationship between perceived organizational support and both job satisfaction and perceived stress. Perceived organizational support indeed has a direct positive impact on job satisfaction and a direct negative impact on perceived stress and sleep problems. Work engagement was also found to mediate the influence of perceived supervisor support on job satisfaction (partially) and perceived stress (totally). Finally, workaholism was found to fully mediate the relationship between perceived coworker support and job satisfaction, perceived stress, and sleep problems. In short, our results indicated that perceived organizational support and perceived supervisor support are able to foster work engagement, whereas perceived coworker support is negatively associated with workaholism. To be more precise, perceived supervisor support has a stronger impact on work engagement than perceived organizational support (Δχ²(1)=3.14, p<0.10). Overall, these findings thus provide evidence for Ng and Sorensen's (2008) recommendations suggesting that different sources of social support have different effects and vary in the strength of their associations with employees' outcomes.
Furthermore, our results are in line with the multi-foci perspective in the social exchange literature (Cropanzano et al., 2004; Lavelle et al., 2007), which suggests that people can develop multiple relationships at work and have distinct social exchange relationships with diverse organizational entities, such as the organization as a whole and specific entities within the organization such as supervisors, coworkers, or work groups (Cropanzano et al., 2004; Lavelle et al., 2007). In this multi-foci view, employees' proximity to and high frequency of interaction with local organizational representatives and constituencies provide an advantage over more encompassing organizational units, including the entire organization, for developing strong exchange relationships (e.g. Becker, 1992; Mueller and Lawler, 1999). Accordingly, studies based on this multi-foci perspective of social exchange showed, for instance, that more proximal social exchange targets (i.e. supervisors or the team) are stronger predictors of employees' performance than more distal targets (e.g. the organization; Lavelle et al., 2007). According to Mueller and Lawler (1999), this phenomenon can be explained by the fact that more proximal targets provide employees with a greater sense of control over their work. Our results are therefore consistent with these studies in showing that more proximal units of social support (i.e. supervisor support and coworker support) have stronger associations with work engagement for the former (i.e. supervisor support) and workaholism for the latter (i.e. coworker support) than a more distal unit, i.e. organizational support. In doing so, we extend previous knowledge by showing which source of support is the most effective in influencing each of these two forms of working hard.
More precisely, the finding that perceived supervisor support contributes more strongly to work engagement than other sources of work-related social support is consistent with suggestions made by previous authors that a high frequency of interaction with supervisors helps to create strong relationships with these entities (e.g. Becker, 1992). Supervisors also play a central role in employees' everyday work life (Liden et al., 1997) and are a critical resource in their daily work. Furthermore, our findings also indicate that perceived coworker support is the only source of work-related support in our study that is able to reduce employees' workaholism. Perceived support from coworkers might help workaholic employees to detach from their job, for instance by inciting them to engage in off-job activities (e.g. sports), by distracting them from their work, or by boosting their social life outside work. Finally, contrary to past research (e.g. Gillet et al., 2013), our results showed that perceived coworker support does not predict work engagement. This divergence of results may be due to the fact that, in the current study, we took into account and controlled for the effects of all three sources of work-related social support on work engagement and workaholism. Therefore, perceived supervisor support and, to a lesser extent, perceived organizational support account for most of the variance in work engagement. Supporting this view, this particular finding is consistent with some prior studies (e.g. Othman and Nasurdin, 2013) that found that coworker support was not related to work engagement when the influence of supervisor support was taken into account.
Limitations and perspectives for future research
Despite its contributions, several limitations of this research should be mentioned. First, the cross-sectional design of the study prevents us from making any inference of causality among the variables included in our model.
For instance, our results indicated that perceived coworker support is negatively related to workaholism. However, we cannot exclude the possibility that workaholic employees might perceive less support from their coworkers than non-workaholic employees. Therefore, longitudinal research with repeated measures is needed in order to investigate causal relationships with more acuity. Second, the data were exclusively based on self-reported measures, which exposed our study to common method variance. Nevertheless, our study was primarily intended to assess employees' perceptions at work, and we therefore needed to measure self-perceptions of these constructs. As recommended by Podsakoff et al. (2003), we assured respondents of the anonymity of their responses in order to reduce this common method bias. Even with these precautions, we cannot totally exclude the possibility that common method bias may have influenced our results. Therefore, as indicated above, we also conducted Harman's one-factor test (Podsakoff et al., 2003) in our sample, and the results showed a very poor fit of the one-factor model. Furthermore, as recommended by Podsakoff et al. (2003), we also tested a model wherein the items loaded both on their respective hypothesized latent constructs and on a common method factor. The results indicated that the average variance explained in the items by the common method factor was only 10.89 percent. This evidence considerably reduces our concerns regarding this potential threat. Third, the results of this study are specific to a PhD student population and based on a very homogeneous sample. In order to increase the generalizability of our findings, future research should thus replicate these results across various organizational and industry settings. Fourth, given this specificity of our sample, it would have been very interesting to examine the influence of other sources of social support in the university on employees' well-being.
In particular, future research should consider the influence of sources of support situated between the organizational and supervisory levels, i.e. perceived support from the faculty or from the research department. Fifth, we examined the effects of three forms of work-related social support on employees' well-being, through work engagement and workaholism, without including any job demands in our research model. However, prior studies have reported a strong positive relationship between employees' workaholism and job demands (e.g. Schaufeli et al., 2008b). Indeed, workaholic employees tend to create their own job demands (Guglielmi et al., 2012), for example by making their work more complicated through accepting new tasks (e.g. Machlowitz, 1980). In line with this, Taris et al. (2005) found that the positive relationship between workaholism and employees' exhaustion is partially mediated by job demands (i.e. work overload). In a similar vein, Schaufeli et al. (2009b) showed that role conflict mediates the relationships between workaholism and employees' well-being (i.e. burnout, job satisfaction, happiness, and perceived health). Furthermore, other scholars have argued, based on COR theory, that job resources might become more salient in influencing employees' work engagement when employees face high levels of job demands (Bakker and Demerouti, 2007). In line with this view, Hakanen et al. (2005) found that when resources at work were high (i.e. positive contacts with patients, peer contacts, variability in professional skills), these resources were able to attenuate the negative effects of job demands on work engagement. Given these empirical studies, we think that future research should replicate our study by taking into account the influence of job demands on the investigated relationships.
Based on the evidence above, job demands might be hypothesized as interacting with social support in predicting work engagement, whereas they might also be considered as a mediator in the relationships between workaholism and well-being (i.e. job satisfaction, perceived stress, and sleep problems). Future research should thus examine the precise role played by job demands in the theoretical model that we tested. Future research should also envisage the possibility that work-related social support might have a dark side in certain cases. In line with this idea, Beehr et al. (2010) suggested that social interactions in the workplace, such as supervisor or colleague support, might be harmful for employees' psychological and physical health under certain circumstances. Results of their study showed, for instance, that social interactions with the supervisor or with colleagues might increase rather than reduce employees' strains when these interactions serve to underline how stressful the situation is. Therefore, it might be possible that the positive influence of perceived supervisor support or perceived coworker support on employees' well-being found in this study is canceled or reversed under specific circumstances or for specific individuals (e.g. when employees are not in need of social support). Future research is therefore needed to address this specific and interesting issue. Finally, because we were interested in the relative impact of each source of work-related social support, we examined the influence of perceived organizational support, perceived supervisor support and perceived coworker support independently of their influence on each other.
However, we should also note that authors have underlined that these three sources of work-related support are important entities in the work environment and that "perceptions of support given by any one of these sources is likely to influence perceptions of support given by the others" (Ng and Sorensen, 2008, p. 262). In line with this view, a large body of research has, for instance, reported a positive relationship between perceived supervisor support and perceived organizational support (e.g. Eisenberger et al., 2002; Rhoades and Eisenberger, 2002; Rhoades et al., 2001). Therefore, future research may, for instance, investigate whether perceived supervisor support influences perceived organizational support which, in turn, impacts work engagement.

Practical implications

Although the current findings rely on a sample of PhD students, who represent a specific category of workers, this study has valuable practical implications for managers and practitioners because it provides new understanding of the consequences of work engagement and workaholism for employees' well-being. Because work engagement is associated with positive indicators of well-being (increased job satisfaction and reduced perceived stress), whereas workaholism is linked to negative ones (reduced job satisfaction, increased perceived stress and sleep problems), managers should promote practices that foster work engagement and prevent workaholism. In line with this point of view, our findings indicated that the most powerful source of support for fostering work engagement is perceived supervisor support. Therefore, managers should encourage supervisors to be supportive. More precisely, they should inspire supervisors to be more active in this supportive role (Newman et al., 2012).
Perceived supervisor support can also be fostered by encouraging supervisors to have regular meetings with their subordinates (Newman et al., 2012) or by training them to be supportive in their role of directing, evaluating and coaching their subordinates (Eisenberger and Stinglhamber, 2011). Another possible way to foster supervisor support is by promoting two-way communication that helps to create a climate of trust between employees and supervisors (Ng and Sorensen, 2008). If supervisors are present for their subordinates when needed and help them both instrumentally and emotionally, it might also increase levels of perceived supervisor support (Somech and Drach-Zahavy, 2013). Furthermore, our results indicated that perceived organizational support enhances work engagement, albeit to a lesser extent than perceived supervisor support. Perceived organizational support also increases job satisfaction and decreases perceived stress and sleep problems. Practically, perceived organizational support can be promoted, for instance, by maintaining open channels of communication, by providing employees with useful resources when they are in need in order to help them do their job adequately, or by providing job security through a commitment to avoid layoffs as much as possible (Eisenberger and Stinglhamber, 2011). In addition, previous studies indicated that perceived organizational support can be fostered by providing effective training for employees, by enhancing employees' autonomy to fulfill their job responsibilities and by increasing procedural fairness regarding rewards and positive job conditions (Eisenberger and Stinglhamber, 2011). Finally, our results showed that perceived coworker support has a negative influence on workaholism. Therefore, managers should enhance support among coworkers in order to reduce workaholism.
For example, managers can encourage informal mentoring among employees in order to build a strong social network or organize social events outside of work where employees will be invited to freely interact with coworkers (Newman et al., 2012). Managers can also help to create an organizational culture where interactions between colleagues from different departments or units are a common practice (Newman et al., 2012).
Figure 1 Conceptual model
Figure 2 Completely standardized path coefficients for the alternative model 4
Table I Confirmatory factor analyses fit indices for measurement models
Table II Descriptive statistics and intercorrelations among variables
Table III Study 1: fit indices for structural models
Results revealed that work engagement mediates the relationships of perceived organizational support with job satisfaction and perceived stress. Perceived organizational support also has a direct positive impact on job satisfaction and a direct negative impact on perceived stress and sleep problems. Furthermore, work engagement mediates the influence of perceived supervisor support on job satisfaction and perceived stress. Finally, workaholism was found to mediate the relationships of perceived coworker support with job satisfaction, perceived stress, and sleep problems.
[SECTION: Value] "Work life has undergone tremendous changes within the last 100 years" (Frese, 2008, p. 397). The increase in work complexity, global competition and the dissolution of the unity of work in time and space linked to rapid innovation mean that employees deal with a work environment that is more and more demanding (Frese, 2008). Additionally, advances in technology allow employees to use cellphones or laptops with high-speed data connections at any place, making it possible for them to work at any time (e.g. van Beek et al., 2012). Overall, these changes might encourage employees to work harder and for longer hours (e.g. van Wijhe et al., 2011). In the scientific literature, two different types of working hard have been distinguished: an intrinsically negative form named workaholism and an intrinsically positive form named work engagement (e.g. Schaufeli et al., 2008b). A large body of research has shown that work engagement is positively related to various work outcomes and indicators of employees' well-being, whereas workaholism generally displays negative relationships with the same variables (e.g. Schaufeli and Bakker, 2004; Taris et al., 2010). More recently, some research has begun to investigate the concomitant effects of these two types of working hard on these outcomes (e.g. Del Libano et al., 2012; Schaufeli et al., 2008b). In line with this perspective, the first aim of our research is to analyze the relationships of work engagement and workaholism, taken together, with different indicators of well-being (i.e. job satisfaction, perceived stress and sleep problems). Specifically, we expected that workaholism would be related to low well-being, as indicated by lower levels of job satisfaction and higher levels of sleep problems and perceived stress, whereas work engagement would be positively associated with high well-being (i.e.
higher levels of job satisfaction, lower levels of perceived stress and sleep problems). Second, the present study examined the influence of perceived organizational support, perceived supervisor support and perceived coworker support on these relationships. While prior work has indicated that the organization, supervisor, and coworkers represent valuable sources of support that have a positive influence on employees' well-being (Ng and Sorensen, 2008), a rather unexplored issue is how these three types of work-related social support might have a concomitant impact on the two forms of working hard, which, in turn, will influence employees' well-being. Yet, according to Ng and Sorensen (2008), "it may be unwarranted for researchers to assume the effects of perceptions of different sources of support on employees are similar" (p. 259). On the contrary, these authors stressed the importance of this issue and recommended that scholars examine the specific effect of each source of social support by including more than one source of work-related social support in their studies. In line with this suggestion, we aimed to investigate whether the effects of three different types of work-related social support on employees' well-being are mediated by work engagement and workaholism. In doing so, our research contributes to the work engagement and workaholism literature by examining a more comprehensive model including both antecedents and consequences of these two constructs. Additionally, the present study also helps to identify which source of support is the most effective in increasing employees' well-being through the two different types of working hard. Addressing this issue may provide a better understanding of the specific effect of each type of work-related social support, which might help lead to theory development (Whetten, 1989).
Furthermore, at the practical level, identifying differences in their effects is of utmost importance as it will help practitioners to implement more appropriate interventions in order to enhance employees' well-being. Nowadays, there is still debate regarding the definition of workaholism. However, in our research, we refer to Schaufeli et al. (2009a), who define workaholism as "the tendency to work excessively hard and being obsessed with work, which manifests itself in working compulsively" (p. 322). We decided to adopt this definition because it comprises the two characteristics of workaholism (i.e. working excessively and having an obsessive inner drive) that scholars identified as key and recurrent elements in the various definitions of this construct (e.g. Guglielmi et al., 2012; McMillan and O'Driscoll, 2006). Typical workaholic employees spend a great amount of their time working (van Beek et al., 2011). They experience a strong and uncontrollable inner drive, need, or compulsion to work hard which is not due to external factors such as financial pressures and career perspectives (Schaufeli et al., 2006). More precisely, building on self-determination theory (Deci and Ryan, 1985), van Beek et al. (2012) showed that workaholic employees are driven by introjected regulation (i.e. a form of extrinsic motivation). Introjected regulation is described as "a product of an internalization process in which individuals rigidly adopt external standards of self-worth and social approval without fully identifying with them" (van Beek et al., 2012, p. 33). Recently, based on Higgins's regulatory focus theory (RFT; Higgins, 1997), van Beek et al. (2014) also demonstrated that workaholic employees have higher levels of prevention focus, meaning that they are sensitive to the absence or presence of negative outcomes and use avoidance strategies.
Taken together, these results support the idea that workaholic employees work hard to avoid negative feelings such as guilt, shame, irritability, and anxiety, or to increase feelings of pride (e.g. van Beek et al., 2012, 2014). Workaholic employees by definition work hard, for long and excessive hours (van Beek et al., 2011). Furthermore, these employees are unable to disengage from their work and think about it continually, even when they are not working (van Beek et al., 2011). Consequently, they have less opportunity to recover from their work, for instance by relaxing, and therefore might have a higher tendency to deplete their resources (van Wijhe et al., 2014). In line with this view, prior empirical studies have shown that workaholism is related to negative outcomes for employees, such as lower job satisfaction (e.g. Del Libano et al., 2012; van Beek et al., 2014), lower life satisfaction (Bonebright et al., 2000), and poorer social relationships outside work (Schaufeli et al., 2008b). Workaholic employees have also been found to be less happy (Schaufeli et al., 2009b), to suffer more from health complaints, to report lower levels of self-perceived health (e.g. Schaufeli et al., 2006), and to report higher levels of exhaustion (e.g. Taris et al., 2005) and sleep problems (e.g. Kubota et al., 2010, 2012). Conversely, an enthusiastic involvement in the job, called work engagement, might also explain employees' propensity to work hard. Work engagement is defined as "a positive and fulfilling work-related state that is characterized by vigor, dedication and absorption" (Schaufeli et al., 2002a, p. 72). Among these three dimensions, vigor consists of high levels of energy, mental resilience while working, and persistence when facing difficulties (Schaufeli et al., 2002a). Dedication refers to being involved in one's work and experiencing a sense of significance, inspiration, pride, and challenge at work (Schaufeli et al., 2002a).
Absorption is characterized by being fully concentrated and engrossed in one's work, whereby time passes fast and people have difficulty detaching from their job (Schaufeli et al., 2002a). Work engagement (Schaufeli et al., 2002a) has been shown to be driven by intrinsic work motivation. Work-engaged employees thus consider their work as interesting, enjoyable, and satisfying (van Beek et al., 2012). Recently, based on RFT (Higgins, 1997), work engagement has also been positively related to having a promotion focus (van Beek et al., 2014), meaning that work-engaged employees are sensitive to the absence or presence of positive outcomes. This finding also indicates that work-engaged employees use approach strategies and therefore are likely to use an approach that "matches to their work goals that represent their hopes, wishes, and aspirations" (van Beek et al., 2014, p. 56). In sum, engaged employees have a sense of energetic connection with their work, are happily engrossed in their job, and do not feel guilty when they are not working (Schaufeli et al., 2008b). In line with this perspective, several studies have indicated that work engagement is associated with various positive outcomes for both organizations and employees. For example, engaged employees have been shown to be more satisfied with their job (e.g. Del Libano et al., 2012; van Beek et al., 2014), to demonstrate more personal initiative (Sonnentag, 2003), to have less intention to quit the organization (Schaufeli and Bakker, 2004; van Beek et al., 2014), and to perform better than non-engaged employees (e.g. Salanova et al., 2005). Work engagement has also been found to be related to higher life satisfaction and better mental and physical health (Schaufeli and Salanova, 2007; Schaufeli et al., 2008b). Furthermore, results of prior studies showed that work engagement is negatively associated with various indicators of low well-being such as suffering from psychosomatic symptoms (e.g.
headaches, cardiovascular problems; Koyuncu et al., 2006; Schaufeli et al., 2008b), exhaustion from work (e.g. Koyuncu et al., 2006), and sleep problems (Hallberg and Schaufeli, 2006). In short, work engagement and workaholism characterize two different psychological states and show different associations with work attitudes and indicators of well-being. While the former is related to positive outcomes, the latter is generally associated with negative ones. In line with these previous empirical findings and arguments, we posited that:

H1. Work engagement is positively related to (a) job satisfaction and negatively related to (b) perceived stress and (c) sleep problems.

H2. Workaholism is negatively related to (a) job satisfaction and positively related to (b) perceived stress and (c) sleep problems.

Social support

According to the job demands-resources (JD-R) model (Demerouti et al., 2001; Schaufeli and Bakker, 2004), two different types of work conditions, namely job demands and job resources, influence employees' well-being via a dual process, i.e. a health impairment process (linking job demands to negative outcomes through burnout) and a motivational process (linking job resources to positive outcomes through work engagement). Job demands refer to physical, psychological, social, or organizational aspects of the job that require sustained physical and/or psychological effort (e.g. time pressure, emotional demands, physical demands). Job resources are defined as the physical, psychological, social or organizational aspects of the job that reduce job demands, are functional for achieving work goals or stimulate personal growth, learning and development (e.g. a supportive work environment, supervisor support, coworker support and feedback; Demerouti et al., 2001).
More precisely, the JD-R model describes a positive motivational process in which job resources such as social support are able to enhance work engagement which, in turn, has positive consequences for employees and organizations. In line with this perspective, Schaufeli and Bakker (2004) suggested that social support is able to drive an intrinsic motivational process by satisfying employees' need for autonomy and need to belong, as well as an extrinsic motivational process by increasing the probability of reaching work goals. Supervisor and coworker support, for instance, might play an intrinsic motivational role by fulfilling employees' need to belong (Xanthopoulou et al., 2008). Furthermore, coworker support might create among employees the conviction that they will receive help from their colleagues when needed, which might increase their confidence that they will achieve their work goals (Xanthopoulou et al., 2008). In doing so, coworker support might also play an extrinsic motivational role. Empirical studies investigating the positive influence of social support on work engagement have mainly focused on supervisor and coworker support (e.g. Korunka et al., 2009). Accordingly, work engagement has been found to be positively predicted by both perceived supervisor support (e.g. Gillet et al., 2013) and perceived coworker support (e.g. Schaufeli and Bakker, 2004; Xanthopoulou et al., 2008) in several studies. In contrast, the influence of perceived organizational support, defined as employees' global beliefs that the organization cares about their well-being and values their contributions (Eisenberger et al., 1986), has been less investigated. Yet, numerous studies have demonstrated the positive influence of perceived organizational support on employees' well-being. Perceived organizational support has for example been shown to increase employees' job satisfaction and to reduce their stress (e.g.
Eisenberger and Stinglhamber, 2011; Rhoades and Eisenberger, 2002). Furthermore, perceived organizational support has been positively associated with work engagement in some prior studies (e.g. Caesens and Stinglhamber, 2014; Kinnunen et al., 2008; Sulea et al., 2012). Given this empirical evidence, it seems reasonable to suggest that perceived organizational support, perceived supervisor support, and perceived coworker support are all able to positively influence work engagement. However, to the best of our knowledge, no study has examined the positive effects of these three forms of support altogether on work engagement. On the other hand, only scarce research has examined the relationship between social support and the negative type of working hard, i.e. workaholism. In this literature, it appeared that social support as a general resource is negatively linked to workaholism (Schaufeli et al., 2008a). The conservation of resources (COR) theory (Hobfoll, 1985, 2002) helps to better understand how work-related social support may be negatively related to workaholism. A central tenet of COR theory is that people "with greater resources are less vulnerable to resource loss and more capable of resource gain" (Hakanen and Roodt, 2010, p. 89). In line with this principle, social support might both help employees to cope with stressful events such as juggling multiple roles (Nicklin and McNall, 2013), and prevent them from resource depletion (Somech and Drach-Zahavy, 2013). Therefore, as an energizing resource for employees, social support might help them keep their tendency to work hard in check. In line with this view, previous authors have suggested that supervisor support and coworker cohesion are related to lower levels of compulsion to work (Johnstone and Johnston, 2005). In the same vein, Taris et al. (2010) have also argued that providing supervisors with effective training might help to raise employees' awareness of the meaning, aim, and relevance of their work.
This might therefore help to reduce employees' compulsion to work hard (Taris et al., 2010). Furthermore, according to the literature on perceived organizational support (Eisenberger and Stinglhamber, 2011), high levels of perceived organizational support indicate that the organization cares about employees' well-being and is willing to extend itself to provide help for employees when they need it (George et al., 1993). Therefore, it seems that supportive organizations might be more prone to offer assistance programs to workaholic employees. It is also reasonable to think that organizations that highly value their human capital would be more inclined to implement individual-level interventions in order to help workaholic employees (Taris et al., 2010). In short, while the organization (e.g. Eisenberger et al., 1986), supervisor (e.g. Eisenberger et al., 2002), and coworkers (e.g. Bishop et al., 2000) have been shown to represent valuable sources of support, to the best of our knowledge, no previous study has included these three foci of support at once in order to investigate their specific impact on either work engagement or workaholism. Nevertheless, Ng and Sorensen (2008) have stressed, in their meta-analysis, that the effects on employees of different sources of social support (e.g. perceived organizational support, perceived supervisor support, and perceived coworker support) are very dissimilar. For instance, these authors have shown that perceived supervisor support is more strongly related to several work-related outcomes (i.e. job satisfaction, affective commitment and turnover intentions) than perceived colleague support. According to Ng and Sorensen (2008), each source of social support does not necessarily have the same consequences and differs in the strength of its associations with outcomes. Therefore, these authors recommended that researchers carefully examine the effects of each source of support in their studies.
In line with these recommendations and our theoretical model presented in Figure 1, our study aims to explore the impact of perceived organizational support, perceived supervisor support, and perceived coworker support on both work engagement and workaholism which, in turn, will influence various indicators of well-being (i.e. job satisfaction, perceived stress and sleep problems). Based on Ng and Sorensen's (2008) recommendations and previous empirical findings, we posited the following hypotheses:

H3. Work engagement mediates the relationships between work-related social support and (a) job satisfaction, (b) perceived stress, and (c) sleep problems.

H4. Workaholism mediates the relationships between work-related social support and (a) job satisfaction, (b) perceived stress, and (c) sleep problems.

Sample and procedure

A total of 425 PhD students of a Belgian university responded to an online questionnaire related to well-being at work (a response rate of approximately 21.25 percent). Due to missing data, only 343 of these 425 questionnaires were usable and thus retained in the final sample. The external link to the online questionnaire was sent in an e-mail describing the aim of the questionnaire, and PhD students were assured of the anonymity and confidentiality of their responses. This specific population seemed particularly relevant for assessing the two forms of working hard, i.e. work engagement and workaholism. Indeed, PhD students' work is characterized by long work hours per week and sustained concentration and cognitive effort. Furthermore, this population is exposed to multiple job demands such as research, academic coursework, competition and institutional demands (e.g. Myers et al., 2012). Of this sample, 42.86 percent were male and 57.14 percent were female.
On average, participants were 28.27 years of age (SD=4.43), had been employed by the university for 3.02 years (SD=2.07) and had been working with their advisor for 3.30 years (SD=2.20).

Measures

Because our participants spoke French, the scales used in the questionnaire were translated from English to French using the translation-back-translation procedure recommended by Brislin (1980). However, when available, we used validated French versions of the scales.

Work-related social support

Perceived organizational support was measured using a short four-item version of the Survey of Perceived Organizational Support (SPOS) (Eisenberger et al., 1986). These four items covered well the two fundamental aspects of perceived organizational support, namely "valorization of employees' contributions" and "being concerned about employees' well-being". According to Rhoades and Eisenberger (2002), because of the high internal consistencies and the unidimensionality of the SPOS, using a short version is not problematic. A sample item is: "[Name of the organization/university] really cares about my well-being". Perceived supervisor support was measured using an adapted four-item version of the SPOS, following Rhoades et al. (2001) and Eisenberger et al. (2002) in replacing the word "organization" with the term "advisor". A sample item is "Even if I did the best job possible, my advisor would fail to notice" (reversed item). Prior empirical research indicated good psychometric properties for this perceived supervisor support scale (e.g. Rhoades et al., 2001). Perceived coworker support was operationalized using an adapted four-item version of the SPOS, following Bishop et al. (2000) and Ladd and Henry (2000). A sample item is "My coworkers show very little concern for me" (reversed item). Prior studies using this perceived coworker support scale showed good psychometric properties (e.g. Ladd and Henry, 2000).
Participants responded on a seven-point Likert-type scale ranging from 1 ("Strongly disagree") to 7 ("Strongly agree").

Work engagement

We used the nine-item short version of the Utrecht Work Engagement Scale (UWES) (Schaufeli et al., 2002a) to assess work engagement. The scale includes three dimensions: vigor (three items; e.g. "At my work, I feel bursting with energy"), dedication (three items; e.g. "I am enthusiastic about my job"), and absorption (three items; e.g. "I feel happy when I am working intensely"). The response scale ranged from 1 ("Never") to 7 ("Always").

Workaholism

We measured workaholism using the validated ten-item short version (Del Libano et al., 2010) of the Dutch Work Addiction Scale (DUWAS; Schaufeli et al., 2006), which includes the two dimensions of the construct, i.e. working excessively and working compulsively. Sample items are: "I find myself continuing work after my co-workers have called it quits" (working excessively; five items) and "I often feel that there's something inside me that drives me to work hard" (working compulsively; five items). The response scale ranged from 1 ("Never") to 4 ("Always").

Job satisfaction

Job satisfaction was measured with four items from Eisenberger et al. (1997). A sample item is: "All in all, I am very satisfied with my current job". The response scale ranged from 1 ("Strongly disagree") to 7 ("Strongly agree").

Perceived stress

We measured perceived stress with four items from the Perceived Stress Scale (PSS) (Cohen et al., 1983). A sample item is: "In the last month, how often have you felt difficulties were piling up so high that you could not overcome them?". The response scale ranged from 1 ("Never") to 5 ("Very often").

Sleep problems

We measured sleep problems with four items from the Jenkins Sleep Quality Index (JSQ) (Jenkins et al., 1988) assessing the most common sleep problems (i.e.
difficulties falling asleep, waking up during the night, waking up and having difficulties falling asleep again, and waking up tired). The response scale, indicating how often the stated condition occurred during an average month, ranged from 1 ("Not at all") to 6 ("22 to 31 days/month"). A sample item is: "I have had difficulties falling asleep".

Control variables

Gender, age, tenure in the university and tenure with the advisor were measured.

Discriminant validity

In order to evaluate the distinctiveness of the eight constructs included in our study (i.e. perceived organizational support, perceived supervisor support, perceived coworker support, work engagement, workaholism, job satisfaction, perceived stress, and sleep problems), we conducted confirmatory factor analyses using Mplus 6.12 (Muthén and Muthén, 1998-2011). Because we used the same items to measure each type of work-related social support (i.e. perceived organizational support, perceived supervisor support and perceived coworker support), we allowed the error covariances of these content-equivalent items to correlate freely. Additionally, due to considerable content overlap among some items included in the work engagement and workaholism scales, we allowed the error covariances of some of the paired items to correlate freely, as has been done previously in the validation studies of these scales (Del Libano et al., 2010; Schaufeli et al., 2002b). Based on the χ2 difference test (Bentler and Bonett, 1980), results of the CFA indicated that the hypothesized measurement model fitted the data well and was superior to all more constrained models. Indeed, as displayed in Table I, the hypothesized model had a better fit than the alternative measurement models. Because our data were self-reported, we also conducted the Harman single-factor test (Podsakoff et al., 2003) by constraining all items to load on a single-factor model.
Results indicated that the fit of the one-factor model was very poor. Furthermore, in line with Podsakoff et al.'s (2003) and Richardson et al.'s (2009) recommendations, we also tested a model wherein the items loaded both on their respective hypothesized latent constructs and on a common method factor. The results of this additional analysis indicated that the average variance explained by the common method factor was 10.89 percent. This value is less than half of the amount of method variance (25 percent) that Williams et al. (1989) report for self-reported studies. Furthermore, all items in the hypothesized eight-factor model displayed acceptable loadings. Indeed, items loaded on their respective factors with loadings ranging from 0.58 to 0.87 for perceived organizational support, from 0.67 to 0.93 for perceived supervisor support, from 0.67 to 0.92 for perceived coworker support, from 0.50 to 0.87 for work engagement, from 0.47 to 0.69 for workaholism, from 0.84 to 0.94 for job satisfaction, from 0.69 to 0.78 for perceived stress, and from 0.55 to 0.87 for sleep problems. Based on all of this evidence, the eight variables of our model were treated as separate constructs in the subsequent analyses used to test our hypotheses.

Relationships among variables

Means, standard deviations, internal reliabilities, and correlations among our variables are displayed in Table II. All Cronbach's αs were above the 0.70 criterion established by Nunnally (1978).

Test of hypotheses

Following Becker's (2005) recommendations, we only statistically controlled for socio-demographic variables with a significant correlation with the dependent variables in our model (i.e. mediators and outcomes). Therefore, we introduced organizational tenure and tenure with the supervisor as additional exogenous variables predicting workaholism and job satisfaction, respectively.
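As an illustrative aside, the internal-reliability criterion mentioned above (Cronbach's α above 0.70; Nunnally, 1978) can be computed directly from raw item scores. The sketch below is a minimal NumPy implementation of the standard formula, applied to made-up Likert responses, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 7-point Likert responses (5 respondents x 4 items),
# invented purely for illustration.
scores = np.array([
    [6, 7, 6, 6],
    [4, 4, 5, 4],
    [7, 7, 6, 7],
    [3, 2, 3, 3],
    [5, 5, 5, 6],
])
print(round(cronbach_alpha(scores), 2))  # prints 0.97
```

Items that move together across respondents, as in this toy matrix, push α toward 1; fully independent items push it toward 0.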
Furthermore, tenure with the supervisor and gender were controlled for perceived stress, and gender was controlled for sleep problems. It is all the more important to control for these variables given that past research also indicated that they have an impact on our dependent variables. More precisely, tenure in the organization has been found to be negatively associated with workaholism (Taris et al., 2005) and job satisfaction (Duffy et al., 1998). Scholars also suggested that it is important to control for tenure with the supervisor, knowing that possible temporal effects might explain the influence of the relationship with the supervisor on outcomes (e.g. Wang et al., 2013). Finally, prior research indicated that women generally report higher levels of stress (e.g. Gyllensten and Palmer, 2005) and sleep problems than men (e.g. Ohayon, 1996). Therefore, as recommended by Spector and Brannick (2011), we included these control variables in the subsequent analyses, based on reasonable theoretical or empirical evidence that these socio-demographic variables are linked to variables included in our research model. Using Mplus 6.12 (Muthen and Muthen, 1998-2011), we conducted SEM analyses in order to test our hypotheses. Because of the different response scales, all item responses were standardized prior to these analyses. Then, we compared the fit of our hypothesized model with nine alternative models. Table III displays the fit indices of all these models. As shown in this table, results indicated that the hypothesized model had a good fit to the data, as indicated by a χ2(950)=1730.55, a CFI of 0.90, a SRMR of 0.10 and a RMSEA of 0.05.
However, based on the χ2 difference test (Bentler and Bonett, 1980), some χ2 changes were significant, and revealed that the alternative model 4, which adds paths between perceived organizational support and job satisfaction, between perceived stress and sleep problems, and between perceived supervisor support and job satisfaction, was superior to the hypothesized model and the alternative models 1, 2 and 3 (for more details, see Table III). Therefore, this alternative model 4 was retained as the best fitting model (χ2(946)=1658.43, RMSEA=0.05, SRMR=0.09 and CFI=0.91). Standardized parameter estimates of this alternative model 4 are presented in Figure 2. For the sake of clarity, the effects of the control variables are detailed in the text. Organizational tenure was related to job satisfaction (γ=-0.12, p<0.05) but not to workaholism (γ=0.14, ns). Tenure with the supervisor was related to job satisfaction (γ=0.10, p<0.05) but not to workaholism and perceived stress (γ=-0.05, ns; γ=0.01, ns). Gender had a significant impact on perceived stress and sleep problems (γ=-0.10, p<0.05; γ=-0.14, p<0.01, respectively), indicating that men perceive less stress and suffer less from sleep problems than women. Controlling for these variables, results indicated that work engagement is positively associated with job satisfaction (β=0.54, p<0.001) and negatively with perceived stress (β=-0.29, p<0.001), but not with sleep problems (β=-0.11, ns), supporting H1(a) and H1(b). In contrast, results indicated that workaholism is negatively related to job satisfaction (β=-0.19, p<0.001), and positively related to perceived stress (β=0.36, p<0.001) and sleep problems (β=0.37, p<0.001), providing support for H2(a), H2(b) and H2(c). Furthermore, perceived organizational support is positively related to work engagement (γ=0.13, p<0.05) but is not related to workaholism (γ=-0.10, ns).
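As an aside, the nested-model comparison used above rests on a simple computation: the difference between two nested models' chi-square statistics is itself chi-square distributed, with degrees of freedom equal to the difference in model degrees of freedom. A minimal sketch of this test, using the fit statistics reported in the text (SciPy's `chi2.sf` is assumed available):

```python
from scipy.stats import chi2

# Chi-square difference test between the hypothesized model and the
# alternative model 4, with the fit statistics reported in the text:
#   hypothesized model:  chi2(950) = 1730.55
#   alternative model 4: chi2(946) = 1658.43
chi2_diff = 1730.55 - 1658.43   # difference in chi-square values
df_diff = 950 - 946             # difference in degrees of freedom
p_value = chi2.sf(chi2_diff, df_diff)  # upper-tail probability

print(f"delta chi2({df_diff}) = {chi2_diff:.2f}, p = {p_value:.3g}")
```

A significant p-value (here far below 0.001) indicates that the less constrained model fits significantly better, consistent with the decision to retain alternative model 4.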
However, perceived organizational support has direct effects on job satisfaction (γ=0.19, p<0.001), perceived stress (γ=-0.21, p<0.01) and sleep problems (γ=-0.17, p<0.01). Perceived supervisor support is also positively linked to work engagement (γ=0.35, p<0.001) but not to workaholism (γ=-0.05, ns) and has a direct positive impact on job satisfaction (γ=0.25, p<0.001). Additionally, results indicated that perceived coworker support is negatively related to workaholism (γ=-0.15, p<0.05), but not to work engagement (γ=0.01, ns). A bootstrapping analysis was performed on the final model (alternative model 4; Preacher and Hayes, 2004) in order to test the unstandardized indirect effects. The results of this analysis indicated that the indirect effects of perceived organizational support on job satisfaction and perceived stress through work engagement are significant (indirect effect=0.07; BCa 95 percent CI=[0.006; 0.140] and indirect effect=-0.03; BCa 95 percent CI=[-0.078; -0.005], respectively), supporting H3(a) and H3(b). Furthermore, the indirect effects of perceived supervisor support on job satisfaction and perceived stress through work engagement are also significant (indirect effect=0.17; BCa 95 percent CI=[0.111; 0.237] and indirect effect=-0.09; BCa 95 percent CI=[-0.137; -0.049], respectively), supporting H3(a) and H3(b). Finally, results showed that the indirect effects of perceived coworker support on job satisfaction, perceived stress, and sleep problems through workaholism are significant (indirect effect=0.03; BCa 95 percent CI=[0.001; 0.066]; indirect effect=-0.04; BCa 95 percent CI=[-0.105; -0.003] and indirect effect=-0.04; BCa 95 percent CI=[-0.098; -0.002], respectively), providing support for H4(a), H4(b) and H4(c).

The purpose of this study was to examine the relationships of workaholism and work engagement with various indicators of well-being (i.e. job satisfaction, perceived stress, and sleep problems).
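The bootstrapping test of indirect effects described above can be illustrated with a small, self-contained sketch. This is a hedged illustration only: the variable names and simulated data are hypothetical stand-ins for the study's constructs, and a simple percentile bootstrap is used rather than the bias-corrected (BCa) intervals reported in the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data standing in for the study's variables (hypothetical):
# X = perceived supervisor support, M = work engagement, Y = job satisfaction.
n = 300
X = rng.normal(size=n)
M = 0.35 * X + rng.normal(scale=0.9, size=n)             # a-path around 0.35
Y = 0.54 * M + 0.25 * X + rng.normal(scale=0.8, size=n)  # b-path around 0.54

def ols_coefs(y, design):
    """OLS coefficients for y ~ intercept + design columns."""
    A = np.column_stack([np.ones(len(y)), design])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def indirect_effect(x, m, y):
    a = ols_coefs(m, x)[1]                          # X -> M slope
    b = ols_coefs(y, np.column_stack([m, x]))[1]    # M -> Y, controlling for X
    return a * b

# Percentile bootstrap of the indirect effect a*b
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)                # resample cases with replacement
    boot[i] = indirect_effect(X[idx], M[idx], Y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(X, M, Y)
print(f"indirect effect = {point:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

As in the analyses above, the indirect effect is judged significant when the bootstrap confidence interval excludes zero.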
Furthermore, the study was designed to explore the potential influence of various sources of support (perceived organizational support, perceived supervisor support, and perceived coworker support) on these relationships. To the best of our knowledge, this is the first study that tests the joint influence of work engagement and workaholism on perceived stress and sleep problems. Furthermore, this is the first research that investigates the effects of three different forms of work-related social support on work engagement and workaholism, which in turn influence employees' well-being. Our findings indicated that workaholism and work engagement show opposite patterns of association with indicators of well-being. More precisely, workaholism relates to negative indicators of well-being (i.e. lower levels of job satisfaction and higher levels of stress and sleep problems), whereas work engagement is associated with positive outcomes (i.e. higher levels of job satisfaction and lower levels of perceived stress). Our results corroborate prior research which found that workaholism is related to lower levels of job satisfaction (e.g. Del Libano et al., 2012), more health complaints (e.g. Schaufeli et al., 2006), and higher levels of sleep problems (e.g. Kubota et al., 2010). Furthermore, our findings are in line with prior studies which showed that work engagement is positively associated with higher levels of job satisfaction (e.g. van Beek et al., 2014) and better mental and physical health (e.g. Schaufeli et al., 2008b). Interestingly, our results indicated no significant impact of work engagement on sleep problems. Nevertheless, our findings replicate the significant negative correlation found by Hallberg and Schaufeli (2006) between work engagement and sleep disturbances, but fail to demonstrate that work engagement actually reduces sleep problems. This null result can be explained in several ways.
Contrary to Hallberg and Schaufeli's (2006) study, our research considered the effects of workaholism and work engagement on sleep problems simultaneously, such that workaholism may account for the majority of the variance in sleep problems. Furthermore, we measured sleep problems with a different scale from the one used by Hallberg and Schaufeli (2006). Concerning the question of which source of support influences work engagement and workaholism to eventually predict well-being, our results indicated that perceived organizational support, perceived supervisor support, and perceived coworker support are empirically distinct constructs that have different effects on work engagement and workaholism. More precisely, our results showed that work engagement partially mediated the relationship between perceived organizational support and both job satisfaction and perceived stress. Perceived organizational support indeed has a direct positive impact on job satisfaction and a direct negative impact on perceived stress and sleep problems. Work engagement was also found to mediate the influence of perceived supervisor support on job satisfaction (partially) and perceived stress (totally). Finally, workaholism was found to fully mediate the relationship between perceived coworker support and job satisfaction, perceived stress, and sleep problems. In short, our results indicated that perceived organizational support and perceived supervisor support are able to foster work engagement, whereas perceived coworker support is negatively associated with workaholism. To be more precise, perceived supervisor support has a stronger impact on work engagement than perceived organizational support (Δχ2(1)=3.14, p<0.10). Overall, these findings thus provide evidence for Ng and Sorensen's (2008) suggestion that different sources of social support have different effects and vary in the strength of their associations with employees' outcomes.
Furthermore, our results are in line with the multi-foci perspective in the social exchange literature (Cropanzano et al., 2004; Lavelle et al., 2007), which suggests that people can develop multiple relationships at work and have distinct social exchange relationships with diverse organizational entities, such as the organization as a whole, and with specific entities within the organization such as supervisors, coworkers, or work groups. In this multi-foci view, employees' proximity and high frequency of interaction with local organizational representatives and constituencies provide an advantage over more encompassing organizational units, including the entire organization, for developing strong exchange relationships (e.g. Becker, 1992; Mueller and Lawler, 1999). Accordingly, studies based on this multi-foci perspective of social exchange showed, for instance, that more proximal social exchange targets (i.e. supervisors or team) are stronger predictors of employees' performance than more distal targets (e.g. the organization; Lavelle et al., 2007). According to Mueller and Lawler (1999), this phenomenon can be explained by the fact that more proximal targets provide employees with a greater sense of control over their work. Our results are therefore consistent with these studies, in showing that the more proximal units of social support have stronger associations than the more distal unit (i.e. organizational support): supervisor support with work engagement and coworker support with workaholism. In so doing, we extend previous knowledge by showing which source of support is the most effective in influencing each of these two forms of working hard.
More precisely, the finding that perceived supervisor support contributes more strongly to work engagement than other sources of work-related social support is consistent with previous authors' suggestion that high frequency of interaction with supervisors helps to create strong relationships with these entities (e.g. Becker, 1992). Supervisors also play a crucial role in employees' everyday work life (Liden et al., 1997) and are a critical resource in their daily work. Furthermore, our findings also indicate that perceived coworker support is the only source of work-related support in our study that is able to reduce employees' workaholism. Perceived support from coworkers might help workaholic employees to detach from their job, for instance by inciting them to engage in off-job activities (e.g. sports), by distracting them from their work, or by boosting their social life outside work. Finally, contrary to past research (e.g. Gillet et al., 2013), our results showed that perceived coworker support does not predict work engagement. This divergence of results may be due to the fact that, in the current study, we took into account and controlled for the effects of all three sources of work-related social support simultaneously on work engagement and workaholism. Therefore, perceived supervisor support and, to a lesser extent, perceived organizational support account for the majority of variance in work engagement. Supporting this view, this particular finding is consistent with some prior studies (e.g. Othman and Nasurdin, 2013) that found that coworker support was not related to work engagement when the influence of supervisor support was taken into account.

Limitations and perspectives for future research
Despite its contributions, several limitations of this research should be mentioned. First, the cross-sectional design of the study prevents us from making any inference of causality among the variables included in our model.
For instance, our results indicated that perceived coworker support is negatively related to workaholism. However, we cannot exclude the possibility that workaholic employees might perceive less support from their coworkers than non-workaholic employees. Therefore, longitudinal research with repeated measures is needed in order to investigate causal relationships with more acuity. Second, the data were exclusively based on self-reported measurements, which exposes our study to common method variance. Nevertheless, our study was primarily intended to assess employees' perceptions at work and we therefore needed to measure self-perceptions of these constructs. As recommended by Podsakoff et al. (2003), we assured respondents of the anonymity of their responses in order to reduce this common method bias. Even with these precautions, we cannot totally exclude the possibility that common method bias may have influenced our results. Therefore, as indicated above, we also conducted the Harman one-factor test (Podsakoff et al., 2003) in our sample and the results showed a very poor fit of a one-factor model. Furthermore, as recommended by Podsakoff et al. (2003), we also tested a model wherein the items loaded both on their respective hypothesized latent constructs and on a common method factor. The results indicated that the average variance explained in the items by the common method factor was only 10.89 percent. This evidence considerably reduces our concerns regarding this potential threat. Third, the results of this study are specific to a PhD student population and based on a very homogeneous sample. In order to increase the generalizability of our findings, future research should thus replicate these results across various organizational and industrial settings. Fourth, given this specificity of our sample, it would have been very interesting to examine the influence of other sources of social support in the university on employees' well-being.
In particular, future research should consider the influence of sources of support which lie between the organizational and the supervisor level, i.e. the perceived support from the faculty or from the research department. Fifth, we examined the effects of three forms of work-related social support on employees' well-being, through work engagement and workaholism, without including any job demands in our research model. However, prior studies have reported a strong positive relationship between employees' workaholism and job demands (e.g. Schaufeli et al., 2008b). Indeed, workaholic employees tend to create their own job demands (Guglielmi et al., 2012), such as making their work more complicated by accepting new tasks (e.g. Machlowitz, 1980). In line with this, Taris et al. (2005) found that the positive relationship between workaholism and employees' exhaustion is partially mediated by job demands (i.e. work overload). In a similar vein, Schaufeli et al. (2009b) showed that role conflict mediated the relationships between workaholism and employees' well-being (i.e. burnout, job satisfaction, happiness, and perceived health). Furthermore, other scholars have argued, based on COR theory, that job resources might become more salient in influencing employees' work engagement when employees face high levels of job demands (Bakker and Demerouti, 2007). In line with this view, Hakanen et al. (2005) found that when resources at work were high (i.e. positive contacts with patients, peer contacts, variability in professional skills), these resources were able to attenuate the negative effects of job demands on work engagement. Given these empirical studies, we think that future research should replicate our study by taking into account the influence of job demands in the investigated relationships.
Based on the evidence above, job demands might be hypothesized as interacting with social support in predicting work engagement, whereas they might also be considered as a mediator in the relationships between workaholism and well-being (i.e. job satisfaction, perceived stress, and sleep problems). Future research should thus examine the precise role played by job demands in the theoretical model that we tested. Future research should also envisage the possibility that work-related social support might have a dark side in certain cases. In line with this idea, Beehr et al. (2010) suggested the possibility that social interactions in the workplace such as supervisor or colleague support might be harmful for employees' psychological and physical health under certain circumstances. Results of their study showed, for instance, that social interactions with the supervisor or with colleagues might increase rather than reduce employees' strains when these interactions serve to underline how stressful the situation is. Therefore, it might be possible that the positive influence of perceived supervisor support or perceived coworker support on employees' well-being found in this study is canceled or reversed under specific circumstances or for specific individuals (e.g. when employees are not in demand of social support). Future research is therefore needed to address this specific and interesting issue. Finally, because we were interested in the relative impact of each source of work-related social support, we examined the influence of perceived organizational support, perceived supervisor support and perceived coworker support independently of their influence on each other. 
However, we should also note that authors have underlined that these three sources of work-related support are important entities in the work environment and that "perceptions of support given by any one of these sources is likely to influence perceptions of support given by the others" (Ng and Sorensen, 2008, p. 262). In line with this view, a large body of research has, for instance, reported a positive relationship between perceived supervisor support and perceived organizational support (e.g. Eisenberger et al., 2002; Rhoades and Eisenberger, 2002; Rhoades et al., 2001). Therefore, future research may, for instance, investigate whether perceived supervisor support influences perceived organizational support which, in turn, impacts work engagement.

Practical implications
Although the current findings rely on a sample of PhD students, who represent a specific category of workers, this study has valuable practical implications for managers and practitioners because it provides new understanding concerning the consequences of work engagement and workaholism for employees' well-being. Because work engagement is associated with positive indicators of well-being (increased job satisfaction and reduced perceived stress), whereas workaholism is linked to negative ones (reduced job satisfaction, increased perceived stress and sleep problems), managers should promote practices that foster work engagement and prevent workaholism. In line with this point of view, our findings indicated that the most powerful source of support for fostering work engagement is perceived supervisor support. Therefore, managers should encourage supervisors to be supportive. More precisely, they should inspire supervisors to be more active in this supportive role (Newman et al., 2012).
Perceived supervisor support can also be fostered by encouraging supervisors to have regular meetings with their subordinates (Newman et al., 2012) or by training them to be supportive in their role of directing, evaluating and coaching their subordinates (Eisenberger and Stinglhamber, 2011). Another possible way to foster supervisor support is to promote two-way communication that helps to create a climate of trust between employees and supervisors (Ng and Sorensen, 2008). If supervisors are present for their subordinates when needed and help them both instrumentally and emotionally, this might also increase levels of perceived supervisor support (Somech and Drach-Zahavy, 2013). Furthermore, our results indicated that perceived organizational support enhances work engagement, albeit to a lesser extent than perceived supervisor support. Perceived organizational support also increases job satisfaction and decreases perceived stress and sleep problems. Practically, perceived organizational support can be promoted, for instance, by maintaining open channels of communication, by providing employees with useful resources when they are in need so as to help them do their job adequately, or by providing job security through a firm commitment to avoiding layoffs as much as possible (Eisenberger and Stinglhamber, 2011). In addition, previous studies indicated that perceived organizational support can be fostered by providing effective training for employees, by enhancing employees' autonomy to fulfill their job responsibilities and by increasing procedural fairness regarding rewards and positive job conditions (Eisenberger and Stinglhamber, 2011). Finally, our results showed that perceived coworker support has a negative influence on workaholism. Therefore, managers should enhance support among coworkers in order to reduce workaholism.
For example, managers can encourage informal mentoring among employees in order to build a strong social network, or organize social events outside of work where employees are invited to interact freely with coworkers (Newman et al., 2012). Managers can also help to create an organizational culture where interactions between colleagues from different departments or units are a common practice (Newman et al., 2012).

Figure 1 Conceptual model
Figure 2 Completely standardized path coefficients for the alternative model 4
Table I Confirmatory factor analyses fit indices for measurement models
Table II Descriptive statistics and intercorrelations among variables
Table III Study 1: fit indices for structural models
|
The findings suggest that managers should promote practices in order to foster work engagement and prevent workaholism. In line with this, the findings indicated that the most powerful source of support that fosters work engagement is perceived supervisor support. Organizations should, therefore, train their supervisors to be supportive in their role of directing, evaluating and coaching subordinates or encourage supervisors to have regular meetings with their subordinates. Additionally, the results showed that perceived coworker support is the only source of work-related social support that has a negative influence on workaholism. Managers should foster coworker support, for instance by encouraging informal mentoring among employees in order to build a strong social network.
|
[SECTION: Purpose] As with any change-orientated activity, social entrepreneurship (SE) has not evolved in a vacuum, but rather within a complex framework of political, economic and social change occurring at global and local levels (Johnson, 2000; Kramer, 2005; Harding, 2006). The contribution of social entrepreneurs is being increasingly celebrated, as was witnessed at the World Economic Forum's (2006) Conference on Africa in Cape Town recently. Similarly, Warren Buffett's $30.7 billion donation to the Bill & Melinda Gates Foundation (Cole, 2006) demonstrates that venture philanthropy represents a significant change in how people think about transferring wealth. SE has evolved into the mainstream after years of marginalisation on the edges of the non-profit sector. Venture philanthropists, grant sponsors, boards of directors, non-profit entrepreneurs, consultants and academics are now all interested in the field of SE (Boschee, 2001; Kramer, 2005; Frumkin, 2006). Over the last decade, a critical mass of foundations, academics, non-profit organisations, and self-identified social entrepreneurs have emerged and SE has become a distinct discipline (Kramer, 2005; Dees, 2001). Worldwide, policy makers are using the language of local capacity building as a strategy to assist impoverished communities in becoming self-reliant (Peredo and Chrisman, 2006). Exemplifying a growing trend for academic institutions to take this phenomenon seriously, many dedicated centres for SE have evolved, for example the Skoll Centre for Social Entrepreneurship at Oxford University, created by Jeff Skoll, whose mission is to advance systemic change for the benefit of communities around the world by investing in, connecting and celebrating social entrepreneurs (The Economist, 2006).
Many similar institutions exist and researchers (Dees, 2001; Christie and Honing, 2006) suggest that the time is certainly ripe for entrepreneurial approaches to social problems, since SE merges the passion of a social mission with business discipline, innovation, and determination (Jackson, 2006). Although not new in the commercial/business sector, corporate governance and corporate social responsibility (CSR) have gained unprecedented prominence in the modern corporation and are well documented in academic research and popular literature (Rossouw and Van Vuuren, 2004). In South Africa (SA), where SE remains an under-researched area, the importance of SE as a phenomenon in social life is critical; social entrepreneurs contribute to an economy by providing an alternative business model for firms to trade commercially in an environmentally and socially sustainable way. They also provide an alternative delivery system for public services such as health, education, housing and community support (Harding, 2006, p. 10). Moreover, social entrepreneurs are also seen to be a growing source of solutions to issues that currently plague society, such as poverty, crime and abuse (Schuyler, 1998). Social entrepreneurs provide solutions to social, employment and economic problems where traditional market or public approaches fail (Jeffs, 2006). Yet despite these achievements, government in SA appears reluctant to directly engage with SE endeavours, viewing social entrepreneurs as innately risky and their activities as maverick endeavours.

Driving forces of SE with relevance to SA
The central driver of SE is social problems (Austin et al., 2006), and driving forces for social entrepreneurs include:
* politically, the devolution of social functions from national to local level and from public to private;
* economically, the reduction of funding from the public purse; and
* socially, problems of increasing complexity and magnitude (Lock, 2001, p. 1).
In SA, SE has unequivocal application where traditional government initiatives are unable to satisfy the entire social deficit, where an effort to reduce dependency on social welfare/grants is currently being instituted, and where the survival of many non-governmental organisations (NGOs) is at stake. Such challenges are exacerbated by a social context characterised by massive inequalities in education, housing, the HIV/AIDS pandemic, and high unemployment and poverty rates (Rwigema and Venter, 2004). Accompanying these massive social deficits, many governmental and philanthropic efforts have fallen far short of expectations, with social sector institutions often viewed as inefficient, ineffective, and unresponsive. In particular, policymakers have limited guidance and recognise that the invisible hand frequently fails to assert itself in the most socially beneficial outcomes (Christie and Honing, 2006). Moreover, many poverty alleviation programmes have degenerated into global charity events rather than serving local needs, since most projects have been conceived and managed by development agencies rather than by members of the community, resulting in a lack of ownership on the part of the target beneficiaries (Peredo and Chrisman, 2006). Such failures suggest that there are many gaps in understanding SE activities under conditions of material poverty and in different cultural settings (Peredo and Chrisman, 2006). Consequently, the focus of this study is on delineating the SE construct through a focused literature review and identifying the factors necessary for successful SE to flourish. As in the international arena, due to a surge in the establishment of non-profit organisations, SE in SA has proliferated in recent decades, although as an academic enquiry SE is still emergent (Austin et al., 2006). Based on these limitations, a study exploring the nature of SE and how to practise it successfully seems justified, particularly within a non-Western context.
Once the various theoretical issues and debates that have made significant contributions to the evolution of SE theory and practice are scrutinised, the author will assume a position relative to these debates and then conduct empirical investigations. Broadly, the paper seeks to interrogate existing SE theory and then, relative to these existing controversies, analyse quantitatively students' intentions to engage in SE. Furthermore, the different types of SE activities are measured in conjunction with the level of entrepreneurial and managerial skills typically associated with successful social entrepreneurs. The rationale for focusing this study on SE skills can be found in the many instances where it is impossible to obtain start-up funds without demonstrating proof of concept together with the commensurate abilities required to execute such an initiative. Those who fund social entrepreneurs are looking to invest in people with a demonstrated ability to create change, and the factors that matter most are the financial, strategic, managerial, and innovative abilities of social entrepreneurs (Kramer, 2005). Therefore, an investigation into the mix of managerial and entrepreneurial skills associated with successful SE is crucially important to this study. The following sections focus on defining and operationalising SE, and then identifying and surveying the different types of skills required for successful SE practices. In examining the SE construct, several definitions are investigated and their components are analysed. As used in social sciences research, a construct is an idea specifically invented for theory-building purposes; a construct also combines simpler concepts, especially when the idea is not directly observable and is complex to measure (Cooper and Emory, 1995).
To a large extent SE embodies such tendencies, where social entrepreneurs are reformers and revolutionaries, as described by Schumpeter (1934) but with a social mission; they affect fundamental change in the way things are done in the social sector (Dees, 1998). Social entrepreneurs are perceived as heading mission-based businesses rather than operating as charities. The entrepreneurs seek to create systemic changes and sustainable improvements, and they take risks on behalf of the people their organisation serves (Brinckerhoff, 2000). Though they may act locally, their actions have the potential to stimulate global improvements in various fields, whether it is in education, health care, economic development, the environment, the arts, or any other social field (Dees, 1998).The language of social entrepreneurship may be new, but the phenomenon is not. Peter Drucker (1979, p. 453) introduced the concept of social enterprise when he advocated that even the most private of private enterprises is an organ of society and serves a social function. He also advocated a need for the social sector in addition to the private sector of business and the public sector of government to satisfy social needs and provide a sense of citizenship and community. Similarly, Spear (2004) poses the question of whether SE is about creating social enterprise or is more concerned with those particular aspects of entrepreneurship that have a social dimension. Based on the Global Entrepreneurship Monitor (GEM) report, SE is defined as follows:Social entrepreneurship is any attempt at new social enterprise activity or new enterprise creation, such as self-employment, a new enterprise, or the expansion of a existing social enterprise by an individual, teams of individuals or established social enterprise, with social or community goals as its base and where the profit is invested in the activity in the activity or venture itself rather than returned to investors (Harding, 2006, p. 
5). Subscribing to the precept that "Social entrepreneurs are one species in the genus entrepreneur", Dees (2001, pp. 2-4) sees social entrepreneurs playing the role of change agents in the social sector, by adopting a mission to create and sustain social value (not just private value), by recognising and relentlessly pursuing new opportunities to serve that mission, engaging in a process of continuous innovation, adaptation and learning, acting boldly without being limited by resources currently at hand, and exhibiting greater accountability to the constituencies served and for the outcomes created. Each element in these definitions is based on the body of entrepreneurship research, and this is the core of what distinguishes social entrepreneurs from business entrepreneurs, even from socially responsible businesses. It is also worth noting that these definitions, primarily individualistic in their conception, fail to adequately acknowledge a collective form of entrepreneurship. Indeed, scholars now highlight the importance of recognising entrepreneurship as building on a collective process of learning and innovation (Peredo and Chrisman, 2006). Peredo and Chrisman (2006, p. 310) developed the concept of community-based enterprise (CBE), which they define as a community acting corporately as both entrepreneur and enterprise in pursuit of the common good. Documented cases of CBE include the Mondragon Cooperative Corporation in Spain (Morrison, 1991). Moreover, some of the oldest and some of the most modern social enterprises are co-operatives. A co-operative is defined as an autonomous association of voluntarily united persons who meet their common social, economic and cultural needs through a jointly owned and democratically controlled enterprise. It is estimated that there are about 800 million co-operative members around the world.
The Social Enterprise Coalition is an example of this type of co-operative (Cabinet Office, 2007). Such views resonate with Cooper and Denner's (1998) perspective of culture as capital - a theory of social capital, which refers to the relationships and networks from which individuals are able to derive institutional support. Social capital is cumulative, leads to benefits in the social world, and can be converted into other forms of capital. Based on these collective propositions, SE can be viewed as a process that serves as a catalyst for social change, and varies according to socio-economic and cultural environments. Combining insights from sociology, political science, and organisational theory, Mair and Marti (2006) propose the concept of embeddedness to emphasise the importance of the continuous interaction between social entrepreneurs and the context in which they are embedded. This discussion is relevant to South Africa, which is largely characterised as a collectivist nation, and where a concept like Ubuntu (together with an element of high community involvement) is in conflict with individualism, yet also differs from collectivism, in which the rights of the individual are subjugated to a common good. It is this collectively enabling approach that is essential for collective SE, which is more socially orientated, builds on strengths rather than dwelling on deficits, and encompasses socio-structural factors among the sources and remedies for human problems (Bandura, 1997). These perspectives are reinforced when Weerawardena and Mort (2006) advance the concept of SE through empirical research and find it is a bounded multi-dimensional construct deeply rooted in an organisation's social mission with its drive for sustainability, which in turn is shaped by environmental dynamism.
Similarly, Giddens's (1998) view is that SE is the way to reconstruct welfare and build social partnerships between public, social and business sectors by harnessing the dynamism of markets with a public interest focus. Consequently, neither profit nor customer satisfaction is the gauge of value creation in SE; social impact is. Social entrepreneurs look for a long-term social return on investment. Indeed, they are not simply driven by the perception of a social need or by their compassion; rather, they have a vision of how to achieve improvement and they are determined to achieve their vision (Dees, 2001; Bornstein, 1998).

SE differences and similarities in meaning
In general, based on established literature, the concept of SE remains poorly defined and its boundaries with other fields remain blurred (Mair and Marti, 2006). Conceptual differences are noticeable in definitions of social entrepreneurship (focus on process or behaviour), social entrepreneurs (focus on the founder of the initiative), and social enterprise (focus on the tangible outcome of SE). Peredo and McLean (2006) propose that one could easily ask what makes SE social, and what makes it entrepreneurship. Research on SE is clearly based on the knowledge base of entrepreneurship, and any definition of SE is shaped by the prevailing findings on entrepreneurship theory and practice. Although it is beyond the scope of this article to expound on the field of entrepreneurship, a contemporary definition is provided, which views the field of entrepreneurship as a "scholarly examination of how, by whom, and to what effect opportunities for creating future goods and services are discovered, evaluated and exploited" (Shane and Venkataraman, 2001, p. 218). The social element in definitions is often used to differentiate SE from commercial entrepreneurship, with the altruistic motive associated with SE and the profit motive with commercial entrepreneurship.
However, Mair and Marti (2006) argue that such a dichotomy is incorrect since SE, although based on ethical and moral issues, could include less altruistic reasons such as personal fulfilment, and the creation of fresh markets and new jobs. Correspondingly, commercial entrepreneurship also comprises a social aspect, as previously mentioned in terms of CSR. Rather than profit versus non-profit, Mair and Marti (2006) suggest that the main difference between business and social entrepreneurship lies in the relative priority given to social wealth creation versus economic wealth creation. Similarly, Peredo and McLean (2006) interpret a range of social entrepreneurs along a continuum of possibilities, ranging from social benefits merely accruing to a firm, to social goals being the only requirement of the firm. Such conceptualisations reflect the absence of defined boundaries of the SE phenomenon. An additional difficulty in defining SE is differentiating the small-scale, often voluntary or charitable work done by individuals making a social difference, from the social entrepreneur who establishes a high-turnover social enterprise (Harding, 2006). Because of their structure and constitution, social entrepreneurs are able to serve a triple bottom line, achieving profitability, societal impact and environmental sustainability simultaneously (Harding, 2006).

Theoretical conclusions
A summary of the SE academic literature suggests a number of themes, preoccupations and domains have emerged (Weerawardena and Mort, 2006), which may generally comprise:
* SE being expressed in a vast array of economic, educational, welfare and social activities, reflecting such diverse activities;
* SE being conceptualised in a number of contexts, i.e. public sector, community, social action organisations and charities; and
* the role of innovativeness, proactiveness and risk-taking in SE being emphasised, distinguishing SE from other forms of community work.
In order to draw some conclusions from these varying definitional controversies, an attempt is made to offer a position in relation to such debates, which allows for further interpretation and analysis. Existing theory has revealed a commonality across all definitions of SE: the underlying drive for social entrepreneurs is to create social value, rather than personal and shareholder wealth, and the activity is characterised by innovation or the creation of something new rather than simply the replication of existing enterprises or practices. In concordance with other SE reports (Harding, 2006, p. 5), it is argued that the SE definition must reflect two critical features of a social as opposed to a mainstream enterprise:
1. the project has social goals rather than profit objectives; and
2. revenue is used to support social goals instead of shareholder returns.
The definition of sustainability within the context of the non-profit sector is quite different from that of the for-profit sector, with the advocacy of sustainability versus stability being contentious in view of organisations having sustainable finances but no community support, and therefore probably not being sustainable (Johnson, 2003). Due to the exploratory nature of the study, and since specific associations are predicted between the variables under study, hypotheses are formulated. These hypotheses are based on SE definitional controversies, and on distinctions between managerial and entrepreneurial skills typically associated with successful SE. The economic value of entrepreneurial ability acquired through education can be identified through the work of Schultz (1980), who recognised that the returns that actually accrue to education are substantially undervalued.
Despite early notions that entrepreneurship is an innate skill, recent studies (e.g. Fayolle et al., 2005) indicate that entrepreneurship education influences both current behaviour and future intentions. Identifying business opportunities and having confidence in personal skills to establish a business may be enhanced through education and training, with evidence suggesting that those with more education are more likely to pursue opportunities for entrepreneurship (high-growth ventures) (Gibb, 2000). In developing a body of theory on SE, Austin et al. (2006) highlight the differences between social and commercial entrepreneurship and, based on a prevailing commercial model, explore new parameters when applied to SE. Although this distinction clearly overlaps with the differences highlighted previously between social goals and profit, it can be interpreted that the distinction between social and commercial entrepreneurship is not dichotomous, but better conceptualised as a continuum ranging from purely social to purely economic. Some key differences that emerge from case examples (Austin et al., 2006) are:
* SE focuses on serving basic, long-standing needs more effectively through innovative approaches, in contrast to commercial entrepreneurship, which tends to focus on breakthroughs and new needs.
* The context of SE differs from commercial entrepreneurship in the way that the interaction between a social venture's mission statement and its performance measurement systems influences entrepreneurial behaviour (quantification of social impact is difficult).
* The nature of the human and financial resources for SE differs in some key respects because of difficulties in resource mobilisation. New analysis (Rouse and Jayawarna, 2006) of the policy options available for improving finance to disadvantaged groups and obtaining social inclusion is pivotal towards understanding SE.
Similarly, Thompson et al.
(2000) distinguish between social entrepreneurs and managers, the former being catalysts for entrepreneurial projects, while the latter are critical for seeing initiatives through. Additionally, differences exist between non-profit and for-profit social entrepreneurs, particularly where the advantages of collective wisdom versus personal skills are concerned, and where the focus is on long-term capacity versus short-term financial gain. Since the focus of this study is on the creation of social value through innovation, it is recognised that the mix of managerial competencies appropriate to successful SE may, however, differ in significant ways from the mix relevant to success in entrepreneurship excluding the social component (Peredo and McLean, 2006). Because of this distinction, a definition of an entrepreneurial competency/skill is offered: An entrepreneurial competency consists of a combination of skills, knowledge and resources that distinguishes entrepreneurs from their competitors (Fiet, 2000, p. 107). Several emergent themes of SE competencies arise from in-depth case study interviews (Thompson, 2002; Weerawardena and Mort, 2006): networking, people management, fund raising, mentoring, business training, environmental dynamics, innovativeness, proactiveness, risk management, sustainability, social mission, and opportunity recognition. Through case study exploration of sensemaking, Mills and Pawson (2006) raise questions about the centrality of the notion of risk in new-start entrepreneurs' rationales for the enterprise development decisions they make. Additionally, Thompson (2002) uses an SE map to identify four central themes, which are:
1. job creation;
2. utilisation of buildings;
3. volunteer support; and
4. focus on helping people in need.
Similarly, Brinckerhoff (2001) provides an SE readiness checklist incorporating the areas of mission, risk, systems, skills, space, and finance.
Based on these skills, it seems the ability to develop a network of relationships is a hallmark of visionary social entrepreneurs, as is the ability to communicate an inspiring vision for motivating staff, partners, and volunteers (Thompson et al., 2000). Orloff (2002) identifies one element as key to both the emergence of a social venture partnership and its continued success - leadership, i.e. the right person heading up the organisation. Lock's (2001) report on strategic alliances between non-profit and for-profit organisations reflects the following criteria as key to the success of the programme:
* a real and tangible mission and vision;
* reliability and commitment of partners;
* trust between the partners;
* setting aside competitiveness for funding purposes; and
* power-based action plans.
Similarly, in identifying factors contributing to SE success, Sharir and Lerner (2006) demonstrate that eight variables contribute to success, arranged in order of their value:
1. the entrepreneur's social network;
2. total dedication to the venture's success;
3. the capital base at the establishment stage;
4. acceptance of the idea in public discourse;
5. the composition of the venturing team (salaried versus volunteer workers);
6. forming long-term collaborations within the public and non-profit sectors;
7. the ability of the service to pass the market test; and
8. the entrepreneur's previous managerial experience.
These findings should also be read in conjunction with the type of enterprise on which a social entrepreneur embarks, which is likely to be a function of the skills, trades, and resources available within the community (Peredo and Chrisman, 2006). The notion of stakeholder engagement is taken further by Fuller-Love et al. (2006), where a scenario analysis exercise enabled key stakeholders to confront and deal with considerable uncertainties by developing a shared understanding of the barriers to small firm growth and rural economic regeneration.
Furthermore, the start-up and success of social entrepreneurs may alter how the feasibility of engaging in entrepreneurship is gauged, and the success of one venture may increase perceptions of the acceptability and desirability of other social initiatives. Many social entrepreneurs find that lessons accumulated from the pioneers in the field are invaluable for future success, and consequently many prescriptions are offered (Boschee, 2001; Fernsler, 2006; Emerson, 1997; Brinckerhoff, 2001), some of which are:
* earned income is paramount;
* practise organised abandonment (focus efforts and resources);
* unrelated business activities are dangerous;
* recognise the difference between innovators, entrepreneurs, and managers;
* prevent the non-profit culture from becoming an obstacle (take risks, relinquish control);
* emphasise customer service/anticipate the need for large amounts of start-up capital; and
* conduct market and pricing research/pay a good wage.
Consolidation of these theoretical issues, together with the prescriptions offered for successful SE practices, demonstrates that the underlying drive for social entrepreneurs is creating social value. This activity is characterised by innovation or the creation of something new using a mix of managerial and entrepreneurial skills. The following hypotheses are subsequently formulated and statistically tested for significance:
H1. Social entrepreneurship is best exemplified through a mix of skills which reflect distinct factor structures in terms of entrepreneurial and managerial competencies.
H2. There are significant differences between respondents who are currently starting/involved with or managing a social enterprise, and those who are not.
Extending the SE construct, it seems reasonable to assess the prospective social entrepreneur's capacity for practising SE with a modified skills instrument as gleaned from the literature.
The justification for using a positivist approach to establish a skill set, rather than relying on a qualitative methodology, is supported by previous investigations (Turner and Martin, 2005; Chell et al., 2005). Analysing non-quantified data on several variables from many cases is often described as beyond the cognitive and affective limits of most researchers (Davidsson, 2004). It could further be argued that applying formal measurement and statistical analysis to the different skills levels cannot truly be deemed a positivist approach. Nothing in the nature of this data would prevent deeper speculations and insights from emerging when analysed; moreover, published research is full of exploratory findings and the use of techniques - such as factor analysis - that a true positivist would deem unscientific (Davidsson, 2004). A mechanism for measuring SE, the social entrepreneurship activity (SEA) index, as conceptualised in the UK Global Entrepreneurship Monitor (GEM) 2005 report (Harding et al., 2005), has been adapted for the purpose of this study to measure students' SE intentions. An intention is a representation of a future course of action to be performed (Ajzen, 1991); it is not simply an expectation of future actions but a proactive commitment to bringing them about. Intentions and actions are different aspects of a functional relationship separated in time. Intentions centre round plans of action. In the absence of intention, action is unlikely to occur (Bandura, 2001). Additionally, two questions pertaining to the respondents' involvement in, or inclination towards, trying to start/manage any kind of social, voluntary or community service, activity or initiative were posed as yes-or-no questions. Moreover, an instrument was designed to measure typical skills associated with successful social entrepreneurs. This skill set, which was initially investigated through qualitative case studies and lessons learnt in successful SE practices (e.g.
Thompson, 2002; Weerawardena and Mort, 2006), was further validated through quantitative factor analysis. Hence, for the SE skills instrument, several competency/skill items were measured on a five-point Likert scale, constituting a mix of entrepreneurial and managerial skills. Pilot testing was used to detect weaknesses in the instrument. Based on the recommendations for the correct size of a pilot group, i.e. 25-100 respondents, not necessarily selected on a statistical basis (Cooper and Emory, 1995), the instrument was pre-tested on colleagues (n=5) and actual respondents (n=30) for further refinement. The questionnaire's length, instructions to respondents, and anonymity were all considered in the final questionnaire design in order to generate a high response rate (Cooper and Emory, 1995). Notwithstanding these precautions, due to the exploratory nature of the study, the importance of the validity and reliability of these measures was considered and factor analysis was employed.

Sampling
In terms of sampling, the objective was to use students and not the general population. Student populations add control and homogeneity, and individuals who are studying have been identified as being more likely to have an interest in pursuing SE (Harding et al., 2005). Respondents in this group possess the talent, interest and energy to become the next generation of social and civic leaders (Canadian Centre for Social Entrepreneurship, 2001, p. 9). Based on the indicators constituting social entrepreneurship, it was decided to target university students from various faculties, at different levels of study (undergraduate to post-graduate) and of various ethnic backgrounds, in order to obtain representativeness of a typical student population.
As a matter of practicality the instrument was distributed to students of various faculties in a classroom setting, which allowed the researcher to maintain control over the environment and ensured that a high response rate was achieved (n=287). A judgmental sampling approach was used to represent sample characteristics of respondents most likely to be social entrepreneurs. Hemmasi and Hoelscher (2005) consider the common practice of using university students as proxies for entrepreneurs to be convincing. They find that the student sample strongly resembles actual entrepreneurs, provided that it has high entrepreneurial potential. This notion was extended to include social entrepreneurs.

Sample characteristics
The sample characteristics (see Table I) are reflected in percentages in terms of: gender - male (48.8 per cent) and female (49.7 per cent); age group - 17-20 (64.6 per cent) and 21-24 (30.6 per cent); education - those who had completed matric and were undergraduate students (94.1 per cent); and faculty of registration - Engineering and the Built Environment (33.3 per cent), Management (24.7 per cent), Art and Design (20.3 per cent), and Economic and Financial Sciences (15.5 per cent), with negligible participation from the Health and Sciences faculties. In terms of ethnicity, respondents categorised themselves as Black Africans (81.8 per cent), Caucasians/Whites (9.6 per cent), Asians (4.8 per cent), or Coloured South Africans (2.1 per cent). Although such ethnic/racial distributions are not typical of all university populations in South Africa, these categorisations are representative of the broader South African population demographics.
In terms of the type of SEA, the highest recorded category was religious activities (25.3 per cent), followed by sport (19 per cent) and education (12.8 per cent), with even distributions among other categories accounting for the balance.

Factor and reliability analysis
Kaiser-Meyer-Olkin's measure of sampling adequacy and Bartlett's test of sphericity were used for the factor analysis, and the extraction method was based on principal axis factoring. Two factors were extracted in eight iterations, with eigenvalues of 4.799 and 1.287, which explained 39.9 per cent and 10.7 per cent of variance respectively. Referring to Table II, all items had factor loadings above 0.30, with items 12, 14 and 15 constituting factor 2 (named here as the core SE factor). Factor 1 (with a mix of entrepreneurial/managerial items) was represented by the majority of the remaining nine items. The two factors were correlated at 0.556. The Cronbach's α values for factors 1 and 2 were 0.836 and 0.712, respectively, with a composite factor α of 0.858. The validity and reliability of the instrument used to assess SE competencies were established, offering insights into the levels and mix of skills used by current and potential social entrepreneurs; specifically, the eclectic mix of managerial and entrepreneurial skills is reaffirmed as being necessary for practising SE. Partial support for H1 is offered in the two factors that were obtained, which were consequently named the core skill set (factor 2) and the entrepreneurial/managerial skill set (factor 1). It could be argued that the three items representing factor 2 - fund raising, administering the project and visionary leadership - are critical to, and constitute the core of, any type of SE engagement.
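The reliability and sphericity checks described above follow standard formulas. The sketch below is a minimal illustration using synthetic 1-5 Likert responses - the sample size, seed and latent-skill data-generating process are assumptions, not the study's raw data - showing Bartlett's test of sphericity and Cronbach's α for a 12-item instrument:

```python
import numpy as np
from scipy.stats import chi2

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def bartlett_sphericity(items):
    """Bartlett's test that the item correlation matrix is an identity."""
    n, k = items.shape
    _, logdet = np.linalg.slogdet(np.corrcoef(items, rowvar=False))
    stat = -(n - 1 - (2 * k + 5) / 6) * logdet
    return stat, chi2.sf(stat, k * (k - 1) // 2)

# Synthetic data: 30 respondents x 12 items driven by one latent "skill"
rng = np.random.default_rng(0)
latent = rng.integers(1, 6, size=(30, 1))
scores = np.clip(latent + rng.integers(-1, 2, size=(30, 12)), 1, 5)

stat, p = bartlett_sphericity(scores)
print(round(cronbach_alpha(scores), 3), p < 0.05)
```

With a strong shared factor, Bartlett's test rejects sphericity (so factoring is warranted) and α lands well above the conventional 0.7 threshold, the same benchmark against which the reported values of 0.836 and 0.712 can be read.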
Factor 1, comprising the majority of the items used to measure SE skills - a mix of nine managerial and entrepreneurial items - indicates that individuals displaying both sets of skills may regard themselves as being more efficacious than when relying on only one set of skills. Based on the empirical results, both factors also registered relatively high mean scores. In accordance with the statistical findings (Turner and Martin, 2005), it is apparent that although the SE construct is naturally focused on distinct entrepreneurial competencies, these skills are complemented by more traditional management skills. These two sets of skills are not perceived to be mutually exclusive, as both sets are required for successful SE. Complementary competencies are a key determinant of successful SE (Turner and Martin, 2005). A pure managerial approach, without reference to entrepreneurship skills, would have counteracted the purposes of practising SE, as conceptualised for this study.

Mean score analysis
Descriptive statistics were calculated for the two first-order factors and the one second-order factor (see Table I). Based on the initial descriptives, it was established that the age, level of education, and ethnic group categories were skewed and fell predominantly into one category, and were accordingly excluded from inferential testing. The mean scores for both factors are relatively high, i.e.
above the midpoint on the 1-5 Likert scale.

Inferential statistical testing
Where mean scores on separate factors were calculated, the following test results were analysed:
* gender - the independent samples t-test procedure was carried out, with no significant differences detected at the 0.05 level;
* concerning current or future SEA, no significant differences were detected at the 0.05 level; and
* regarding the faculty of registration, using ANOVA (see Table III), for factor 1 there was a 0.004 (0.4 per cent) probability of obtaining an F value of 4.506 if no differences among group means existed in the population: since this probability does not exceed the 0.05 level, one can conclude that there are significant differences for factor 1 relating to type of faculty.
Hence, in determining which specific faculties differed on SEA, a more stringent test - i.e. the multiple comparison Scheffe test - was calculated with factor 1 and the final factor solution as the dependent variables. There was a difference between specific faculties: Engineering and the Built Environment (EBE) versus Economic and Financial Sciences (EFS) (factor 1: p=0.010; factor 2: p=0.022). For H2, in relation to differences between factors among the study variables, the only statistically significant differences were found between types of faculties, with the EFS faculty having significantly higher scores on both factors.
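These between-group comparisons follow the standard recipes available in scipy.stats. The sketch below uses fabricated factor-1 scores on the 1-5 scale - the group means, spreads and sizes are illustrative assumptions, not the study's raw data - to show an independent-samples t-test by gender and a one-way ANOVA by faculty:

```python
import numpy as np
from scipy import stats

# Fabricated factor-1 mean scores; all group parameters are assumptions
spread = np.linspace(-0.8, 0.8, 40)  # identical within-group spread

# Gender: identical score profiles, so the t-test should find no difference
male, female = 3.8 + spread, 3.8 + spread
t_stat, p_gender = stats.ttest_ind(male, female)

# Faculty: EFS given a higher mean, echoing the pattern reported above
ebe, mgmt = 3.6 + spread, 3.8 + spread
art, efs = 3.7 + spread, 4.1 + spread
f_stat, p_faculty = stats.f_oneway(ebe, mgmt, art, efs)

print(p_gender > 0.05, p_faculty < 0.05)  # True True
```

Note that a Scheffe procedure is not built into scipy.stats; the post-hoc pairwise comparisons used in the study would need to be computed separately (e.g. from the ANOVA mean squares).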
It is plausible that the sample of respondents from the EFS faculty consider their abilities in financial and managerial matters to be more advanced than those of other faculties' members, probably as a result of exposure to the type of discourse used in both SE and the EFS faculty. In relation to previous studies, these findings also contradict the order and priority of variables: networking showed a much lower correlation than stipulated in Sharir and Lerner's (2006) findings, while being innovative, staying focused and conducting research emerged as higher-order priorities. These differences imply a different skill set arrangement, with commensurate differences in educational and training priorities targeting the fostering of innovation and the employment of research principles, among other skills. Although the results are modest, particularly in the number of active and future social entrepreneurs, they are not trivial. Such findings are reasonably good for exploratory research in a new domain such as SE in SA. This is the first study in SA in which SEA was empirically derived and associated competencies were measured. The results indicate that the profiles of the individuals represented in this sample are typical of potential social entrepreneurs, with previous studies confirming that such respondents are likely to have an interest in pursuing SE. Although any generalisations would mar the rigour of the analysis undertaken, it is tempting to categorise SEA in SA as generally low. Comparatively, as conceptualised for the UK GEM report and for this paper, SEA does not measure all socially motivated enterprise activity, but rather provides an indication of the propensity of particular groups for striving towards entrepreneurial rather than economic means. In the UK the SEA rate comprises 3.2 per cent of the adult population, which is directly comparable with their total entrepreneurial activity (TEA) rate of 6.2 per cent.
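Headline rates such as SEA versus TEA are sample proportions, and if two sampling frames were genuinely comparable (which, as discussed below, the GEM and student samples are not), a pooled two-proportion z-test would be the standard way to compare them. A minimal sketch, in which the counts are illustrative stand-ins rather than the surveys' raw data:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled z-test for H0: the two population proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))  # two-sided p-value

# Illustration: roughly 17.2 per cent of n=287 (49 "yes" answers) against
# a hypothetical comparison group at 24.4 per cent of an assumed n=1,000
z, p = two_proportion_ztest(49, 287, 244, 1000)
print(round(z, 2), round(p, 3))
```

The sign of z shows which group has the lower proportion; the caveat in the text about differing sampling frames is exactly why such a test is not reported in the study itself.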
The UK sample results indicate that 24.4 per cent of the sample is currently trying to start a venture; however, the majority (73.9 per cent) answered "no" to any such inclination towards SE initiatives. In this study, 17.2 per cent of respondents indicated "yes" to current involvement with SEA, and 81.4 per cent answered "no" to such involvement. However, a meaningful comparison between these rates is not entirely appropriate, as the UK GEM 2006 uses survey data from a randomly stratified sample of 27,296 18-64-year-olds. Based on this study's exclusive sample, the results indicate that students as a group are likely to be engaged in SE, although no comparison with any non-student population group was made. In the UK, some 5 per cent of the student population are social entrepreneurs, compared with 3.5 per cent of those in full-time employment (Harding et al., 2005). This indicates that younger people are more likely to be involved in social initiatives; the highest SEA rate of 3.9 per cent is in the 18-24 age group, compared to 2.7 per cent in the 25-34 age group, with significant differences between the youngest and oldest age groups. Education is also a predictor of the propensity to be a social entrepreneur in the UK, with 5.5 per cent of people with postgraduate qualifications socially active compared with 2.4 per cent of those who only have undergraduate qualifications. Ethnic group differences were also reported in the UK GEM, where non-white groups (5 per cent) are more likely to be social entrepreneurs than their white counterparts (3 per cent) (Harding, 2006, p. 15). As the findings indicate, the prevalence of SE is more widespread amongst younger, educated people who are labour-market inactive - characteristics of the study sample.

Conceptual integrations
Based on the existing literature, it seems that there has recently been an upsurge in SEA, driven by changes in the competitive environment.
Presently, non-profit organisations are operating in a highly competitive environment characterised by tighter financial restrictions, with several organisations vying for the same donor funds (Weerawardena and Mort, 2006). Currently, the non-profit sector is facing intensifying demands for improved effectiveness and sustainability in light of diminishing funding from traditional sources. Moreover, the increasing concentration of wealth in the private sector is prompting calls for greater social responsibility and more proactive responses to complex social problems (Johnson, 2000, p. 1). Internationally, the SE situation is much the same, with non-governmental developmental organisations (NGDOs) working in developing countries noted primarily for providing subsidies on behalf of global donors, thereby creating the circumstances for "patronage, dependency, pathological institutional behaviour and financial malpractice" (Johnson, 2000, p. 3). What may be called a beggar mentality has emerged in many communities where there have been massive aid interventions (Peredo and Chrisman, 2006, p. 311). Established research indicates a wide range of both entrepreneurial and managerial skills, with significant overlaps, as being necessary for successful SE. Like business entrepreneurs, social entrepreneurs initiate and implement innovative programmes; despite being differently motivated, the challenges they face during start-ups are similar to those faced by business entrepreneurs (Sharir and Lerner, 2006). The commercial entrepreneur thrives on innovation, competition and profit, whereas the social entrepreneur prospers on innovation and inclusiveness for changing the systems and patterns of societies (Jeffs, 2006). Moreover, a core set of skills seems indispensable for undertaking SE, even though a large number of elements play a role in SE, i.e.
local culture, community management practices, previous occupational or technical skills, and perceptions of the macroeconomic, legal, social, and political environments (Peredo and Chrisman, 2006).

Challenges for social entrepreneurs

Social entrepreneurs and philanthropic efforts are not exempt from criticism, and widespread flaws are evident in their fundamentals. This specifically refers to the unjustifiably high administration costs, which remain unremedied to this day (The Economist, 2006). Little effort has been devoted to measuring results involving the double bottom line (financial and social performance) or the triple bottom line (financial, social and environmental). The vague and undefined goals of empowering people or changing lives, being readily susceptible to statistical manipulation, further obfuscate the outputs of SEA. Cook et al. (2003, p. 64) highlight the false premises and dangerous precedents and standards for SE when they argue that using a private entrepreneurial model in pursuing social justice aims, which cannot be valued in the market, is likely to violate the case for market efficacy. Hence the difficulty social entrepreneurs experience becomes apparent when balancing resource allocation between profit-making and welfare-providing activities. In fact, it could be argued that it is undesirable to implement a welfare system where the beneficiaries are subject to the vagaries of the entrepreneurial model (Seelos and Mair, 2005). Recent research (Madden and Scaife, 2006) has identified key barriers to SE community engagement, among which are overwhelming requests and choice of viable options, lack of formal processes for handling requests, and lack of vision for community engagement - all of which are also highly relevant towards explaining the low SEA rate reported in the findings.

Study limitations

The study is limited by the early stage of theoretical development in the SE construct, and by any related measures.
Moreover, research is limited by the restricted sampling frame. By using students, the psychological diversity of the general population is possibly underestimated, even though SEA is predominant among student populations. Since survey data was self-reported, the study is also susceptible to bias (e.g. self-serving bias with regard to skills level).

Study implications

A contentious issue in SE, because of the newness of the concept, is that there are few institutional mechanisms in place to support this work (Johnson, 2000). Related to this issue of support is the question of training and capacity building for SE. If SE is defined as principally employing entrepreneurial and managerial skills in the non-profit sector, then these skills are fairly replicable. However, if SE is defined as a highly creative and innovative individual approach, replication will be much more difficult to achieve and the focus would then be on developing conditions in which latent entrepreneurial talent could be harnessed for social purposes (Johnson, 2000). Moreover, social entrepreneurs based in the community are able to add value in ways that are often not possible through mainstream policies, i.e. through their closeness to the community and their perceived capacity for innovation that autocratic bureaucracies traditionally do not have (Turner and Martin, 2005). As with mainstream entrepreneurship, SE activity is heavily influenced by access to training and modelling; by promoting SE as an alternative business model within schools, colleges, and universities, exposure and training could induce early-stage SEA. As construed in the literature, social entrepreneurs are community-centric and rely heavily on networks and support structures, such networks being easy and cheap to establish (Harding, 2006; Sharir and Lerner, 2006).
Since competencies can be nurtured, and since funding requests often require concomitant competencies to add value, given the positive link between SE success and skills, training and development for SE should be mandatory (such as the School for Social Entrepreneurs in the UK) (Sharir and Lerner, 2006). Perhaps particularly in SA, which is currently beset by social inequalities, social entrepreneurs should look for the most effective methods of serving their social mandate through funding and sponsoring the activities of community-based projects. By developing capacity through relevant interventions and partnerships, social entrepreneurs can add value and meet the needs of groups who have been failed by previous government attempts at social redress. However, government also has a role in fostering a culture of social enterprise: by raising awareness of social enterprises among students through education, and by disseminating information and providing resources for promoting social entrepreneurship.

Table I Descriptive statistics on variables
Table II Factor structure for SE skills
Table III ANOVA for faculty registration
- Various theoretical issues and debates were investigated in order to measure quantitatively social entrepreneurship (SE) activity (SEA), together with the different skills associated with successful SE in South Africa.
[SECTION: Method] As with any change-orientated activity, social entrepreneurship (SE) has not evolved in a vacuum, but rather within a complex framework of political, economic and social change occurring at global and local levels (Johnson, 2000; Kramer, 2005; Harding, 2006). The contribution of social entrepreneurs is being increasingly celebrated, as was witnessed at the World Economic Forum's (2006) Conference on Africa in Cape Town recently. Similarly, Warren Buffett's $30.7 billion donation to the Bill & Melinda Gates Foundation (Cole, 2006) demonstrates that venture philanthropy represents a significant change in how people think about transferring wealth. SE has evolved into the mainstream after years of marginalisation on the edges of the non-profit sector. Venture philanthropists, grant sponsors, boards of directors, non-profit entrepreneurs, consultants and academics are now all interested in the field of SE (Boschee, 2001; Kramer, 2005; Frumkin, 2006). Over the last decade, a critical mass of foundations, academics, non-profit organisations, and self-identified social entrepreneurs have emerged and SE has become a distinct discipline (Kramer, 2005; Dees, 2001).Worldwide, policy makers are using the language of local capacity building as a strategy to assist impoverished communities in becoming self-reliant (Peredo and Chrisman, 2006). Exemplifying a growing trend for academic institutions to take this phenomenon seriously, many dedicated centres for SE have evolved, for example the Skoll Centre for Social Entrepreneurship at Oxford University, created by Jeff Skoll, whose mission is to advance systemic change for the benefit of communities around the world by investing in, connecting and celebrating social entrepreneurs (The Economist, 2006). 
Many similar institutions exist and researchers (Dees, 2001; Christie and Honig, 2006) suggest that the time is certainly ripe for entrepreneurial approaches to social problems, since SE merges the passion of a social mission with business discipline, innovation, and determination (Jackson, 2006). Although not new in the commercial/business sector, corporate governance and corporate social responsibility (CSR) have gained unprecedented prominence in the modern corporation and are well documented in academic research and popular literature (Rossouw and Van Vuuren, 2004). In South Africa (SA), where SE remains an under-researched area, the importance of SE as a phenomenon in social life is critical; social entrepreneurs contribute to an economy by providing an alternative business model for firms to trade commercially in an environmentally and socially sustainable way. They also provide an alternative delivery system for public services such as health, education, housing and community support (Harding, 2006, p. 10). Moreover, social entrepreneurs are also seen to be a growing source of solutions to issues that currently plague society, such as poverty, crime and abuse (Schuyler, 1998). Social entrepreneurs provide solutions to social, employment and economic problems where traditional market or public approaches fail (Jeffs, 2006). Yet despite these achievements, government in SA appears reluctant to directly engage with SE endeavours, viewing social entrepreneurs as innately risky - and their activities as maverick endeavours.

Driving forces of SE with relevance to SA

The central driver for SE is social problems (Austin et al., 2006), and driving forces for social entrepreneurs include:
* politically, the devolution of social functions from national to local level and from public to private;
* economically, the reduction of funding from the public purse; and
* socially, problems of increasing complexity and magnitude (Lock, 2001, p.
1).

In SA, SE has unequivocal application where traditional government initiatives are unable to satisfy the entire social deficit, where an effort to reduce dependency on social welfare/grants is currently being instituted, and where the survival of many non-governmental organisations (NGOs) is at stake. Such challenges are exacerbated by a social context characterised by massive inequalities in education and housing, the HIV/AIDS pandemic, and high unemployment and poverty rates (Rwigema and Venter, 2004). Accompanying these massive social deficits, many governmental and philanthropic efforts have fallen far short of expectations, with social sector institutions often viewed as inefficient, ineffective, and unresponsive. In particular, policymakers have limited guidance and recognise that the invisible hand frequently fails to assert itself in the most socially beneficial outcomes (Christie and Honig, 2006). Moreover, many poverty alleviation programmes have degenerated into global charity events rather than serving local needs, since most projects have been conceived and managed by development agencies rather than by members of the community, resulting in a lack of ownership on the part of the target beneficiaries (Peredo and Chrisman, 2006). Such failures suggest that there are many gaps in understanding SE activities under conditions of material poverty and in different cultural settings (Peredo and Chrisman, 2006). Consequently, the focus of this study is on delineating the SE construct through a focused literature review and identifying the factors necessary for successful SE to flourish. As in the international arena, due to a surge in the establishment of non-profit organisations, SE in SA has proliferated in recent decades, although as an academic enquiry SE is still emergent (Austin et al., 2006). Based on these limitations, a study exploring the nature of SE and how to practise it successfully seems justified, particularly within a non-Western context.
Once the various theoretical issues and debates that have made significant contributions to the evolution of SE theory and practice are scrutinised, the author will assume a position relative to these debates and then conduct empirical investigations. Broadly, the paper seeks to interrogate existing SE theory and then, relative to these existing controversies, analyse quantitatively students' intentions to engage in SE. Furthermore, the different types of SE activities are measured in conjunction with the level of entrepreneurial and managerial skills typically associated with successful social entrepreneurs. The rationale for focusing this study on SE skills can be found in the many instances where it is impossible to obtain start-up funds without demonstrating proof of concept together with the commensurate abilities required to execute such an initiative. Those who fund social entrepreneurs are looking to invest in people with a demonstrated ability to create change, and the factors that matter most are the financial, strategic, managerial, and innovative abilities of social entrepreneurs (Kramer, 2005). Therefore, an investigation into the mix of managerial and entrepreneurial skills associated with successful SE is crucially important to this study. The following sections focus on defining and operationalising SE, and then identifying and surveying the different types of skills required for successful SE practices. In examining the SE construct, several definitions are investigated and their components are analysed. As used in social sciences research, a construct is an idea specifically invented for theory-building purposes; a construct also combines simpler concepts, especially when the idea is least observable and complex to measure (Cooper and Emory, 1995).
To a large extent SE embodies such tendencies, where social entrepreneurs are reformers and revolutionaries, as described by Schumpeter (1934), but with a social mission; they effect fundamental change in the way things are done in the social sector (Dees, 1998). Social entrepreneurs are perceived as heading mission-based businesses rather than operating as charities. These entrepreneurs seek to create systemic changes and sustainable improvements, and they take risks on behalf of the people their organisation serves (Brinckerhoff, 2000). Though they may act locally, their actions have the potential to stimulate global improvements in various fields, whether it is in education, health care, economic development, the environment, the arts, or any other social field (Dees, 1998). The language of social entrepreneurship may be new, but the phenomenon is not. Peter Drucker (1979, p. 453) introduced the concept of social enterprise when he advocated that even the most private of private enterprises is an organ of society and serves a social function. He also advocated a need for the social sector, in addition to the private sector of business and the public sector of government, to satisfy social needs and provide a sense of citizenship and community. Similarly, Spear (2004) poses the question of whether SE is about creating social enterprise or is more concerned with those particular aspects of entrepreneurship that have a social dimension. Based on the Global Entrepreneurship Monitor (GEM) report, SE is defined as follows:

Social entrepreneurship is any attempt at new social enterprise activity or new enterprise creation, such as self-employment, a new enterprise, or the expansion of an existing social enterprise by an individual, teams of individuals or an established social enterprise, with social or community goals as its base and where the profit is invested in the activity or venture itself rather than returned to investors (Harding, 2006, p.
5).

Subscribing to the precept that "Social entrepreneurs are one species in the genus entrepreneur", Dees (2001, pp. 2-4) sees social entrepreneurs playing the role of change agents in the social sector by adopting a mission to create and sustain social value (not just private value), recognising and relentlessly pursuing new opportunities to serve that mission, engaging in a process of continuous innovation, adaptation and learning, acting boldly without being limited by resources currently at hand, and exhibiting greater accountability to the constituencies served and for the outcomes created. Each element in these definitions is based on the body of entrepreneurship research, and this is the core of what distinguishes social entrepreneurs from business entrepreneurs, even from socially responsible businesses. It is also worth noting that these definitions, primarily individualistic in their conception, fail to adequately acknowledge a collective form of entrepreneurship. Indeed, scholars now highlight the importance of recognising entrepreneurship as building on a collective process of learning and innovation (Peredo and Chrisman, 2006). Peredo and Chrisman (2006, p. 310) developed the concept of community-based enterprise (CBE), which they define as a community acting corporately as both entrepreneur and enterprise in pursuit of the common good. Documented cases of CBE include the Mondragon Corporation Cooperative in Spain (Morrison, 1991). Moreover, some of the oldest and some of the most modern social enterprises are co-operatives. A co-operative is defined as an autonomous association of voluntarily united persons who meet their common social, economic and cultural needs through a jointly owned and democratically controlled enterprise. It is estimated that there are about 800 million co-operative members around the world.
The Social Enterprise Coalition is an example of this type of co-operative (Cabinet Office, 2007). Such views resonate with Cooper and Denner's (1998) perspective of culture as capital - a theory of social capital, which refers to the relationships and networks from which individuals are able to derive institutional support. Social capital is cumulative, leads to benefits in the social world, and can be converted into other forms of capital. Based on these collective propositions, SE can be viewed as a process that serves as a catalyst for social change and varies according to socio-economic and cultural environments. Combining insights from sociology, political science, and organisational theory, Mair and Marti (2006) propose the concept of embeddedness to emphasise the importance of the continuous interaction between social entrepreneurs and the context in which they are embedded. This discussion is relevant to South Africa, which is largely characterised as a collectivist nation, and where a concept like Ubuntu (together with an element of high community involvement) conflicts with individualism yet differs from collectivism, in that the rights of the individual are subjugated to a common good. It is this collectively enabling approach that is essential for collective SE, which is more socially orientated, builds on strengths rather than dwelling on deficits, and encompasses socio-structural factors among the sources and remedies for human problems (Bandura, 1997). These perspectives are reinforced when Weerawardena and Mort (2006) advance the concept of SE through empirical research and find it is a bounded multi-dimensional construct deeply rooted in an organisation's social mission and its drive for sustainability, which in turn is shaped by environmental dynamism.
Similarly, Giddens's (1998) view is that SE is the way to reconstruct welfare and build social partnerships between the public, social and business sectors by harnessing the dynamism of markets with a public interest focus. Consequently, neither profit nor customer satisfaction is the gauge of value creation in SE; social impact is. Social entrepreneurs look for a long-term social return on investment. Indeed, they are not simply driven by the perception of a social need or by their compassion; rather, they have a vision of how to achieve improvement and they are determined to achieve their vision (Dees, 2001; Bornstein, 1998).

SE differences and similarities in meaning

In general, based on established literature, the concept of SE remains poorly defined and its boundaries with other fields remain blurred (Mair and Marti, 2006). Conceptual differences are noticeable in definitions of social entrepreneurship (focus on process or behaviour), social entrepreneurs (focus on the founder of the initiative), and social enterprise (focus on the tangible outcome of SE). Peredo and McLean (2006) propose that one could easily ask what makes SE social, and what makes it entrepreneurship. Research on SE is clearly based on the knowledge base of entrepreneurship, and any definition of SE is shaped by the prevailing findings on entrepreneurship theory and practice. Although it is beyond the scope of this article to expound on the field of entrepreneurship, a contemporary definition is provided, which views the field of entrepreneurship as a "scholarly examination of how, by whom, and to what effect opportunities for creating future goods and services are discovered, evaluated and exploited" (Shane and Venkataraman, 2001, p. 218). The social element in definitions is often used to differentiate SE from commercial entrepreneurship, associating the altruistic motive with SE and the profit motive with commercial entrepreneurship.
However, Mair and Marti (2006) argue that such a dichotomy is incorrect since SE, although based on ethical and moral issues, could include less altruistic reasons such as personal fulfilment, and the creation of fresh markets and new jobs. Correspondingly, commercial entrepreneurship also comprises a social aspect, as previously mentioned in terms of CSR. Rather than profit versus non-profit, Mair and Marti (2006) suggest that the main difference between business and social entrepreneurship lies in the relative priority given to social wealth creation versus economic wealth creation. Similarly, Peredo and McLean (2006) interpret a range of social entrepreneurs along a continuum of possibilities, ranging from social benefits merely accruing to a firm, to social goals being the only requirement of the firm. Such conceptualisations reflect the absence of defined boundaries of the SE phenomenon. An additional difficulty in defining SE is differentiating the small-scale, often voluntary or charitable work done by individuals making a social difference from the social entrepreneur who establishes a high-turnover social enterprise (Harding, 2006). Because of their structure and constitution, social entrepreneurs are able to serve a triple bottom line, achieving profitability, societal impact and environmental sustainability simultaneously (Harding, 2006).

Theoretical conclusions

A summary of the SE academic literature suggests that a number of themes, preoccupations and domains have emerged (Weerawardena and Mort, 2006), which may generally comprise:
* SE being expressed in a vast array of economic, educational, welfare and social activities, reflecting such diverse activities;
* SE being conceptualised in a number of contexts, i.e.
public sector, community, social action organisations and charities; and
* the role of innovativeness, proactiveness and risk-taking in SE being emphasised, distinguishing SE from other forms of community work.

In order to draw some conclusions from these varying definitional controversies, an attempt is made to offer a position in relation to such debates, which allows for further interpretation and analysis. Existing theory has revealed a commonality across all definitions of SE: the underlying drive for social entrepreneurs is to create social value, rather than personal and shareholder wealth, and the activity is characterised by innovation or the creation of something new rather than simply the replication of existing enterprises or practices. In concordance with other SE reports (Harding, 2006, p. 5), it is argued that the SE definition must reflect two critical features of a social as opposed to a mainstream enterprise:
1. the project has social goals rather than profit objectives; and
2. revenue is used to support social goals instead of shareholder returns.

The definition of sustainability within the context of the non-profit sector is quite different from that of the for-profit sector, with the advocacy of sustainability versus stability being contentious in view of organisations having sustainable finances but no community support, and therefore probably not being sustainable (Johnson, 2003). Due to the exploratory nature of the study, and since specific associations are predicted between the variables under study, hypotheses are formulated. These hypotheses are based on SE definitional controversies, and on distinctions between managerial and entrepreneurial skills typically associated with successful SE. The economic value of entrepreneurial ability acquired through education can be identified through the work of Schultz (1980), who recognised that the returns that actually accrue to education are substantially undervalued.
Despite early notions that entrepreneurship is an innate skill, recent studies (e.g. Fayolle et al., 2005) indicate that entrepreneurship education influences both current behaviour and future intentions. Identifying business opportunities and having confidence in personal skills to establish a business may be enhanced through education and training, with evidence suggesting that those with more education are more likely to pursue opportunities for entrepreneurship (high-growth ventures) (Gibb, 2000). In developing a body of theory on SE, Austin et al. (2006) highlight the differences between social and commercial entrepreneurship and, based on a prevailing commercial model, explore new parameters when applied to SE. Although this distinction clearly overlaps with previous differences highlighted between social goals and profit, it can be interpreted that the distinction between social and commercial entrepreneurship is not dichotomous, but better conceptualised as a continuum ranging from purely social to purely economic. Some key differences that emerge from case examples (Austin et al., 2006) are:
* SE focuses on serving basic, long-standing needs more effectively through innovative approaches, whereas commercial entrepreneurship tends to focus on breakthroughs and new needs.
* The context of SE differs from commercial entrepreneurship in the way that the interaction between a social venture's mission statement and its performance measurement systems influences entrepreneurial behaviour (quantification of social impact is difficult).
* The nature of the human and financial resources for SE differs in some key respects because of difficulties in resource mobilisation. New analysis (Rouse and Jayawarna, 2006) of the policy options available for improving finance to disadvantaged groups and obtaining social inclusion is pivotal towards understanding SE.

Similarly, Thompson et al.
(2000) distinguish between social entrepreneurs and managers, the former being catalysts for entrepreneurial projects, while the latter are critical for seeing initiatives through. Additionally, differences exist between non-profit and for-profit social entrepreneurs, particularly where the advantages of collective wisdom versus personal skills are concerned, and where the focus is on long-term capacity versus short-term financial gain. Since the focus of this study is on the creation of social value through innovation, it is recognised that the mix of managerial competencies appropriate to successful SE may differ in significant ways from the mix relevant to success in entrepreneurship excluding the social component (Peredo and McLean, 2006). Because of this distinction, a definition of an entrepreneurial competency/skill is offered:

An entrepreneurial competency consists of a combination of skills, knowledge and resources that distinguishes entrepreneurs from their competitors (Fiet, 2000, p. 107).

Several emergent themes of SE competencies arise from in-depth case study interviews (Thompson, 2002; Weerawardena and Mort, 2006): networking, people management, fund raising, mentoring, business training, environmental dynamics, innovativeness, proactiveness, risk management, sustainability, social mission, and opportunity recognition. Through case study exploration of sensemaking, Mills and Pawson (2006) raise questions about the centrality of the notion of risk in new-start entrepreneurs' rationales for the enterprise development decisions they make. Additionally, Thompson (2002) uses an SE map to identify four central themes, which are:
1. job creation;
2. utilisation of buildings;
3. volunteer support; and
4. focus on helping people in need.

Similarly, Brinckerhoff (2001) provides an SE readiness checklist incorporating the areas of mission, risk, systems, skills, space, and finance.
Based on these skills, it seems the ability to develop a network of relationships is a hallmark of visionary social entrepreneurs, as is the ability to communicate an inspiring vision for motivating staff, partners, and volunteers (Thompson et al., 2000). Orloff (2002) identifies one element as key to both the emergence of a social venture partnership and its continued success: leadership, i.e. the right person heading up the organisation. Lock's (2001) report on strategic alliances between non-profit and for-profit organisations reflects the following criteria as key to the success of the programme:
* a real and tangible mission and vision;
* reliability and commitment of partners;
* trust between the partners;
* setting aside competitiveness for funding purposes; and
* power-based action plans.

Similarly, in identifying factors contributing to SE success, Sharir and Lerner (2006) demonstrate that eight variables contribute to success, arranged in order of their value:
1. the entrepreneur's social network;
2. total dedication to the venture's success;
3. the venture's capital base at the establishment stage;
4. acceptance of the idea in public discourse;
5. the composition of the venturing team (salaried versus volunteer workers);
6. forming long-term collaborations within the public and non-profit sectors;
7. the ability of the service to pass the market test; and
8. the entrepreneur's previous managerial experience.

These findings should also be read in conjunction with the type of enterprise on which a social entrepreneur embarks, which is likely to be a function of the skills, trades, and resources available within the community (Peredo and Chrisman, 2006). The notion of stakeholder engagement is taken further by Fuller-Love et al. (2006), where a scenario analysis exercise enabled key stakeholders to confront and deal with considerable uncertainties by developing a shared understanding of the barriers to small firm growth and rural economic regeneration.
Furthermore, the start-up and success of social entrepreneurs may alter how the feasibility of engaging in entrepreneurship is gauged, and how the success of one venture increases perceptions of the acceptability and desirability of other social initiatives. Many social entrepreneurs find that lessons accumulated from the pioneers in the field are invaluable for future success, and consequently many prescriptions are offered (Boschee, 2001; Fernsler, 2006; Emerson, 1997; Brinckerhoff, 2001), some of which are:
* earned income is paramount;
* practise organised abandonment (focus efforts and resources);
* unrelated business activities are dangerous;
* recognise the difference between innovators, entrepreneurs, and managers;
* prevent the non-profit culture from becoming an obstacle (take risks, relinquish control);
* emphasise customer service/anticipate the need for large amounts of start-up capital; and
* conduct market and pricing research/pay a good wage.

Consolidation of these theoretical issues, together with the prescriptions offered for successful SE practices, demonstrates that the underlying drive for social entrepreneurs is creating social value. This activity is characterised by innovation or the creation of something new using a mix of managerial and entrepreneurial skills. The following hypotheses are subsequently formulated and statistically tested for significance:

H1. Social entrepreneurship is best exemplified through a mix of skills which reflect distinct factor structures in terms of entrepreneurial and managerial competencies.

H2. There are significant differences between respondents who are currently starting/involved with or managing a social enterprise, and those who are not.

Extending the SE construct, it seems reasonable to assess the prospective social entrepreneur's capacity for practising SE with a modified skills instrument as gleaned from the literature.
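A hypothesis of group differences such as H2 is typically tested with an independent-samples t-test on mean skill scores. A minimal sketch, using Welch's unequal-variance form of the statistic and entirely hypothetical Likert ratings (the study's raw data is not reproduced here):

```python
import math
import statistics as st

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = st.variance(a), st.variance(b)  # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    return (st.mean(a) - st.mean(b)) / math.sqrt(va / na + vb / nb)

# Hypothetical mean skill ratings (1-5 Likert scale) for respondents who are
# currently starting/managing a social enterprise vs those who are not.
involved = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1]
not_involved = [3.1, 3.6, 2.9, 3.4, 3.0, 3.3, 3.5]
t = welch_t(involved, not_involved)  # a large positive t would support H2
```

Welch's form is a reasonable default here because there is no a priori reason to assume the two groups' score variances are equal.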
The justification for using a positivist approach to establish a skill set, rather than relying on a qualitative methodology, is supported by previous investigations (Turner and Martin, 2005; Chell et al., 2005). Analysing non-quantified data on several variables from many cases is often described as beyond the cognitive and affective limits of most researchers (Davidsson, 2004). It could further be argued that applying formal measurement and statistical analysis to the different skills levels cannot truly be deemed a positivist approach. Nothing in the nature of this data would prevent deeper speculations and insights from emerging when analysed; moreover, published research is full of exploratory findings and the use of techniques - such as factor analysis - that a true positivist would deem unscientific (Davidsson, 2004). A mechanism for measuring SE, the social entrepreneurship activity (SEA) index, as conceptualised in the UK Global Entrepreneurship Monitor (GEM) 2005 report (Harding et al., 2005), has been adapted for the purpose of this study to measure students' SE intentions. An intention is a representation of a future course of action to be performed (Ajzen, 1991); it is not simply an expectation of future actions but a proactive commitment to bringing them about. Intentions and actions are different aspects of a functional relationship separated in time. Intentions centre round plans of action; in the absence of intention, action is unlikely to occur (Bandura, 2001). Additionally, two questions pertaining to the respondents' involvement in, or inclination towards, trying to start/manage any kind of social, voluntary or community service, activity or initiative were posed as yes-or-no questions. Moreover, an instrument was designed to measure typical skills associated with successful social entrepreneurs. This skill set, which was initially investigated through qualitative case studies and lessons learnt in successful SE practices (e.g.
Thompson, 2002; Weerawardena and Mort, 2006), was further validated through quantitative factor analysis. Hence, for the SE skills instrument, several competency/skill items were measured on a five-point Likert scale, constituting a mix of entrepreneurial and managerial skills. Pilot testing was used to detect weaknesses in the instrument. Based on the recommendation that a pilot group of 25-100 respondents need not be statistically selected (Cooper and Emory, 1995), the instrument was pre-tested on colleagues (n=5) and actual respondents (n=30) for further refinement. The questionnaire's length, instructions to respondents, and anonymity were all considered in the final questionnaire design in order to generate a high response rate (Cooper and Emory, 1995). Notwithstanding these precautions, due to the exploratory nature of the study, the validity and reliability of these measures were considered important and factor analysis was employed.
Sampling
In terms of sampling, the objective was to use students rather than the general population. Student populations add control and homogeneity, and individuals who are studying have been identified as more likely to have an interest in pursuing SE (Harding et al., 2005). Respondents in this group possess the talent, interest and energy to become the next generation of social and civic leaders (Canadian Centre for Social Entrepreneurship, 2001, p. 9). Based on the indicators constituting social entrepreneurship, it was decided to target university students from various faculties, at different levels of study (undergraduate to post-graduate) and of various ethnic backgrounds, in order to obtain representativeness of a typical student population.
As a matter of practicality the instrument was distributed to students of various faculties in a classroom setting, which allowed the researcher to maintain control over the environment and ensured that a high response rate was achieved (n=287). A judgmental sampling approach was used to represent sample characteristics of respondents most likely to be social entrepreneurs. Hemmasi and Hoelscher (2005) consider the common practice of using university students as proxies for entrepreneurs to be convincing. They find that a student sample strongly resembles actual entrepreneurs, provided that it has high entrepreneurial potential. This notion was extended here to include social entrepreneurs.
Sample characteristics
The sample characteristics (see Table I) are reflected in percentages as follows. Gender: male (48.8 per cent) and female (49.7 per cent). Age group: 17-20 (64.6 per cent) and 21-24 (30.6 per cent). Education: those who had completed matric and were undergraduate students (94.1 per cent). Faculty of registration: Engineering and the Built Environment (33.3 per cent), Management (24.7 per cent), Art and Design (20.3 per cent), and Economic and Financial Sciences (15.5 per cent), with negligible participation from the Health and Sciences faculties. In terms of ethnicity, respondents categorised themselves as Black Africans (81.8 per cent), Caucasians/Whites (9.6 per cent), Asians (4.8 per cent), or Coloured South Africans (2.1 per cent). Although such ethnic/racial distributions are not typical of all university populations in South Africa, these categorisations are representative of the broader South African population demographics.
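The percentage breakdowns of this kind are simple tabulations of categorical responses. A minimal sketch in Python, using hypothetical counts chosen only to roughly echo the reported shares for n=287 (these are not the study's raw data):

```python
from collections import Counter

# Hypothetical faculty responses; counts are illustrative, not the actual data.
responses = (["EBE"] * 96 + ["Management"] * 71 + ["Art and Design"] * 58
             + ["EFS"] * 44 + ["Other"] * 18)

n = len(responses)  # 287 in this sketch
shares = {faculty: round(100 * count / n, 1)
          for faculty, count in Counter(responses).items()}
print(n, shares)
```

Rounded shares may not sum to exactly 100 per cent, which is one reason reported category percentages (as in Table I) often leave a small residual.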
In terms of the type of SEA the highest recorded category was religious activities (25.3 per cent), followed by sport (19 per cent) and education (12.8 per cent), with even distributions among other categories accounting for the balance.
Factor and reliability analysis
The Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett's test of sphericity were used to assess the suitability of the data for factor analysis, and the extraction method was principal axis factoring. Two factors were extracted after eight iterations, with eigenvalues of 4.799 and 1.287, explaining 39.9 per cent and 10.7 per cent of the variance respectively. Referring to Table II, all items had factor loadings above 0.30, with items 12, 14 and 15 constituting factor 2 (named here the core SE factor). Factor 1 (with a mix of entrepreneurial/managerial items) was represented by the majority of the remaining nine items. The two factors were correlated at 0.556. The Cronbach's alpha values for factors 1 and 2 were 0.836 and 0.712, respectively, with a composite factor alpha of 0.858. The validity and reliability of the instrument used to assess SE competencies were thus established, offering insights into the levels and mix of skills used by current and potential social entrepreneurs; specifically, the eclectic mix of managerial and entrepreneurial skills is reaffirmed as necessary for practising SE. Partial support for H1 is offered by the two factors that were obtained, which were consequently named the core skill set (factor 2) and the entrepreneurial/managerial skill set (factor 1). It could be argued that the three items representing factor 2 - fund raising, administering the project and visionary leadership - are critical to and constitute the core of any type of SE engagement.
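Reliability coefficients of the kind reported here follow the standard Cronbach's alpha formula. A minimal sketch in Python with NumPy, using simulated Likert data (not the study's responses) to illustrate the computation:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 1-5 Likert responses for 30 respondents on 4 correlated items
# (hypothetical data, chosen only to demonstrate the formula):
rng = np.random.default_rng(42)
base = rng.integers(1, 6, size=(30, 1))  # shared underlying attitude
scores = np.clip(base + rng.integers(-1, 2, size=(30, 4)), 1, 5).astype(float)
print(round(cronbach_alpha(scores), 3))
```

Because the four simulated items share a common underlying score, their covariances are positive and alpha comes out well above zero, mirroring how correlated scale items drive the 0.836 and 0.712 values reported above.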
Factor 1, comprising the majority of the items used to measure SE skills - a mix of nine managerial and entrepreneurial items - indicates that individuals displaying both sets of skills may regard themselves as more efficacious than when relying on one set of skills alone. Based on the empirical results, both factors also registered relatively high mean scores. In accordance with the statistical findings (Turner and Martin, 2005), it is apparent that although the SE construct is naturally focused on distinct entrepreneurial competencies, these skills are complemented by more traditional management skills. The two sets of skills are not perceived to be mutually exclusive, as both are required for successful SE; complementary competencies are a key determinant of successful SE (Turner and Martin, 2005). A purely managerial approach, without reference to entrepreneurship skills, would have counteracted the purposes of practising SE as conceptualised for this study.
Mean score analysis
Descriptive statistics were calculated for the two first-order factors and the one second-order factor (see Table I). Based on the initial descriptives, it was established that the age, level of education, and ethnic group categories were skewed, falling predominantly into one category each, and they were accordingly excluded from inferential testing. The mean scores for both factors are relatively high, i.e.
above the midpoint on the 1-5 Likert scale.
Inferential statistical testing
Where mean scores on separate factors were calculated, the following test results were analysed:
* gender - the independent samples t-test procedure was carried out, with no significant differences detected at the 0.05 level;
* concerning current or future SEA, no significant differences at the 0.05 level were detected; and
* regarding the faculty of registration, using ANOVA (see Table III), for factor 1 there was a 0.004 (0.4 per cent) probability of obtaining an F value of 4.506 if no differences among group means existed in the population: since this probability does not exceed the 0.05 level, one can conclude that there are significant differences for factor 1 relating to type of faculty.
Hence, in determining which specific faculties differed on SEA, a more stringent test - the multiple comparison Scheffe test - was calculated with factor 1 and the final factor solution as dependent variables. There was a difference between specific faculties: Engineering and the Built Environment (EBE) versus Economic and Financial Sciences (EFS) (factor 1=0.010, factor 2=0.022). For H2, in relation to differences between factors among the study variables, the only statistically significant differences were found between types of faculties, with the EFS faculty having significantly higher scores on both factors.
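The two-group and multi-group comparisons above can be reproduced in outline. The sketch below (Python with NumPy/SciPy) runs an independent-samples t-test and a one-way ANOVA on simulated factor scores; group names, sizes and means are purely illustrative, and the Scheffe post hoc step is not shown as SciPy does not provide it directly:

```python
import numpy as np
from scipy import stats

# Simulated factor-1 mean scores on the 1-5 scale (hypothetical groups):
rng = np.random.default_rng(7)
ebe = rng.normal(3.4, 0.6, size=90)    # Engineering and the Built Environment
efs = rng.normal(3.9, 0.6, size=45)    # Economic and Financial Sciences
mgmt = rng.normal(3.5, 0.6, size=70)   # Management

# Independent-samples t-test (two-group comparison, e.g. gender in the study):
t_stat, t_p = stats.ttest_ind(ebe, efs)

# One-way ANOVA across the three faculties, as in Table III:
f_stat, f_p = stats.f_oneway(ebe, efs, mgmt)
print(f"t={t_stat:.3f} (p={t_p:.4f}); F={f_stat:.3f} (p={f_p:.4f})")
```

As in the text, a p-value below 0.05 for the ANOVA would indicate significant mean differences among faculties, after which a post hoc test (Scheffe, or SciPy's `tukey_hsd`) identifies which specific pairs differ.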
It is plausible that the respondents from the EFS faculty consider their abilities in financial and managerial matters to be more advanced than those of other faculties' members, probably as a result of exposure to discourse similar to that used in SE. These findings also contradict the order and priority of variables in previous studies: networking has a much lower correlation than stipulated in Sharir and Lerner's (2006) findings, with being innovative, being focused and conducting research emerging as higher-order priorities. These differences imply a different skill set arrangement, with commensurate differences in educational and training priorities that target fostering innovation and employing research principles, among other skills. Although the results are modest, particularly in the number of active and future social entrepreneurs, they are not trivial. Such findings are reasonably good for exploratory research in a new domain such as SE in SA. This is a first in SA, where SEA was empirically derived and associated competencies were measured. The results indicate that the profiles of the individuals represented in this sample are typical of potential social entrepreneurs, with previous studies confirming that such respondents are likely to have an interest in pursuing SE. Although generalisation would mar the rigour of the analysis undertaken, it is tempting to categorise SEA in SA as generally low. Comparatively, as conceptualised for the UK GEM report and for this paper, SEA does not measure all socially motivated enterprise activity, but rather provides an indication of the propensity of particular groups to pursue social aims through entrepreneurial rather than purely economic means. In the UK the SEA rate comprises 3.2 per cent of the adult population, which is directly comparable with their total entrepreneurial activity (TEA) rate of 6.2 per cent.
The UK sample results indicate that 24.4 per cent of the sample is currently trying to start a venture; however the majority, 73.9 per cent, answered "no" to any such inclination towards SE initiatives. In this study, 17.2 per cent of respondents indicated "yes" to current involvement with SEA, and 81.4 per cent answered "no" to such involvement. However, a direct comparison between these rates is not entirely appropriate, as the UK GEM 2006 uses survey data from a randomly stratified sample of 27,296 18-64-year-olds. Based on this study's exclusive sample, the results indicate that students as a group are likely to be engaged in SE, although no comparison with any non-student population groups was made. In the UK, some 5 per cent of the student population are social entrepreneurs compared with 3.5 per cent of those in full-time employment (Harding et al., 2005). This indicates that younger people are more likely to be involved in social initiatives; the highest SEA rate of 3.9 per cent is in the 18-24 age group, compared with 2.7 per cent in the 25-34 age group, with significant differences between the youngest and oldest age groups. Education is also a predictor of the propensity to be a social entrepreneur in the UK, with 5.5 per cent of people with postgraduate qualifications socially active compared with 2.4 per cent of those who only have undergraduate qualifications. Ethnic group differences were also reported in the UK GEM, where non-white groups (5 per cent) are more likely to be social entrepreneurs than their white counterparts (3 per cent) (Harding, 2006, p. 15). As these findings indicate, the prevalence of SE is more widespread amongst younger people with education who are labour-market inactive - characteristics of the study sample.
Conceptual integrations
Based on the existing literature, it seems that there has recently been an upsurge in SEA, driven by changes in the competitive environment.
Presently, non-profit organisations are operating in a highly competitive environment characterised by tighter financial restrictions, with several organisations vying for the same donor funds (Weerawardena and Mort, 2006). Currently, the non-profit sector is facing intensifying demands for improved effectiveness and sustainability in light of diminishing funding from traditional sources. Moreover, the increasing concentration of wealth in the private sector is prompting calls for greater social responsibility and more proactive responses to complex social problems (Johnson, 2000, p. 1). Internationally, the SE situation is much the same, with non-governmental developmental organisations (NGDOs) working in developing countries noted for their role as primarily providing subsidies on behalf of global donors, and creating the circumstances for "patronage, dependency, pathological institutional behaviour and financial malpractice" (Johnson, 2000, p. 3). What may be called a beggar mentality has emerged in many communities where there have been massive aid interventions (Peredo and Chrisman, 2006, p. 311). Established research indicates a wide range of both entrepreneurial and managerial skills, with significant overlaps, as being necessary for successful SE. Like business entrepreneurs, social entrepreneurs initiate and implement innovative programmes and, despite being differently motivated, the challenges they face during start-ups are similar to those faced by business entrepreneurs (Sharir and Lerner, 2006). The commercial entrepreneur thrives on innovation, competition and profit, whereas the social entrepreneur prospers on innovation and inclusiveness for changing the systems and patterns of societies (Jeffs, 2006). Moreover, a core set of skills seems indispensable for undertaking SE, even though a large number of elements play a role in SE, i.e.
local culture, community management practices, previous occupational or technical skills, and perceptions of the macroeconomic, legal, social, and political environments (Peredo and Chrisman, 2006).
Challenges for social entrepreneurs
Social entrepreneurs and philanthropic efforts are not exempt from criticism, and widespread flaws are evident in their fundamentals. This specifically refers to unjustifiably high administration costs, which remain unremedied to this day (The Economist, 2006). Little effort has been devoted to measuring results involving the double bottom line (financial and social performance) or the triple bottom line (financial, social and environmental). Readily susceptible to statistical manipulation, the vague and undefined goals of empowering people or changing lives further obfuscate the outputs of SEA. Cook et al. (2003, p. 64) highlight the false premises and dangerous precedents and standards for SE when they argue that using a private entrepreneurial model in pursuing social justice aims, which cannot be valued in the market, is likely to violate the case for market efficacy. Hence the difficulty social entrepreneurs experience becomes apparent when balancing resource allocation between profit-making and welfare-providing activities. In fact, it could be argued that it is undesirable to implement a welfare system in which the beneficiaries are subject to the vagaries of the entrepreneurial model (Seelos and Mair, 2005). Recent research (Madden and Scaife, 2006) has identified key barriers to SE community engagement, among which are overwhelming requests and choice of viable options, lack of formal processes for handling requests, and lack of vision for community engagement - all of which are also highly relevant to explaining the low SEA rate reported in the findings.
Study limitations
The study is limited by the early stage of theoretical development of the SE construct, and by any related measures.
Moreover, the research is limited by its restricted sampling frame: by using students, the psychological diversity of the general population is possibly underestimated, even though SEA is predominant among student populations. Since survey data were self-reported, the study is also susceptible to bias (e.g. self-serving bias with regard to skill levels).
Study implications
A contentious issue in SE, because of the newness of the concept, is that there are few institutional mechanisms in place to support this work (Johnson, 2000). Related to this issue of support is the question of training and capacity building for SE. If SE is defined as principally applying entrepreneurial and managerial skills to the non-profit sector, then these skills are fairly replicable. However, if SE is defined as a highly creative and innovative individual approach, replication will be much more difficult to achieve, and the focus would then be on developing conditions in which latent entrepreneurial talent could be harnessed for social purposes (Johnson, 2000). Moreover, social entrepreneurs based in the community are able to add value in ways that are often not possible through mainstream policies, i.e. through their closeness to the community and their perceived capacity for innovation that autocratic bureaucracies traditionally lack (Turner and Martin, 2005). As with mainstream entrepreneurship, SE activity is heavily influenced by access to training and modelling; by promoting SE as an alternative business model within schools, colleges, and universities, exposure and training could induce early-stage SEA. As construed in the literature, social entrepreneurs are community-centric and rely heavily on networks and support structures, such networks being easy and cheap to establish (Harding, 2006; Sharir and Lerner, 2006).
Since competencies can be nurtured, and since funding requests often require concomitant competencies to add value, the positive link between skills and SE success suggests that training and development for SE should be mandatory (as at the School for Social Entrepreneurs in the UK) (Sharir and Lerner, 2006). Perhaps particularly in SA, which is currently beset by social inequalities, social entrepreneurs should look for the most effective methods of serving their social mandate through funding and sponsoring the activities of community-based projects. By developing capacity through relevant interventions and partnerships, social entrepreneurs can add value and meet the needs of groups who have been failed by previous government attempts at social redress. However, government also has a role in fostering a culture of social enterprise: by raising awareness of social enterprises among students through education, and by disseminating information and providing resources for promoting social entrepreneurship.
Table I Descriptive statistics on variables
Table II Factor structure for SE skills
Table III ANOVA for faculty registration
This was primarily an exploratory study, using factor analysis and inferential statistical testing, based on a surveyed sample of 287 respondents, undertaken to measure SEA and concomitant SE skills. Empirical findings were interrogated in the context of existing research and comparisons with established SEA rates were made.
[SECTION: Findings] As with any change-orientated activity, social entrepreneurship (SE) has not evolved in a vacuum, but rather within a complex framework of political, economic and social change occurring at global and local levels (Johnson, 2000; Kramer, 2005; Harding, 2006). The contribution of social entrepreneurs is increasingly celebrated, as was recently witnessed at the World Economic Forum's (2006) Conference on Africa in Cape Town. Similarly, Warren Buffett's $30.7 billion donation to the Bill & Melinda Gates Foundation (Cole, 2006) demonstrates that venture philanthropy represents a significant change in how people think about transferring wealth. SE has evolved into the mainstream after years of marginalisation on the edges of the non-profit sector. Venture philanthropists, grant sponsors, boards of directors, non-profit entrepreneurs, consultants and academics are now all interested in the field of SE (Boschee, 2001; Kramer, 2005; Frumkin, 2006). Over the last decade, a critical mass of foundations, academics, non-profit organisations, and self-identified social entrepreneurs has emerged and SE has become a distinct discipline (Kramer, 2005; Dees, 2001). Worldwide, policy makers are using the language of local capacity building as a strategy to assist impoverished communities in becoming self-reliant (Peredo and Chrisman, 2006). Exemplifying a growing trend for academic institutions to take this phenomenon seriously, many dedicated centres for SE have evolved, for example the Skoll Centre for Social Entrepreneurship at Oxford University, created by Jeff Skoll, whose mission is to advance systemic change for the benefit of communities around the world by investing in, connecting and celebrating social entrepreneurs (The Economist, 2006).
Many similar institutions exist, and researchers (Dees, 2001; Christie and Honing, 2006) suggest that the time is certainly ripe for entrepreneurial approaches to social problems, since SE merges the passion of a social mission with business discipline, innovation, and determination (Jackson, 2006). Although not new in the commercial/business sector, corporate governance and corporate social responsibility (CSR) have gained unprecedented prominence in the modern corporation and are well documented in academic research and popular literature (Rossouw and Van Vuuren, 2004). In South Africa (SA), where SE remains an under-researched area, the importance of SE as a phenomenon in social life is critical; social entrepreneurs contribute to an economy by providing an alternative business model for firms to trade commercially in an environmentally and socially sustainable way. They also provide an alternative delivery system for public services such as health, education, housing and community support (Harding, 2006, p. 10). Moreover, social entrepreneurs are also seen as a growing source of solutions to issues that currently plague society, such as poverty, crime and abuse (Schuyler, 1998). Social entrepreneurs provide solutions to social, employment and economic problems where traditional market or public approaches fail (Jeffs, 2006). Yet despite these achievements, government in SA appears reluctant to engage directly with SE endeavours, viewing social entrepreneurs as innately risky - and their activities as maverick endeavours.
Driving forces of SE with relevance to SA
The central driver for SE is social problems (Austin et al., 2006), and driving forces for social entrepreneurs include:
* politically, the devolution of social functions from national to local level and from public to private;
* economically, the reduction of funding from the public purse; and
* socially, problems of increasing complexity and magnitude (Lock, 2001, p.
1). In SA, SE has unequivocal application where traditional government initiatives are unable to satisfy the entire social deficit, where an effort to reduce dependency on social welfare/grants is currently being instituted, and where the survival of many non-governmental organisations (NGOs) is at stake. Such challenges are exacerbated by a social context characterised by massive inequalities in education and housing, the HIV/AIDS pandemic, and high unemployment and poverty rates (Rwigema and Venter, 2004). Accompanying these massive social deficits, many governmental and philanthropic efforts have fallen far short of expectations, with social sector institutions often viewed as inefficient, ineffective, and unresponsive. In particular, policymakers have limited guidance and recognise that the invisible hand frequently fails to assert itself in the most socially beneficial outcomes (Christie and Honing, 2006). Moreover, many poverty alleviation programmes have degenerated into global charity events rather than serving local needs, since most projects have been conceived and managed by development agencies rather than by members of the community, resulting in a lack of ownership on the part of the target beneficiaries (Peredo and Chrisman, 2006). Such failures suggest that there are many gaps in understanding SE activities under conditions of material poverty and in different cultural settings (Peredo and Chrisman, 2006). Consequently the focus of this study is on delineating the SE construct through a focused literature review and identifying the factors necessary for successful SE to flourish. As in the international arena, SE in SA has proliferated in recent decades owing to a surge in the establishment of non-profit organisations, although as an academic enquiry SE is still emergent (Austin et al., 2006). Based on these limitations, a study exploring the nature of SE and how to practise it successfully seems justified, particularly within a non-Western context.
Once the various theoretical issues and debates that have made significant contributions to the evolution of SE theory and practice are scrutinised, the author will assume a position relative to these debates and then conduct empirical investigations.Broadly the paper seeks to interrogate existing SE theory and then, relative to these existing controversies, analyse quantitatively students' intentions to engage in SE. Furthermore, the different types of SE activities are measured in conjunction with the level of entrepreneurial and managerial skills typically associated with successful social entrepreneurs. The rationale for focusing this study on SE skills can be found in the many instances where it is impossible to obtain start-up funds without demonstrating proof of concept together with commensurate abilities required to execute such an initiative. Those who fund social entrepreneurs are looking to invest in people with a demonstrated ability to create change, and the factors that matter most are the financial, strategic, managerial, and innovative abilities of social entrepreneurs (Kramer, 2005). Therefore, an investigation into the mix of managerial and entrepreneurial skills associated with successful SE is crucially important to this study. The following sections focus on defining and operationalising SE, and then identifying and surveying the different types of skills required for successful SE practices. In examining the SE construct several definitions are investigated and their components are analysed. As used in social sciences research, a construct is an idea specifically invented for theory-building purposes; a construct also combines simpler concepts especially when the idea is least observable and complex to measure (Cooper and Emory, 1995). 
To a large extent SE embodies such tendencies: social entrepreneurs are reformers and revolutionaries, as described by Schumpeter (1934), but with a social mission; they effect fundamental change in the way things are done in the social sector (Dees, 1998). Social entrepreneurs are perceived as heading mission-based businesses rather than operating as charities. These entrepreneurs seek to create systemic changes and sustainable improvements, and they take risks on behalf of the people their organisation serves (Brinckerhoff, 2000). Though they may act locally, their actions have the potential to stimulate global improvements in various fields, whether in education, health care, economic development, the environment, the arts, or any other social field (Dees, 1998). The language of social entrepreneurship may be new, but the phenomenon is not. Peter Drucker (1979, p. 453) introduced the concept of social enterprise when he advocated that even the most private of private enterprises is an organ of society and serves a social function. He also advocated a need for the social sector, in addition to the private sector of business and the public sector of government, to satisfy social needs and provide a sense of citizenship and community. Similarly, Spear (2004) poses the question of whether SE is about creating social enterprise or is more concerned with those particular aspects of entrepreneurship that have a social dimension. Based on the Global Entrepreneurship Monitor (GEM) report, SE is defined as follows:
Social entrepreneurship is any attempt at new social enterprise activity or new enterprise creation, such as self-employment, a new enterprise, or the expansion of an existing social enterprise by an individual, teams of individuals or established social enterprise, with social or community goals as its base and where the profit is invested in the activity or venture itself rather than returned to investors (Harding, 2006, p.
5). Subscribing to the precept that "Social entrepreneurs are one species in the genus entrepreneur", Dees (2001, pp. 2-4) sees social entrepreneurs playing the role of change agents in the social sector by adopting a mission to create and sustain social value (not just private value), recognising and relentlessly pursuing new opportunities to serve that mission, engaging in a process of continuous innovation, adaptation and learning, acting boldly without being limited by resources currently at hand, and exhibiting greater accountability to the constituencies served and for the outcomes created. Each element in these definitions is based on the body of entrepreneurship research, and this is the core of what distinguishes social entrepreneurs from business entrepreneurs, even from socially responsible businesses. It is also worth noting that these definitions, primarily individualistic in their conception, fail to adequately acknowledge a collective form of entrepreneurship. Indeed, scholars now highlight the importance of recognising entrepreneurship as building on a collective process of learning and innovation (Peredo and Chrisman, 2006). Peredo and Chrisman (2006, p. 310) developed the concept of community-based enterprise (CBE), which they define as a community acting corporately as both entrepreneur and enterprise in pursuit of the common good. Documented cases of CBE include the Mondragon Corporation Cooperative in Spain (Morrison, 1991). Moreover, some of the oldest and some of the most modern social enterprises are co-operatives. A co-operative is defined as an autonomous association of voluntarily united persons who meet their common social, economic and cultural needs through a jointly owned and democratically controlled enterprise. It is estimated that there are about 800 million co-operative members around the world.
The Social Enterprise Coalition is an example of this type of co-operative (Cabinet Office, 2007). Such views resonate with Cooper and Denner's (1998) perspective of culture as capital - a theory of social capital, which refers to the relationships and networks from which individuals are able to derive institutional support. Social capital is cumulative, leads to benefits in the social world, and can be converted into other forms of capital. Based on these collective propositions, SE can be viewed as a process that serves as a catalyst for social change and varies according to socio-economic and cultural environments. Combining insights from sociology, political science, and organisational theory, Mair and Marti (2006) propose the concept of embeddedness to emphasise the importance of the continuous interaction between social entrepreneurs and the context in which they are embedded. This discussion is relevant to South Africa, which is largely characterised as a collectivist nation, and where a concept like Ubuntu (together with an element of high community involvement) conflicts with individualism yet differs from collectivism, in which the rights of the individual are subjugated to a common good. It is this collectively enabling approach that is essential for collective SE, which is more socially orientated, builds on strengths rather than dwelling on deficits, and encompasses socio-structural factors among the sources and remedies for human problems (Bandura, 1997). These perspectives are reinforced when Weerawardena and Mort (2006) advance the concept of SE through empirical research and find it to be a bounded multi-dimensional construct deeply rooted in an organisation's social mission with its drive for sustainability, which in turn is shaped by environmental dynamism.
Similarly, Giddens's (1998) view is that SE is the way to reconstruct welfare and build social partnerships between the public, social and business sectors by harnessing the dynamism of markets with a public interest focus. Consequently, neither profit nor customer satisfaction is the gauge of value creation in SE; social impact is. Social entrepreneurs look for a long-term social return on investment. Indeed, they are not simply driven by the perception of a social need or by their compassion; rather, they have a vision of how to achieve improvement and they are determined to achieve their vision (Dees, 2001; Bornstein, 1998).
SE differences and similarities in meaning
In general, based on established literature, the concept of SE remains poorly defined and its boundaries with other fields remain blurred (Mair and Marti, 2006). Conceptual differences are noticeable in definitions of social entrepreneurship (focus on process or behaviour), social entrepreneurs (focus on the founder of the initiative), and social enterprise (focus on the tangible outcome of SE). Peredo and McLean (2006) propose that one could easily ask what makes SE social, and what makes it entrepreneurship. Research on SE is clearly based on the knowledge base of entrepreneurship, and any definition of SE is shaped by the prevailing findings on entrepreneurship theory and practice. Although it is beyond the scope of this article to expound on the field of entrepreneurship, a contemporary definition is provided, which views the field of entrepreneurship as the "scholarly examination of how, by whom, and to what effect opportunities for creating future goods and services are discovered, evaluated and exploited" (Shane and Venkataraman, 2001, p. 218). The social element in definitions is often used to differentiate SE from commercial entrepreneurship, associating the altruistic motive with SE and the profit motive with commercial entrepreneurship.
However, Mair and Marti (2006) argue that such a dichotomy is incorrect since SE, although based on ethical and moral issues, can include less altruistic reasons such as personal fulfilment and the creation of fresh markets and new jobs. Correspondingly, commercial entrepreneurship also comprises a social aspect, as previously mentioned in terms of CSR. Rather than profit versus non-profit, Mair and Marti (2006) suggest that the main difference between business and social entrepreneurship lies in the relative priority given to social wealth creation versus economic wealth creation. Similarly, Peredo and McLean (2006) interpret a range of social entrepreneurs along a continuum of possibilities, ranging from ventures in which social goals are the firm's only objective to those in which social benefit is merely one outcome among others. Such conceptualisations reflect the absence of defined boundaries for the SE phenomenon. An additional difficulty in defining SE is differentiating the small-scale, often voluntary or charitable work done by individuals making a social difference from the social entrepreneur who establishes a high-turnover social enterprise (Harding, 2006). Because of their structure and constitution, social entrepreneurs are able to serve a triple bottom line, achieving profitability, societal impact and environmental sustainability simultaneously (Harding, 2006).
Theoretical conclusions
A summary of the SE academic literature suggests a number of themes, preoccupations and domains have emerged (Weerawardena and Mort, 2006), which may generally comprise:
* SE being expressed in a vast array of economic, educational, welfare and social activities, reflecting such diverse activities;
* SE may be conceptualised in a number of contexts, i.e.
public sector, community, social action organisations and charities; and
* the role of innovativeness, proactiveness and risk-taking in SE being emphasised, distinguishing SE from other forms of community work.
In order to draw some conclusions from these varying definitional controversies, an attempt is made to offer a position in relation to such debates, which allows for further interpretation and analysis. Existing theory has revealed a commonality across all definitions of SE: the underlying drive for social entrepreneurs is to create social value, rather than personal and shareholder wealth, and the activity is characterised by innovation or the creation of something new rather than simply the replication of existing enterprises or practices. In concordance with other SE reports (Harding, 2006, p. 5), it is argued that the SE definition must reflect two critical features of a social as opposed to a mainstream enterprise:
1. the project has social goals rather than profit objectives; and
2. revenue is used to support social goals instead of shareholder returns.
The definition of sustainability within the context of the non-profit sector is quite different from that of the for-profit sector, with the advocacy of sustainability versus stability being contentious in view of organisations having sustainable finances but no community support, and therefore probably not being sustainable (Johnson, 2003). Due to the exploratory nature of the study, and since specific associations are predicted between the variables under study, hypotheses are formulated. These hypotheses are based on SE definitional controversies, and on the distinctions between the managerial and entrepreneurial skills typically associated with successful SE. The economic value of entrepreneurial ability acquired through education can be identified through the work of Schultz (1980), who recognised that the returns that actually accrue to education are substantially undervalued.
Despite early notions that entrepreneurship is an innate skill, recent studies (e.g. Fayolle et al., 2005) indicate that entrepreneurship education influences both current behaviour and future intentions. Identifying business opportunities and having confidence in personal skills to establish a business may be enhanced through education and training, with evidence suggesting that those with more education are more likely to pursue opportunities for entrepreneurship (high-growth ventures) (Gibb, 2000). In developing a body of theory on SE, Austin et al. (2006) highlight the differences between social and commercial entrepreneurship and, based on a prevailing commercial model, explore new parameters when applied to SE. Although this distinction clearly overlaps with the differences highlighted previously between social goals and profit, the distinction between social and commercial entrepreneurship is best interpreted not as dichotomous but as a continuum ranging from purely social to purely economic. Some key differences that emerge from case examples (Austin et al., 2006) are:
* SE focuses on serving basic, long-standing needs more effectively through innovative approaches, whereas commercial entrepreneurship tends to focus on breakthroughs and new needs.
* The context of SE differs from commercial entrepreneurship in the way that the interaction between a social venture's mission statement and its performance measurement systems influences entrepreneurial behaviour (quantification of social impact is difficult).
* The nature of the human and financial resources for SE differs in some key respects because of difficulties in resource mobilisation. New analysis (Rouse and Jayawarna, 2006) of the policy options available for improving finance to disadvantaged groups and achieving social inclusion is pivotal towards understanding SE.
Similarly, Thompson et al.
(2000) distinguish between social entrepreneurs and managers, the former being catalysts for entrepreneurial projects, while the latter are critical for seeing initiatives through. Additionally, differences exist between non-profit and for-profit social entrepreneurs, particularly where the advantages of collective wisdom versus personal skills are concerned, and where the focus is on long-term capacity versus short-term financial gain. Since the focus of this study is on the creation of social value through innovation, it is recognised that the mix of managerial competencies appropriate to successful SE may differ in significant ways from the mix relevant to success in entrepreneurship excluding the social component (Peredo and McLean, 2006). Because of this distinction, a definition of an entrepreneurial competency/skill is offered: an entrepreneurial competency consists of a combination of skills, knowledge and resources that distinguishes entrepreneurs from their competitors (Fiet, 2000, p. 107). Several emergent themes of SE competencies arise from in-depth case study interviews (Thompson, 2002; Weerawardena and Mort, 2006): networking, people management, fund raising, mentoring, business training, environmental dynamics, innovativeness, proactiveness, risk management, sustainability, social mission, and opportunity recognition. Through a case study exploration of sensemaking, Mills and Pawson (2006) raise questions about the centrality of the notion of risk in new-start entrepreneurs' rationales for the enterprise development decisions they make. Additionally, Thompson (2002) uses an SE map to identify four central themes:
1. job creation;
2. utilisation of buildings;
3. volunteer support; and
4. focus on helping people in need.
Similarly, Brinckerhoff (2001) provides an SE readiness checklist incorporating the areas of mission, risk, systems, skills, space, and finance.
Based on these skills, it seems the ability to develop a network of relationships is a hallmark of visionary social entrepreneurs, as is the ability to communicate an inspiring vision for motivating staff, partners, and volunteers (Thompson et al., 2000). Orloff (2002) identifies one element as key to both the emergence of a social venture partnership and its continued success - leadership, i.e. the right person heading up the organisation. Lock's (2001) report on strategic alliances between non-profit and for-profit organisations reflects the following criteria as key to the success of the programme:
* a real and tangible mission and vision;
* reliability and commitment of partners;
* trust between the partners;
* setting aside competitiveness for funding purposes; and
* power-based action plans.
Similarly, in identifying factors contributing to SE success, Sharir and Lerner (2006) demonstrate that eight variables contribute to success, arranged in order of their value:
1. the entrepreneur's social network;
2. total dedication to the venture's success;
3. the capital base at the establishment stage;
4. acceptance of the idea in public discourse;
5. the composition of the venturing team (salaried versus volunteer workers);
6. forming long-term collaborations within the public and non-profit sectors;
7. the ability of the service to pass the market test; and
8. the entrepreneur's previous managerial experience.
These findings should also be read in conjunction with the type of enterprise on which a social entrepreneur embarks, and are likely to be a function of the skills, trades, and resources available within the community (Peredo and Chrisman, 2006). The notion of stakeholder engagement is taken further by Fuller-Love et al. (2006), where a scenario analysis exercise enabled key stakeholders to confront and deal with considerable uncertainties by developing a shared understanding of the barriers to small firm growth and rural economic regeneration.
Furthermore, the start-up and success of social entrepreneurs may alter how the feasibility of engaging in entrepreneurship is gauged, and the success of one venture may increase perceptions of the acceptability and desirability of other social initiatives. Many social entrepreneurs find that lessons accumulated from the pioneers in the field are invaluable for future success, and consequently many prescriptions are offered (Boschee, 2001; Fernsler, 2006; Emerson, 1997; Brinckerhoff, 2001), some of which are:
* earned income is paramount;
* practise organised abandonment (focus efforts and resources);
* unrelated business activities are dangerous;
* recognise the difference between innovators, entrepreneurs, and managers;
* prevent the non-profit culture from becoming an obstacle (take risks, relinquish control);
* emphasise customer service and anticipate the need for large amounts of start-up capital; and
* conduct market and pricing research, and pay a good wage.
Consolidation of these theoretical issues, together with the prescriptions offered for successful SE practices, demonstrates that the underlying drive for social entrepreneurs is creating social value. This activity is characterised by innovation or the creation of something new using a mix of managerial and entrepreneurial skills. The following hypotheses are subsequently formulated and statistically tested for significance:
H1. Social entrepreneurship is best exemplified through a mix of skills which reflect distinct factor structures in terms of entrepreneurial and managerial competencies.
H2. There are significant differences between respondents who are currently starting/involved with or managing a social enterprise, and those who are not.
Extending the SE construct, it seems reasonable to assess the prospective social entrepreneur's capacity for practising SE with a modified skills instrument as gleaned from the literature.
The justification for using a positivist approach to establish a skill set, rather than relying on a qualitative methodology, is supported in previous investigations (Turner and Martin, 2005; Chell et al., 2005). Analysing non-quantified data on several variables from many cases is often described as beyond the cognitive and affective limits of most researchers (Davidsson, 2004). It could further be argued that applying formal measurement and statistical analysis to the different skills levels cannot truly be deemed a positivist approach. Nothing in the nature of these data would prevent deeper speculations and insights from emerging when analysed; moreover, published research is full of exploratory findings and the use of techniques - such as factor analysis - that a true positivist would deem unscientific (Davidsson, 2004). A mechanism for measuring SE, the social entrepreneurship activity (SEA) index, as conceptualised in the UK Global Entrepreneurship Monitor (GEM) 2005 report (Harding et al., 2005), has been adapted for the purpose of this study to measure students' SE intentions. An intention is a representation of a future course of action to be performed (Ajzen, 1991); it is not simply an expectation of future actions but a proactive commitment to bringing them about. Intentions and actions are different aspects of a functional relationship separated in time. Intentions centre round plans of action. In the absence of intention, action is unlikely to occur (Bandura, 2001). Additionally, two questions pertaining to the respondents' involvement in or inclination towards trying to start/manage any kind of social, voluntary or community service, activity or initiative were posed as yes-or-no questions. Moreover, an instrument was designed to measure typical skills associated with successful social entrepreneurs. This skill set, which was initially investigated through qualitative case studies and lessons learnt in successful SE practices (e.g.
Thompson, 2002; Weerawardena and Mort, 2006), was further validated through quantitative factor analysis. Hence, for the SE skills instrument, several competency/skill items were measured on a five-point Likert scale, constituting a mix of entrepreneurial and managerial skills. Pilot testing was used to detect weaknesses in the instrument. Based on the recommendations for the correct size of a pilot group, i.e. 25-100 respondents who are not necessarily statistically selected (Cooper and Emory, 1995), the instrument was pre-tested on colleagues (n=5) and actual respondents (n=30) for further refinement. The questionnaire's length, instructions to respondents, and anonymity were all considered in the final questionnaire design in order to generate a high response rate (Cooper and Emory, 1995). Notwithstanding these precautions, due to the exploratory nature of the study, the importance of the validity and reliability of these measures was considered and factor analysis was employed.
Sampling
In terms of sampling, the objective was to use students and not the general population. Student populations add control and homogeneity, because individuals who are studying have been identified as being more likely to have an interest in pursuing SE (Harding et al., 2005). Respondents in this group possess the talent, interest and energy to become the next generation of social and civic leaders (Canadian Centre for Social Entrepreneurship, 2001, p. 9). Based on the indicators constituting social entrepreneurship, it was decided to target university students from various faculties, at different levels of study (undergraduate to post-graduate) and of various ethnic backgrounds, in order to obtain representativeness of a typical student population.
As a matter of practicality, the instrument was distributed to students of various faculties in a classroom setting, which allowed the researcher to maintain control over the environment and ensured that a high response rate was achieved (n=287). A judgmental sampling approach was used to represent sample characteristics of respondents most likely to be social entrepreneurs. Hemmasi and Hoelscher (2005) consider the common practice of using university students as proxies for entrepreneurs to be convincing. They find that the student sample strongly resembles actual entrepreneurs, provided that it has high entrepreneurial potential. This notion was extended to include social entrepreneurs.
Sample characteristics
The sample characteristics (see Table I) are reflected in percentages in terms of: gender - male (48.8 per cent) and female (49.7 per cent); age group - 17-20 (64.6 per cent) and 21-24 (30.6 per cent); education - those who had completed matric and were undergraduate students (94.1 per cent); and faculty of registration - Engineering and the Built Environment (33.3 per cent), Management (24.7 per cent), Art and Design (20.3 per cent), and Economic and Financial Sciences (15.5 per cent), with negligible participation from the Health and Sciences faculties. In terms of ethnicity, respondents categorised themselves as Black Africans (81.8 per cent), Caucasians/Whites (9.6 per cent), Asians (4.8 per cent), or Coloured South Africans (2.1 per cent). Although such ethnic/racial distributions are not typical of all university populations in South Africa, these categorisations are representative of the broader South African population demographics.
In terms of the type of SEA, the highest recorded category was religious activities (25.3 per cent), followed by sport (19 per cent) and education (12.8 per cent), with even distributions among other categories accounting for the balance.
Factor and reliability analysis
The Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett's test of sphericity were used for the factor analysis, and the extraction method was based on principal axis factoring. Two factors were extracted on eight iterations, with eigenvalues of 4.799 and 1.287, which explained 39.9 per cent and 10.7 per cent of variance respectively. Referring to Table II, all items had factor loadings above 0.30, with items 12, 14 and 15 constituting factor 2 (named here as the core SE factor). Factor 1 (with a mix of entrepreneurial/managerial items) was represented by the majority of the remaining nine items. The two factors were correlated at 0.556. The Cronbach's alpha values for factors 1 and 2 were 0.836 and 0.712 respectively, with a composite factor alpha of 0.858. The validity and reliability of the instrument used to assess SE competencies were thereby established, and offer insights into the levels and mix of skills used by current and potential social entrepreneurs; specifically, the eclectic mix of managerial and entrepreneurial skills is reaffirmed as being necessary for practising SE. Partial support for H1 is offered in the two factors that were obtained, consequently named the core skill set (factor 2) and the entrepreneurial/managerial skill set (factor 1). It could be argued that the three items representing factor 2 - fund raising, administering the project and visionary leadership - are critical to and constitute the core of any type of SE engagement.
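The Cronbach's alpha figures reported for the two factors follow a standard calculation that can be sketched directly. The snippet below is a minimal illustration only - the item scores are hypothetical, since the study's raw data are not published - showing how alpha is computed for a set of Likert items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point Likert responses (rows = respondents, cols = items)
scores = [
    [4, 5, 4], [3, 3, 4], [5, 5, 5],
    [2, 3, 2], [4, 4, 5], [3, 2, 3],
]
print(round(cronbach_alpha(scores), 3))  # → 0.91
```

By the usual convention, values above 0.7 indicate acceptable internal consistency, which is the benchmark the reported values of 0.836, 0.712 and 0.858 comfortably exceed.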
Factor 1, comprising the majority of the items used to measure SE skills - a mix of nine managerial and entrepreneurial items - indicates that individuals displaying both sets of skills may regard themselves as more efficacious than when relying on only one set of skills. Both factors also registered relatively high mean scores. In accordance with the statistical findings (Turner and Martin, 2005), it is apparent that although the SE construct is naturally focused on distinct entrepreneurial competencies, these skills are complemented by more traditional management skills. These two sets of skills are not perceived to be mutually exclusive, as both are required for successful SE. Complementary competencies are a key determinant of successful SE (Turner and Martin, 2005). A purely managerial approach, without reference to entrepreneurship skills, would have counteracted the purposes of practising SE as conceptualised for this study.
Mean score analysis
Descriptive statistics were calculated for the two first-order factors and the one second-order factor (see Table I). Based on the initial descriptives, it was established that the age, level of education, and ethnic group categories were skewed and fell predominantly into one category, and they were accordingly excluded from inferential testing. The mean scores for both factors are relatively high, i.e.
above the midpoint on the 1-5 Likert scale.
Inferential statistical testing
Where mean scores on separate factors were calculated, the following test results were analysed:
* gender - the independent samples t-test procedure was carried out, with no significant differences detected at the 0.05 level;
* concerning current or future SEA, no significant differences were detected at the 0.05 level; and
* regarding the faculty of registration, using ANOVA (see Table III), for factor 1 there was a 0.004 (0.4 per cent) probability of obtaining an F value of 4.506 if no differences among group means existed in the population: since this probability does not exceed the 0.05 level, one can conclude that there are significant differences for factor 1 relating to type of faculty.
Hence, in determining which specific faculties differed on SEA, a more stringent test - the multiple comparison Scheffe test - was calculated with factor 1 and the final factor solution as the dependent variables. There was a difference between specific faculties: Engineering and the Built Environment (EBE) versus Economic and Financial Sciences (EFS) (factor 1: p=0.010; factor 2: p=0.022). For H2, in relation to differences between factors among the study variables, the only statistically significant differences were found between types of faculties, with the EFS faculty having significantly higher scores on both factors.
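The one-way ANOVA procedure used for the faculty comparison can be sketched as follows. The group scores are illustrative stand-ins (the study's raw factor scores are not published), so only the procedure - not the reported F value of 4.506 - is reproduced:

```python
from scipy import stats

# Hypothetical mean factor scores for three faculties (illustrative only)
ebe = [3.2, 3.5, 3.1, 3.4, 3.3]    # Engineering and the Built Environment
efs = [4.1, 4.3, 4.0, 4.4, 4.2]    # Economic and Financial Sciences
mgmt = [3.6, 3.7, 3.5, 3.8, 3.6]   # Management

# One-way ANOVA: is at least one group mean different?
f_stat, p_value = stats.f_oneway(ebe, efs, mgmt)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 indicates significant between-group differences;
# a post-hoc test (e.g. Scheffe) then locates the specific faculty pairs.
```

Note that `scipy.stats` provides the omnibus F-test but not the Scheffe post-hoc comparison, which would need a separate implementation or library.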
It is plausible that the respondents from the EFS faculty consider their abilities in financial and managerial matters to be more advanced than those of other faculties' members, probably as a result of exposure to a discourse similar to that used in SE. These findings also contradict the order and priority of variables reported in previous studies: networking showed a much lower correlation than stipulated in Sharir and Lerner's (2006) findings, while being innovative, being focused and conducting research emerged as higher-order priorities. These differences imply a different skill set arrangement, with commensurate differences in educational and training priorities targeting the fostering of innovation and the use of research principles, among other skills. Although the results are modest, particularly in the number of active and future social entrepreneurs, they are not trivial. Such findings are reasonably good for exploratory research in a new domain such as SE in SA. This is the first study in SA in which SEA was empirically derived and the associated competencies were measured. The results indicate that the profiles of the individuals represented in this sample are typical of potential social entrepreneurs, with previous studies confirming that such respondents are likely to have an interest in pursuing SE. Although generalisations would mar the rigour of the analysis undertaken, it is tempting to categorise SEA in SA as generally low. Comparatively, as conceptualised for the UK GEM report and for this paper, SEA does not measure all socially motivated enterprise activity, but rather provides an indication of the propensity of particular groups for pursuing social goals through entrepreneurial rather than purely economic means. In the UK the SEA rate comprises 3.2 per cent of the adult population, which is directly comparable with the UK's total entrepreneurial activity (TEA) rate of 6.2 per cent.
The UK sample results indicate that 24.4 per cent of the sample is currently trying to start a venture; however, the majority (73.9 per cent) answered "no" to any such inclination towards SE initiatives. In this study, 17.2 per cent of respondents indicated "yes" to current involvement with SEA, and 81.4 per cent answered "no" to such involvement. However, a direct comparison between these rates is not entirely appropriate, as the UK GEM 2006 uses survey data from 27,296 randomly stratified 18-64-year-olds. Based on this study's exclusive sample, the results indicate that students as a group are likely to be engaged in SE, although no comparison with any non-student population groups was made. In the UK, some 5 per cent of the student population are social entrepreneurs, compared with 3.5 per cent of those in full-time employment (Harding et al., 2005). This indicates that younger people are more likely to be involved in social initiatives; the highest SEA rate, 3.9 per cent, is in the 18-24 age group, compared to 2.7 per cent in the 25-34 age group, with significant differences between the youngest and oldest age groups. Education is also a predictor of the propensity to be a social entrepreneur in the UK, with 5.5 per cent of people with postgraduate qualifications socially active compared with 2.4 per cent of those with only undergraduate qualifications. Ethnic group differences were also reported in the UK GEM, where non-white groups (5 per cent) are more likely to be social entrepreneurs than their white counterparts (3 per cent) (Harding, 2006, p. 15). As the findings indicate, the prevalence of SE is more widespread amongst younger people with education who are labour-market inactive - characteristics of the study sample.
Conceptual integrations
Based on the existing literature, it seems that there has recently been an upsurge in SEA, driven by changes in the competitive environment.
Presently, non-profit organisations are operating in a highly competitive environment characterised by tighter financial restrictions, with several organisations vying for the same donor funds (Weerawardena and Mort, 2006). The non-profit sector is facing intensifying demands for improved effectiveness and sustainability in light of diminishing funding from traditional sources. Moreover, the increasing concentration of wealth in the private sector is prompting calls for greater social responsibility and more proactive responses to complex social problems (Johnson, 2000, p. 1). Internationally, the SE situation is much the same, with non-governmental developmental organisations (NGDOs) working in developing countries noted for their role as primarily providing subsidies on behalf of global donors, and for creating the circumstances for "patronage, dependency, pathological institutional behaviour and financial malpractice" (Johnson, 2000, p. 3). What may be called a beggar mentality has emerged in many communities where there have been massive aid interventions (Peredo and Chrisman, 2006, p. 311). Established research indicates a wide range of both entrepreneurial and managerial skills, with significant overlaps, as being necessary for successful SE. Like business entrepreneurs, social entrepreneurs initiate and implement innovative programmes; although differently motivated, the challenges they face during start-up are similar to those faced by business entrepreneurs (Sharir and Lerner, 2006). The commercial entrepreneur thrives on innovation, competition and profit, whereas the social entrepreneur prospers on innovation and inclusiveness for changing the systems and patterns of societies (Jeffs, 2006). Moreover, a core set of skills seems indispensable for undertaking SE, even though a large number of elements play a role in SE, i.e.
local culture, community management practices, previous occupational or technical skills, and perceptions of the macroeconomic, legal, social, and political environments (Peredo and Chrisman, 2006).
Challenges for social entrepreneurs
Social entrepreneurs and philanthropic efforts are not exempt from criticism, and widespread flaws are evident in their fundamentals. This refers specifically to unjustifiably high administration costs, which remain unremedied to this day (The Economist, 2006). Little effort has been devoted to measuring results involving the double bottom line (financial and social performance) or the triple bottom line (financial, social and environmental), while the vague and undefined goals of empowering people or changing lives are readily susceptible to statistical manipulation and further obfuscate the outputs of SEA. Cook et al. (2003, p. 64) highlight the false premises and dangerous precedents and standards for SE when they argue that using a private entrepreneurial model in pursuing social justice aims, which cannot be valued in the market, is likely to violate the case for market efficacy. Hence the difficulty social entrepreneurs experience becomes apparent when balancing resource allocation between profit-making and welfare-providing activities. In fact, it could be argued that it is undesirable to implement a welfare system in which the beneficiaries are subject to the vagaries of the entrepreneurial model (Seelos and Mair, 2005). Recent research (Madden and Scaife, 2006) has identified key barriers to SE community engagement, among which are overwhelming requests and choice of viable options, lack of formal processes for handling requests, and lack of vision for community engagement - all of which are also highly relevant towards explaining the low SEA rate reported in the findings.
Study limitations
The study is limited by the early stage of theoretical development in the SE construct, and by any related measures.
Moreover, the research is limited by the restricted sampling frame. By using students, the psychological diversity of the general population is possibly underestimated, even though SEA is predominant among student populations. Since survey data were self-reported, the study is also susceptible to bias (e.g. self-serving bias with regard to skills level).
Study implications
A contentious issue in SE, because of the newness of the concept, is that there are few institutional mechanisms in place to support this work (Johnson, 2000). Related to this issue of support is the question of training and capacity building for SE. If SE is defined as principally applying entrepreneurial and managerial skills to the non-profit sector, then these skills are fairly replicable. However, if SE is defined as a highly creative and innovative individual approach, replication will be much more difficult to achieve, and the focus would then be on developing conditions in which latent entrepreneurial talent could be harnessed for social purposes (Johnson, 2000). Moreover, social entrepreneurs based in the community are able to add value in ways that are often not possible through mainstream policies, i.e. through their closeness to the community and their perceived capacity for innovation that autocratic bureaucracies traditionally lack (Turner and Martin, 2005). As with mainstream entrepreneurship, SE activity is heavily influenced by access to training and modelling; by promoting SE as an alternative business model within schools, colleges, and universities, exposure and training could induce early-stage SEA. As construed in the literature, social entrepreneurs are community-centric and rely heavily on networks and support structures, such networks being easy and cheap to establish (Harding, 2006; Sharir and Lerner, 2006).
Since competencies can be nurtured, and since funding requests often require concomitant competencies to add value, a positive link exists between SE success and skills; training and development for SE should therefore be mandatory (such as the School for Social Entrepreneurs in the UK) (Sharir and Lerner, 2006). Perhaps particularly in SA, which is currently beset by social inequalities, social entrepreneurs should look for the most effective methods of serving their social mandate through funding and sponsoring the activities of community-based projects. By developing capacity through relevant interventions and partnerships, social entrepreneurs can add value and meet the needs of groups who have been failed by previous government attempts at social redress. However, government also has a role in fostering a culture of social enterprise, by raising awareness of social enterprises among students through education, and by disseminating information and providing resources to promote social entrepreneurship. Table I Descriptive statistics on variables Table II Factor structure for SE skills Table III ANOVA for faculty registration
- The findings were modest, particularly regarding the number of active and future social entrepreneurs. However, the validity and reliability of the instrument used to measure skills were established, offering insights into SEA and the types of skills associated with SE.
[SECTION: Value] As with any change-orientated activity, social entrepreneurship (SE) has not evolved in a vacuum, but rather within a complex framework of political, economic and social change occurring at global and local levels (Johnson, 2000; Kramer, 2005; Harding, 2006). The contribution of social entrepreneurs is being increasingly celebrated, as was witnessed at the World Economic Forum's (2006) Conference on Africa in Cape Town recently. Similarly, Warren Buffett's $30.7 billion donation to the Bill & Melinda Gates Foundation (Cole, 2006) demonstrates that venture philanthropy represents a significant change in how people think about transferring wealth. SE has evolved into the mainstream after years of marginalisation on the edges of the non-profit sector. Venture philanthropists, grant sponsors, boards of directors, non-profit entrepreneurs, consultants and academics are now all interested in the field of SE (Boschee, 2001; Kramer, 2005; Frumkin, 2006). Over the last decade, a critical mass of foundations, academics, non-profit organisations, and self-identified social entrepreneurs have emerged and SE has become a distinct discipline (Kramer, 2005; Dees, 2001).Worldwide, policy makers are using the language of local capacity building as a strategy to assist impoverished communities in becoming self-reliant (Peredo and Chrisman, 2006). Exemplifying a growing trend for academic institutions to take this phenomenon seriously, many dedicated centres for SE have evolved, for example the Skoll Centre for Social Entrepreneurship at Oxford University, created by Jeff Skoll, whose mission is to advance systemic change for the benefit of communities around the world by investing in, connecting and celebrating social entrepreneurs (The Economist, 2006). 
Many similar institutions exist, and researchers (Dees, 2001; Christie and Honig, 2006) suggest that the time is certainly ripe for entrepreneurial approaches to social problems, since SE merges the passion of a social mission with business discipline, innovation, and determination (Jackson, 2006). Although not new in the commercial/business sector, corporate governance and corporate social responsibility (CSR) have gained unprecedented prominence in the modern corporation and are well documented in academic research and popular literature (Rossouw and Van Vuuren, 2004). In South Africa (SA), where SE remains an under-researched area, the importance of SE as a phenomenon in social life is critical; social entrepreneurs contribute to an economy by providing an alternative business model for firms to trade commercially in an environmentally and socially sustainable way. They also provide an alternative delivery system for public services such as health, education, housing and community support (Harding, 2006, p. 10). Moreover, social entrepreneurs are also seen as a growing source of solutions to issues that currently plague society, such as poverty, crime and abuse (Schuyler, 1998). Social entrepreneurs provide solutions to social, employment and economic problems where traditional market or public approaches fail (Jeffs, 2006). Yet despite these achievements, government in SA appears reluctant to engage directly with SE endeavours, viewing social entrepreneurs as innately risky - and their activities as maverick endeavours. Driving forces of SE with relevance to SA Social problems are the central driver of SE (Austin et al., 2006), and driving forces for social entrepreneurs include:* politically, the devolution of social functions from national to local level and from public to private;* economically, the reduction of funding from the public purse; and* socially, problems of increasing complexity and magnitude (Lock, 2001, p.
1). In SA, SE has unequivocal application where traditional government initiatives are unable to satisfy the entire social deficit, where an effort to reduce dependency on social welfare/grants is currently being instituted, and where the survival of many non-governmental organisations (NGOs) is at stake. Such challenges are exacerbated by a social context characterised by massive inequalities in education and housing, the HIV/AIDS pandemic, and high unemployment and poverty rates (Rwigema and Venter, 2004). Accompanying these massive social deficits, many governmental and philanthropic efforts have fallen far short of expectations, with social sector institutions often viewed as inefficient, ineffective, and unresponsive. In particular, policymakers have limited guidance and recognise that the invisible hand frequently fails to assert itself in the most socially beneficial outcomes (Christie and Honig, 2006). Moreover, many poverty alleviation programmes have degenerated into global charity events rather than serving local needs, since most projects have been conceived and managed by development agencies rather than by members of the community, resulting in a lack of ownership on the part of the target beneficiaries (Peredo and Chrisman, 2006). Such failures suggest that there are many gaps in understanding SE activities under conditions of material poverty and in different cultural settings (Peredo and Chrisman, 2006). Consequently, the focus of this study is on delineating the SE construct through a focused literature review and identifying the factors necessary for successful SE to flourish. As in the international arena, due to a surge in the establishment of non-profit organisations, SE in SA has proliferated in recent decades, although as an academic enquiry SE is still emergent (Austin et al., 2006). Based on these limitations, a study exploring the nature of SE and how to practise it successfully seems justified, particularly within a non-Western context.
Once the various theoretical issues and debates that have made significant contributions to the evolution of SE theory and practice are scrutinised, the author will assume a position relative to these debates and then conduct empirical investigations. Broadly, the paper seeks to interrogate existing SE theory and then, relative to these existing controversies, analyse quantitatively students' intentions to engage in SE. Furthermore, the different types of SE activities are measured in conjunction with the level of entrepreneurial and managerial skills typically associated with successful social entrepreneurs. The rationale for focusing this study on SE skills can be found in the many instances where it is impossible to obtain start-up funds without demonstrating proof of concept together with the commensurate abilities required to execute such an initiative. Those who fund social entrepreneurs are looking to invest in people with a demonstrated ability to create change, and the factors that matter most are the financial, strategic, managerial, and innovative abilities of social entrepreneurs (Kramer, 2005). Therefore, an investigation into the mix of managerial and entrepreneurial skills associated with successful SE is crucially important to this study. The following sections focus on defining and operationalising SE, and then on identifying and surveying the different types of skills required for successful SE practice. In examining the SE construct, several definitions are investigated and their components analysed. As used in social sciences research, a construct is an idea specifically invented for theory-building purposes; a construct also combines simpler concepts, especially when the idea is not directly observable and is complex to measure (Cooper and Emory, 1995).
To a large extent SE embodies such tendencies, where social entrepreneurs are reformers and revolutionaries, as described by Schumpeter (1934), but with a social mission; they effect fundamental change in the way things are done in the social sector (Dees, 1998). Social entrepreneurs are perceived as heading mission-based businesses rather than operating as charities. These entrepreneurs seek to create systemic changes and sustainable improvements, and they take risks on behalf of the people their organisation serves (Brinckerhoff, 2000). Though they may act locally, their actions have the potential to stimulate global improvements in various fields, whether in education, health care, economic development, the environment, the arts, or any other social field (Dees, 1998). The language of social entrepreneurship may be new, but the phenomenon is not. Peter Drucker (1979, p. 453) introduced the concept of social enterprise when he advocated that even the most private of private enterprises is an organ of society and serves a social function. He also advocated a need for the social sector, in addition to the private sector of business and the public sector of government, to satisfy social needs and provide a sense of citizenship and community. Similarly, Spear (2004) poses the question of whether SE is about creating social enterprise or is more concerned with those particular aspects of entrepreneurship that have a social dimension. Based on the Global Entrepreneurship Monitor (GEM) report, SE is defined as follows: Social entrepreneurship is any attempt at new social enterprise activity or new enterprise creation, such as self-employment, a new enterprise, or the expansion of an existing social enterprise by an individual, teams of individuals or an established social enterprise, with social or community goals as its base and where the profit is invested in the activity or venture itself rather than returned to investors (Harding, 2006, p.
5).Subscribing to the precept that "Social entrepreneurs are one species in the genus entrepreneur", Dees (2001, pp. 2-4) sees social entrepreneurs playing the role of change agents in the social sector, by adopting a mission to create and sustain social value (not just private value), by recognising and relentlessly pursuing new opportunities to serve that mission, engaging in a process of continuous innovation, adaptation and learning, acting boldly without being limited by resources currently at hand, and exhibiting greater accountability to the constituencies served and for the outcomes created.Each element in these definitions is based on the body of entrepreneurship research and this is the core of what distinguishes social entrepreneurs from business entrepreneurs, even from socially responsible businesses. It is also worth noting that these definitions, primarily individualistic in their conception, fail to adequately acknowledge a collective form of entrepreneurship. Indeed, scholars now highlight the importance of recognising entrepreneurship as building on a collective process of learning and innovation (Peredo and Chrisman, 2006). Peredo and Chrisman (2006, p. 310) developed the concept of community-based enterprise (CBE), which they define as a community acting corporately as both entrepreneur and enterprise in pursuit of the common good. Documented cases of CBE include the Mondragon Corporation Cooperative in Spain (Morrison, 1991). Moreover, some of the oldest and some of the most modern social enterprises are co-operatives. A co-operative is defined as an autonomous association of voluntarily united persons who meet their common social, economic and cultural needs through a jointly owned and democratically controlled enterprise. It is estimated that there are about 800 million co-operative members around the world. 
The Social Enterprise Coalition is an example of this type of co-operative (Cabinet Office, 2007). Such views resonate with Cooper and Denner's (1998) perspective of culture as capital - a theory of social capital, which refers to the relationships and networks from which individuals are able to derive institutional support. Social capital is cumulative, leads to benefits in the social world, and can be converted into other forms of capital. Based on these collective propositions, SE can be viewed as a process that serves as a catalyst for social change and varies according to socio-economic and cultural environments. Combining insights from sociology, political science, and organisational theory, Mair and Marti (2006) propose the concept of embeddedness to emphasise the importance of the continuous interaction between social entrepreneurs and the context in which they are embedded. This discussion is relevant to South Africa, which is largely characterised as a collectivist nation, and where a concept like Ubuntu (together with an element of high community involvement) is in conflict with individualism yet differs from collectivism, where the rights of the individual are subjugated to a common good. It is this collectively enabling approach that is essential for collective SE, which is more socially orientated, builds on strengths rather than dwelling on deficits, and encompasses socio-structural factors among the sources and remedies for human problems (Bandura, 1997). These perspectives are reinforced when Weerawardena and Mort (2006) advance the concept of SE through empirical research and find it to be a bounded multi-dimensional construct deeply rooted in an organisation's social mission and its drive for sustainability, which in turn is shaped by environmental dynamism.
Similarly, Giddens's (1998) view is that SE is the way to reconstruct welfare and build social partnerships between the public, social and business sectors by harnessing the dynamism of markets with a public interest focus. Consequently, profit is not the gauge of value creation, nor is customer satisfaction; rather, social impact is the gauge in SE. Social entrepreneurs look for a long-term social return on investment. Indeed, they are not simply driven by the perception of a social need or by their compassion; rather, they have a vision of how to achieve improvement and they are determined to achieve their vision (Dees, 2001; Bornstein, 1998). SE differences and similarities in meaning In general, based on established literature, the concept of SE remains poorly defined and its boundaries with other fields remain blurred (Mair and Marti, 2006). Conceptual differences are noticeable in definitions of social entrepreneurship (focus on process or behaviour), social entrepreneurs (focus on the founder of the initiative), and social enterprise (focus on the tangible outcome of SE). Peredo and McLean (2006) propose that one could easily ask what makes SE social, and what makes it entrepreneurship. Research on SE is clearly based on the knowledge base of entrepreneurship, and any definition of SE is shaped by the prevailing findings of entrepreneurship theory and practice. Although it is beyond the scope of this article to expound on the field of entrepreneurship, a contemporary definition is provided, which views the field of entrepreneurship as a "scholarly examination of how, by whom, and to what effect opportunities for creating future goods and services are discovered, evaluated and exploited" (Shane and Venkataraman, 2001, p. 218). The social element in definitions is often used to differentiate SE from commercial entrepreneurship, with the altruistic motive associated with SE and the profit motive with commercial entrepreneurship.
However, Mair and Marti (2006) argue that such a dichotomy is incorrect, since SE, although based on ethical and moral issues, could include less altruistic reasons such as personal fulfilment and the creation of fresh markets and new jobs. Correspondingly, commercial entrepreneurship also comprises a social aspect, as previously mentioned in terms of CSR. Rather than profit versus non-profit, Mair and Marti (2006) suggest that the main difference between business and social entrepreneurship lies in the relative priority given to social wealth creation versus economic wealth creation. Similarly, Peredo and McLean (2006) interpret a range of social entrepreneurs along a continuum of possibilities, ranging from social benefit being merely one outcome of a firm's activities to social goals being the firm's only requirement. Such conceptualisations reflect the absence of defined boundaries for the SE phenomenon. An additional difficulty in defining SE is differentiating the small-scale, often voluntary or charitable work done by individuals making a social difference from the social entrepreneur who establishes a high-turnover social enterprise (Harding, 2006). Because of their structure and constitution, social entrepreneurs are able to serve a triple bottom line, achieving profitability, societal impact and environmental sustainability simultaneously (Harding, 2006). Theoretical conclusions A summary of the SE academic literature suggests that a number of themes, preoccupations and domains have emerged (Weerawardena and Mort, 2006), which may generally comprise:* SE being expressed in a vast array of economic, educational, welfare and social activities, reflecting such diverse activities;* SE being conceptualised in a number of contexts, i.e.
public sector, community, social action organisations and charities; and* the role of innovativeness, proactiveness and risk-taking in SE being emphasised, distinguishing SE from other forms of community work. In order to draw some conclusions from these varying definitional controversies, an attempt is made to take a position in relation to such debates, which allows for further interpretation and analysis. Existing theory has revealed a commonality across all definitions of SE: the underlying drive for social entrepreneurs is to create social value, rather than personal and shareholder wealth, and the activity is characterised by innovation, or the creation of something new, rather than simply the replication of existing enterprises or practices. In concordance with other SE reports (Harding, 2006, p. 5), it is argued that the SE definition must reflect two critical features of a social as opposed to a mainstream enterprise: 1. the project has social goals rather than profit objectives; and 2. revenue is used to support social goals instead of shareholder returns. The definition of sustainability within the context of the non-profit sector is quite different from that of the for-profit sector, with the advocacy of sustainability versus stability being contentious in view of organisations having sustainable finances but no community support, and therefore probably not being sustainable (Johnson, 2003). Due to the exploratory nature of the study, and since specific associations are predicted between the variables under study, hypotheses are formulated. These hypotheses are based on SE definitional controversies and on distinctions between the managerial and entrepreneurial skills typically associated with successful SE. The economic value of entrepreneurial ability acquired through education can be identified in the work of Schultz (1980), who recognised that the returns that actually accrue to education are substantially undervalued.
Despite early notions that entrepreneurship is an innate skill, recent studies (e.g. Fayolle et al., 2005) indicate that entrepreneurship education influences both current behaviour and future intentions. Identifying business opportunities and having confidence in personal skills to establish a business may be enhanced through education and training, with evidence suggesting that those with more education are more likely to pursue opportunities for entrepreneurship (high-growth ventures) (Gibb, 2000). In developing a body of theory on SE, Austin et al. (2006) highlight the differences between social and commercial entrepreneurship and, based on a prevailing commercial model, explore new parameters when applied to SE. Although this distinction clearly overlaps with the previously highlighted differences between social goals and profit, it can be interpreted that the distinction between social and commercial entrepreneurship is not dichotomous, but better conceptualised as a continuum ranging from purely social to purely economic. Some key differences that emerge from case examples (Austin et al., 2006) are:* SE focuses on serving basic, long-standing needs more effectively through innovative approaches, whereas commercial entrepreneurship tends to focus on breakthroughs and new needs.* The context of SE differs from commercial entrepreneurship in the way that the interaction between a social venture's mission statement and performance measurement systems influences entrepreneurial behaviour (quantification of social impact is difficult).* The nature of the human and financial resources for SE differs in some key respects because of difficulties in resource mobilisation. New analysis (Rouse and Jayawarna, 2006) of the policy options available for improving finance to disadvantaged groups and obtaining social inclusion is pivotal to understanding SE. Similarly, Thompson et al.
(2000) distinguish between social entrepreneurs and managers, the former being catalysts for entrepreneurial projects, while the latter are critical for seeing initiatives through. Additionally, differences exist between non-profit and for-profit social entrepreneurs, particularly where the advantages of collective wisdom versus personal skills are concerned, and where the focus is on long-term capacity versus short-term financial gain. Since the focus of this study is on the creation of social value through innovation, it is recognised that the mix of managerial competencies appropriate to successful SE may differ in significant ways from the mix relevant to success in entrepreneurship excluding the social component (Peredo and McLean, 2006). Because of this distinction, a definition of an entrepreneurial competency/skill is offered: An entrepreneurial competency consists of a combination of skills, knowledge and resources that distinguishes entrepreneurs from their competitors (Fiet, 2000, p. 107). Several emergent themes of SE competencies arise from in-depth case study interviews (Thompson, 2002; Weerawardena and Mort, 2006): networking, people management, fund raising, mentoring, business training, environmental dynamics, innovativeness, proactiveness, risk management, sustainability, social mission, and opportunity recognition. Through case study exploration of sensemaking, Mills and Pawson (2006) raise questions about the centrality of the notion of risk in new-start entrepreneurs' rationales for the enterprise development decisions they make. Additionally, Thompson (2002) uses an SE map to identify four central themes, which are: 1. job creation; 2. utilisation of buildings; 3. volunteer support; and 4. a focus on helping people in need. Similarly, Brinckerhoff (2001) provides an SE readiness checklist incorporating the areas of mission, risk, systems, skills, space, and finance.
Based on these skills, it seems the ability to develop a network of relationships is a hallmark of visionary social entrepreneurs, as is the ability to communicate an inspiring vision for motivating staff, partners, and volunteers (Thompson et al., 2000). Orloff (2002) identifies one element as key to both the emergence of a social venture partnership and its continued success - leadership, i.e. the right person heading up the organisation. Lock's (2001) report on strategic alliances between non-profit and for-profit organisations reflects the following criteria as key to the success of the programme:* a real and tangible mission and vision;* reliability and commitment of partners;* trust between the partners;* setting aside competitiveness for funding purposes; and* power-based action plans. Similarly, in identifying factors contributing to SE success, Sharir and Lerner (2006) demonstrate that eight variables contribute to success, arranged in order of their value: 1. the entrepreneur's social network; 2. total dedication to the venture's success; 3. the capital base at the establishment stage; 4. acceptance of the idea in public discourse; 5. the composition of the venturing team (salaried versus volunteer workers); 6. forming long-term collaborations within the public and non-profit sectors; 7. the ability of the service to pass the market test; and 8. the entrepreneur's previous managerial experience. These findings should also be read in conjunction with the type of enterprise on which a social entrepreneur embarks, and are likely to be a function of the skills, trades, and resources available within the community (Peredo and Chrisman, 2006). The notion of stakeholder engagement is taken further by Fuller-Love et al. (2006), where a scenario analysis exercise enabled key stakeholders to confront and deal with considerable uncertainties by developing a shared understanding of the barriers to small firm growth and rural economic regeneration.
Furthermore, the start-up and success of social entrepreneurs may alter how the feasibility of engaging in entrepreneurship is gauged, and how the success of one venture increases perceptions of the acceptability and desirability of other social initiatives. Many social entrepreneurs find that lessons accumulated from the pioneers in the field are invaluable for future success, and consequently many prescriptions are offered (Boschee, 2001; Fernsler, 2006; Emerson, 1997; Brinckerhoff, 2001), some of which are:* earned income is paramount;* practise organised abandonment (focus efforts and resources);* unrelated business activities are dangerous;* recognise the difference between innovators, entrepreneurs, and managers;* prevent the non-profit culture from becoming an obstacle (take risks, relinquish control);* emphasise customer service/anticipate the need for large amounts of start-up capital; and* conduct market and pricing research/pay a good wage. Consolidation of these theoretical issues, together with the prescriptions offered for successful SE practices, demonstrates that the underlying drive for social entrepreneurs is creating social value. This activity is characterised by innovation or the creation of something new, using a mix of managerial and entrepreneurial skills. The following hypotheses are subsequently formulated and statistically tested for significance: H1. Social entrepreneurship is best exemplified through a mix of skills which reflect distinct factor structures in terms of entrepreneurial and managerial competencies. H2. There are significant differences between respondents who are currently starting/involved with or managing a social enterprise, and those who are not. Extending the SE construct, it seems reasonable to assess the prospective social entrepreneur's capacity for practising SE with a modified skills instrument, as gleaned from the literature.
The justification for using a positivist approach to establish a skill set, rather than relying on a qualitative methodology, is supported by previous investigations (Turner and Martin, 2005; Chell et al., 2005). Analysing non-quantified data on several variables from many cases is often described as beyond the cognitive and affective limits of most researchers (Davidsson, 2004). It could further be argued that applying formal measurement and statistical analysis to the different skills levels cannot truly be deemed a positivist approach. Nothing in the nature of these data would prevent deeper speculations and insights from emerging when analysed; moreover, published research is full of exploratory findings and the use of techniques - such as factor analysis - that a true positivist would deem unscientific (Davidsson, 2004). A mechanism for measuring SE, the social entrepreneurship activity (SEA) index, as conceptualised in the UK Global Entrepreneurship Monitor (GEM) 2005 report (Harding et al., 2005), has been adapted for the purpose of this study to measure students' SE intentions. An intention is a representation of a future course of action to be performed (Ajzen, 1991); it is not simply an expectation of future actions but a proactive commitment to bringing them about. Intentions and actions are different aspects of a functional relationship separated in time. Intentions centre round plans of action. In the absence of intention, action is unlikely to occur (Bandura, 2001). Additionally, two questions pertaining to the respondents' involvement in or inclination towards trying to start/manage any kind of social, voluntary or community service, activity or initiative were posed as yes-or-no questions. Moreover, an instrument was designed to measure the typical skills associated with successful social entrepreneurs. This skill set, which was initially investigated through qualitative case studies and lessons learnt in successful SE practices (e.g.
Thompson, 2002; Weerawardena and Mort, 2006), was further validated through quantitative factor analysis. Hence, for the SE skills instrument, several competency/skill items were measured on a five-point Likert scale, constituting a mix of entrepreneurial and managerial skills. Pilot testing was used to detect weaknesses in the instrument. Based on the recommendations for the correct size of a pilot group, i.e. 25-100 respondents, not necessarily statistically selected (Cooper and Emory, 1995), the instrument was pre-tested on colleagues (n=5) and actual respondents (n=30) for further refinement. The questionnaire's length, the instructions to respondents, and anonymity were all considered in the final questionnaire design in order to generate a high response rate (Cooper and Emory, 1995). Notwithstanding these precautions, due to the exploratory nature of the study, the importance of the validity and reliability of these measures was considered and factor analysis was employed. Sampling In terms of sampling, the objective was to use students rather than the general population. Student populations add control and homogeneity, and individuals who are studying have been identified as more likely to have an interest in pursuing SE (Harding et al., 2005). Respondents in this group possess the talent, interest and energy to become the next generation of social and civic leaders (Canadian Centre for Social Entrepreneurship, 2001, p. 9). Based on the indicators constituting social entrepreneurship, it was decided to target university students from various faculties, at different levels of study (undergraduate to postgraduate) and of various ethnic backgrounds, in order to obtain representativeness of a typical student population.
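The internal-consistency check applied to such a Likert-scale instrument (Cronbach's alpha, reported in the results) follows a standard formula; the sketch below is a generic illustration using invented responses, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert
    scores: alpha = k/(k-1) * (1 - sum of item variances / variance of
    the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented 5-point Likert responses (rows = respondents, columns = items)
likert = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
alpha = cronbach_alpha(likert)
print(round(alpha, 3))  # 0.918 for this invented sample
```

By convention, values of about 0.7 and above are taken as acceptable reliability, consistent with the alphas reported later for the two factors.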
As a matter of practicality, the instrument was distributed to students of various faculties in a classroom setting, which allowed the researcher to maintain control over the environment and ensured that a high response rate was achieved (n=287). A judgmental sampling approach was used to represent sample characteristics of respondents most likely to be social entrepreneurs. Hemmasi and Hoelscher (2005) consider the common practice of using university students as proxies for entrepreneurs to be convincing. They find that a student sample strongly resembles actual entrepreneurs, provided that it has high entrepreneurial potential. This notion was extended to include social entrepreneurs. Sample characteristics The sample characteristics (see Table I) are reflected in percentages in terms of: gender, male (48.8 per cent) and female (49.7 per cent); age group (17-20: 64.6 per cent; 21-24: 30.6 per cent); education (those who had completed matric and were undergraduate students: 94.1 per cent); and faculty of registration, predominantly Engineering and the Built Environment (33.3 per cent), followed by Management (24.7 per cent), Art and Design (20.3 per cent) and Economic and Financial Sciences (15.5 per cent), with negligible participation from the Health and Sciences faculties. In terms of ethnicity, respondents categorised themselves as Black Africans (81.8 per cent), Caucasians/Whites (9.6 per cent), Asians (4.8 per cent), or Coloured South Africans (2.1 per cent). Although such ethnic/racial distributions are not typical of all university populations in South Africa, these categorisations are representative of the broader South African population demographics.
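Percentage breakdowns of this kind are simple category tallies over the n=287 responses; a minimal sketch (with invented responses, not the study's records) is:

```python
from collections import Counter

def percentage_breakdown(values):
    """Percentage share of each category, rounded to one decimal place."""
    counts = Counter(values)
    n = len(values)
    return {category: round(100 * c / n, 1) for category, c in counts.items()}

# Invented gender responses for a sample of n=287; a few non-responses
# illustrate why reported shares need not sum to exactly 100 per cent
responses = ["male"] * 140 + ["female"] * 143 + ["no answer"] * 4
print(percentage_breakdown(responses))
```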
In terms of the type of SEA the highest recorded category was religious activities (25.3 per cent), followed by sport (19 per cent), and education (12.8 per cent), with even distributions among other categories accounting for the balance.

Factor and reliability analysis
Kaiser-Meyer-Olkin's measure of sampling adequacy and Bartlett's test of sphericity were used to assess the suitability of the data for factor analysis, and the extraction method was based on principal axis factoring. Two factors were extracted in eight iterations, with eigenvalues of 4.799 and 1.287, which explained 39.9 per cent and 10.7 per cent of variance respectively. Referring to Table II, all items had factor loadings above 0.30, with items 12, 14 and 15 constituting factor 2 (named here as the core SE factor). Factor 1 (with a mix of entrepreneurial/managerial items) was represented by the majority of the remaining nine items. The two factors were correlated at 0.556. The Cronbach's alpha values for factors 1 and 2 were 0.836 and 0.712, respectively, with a composite factor alpha of 0.858. The validity and reliability of the instrument used to assess SE competencies were thus established, offering insights into the levels and mix of skills used by current and potential social entrepreneurs; specifically, the eclectic mix of managerial and entrepreneurial skills is reaffirmed as being necessary for practising SE. Partial support for H1 is offered by the two factors that were obtained, which were consequently named the core skill set (factor 2) and the entrepreneurial/managerial skill set (factor 1). It could be argued that the three items representing factor 2 - fund raising, administering the project and visionary leadership - are critical to and constitute the core of any type of SE engagement.
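The internal-consistency figure reported here can be reproduced on any item-response matrix. As a minimal sketch, using synthetic five-point Likert responses rather than the study's dataset, Cronbach's alpha is computed directly from the item variances and the variance of the summed scale:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic five-point Likert responses (illustrative only, not the study data):
# three items driven by one shared latent trait, as a one-factor scale would be.
rng = np.random.default_rng(0)
latent = rng.normal(3.0, 1.0, size=(287, 1))
scores = np.clip(np.round(latent + rng.normal(0.0, 0.8, size=(287, 3))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

The KMO measure, Bartlett's test and principal axis factoring used in the paper are not shown here; in Python they are commonly run with the third-party factor_analyzer package rather than computed by hand.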
Factor 1, comprising the majority of the items used to measure SE skills - a mix of nine managerial and entrepreneurial items - indicates that individuals displaying both sets of skills may regard themselves as being more efficacious than when relying on only one set of skills. Based on the empirical results, both factors also registered relatively high mean scores. In accordance with the statistical findings (Turner and Martin, 2005), it is apparent that although the SE construct is naturally focused on distinct entrepreneurial competencies, these skills are complemented by more traditional management skills. These two sets of skills are not perceived to be mutually exclusive, as both sets are required for successful SE. Complementary competencies are a key determinant of successful SE (Turner and Martin, 2005). A purely managerial approach, without reference to entrepreneurship skills, would have counteracted the purposes of practising SE, as conceptualised for this study.

Mean score analysis
Descriptive statistics were calculated for the two first-order factors and the one second-order factor (see Table I). Based on the initial descriptives, it was established that the age, level of education, and ethnic group categories were skewed, falling predominantly into one category, and they were accordingly excluded from inferential testing. The mean scores for both factors are relatively high, i.e.
above the midpoint on the 1-5 Likert scale.

Inferential statistical testing
Where mean scores on separate factors were calculated, the following test results were analysed:

* gender - the independent samples t-test procedure was carried out, with no significant differences detected at the 0.05 level;
* concerning current or future SEA, no significant differences at the 0.05 level were detected; and
* regarding the faculty of registration, using ANOVA (see Table III), for factor 1 there was a 0.004 (0.4 per cent) probability of obtaining an F value of 4.506 if no differences among group means existed in the population: since this probability does not exceed the 0.05 level, one can conclude that there are significant differences for factor 1 relating to type of faculty.

Hence, in determining which specific faculties differed on SEA, a more stringent test - the multiple comparison Scheffe test - was calculated with factor 1 and the final factor solution as dependent variables. There was a difference between specific faculties: engineering and built environment (EBE) versus economic and financial sciences (EFS) (factor 1: p=0.010; factor 2: p=0.022). For H2, in relation to differences between factors among the study variables, the only statistically significant differences were found between types of faculties, with the EFS faculty having significantly higher scores on both factors.
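The one-way ANOVA decision rule described above (compare the observed p-value with the 0.05 level) can be sketched with scipy. The faculty names, group sizes and means below are illustrative assumptions, not the study data, and the Scheffe post-hoc comparison is not part of scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic factor-1 mean scores by faculty (illustrative sizes and means,
# not the study data): EFS is given a slightly higher mean, echoing H2.
ebe = rng.normal(3.6, 0.5, 90)    # Engineering and the Built Environment
efs = rng.normal(3.9, 0.5, 45)    # Economic and Financial Sciences
mgmt = rng.normal(3.7, 0.5, 70)   # Management

f_stat, p_value = stats.f_oneway(ebe, efs, mgmt)
# Decision rule used in the paper: reject equal group means at the 0.05
# level when the observed probability (the p-value) falls below 0.05.
print(f"F = {f_stat:.3f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```

A significant omnibus F, as in the paper's F = 4.506 with p = 0.004, only says that some group means differ; a post-hoc test such as Scheffe's is then needed to identify which pairs of faculties differ.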
It is plausible that the sample of respondents from the EFS faculty consider their abilities in financial and managerial matters to be more advanced than those of other faculties' members, probably as a result of exposure to discourse similar to that used in SE. In relation to previous studies, these findings also contradict the order and priority of variables: networking shows a much lower correlation than stipulated in Sharir and Lerner's (2006) findings, while being innovative, focused and conducting research are higher-order priorities. These differences imply a different skill set arrangement, with commensurate differences in educational and training priorities that target fostering innovation and employing research principles, amongst other skills. Although the results are modest, particularly in the number of active and future social entrepreneurs, they are not trivial. Such findings are reasonably good for exploratory research in a new domain such as SE in SA. This is a first in SA, where SEA was empirically derived and associated competencies were measured. The results indicate that the profiles of the individuals represented in this sample are typical of potential social entrepreneurs, with previous studies confirming that such respondents are likely to have an interest in pursuing SE. Although any generalisations would mar the rigour of the analysis undertaken, it is tempting to categorise SEA in SA as generally low.

Comparatively, as conceptualised for the UK GEM report and for this paper, SEA does not measure all socially motivated enterprise activity, but rather provides an indication of the propensity of particular groups to strive towards entrepreneurial rather than economic means. In the UK the SEA rate comprises 3.2 per cent of the adult population, which is directly comparable with the total entrepreneurial activity (TEA) rate of 6.2 per cent.
The UK sample results indicate that 24.4 per cent of the sample is currently trying to start a venture; however the majority, 73.9 per cent, answered "no" to any such inclination towards SE initiatives. In this study, 17.2 per cent of respondents indicated "yes" to current involvement with SEA, and 81.4 per cent answered "no" to such involvement. However, a meaningful comparison between these rates is not entirely appropriate, as the UK GEM 2006 uses survey data from a randomly stratified sample of 27,296 18-64-year-olds. Based on this study's exclusive sample, the results indicate that students as a group are likely to be engaged in SE, although no comparison with any non-student population groups was made. In the UK, some 5 per cent of the student population are social entrepreneurs, compared with 3.5 per cent of those in full-time employment (Harding et al., 2005). This indicates that younger people are more likely to be involved in social initiatives; the highest SEA rate of 3.9 per cent is in the 18-24 age group, compared to 2.7 per cent in the 25-34 age group, with significant differences between the youngest and oldest age groups. Education is also a predictor of the propensity to be a social entrepreneur in the UK, with 5.5 per cent of people with postgraduate qualifications socially active compared with 2.4 per cent of those who only have undergraduate qualifications. Ethnic group differences were also reported in the UK GEM, where non-white groups (5 per cent) are more likely to be social entrepreneurs than their white counterparts (3 per cent) (Harding, 2006, p. 15). As the findings indicate, the prevalence of SE is more widespread amongst younger people with education and those who are labour market inactive, characteristics of the study sample.

Conceptual integrations
Based on the existing literature, it seems that there has recently been an upsurge in SEA, driven by changes in the competitive environment.
Presently, non-profit organisations are operating in a highly competitive environment characterised by tighter financial restrictions, with several organisations vying for the same donor funds (Weerawardena and Mort, 2006). Currently, the non-profit sector is facing intensifying demands for improved effectiveness and sustainability in light of diminishing funding from traditional sources. Moreover, the increasing concentration of wealth in the private sector is prompting calls for greater social responsibility and more proactive responses to complex social problems (Johnson, 2000, p. 1). Internationally, the SE situation is much the same, with non-governmental developmental organisations (NGDOs) working in developing countries noted for their role as primarily providing subsidies on behalf of global donors, and creating the circumstances for "patronage, dependency, pathological institutional behaviour and financial malpractice" (Johnson, 2000, p. 3). What may be called a beggar mentality has emerged in many communities where there have been massive aid interventions (Peredo and Chrisman, 2006, p. 311). Established research indicates a wide range of both entrepreneurial and managerial skills, with significant overlaps, as being necessary for successful SE. Like business entrepreneurs, social entrepreneurs initiate and implement innovative programmes; although they are differently motivated, the challenges they face during start-ups are similar to those faced by business entrepreneurs (Sharir and Lerner, 2006). The commercial entrepreneur thrives on innovation, competition and profit, whereas the social entrepreneur prospers on innovation and inclusiveness for changing the systems and patterns of societies (Jeffs, 2006). Moreover, a core set of skills seems indispensable for undertaking SE, even though a large number of elements play a role in SE, i.e.
local culture, community management practices, previous occupational or technical skills, and perceptions of the macroeconomic, legal, social, and political environments (Peredo and Chrisman, 2006).

Challenges for social entrepreneurs
Social entrepreneurs and philanthropic efforts are not exempt from criticism, and widespread flaws are evident in their fundamentals. This specifically refers to unjustifiably high administration costs, which remain unremedied to this day (The Economist, 2006). Little effort has been devoted to measuring results involving the double bottom line (financial and social performance) or the triple bottom line (financial, social and environmental), while the vague and undefined goals of empowering people or changing lives, being readily susceptible to statistical manipulation, further obfuscate the outputs of SEA. Cook et al. (2003, p. 64) highlight the false premises and dangerous precedents and standards for SE when they argue that using a private entrepreneurial model in pursuing social justice aims, which cannot be valued in the market, is likely to violate the case for market efficacy. Hence the difficulty social entrepreneurs experience becomes apparent when balancing resource allocation between profit-making and welfare-providing activities. In fact, it could be argued that it is undesirable to implement a welfare system in which the beneficiaries are subject to the vagaries of the entrepreneurial model (Seelos and Mair, 2005). Recent research (Madden and Scaife, 2006) has identified key barriers to SE community engagement, among which are overwhelming requests and choice of viable options, lack of formal processes for handling requests, and lack of vision for community engagement - all of which are also highly relevant in explaining the low SEA rate reported in the findings.

Study limitations
The study is limited by the early stage of theoretical development of the SE construct, and by any related measures.
Moreover, the research is limited by the restricted sampling frame. By using students, the psychological diversity of the general population is possibly underestimated, even though SEA is predominant among student populations. Since survey data were self-reported, the study is also susceptible to bias (e.g. self-serving bias with regard to skills level).

Study implications
A contentious issue in SE, because of the newness of the concept, is that there are few institutional mechanisms in place to support this work (Johnson, 2000). Related to this issue of support is the question of training and capacity building for SE: if SE is defined as principally applying entrepreneurial and managerial skills to the non-profit sector, then these skills are fairly replicable. However, if SE is defined as a highly creative and innovative individual approach, replication will be much more difficult to achieve, and the focus would then be on developing conditions in which latent entrepreneurial talent could be harnessed for social purposes (Johnson, 2000). Moreover, social entrepreneurs based in the community are able to add value in ways that are often not possible through mainstream policies, i.e. through their closeness to the community and their perceived capacity for innovation that autocratic bureaucracies traditionally do not have (Turner and Martin, 2005). As with mainstream entrepreneurship, SE activity is heavily influenced by access to training and modelling; by promoting SE as an alternative business model within schools, colleges, and universities, exposure and training could induce early-stage SEA. As construed in the literature, social entrepreneurs are community-centric and rely heavily on networks and support structures, such networks being easy and cheap to establish (Harding, 2006; Sharir and Lerner, 2006).
Since competencies can be nurtured, and since funding requests often require concomitant competencies to add value, there is a positive link between SE success and skills; training and development for SE should therefore be mandatory (as with the school for social entrepreneurs in the UK) (Sharir and Lerner, 2006). Perhaps particularly in SA, which is currently beset by social inequalities, social entrepreneurs should look for the most effective methods of serving their social mandate through funding and sponsoring the activities of community-based projects. By developing capacity through relevant interventions and partnerships, social entrepreneurs can add value and meet the needs of groups who have been failed by previous government attempts at social redress. However, government also has a role in fostering a culture of social enterprise: by raising awareness of social enterprises among students through education, and by disseminating information and providing resources to promote social entrepreneurship.

Table I Descriptive statistics on variables
Table II Factor structure for SE skills
Table III ANOVA for faculty registration
- The study is limited by being in the early stage of theoretical development of the SE construct. The interpretation of the empirical findings, in understanding SE and the associated skills, may serve as a catalyst for this emerging and important activity in SA.
[SECTION: Purpose] Concern over the relatively high numbers of women managers leaving organizations has been growing. Many organizations have developed initiatives with the specific aim of supporting women's career progression to the higher echelons of corporate life, such as mentoring programmes and women's networks. The retention of valued talent is recognized as a priority and organizations strive to brand themselves as an employer of choice. Such strategies have had some success, and higher proportions of women are found in more senior positions, arguably having broken through the glass ceiling. Despite this progress, women continue to leave organizations in higher proportions than their male counterparts at senior levels, and there is little in the literature examining this phenomenon. This paper attempts to fill this gap. The aim of this paper is to explore discursively how women partners represent and describe their decisions to leave a professional services firm. This context is important in that the nature and structure of these services (usually project-based) place demands on senior staff, particularly partners, in return for high levels of extrinsic and some intrinsic rewards. The demands include not only high levels of professionalism but also extreme commitment, such as the ability to travel nationally and internationally on demand, to work around the clock at whichever premises whenever necessary, and to provide speedy and efficient solutions to clients' problems. Kumra and Vinnicombe (2008) provide an account of the nature of such firms. This study begins by considering the very limited literature on women leaving organizations, followed by an examination of the discourse of choice within the work-life balance literature. After outlining the methodology for our study of 31 women partners who have left a global management consultancy firm, we present empirical evidence from the women themselves.
We next discuss the implications of such evidence and make suggestions for further research in this important area. Despite the level of concern expressed in many organizations, and Belkin's (2003) "opt-out revolution" article, which discussed the push of job dissatisfaction and the pull of motherhood, a search revealed very little extant academic literature on senior women who have left organizations. Of course, the decision to leave an organization does not necessarily mean that women wish to permanently turn their backs on corporate life. Many women take career breaks at some stage in their careers, and Hewlett and Luce (2005) point out the ease with which women can "off-ramp" and the difficulties they face when planning to return to organizational life. The kaleidoscope career model (Mainiero and Sullivan, 2005) focuses on the "fit" of work and family, and offers an explanation for the large numbers of middle managers who leave corporate life. Women talked about opportunities and possibilities, as well as the blocks experienced, in creating their own paths, which provide challenge and allow specific needs to be met. Mallon and Cohen (2001) studied women's transitions from careers within organizations into self-employment, seeking a further understanding of how women themselves experience and make sense of changing careers. Dissatisfaction with organizational life, changes in organizations which contradicted personal values and principles, and an imbalance of personal and professional life were key factors in the decision to move to self-employment. Looking at more senior women leaving organizations, Marshall (1995, 2000) identified no single pattern, but emphasized the complex nature of such decisions, each of which is individual and multi-faceted. Factors identified by Marshall regarding women's decisions to leave employment included leaving changed roles that had become untenable, blocked promotion prospects and wanting a more balanced life.
One particular theme was their experience of difficulties with inappropriate and often hostile interpersonal behaviour, often for the first time in their careers. It was not that they could not cope, but that they did not respect or want to work in such unproductive environments. Several such factors contributed to individual decisions in accumulating and complex ways. In particular, the choice relating to work-life integration emerged from these studies as a key explanatory factor.

Work-life balance or work-life integration?
The language used to describe the integration of work and non-work domains reflects its socially constructed evolution. Research emphasis has moved from "conflict" through "seeking balance" to "integration" (Burke, 2004). Similarly, there has been a shift away from "work-family" or "family friendly" when referring to supportive organizational policies to "work-life", in order to remove the emphasis on parents, especially mothers. "Work-life" has also received criticism for its suggestion that work and life are somehow separate (Eikhof et al., 2007), rather than work being a part of life. The term "work/personal life integration" was offered by Rapoport et al. (2002), who seek to acknowledge the importance of individual priorities and choices with the use of the word "integration", rather than "balance". They suggest that balance indicates an equal split of time between the two domains, which is an unrealistic state of affairs, whereas integration focuses on a sense of satisfaction in both the work and non-work domains. But "integration" also suggests the blending together of work and personal life, and individuals do not always want to manage the two areas by merging them; some may prefer to keep them separate. Thus, authors have begun to refer to the harmonizing of work and the rest of life (Lewis and Cooper, 2005; Gambles et al., 2006) to indicate their interaction in positive ways.
The demands from the work and non-work domains are not absolute and cannot necessarily be easily measured. The demands vary, as do individual responses to such demands. It could be argued that people self-impose expectations with regard to the performance of both work responsibilities and household and other non-work obligations (Quick et al., 2004). Managing such expectations can enable an individual to cope with conflicting priorities, and Quick et al. place more emphasis on the importance of energy in a given situation than on the amount of time spent there. So the argument moves away from a sense of balance or equality of the different domains, and acknowledges the relevance of timely emotional engagement within each domain and the ability to focus on situational requirements. But this still suggests a large element of choice, whereas Caproni (2004) argues that the language used in the work-life balance debate adds to the pressures experienced by individuals who are seeking to achieve this elusive state of satisfaction with both work and non-work domains. She describes the conceptualization of work-life balance as individualistic and achievement oriented:

[...] setting us up to strive for one more thing that we cannot achieve and, in doing so, keeping us too focused, busy and tired to explore the consequences of our thinking and actions (Caproni, 2004, p. 212).

The family context
The continual working towards balance can also imply a greater choice over life decisions than often exists. For instance, care may have to be provided for children or for elderly parents, but the demands for such care are often unpredictable, due to combinations of circumstances, thus reducing the element of choice and control (Caproni, 2004).
Additionally, high-quality regular childcare for older children (6+), specifically after-school care, is more difficult to obtain than the more routine provision sought for pre-school children (Moore et al., 2007). A study which compared single women without children with married women with and without children found that all three groups experienced similar levels of difficulty in balancing work and non-work (Hamilton et al., 2006). This under-researched group of never-married women without children experienced greater pressure to take on additional tasks late in the evening or at weekends, precisely because they were viewed as having fewer family obligations than others (Anderson et al., 1994, cited in Hamilton et al., 2006). Similarly, work was described as "all-encompassing", leaving few resources for seeking activities outside the workplace (2006, p. 408). This study provides a different, yet important, perspective on the discussion of the choice experienced by these individuals in the decisions they make regarding work-life integration.

Negative or positive?
However, not everyone agrees that balance has been about seeking satisfaction in both domains. A different interpretation suggests that one of the flawed assumptions in the work-life balance debate is that work has been portrayed as negative and problematic, with individuals wanting to reduce the time spent at work as a result (Eikhof et al., 2007). These authors suggest that work-life balance programmes ignore the possibility that people may gain satisfaction and fulfillment from work, and state that a common, and inaccurate, premise for flexible working arrangements is that "work-life balance provisions are introduced to help employees reconcile what they want to do (care) with what they have to do (work)" (Eikhof et al., 2007, p. 327, brackets in the original). They argue that employees may want to work, and that the work-life balance debate tends to ignore this as a possibility.
However, others talk about positive spillover (Kirchmeyer, 1993) and the enrichment which takes place between work and family (Greenhaus and Powell, 2006).

Control and workplace flexibility discourses
Lewis et al. (2007, p. 361) argue that there are in fact two overlapping work-life balance discourses, "the personal control of time" and "workplace flexibility", with both including a dimension of choice. The former indicates that individuals are able to make their own decisions about the priorities in their lives around work, career, family and other aspects of life, paying little attention to the gendered assumptions about commitment and competence which underpin the concept of the ideal worker (Rapoport et al., 2002). Flexibility discourses emphasise the choice available to employees regarding where, when and how much to work, and again may not challenge the gendered constraints on the adoption of such flexible arrangements. These two discourses are evident in a study by Drew and Murtagh (2005). First, senior managers felt unable to control their work time in terms of working what might be considered "normal hours", because of the long-hours culture. Similarly, the flexibility discourse was highlighted, with flexitime and home-working arrangements seen as incompatible with senior management posts, particularly by the male managers.

Genuine choice or pressure of the "ideal worker"?
Additionally, women may view working as a financial necessity rather than a real choice (Houston and Marks, 2003), partly because of the huge effort needed to overcome the psychological and practical barriers in order to work.
This does not, of course, preclude the experiencing of some satisfaction as a result of working. Managers and professionals are particularly susceptible to the "ideal worker" norm and its view of domesticity, and to the subsequent doubt over their commitment to their employer and their career if they stray from that ideal by adopting a pattern of work which involves less face time. The ideal worker has historically been seen as someone who can give their time unstintingly and willingly to their employing organization, and who has no conflicting demands on their time (Rodgers and Rodgers, 1989; Pitt-Catsouphes et al., 2006). Alongside this is the assumption of the existence of another adult, based full-time in the home, to attend to domestic and caring responsibilities. In the twenty-first century, many families do not have such a structure of full-time breadwinner and full-time homemaker (Marks, 2006), and households may consist of different mixes of numbers of adults, ages and numbers of children or no children, and the presence or absence of elderly dependents (Ransome, 2007). Other literature focuses on the issue of choice in women's careers. For instance, Lyonette and Crompton (2008) discuss the choices women accountants make in their careers, finding some indication of women choosing to remain at the level below partner because of the increased demands and pressures which would result from such a promotion, including the adverse impact on family life. They point out that choices are made from a range of "realistic" possibilities, within the bounds of constraining factors, and for women, domestic responsibilities are examples of such constraints. Hence, the language in the work-life balance arena shapes the way in which constraints are framed, choices made and outcomes achieved.
The personal control of time and the ability to work flexibly are two discourses that contain both constraints and choices, but whilst personal control of time may be easier for some senior staff, the ability to work flexibly may not always be available to those at the top of some types of organization, especially client-based services. For women at partner level in professional service firms, at the peak of their careers, in the mature life stage, and probably having already acquired significant financial resources, one choice may be to exit or "off-ramp", rather than continue their professional career within their firm. This study examines what happened in the case of those women who made the choice to leave.

In this study, the discursive approach is viewed as an examination of how language (in the form of spoken interaction) is used to construct and change the social world, while challenging the accepted ways of looking at the world (Dick, 2004). We use Watson's (1995, p. 816) description of discourse as a:

[...] connected set of statements, concepts, terms and expressions, which constitutes a way of talking or writing about a particular issue, thus framing the way people understand and act with respect to that issue.

So it is a form of sense making, both on the part of the interviewees, who here make decisions about the information to include and to omit in their accounts of their decision to leave the firm (Potter and Wetherell, 1987), and also on the part of us, the interviewers and researchers. We examine the varied discursive constructions used by the participants to achieve their particular purposes, whether this is their portrayal of themselves as valued partners or of making sense of what is happening within the firm, particularly with regard to the number of women partners leaving. As Watson points out, these discursive constructions are similar to interpretative repertoires (Gilbert and Mulkay, 1984, cited in Edley, 2001, p. 197), which are "quite separate ways of talking about or constructing" a given topic. Similarly, Clarke et al. (2009) refer to "antagonistic discourses" where individuals present self-narratives which incorporate contrasting positions.

Following an approach from an international management consultancy firm, we sought interviews with women partners who had left the firm over the previous three years. Access was provided by the Head of Diversity, who contacted the 47 women leavers in that period to see if they would be willing to be interviewed. The project arose as a result of the women partners leaving in higher percentages than male partners, causing concern within the firm, not least because of a range of initiatives which had been introduced over the preceding years to support women's career progression. The 36 women who responded were contacted by the project manager to set up semi-structured interviews, and 31 female partners eventually participated, a response rate of 66 per cent. Only one interview was undertaken face-to-face in the UK, as most of the women were resident overseas. We therefore conducted the remaining interviews by telephone over a period of about a month, after piloting the method to ensure that an open and frank discussion would be possible. The interview schedule was e-mailed to the respondents the day before the interview. Interviews lasted about 45 minutes and were recorded with the permission of the interviewees. Anonymity was guaranteed, and raw data were not given to the firm, although a report was delivered including recommendations for good practice. The first step in the analysis involved the reading and re-reading of the transcripts by all three researchers, followed by a discussion to identify discursive patterns within the text.
The discursive approach to analysis requires familiarity with the data (Kelan, 2008), and this continued through the use of NVivo, organizing and categorizing the data and focusing on the similarities and differences in the ways participants talked about choice, the decisions made during their employment with the firm and their decisions to leave. Further discussion occurred, leading to a deeper understanding of the concepts invoked by the texts. In reporting the findings, some demographic detail will be supplied to provide context for the quotations used, but the aim of this paper is to understand how the women made sense of the choices they made, rather than to seek any connection between choices made and factors such as gender, parental status, etc.

The sample

The women ranged in age from 37 to 60, and two-thirds were married or with a partner. In total, 16 women had children under the age of 18; the other 15 either had no children or their children were grown up. Five women were currently at home full-time with children of varying ages, and a further four were retired. A total of 11 women were in paid employment, undertaking a variety of roles with differing levels of responsibility. Three women had their own businesses, and nine were involved with voluntary work, sometimes alongside paid work. Length of employment with the firm ranged from four to 29 years, and the women had left up to four years previously. Most recently, 21 of the women had worked in North America and the remaining ten had worked in Africa, Australia, Asia and Europe. The financial situation was a major factor for most of the women. About 18 of the 31 women had left three years before the interviews, at a time of general economic downturn, and many of these talked about the financial package that was made available to them.
For some, this allowed them to maintain their desired lifestyle without having to earn money in the future.

The following section identifies two contradictory discursive constructions: first, loyalty to the firm as a wonderful place to work; and second, a discourse of the lack of choice and control over lifestyle with regard to meeting priorities from both the work and non-work domains. It is important to note that these discourses were not separate but were enmeshed within the interviews. We will discuss these discourses, highlighting differing elements of the discourse of choice and lifestyle as we contrast it with the loyalty discourse.

Loyalty

This discourse emerged through representations of the women's positive experiences during their time with the firm and their statements of the high regard in which they held the firm. The previous limited research has highlighted the negative factors that combined to push women towards the decision to leave organizations. In this study, we identify resistance to criticizing the firm through the loyalty discourse. Sherri's talk illustrates the emphasis on the firm as a good place to work, despite reservations that may have occurred over time. She evidently felt that it was important to provide a context for her honesty about some of the more negative experiences that contributed towards her decision to leave:

I would say that in the grand scheme of things [ABC] is probably the best place to work. I really don't want comments taken out of context because I still stayed with [ABC] for 20 years. I really did check this a number of times in my career when I was low and thinking of leaving, and I looked at the options and the other organizations I could work with, etc. and I still believe that [ABC] is a great place to work (Sherri, no children).

Sherri refers to the ongoing choice she has made over a number of years to remain with the firm.
The loyalty discourse included a strong need for the interviewees to present their affection for the firm, indicating a sense of respect for the firm itself as an organization and also for the people who work there. Suzan's view represents this high regard for previous colleagues:

I have a high regard for the company and the people and everything it stands for (Suzan, children 12-18 years).

Part of the loyalty was evidenced through a strong sense of identity with the firm, and gratitude for the opportunities that had been made available and fully utilised:

The firm was a very big part of me for a very long time (Joya, no children).

I love [ABC], I really love [ABC] and I had a great career and I'm very thankful and grateful for what I learned and the people with whom I worked and what I achieved, it was a great, great career. I loved it (Megan, children 5-12 years).

Despite the eventual decision to leave, these women were keen to stress their successful careers in a highly regarded firm. Joya's comment in particular offers some explanation for the emphasis on loyalty. Her social identity was enmeshed with the reputation of the firm, and to acknowledge and allow criticism of the firm would therefore involve self-criticism, with which these successful women would not be comfortable.

Choice and lifestyle

Issues around choice and lifestyle emerged in three ways: choice in the desire for greater integration of work and non-work, choice within a context of constraints, and choice and the demanding role of partner. Our intention is to demonstrate the contradictory nature of the loyalty discourse alongside the significant sets of choice discourses on which the participants drew to explain their decision to leave the firm.

Choice in desire for greater integration

The choice discourse manifests through the representations of individual priorities when the women described the position of work within their whole lives.
There was no sense of seeking an equal division of time between the work and the non-work domains, and similarly the women did not talk of blending or integrating their work and personal lives. Aileen expressed her loyalty and affection for the firm: "ABC is a wonderful company and I think very highly of it", but she also explained how she had recently gone through a divorce and wanted to reduce her time away from home and her children at such a distressing time for them. However, she perceived herself to have no option other than to continue to work in the same way:

I felt [the senior partners] were, although nice, they were very unsympathetic to the situation and I just don't feel that there were any options there [...] It was kind of like you understand what the expectations are, you've always been a good performer, you know what it takes and either accept it or don't [...] There was no discussion about part time, it was just accept it; you're going to have to travel if you want to be there. There was just no sympathy at all (Aileen, children 12-18 years).

So the discourse of choice becomes polarized, and these women demonstrated their awareness of the extreme options they had to consider. Kim expressed her appreciation of the initiatives that had been introduced within the firm to support women's career progression:

I know everything the firm's done and we've done so much to help retain senior women [...]
We've done so many great things, so I want to give credit to all of that but every once in a while I just came to the realisation that you want to have this great career that requires a tremendous amount of time and commitment and you just can't balance everything sometimes (Kim, children under 12 years).

There was an increasing recognition by some of the women of the demands which family life placed on them, and Kim went on to explain the shift in her desires away from doing whatever was required by the firm:

The year prior to when I had my third child, I tried a part time schedule and that helped but I think once I was home with all three kids, getting involved with their lives more, over the year and a half that I was home on this leave of absence, it just became clearer and clearer to me that I wanted to be at home with them. The balance I had been trying to achieve was so difficult; I had to make a choice, so that's why I finally chose to resign this past January (Kim, children under 12 years).

There appeared to be a gradual acknowledgement that any sort of balance between the work and non-work domains was not possible. Alexis suggested that the only practical way to achieve such a balance was sequentially, prioritizing work or family at different times:

I came to the realization at one point because I was in that group of women who felt you could have it all and I ultimately came to the realization yes you can have it all but not all at once, you just have to take different stages of your life and I just had to come off the consulting career path for five years at important stages of my parenting (Alexis, grown up children).

Yet, she too presented an interesting contradictory perspective in her statement: "I think very highly of ABC and I found ABC to be very, very supportive of my time and participation in the firm".
So she had to step off the career path to attend to parenting priorities, and yet still spoke of the support she received, again illustrating the contradictory elements of this discourse. The difficulty in achieving any sort of compromise between the demands of work and home was echoed by Libby, who used particularly emotive language as she described the options which she had clearly rejected, but which would have enabled her to maintain her previous high levels of performance:

I wanted to come back to work, I loved my job, but I found no matter how much I tried I just couldn't be the same top performer that I was before I had my baby. Sure, I could have made the decision to outsource my family and get a full time live-in nanny and continue to work the same hours, but I didn't want to do that. I didn't want to sell my family off to somebody else (Libby, children under five years).

Libby provides an interesting contrast to many of the other women who engage in both the discourses of loyalty and of choice. Although she expresses enthusiasm, it is for her job rather than for the firm. Instead of loyalty and affection, there was a sense of resentment and anger at the lack of support and understanding, which effectively removed the option of staying with the firm.
It would have been at too great a cost. Similarly, Cassie explained the discrepancy between the wants and needs of junior colleagues and the lifestyle they observed of the existing partners:

I've seen too many associate partners leave because they've said to me I don't want that lifestyle, I need to have some time to have children and deal with elder care and whatever, so I think the firm really needs to address that because they are losing a lot of talent (Cassie, children under five years).

So there are some fundamental life choices within this discourse of choice, requiring women to consider their future and the investment of time needed to achieve their lifestyle goals:

Do I want to take this job or do I want to go and do the things I want to do, i.e. get married, have a family and all that? Now I'm not saying you can only have one or the other, but I was thinking that being involved in work at [ABC], I have forgotten about a lot of things in life, you know what I'm saying? (Aisah, children under five years).

Choice in a context of constraints

The choice discourse is presented paradoxically, as a spectrum with "all work" at one end and "all family" at the other. Despite the use of various strategies over time, the women experienced a feeling of being pushed into decisions they found problematic. However, there is also uncertainty about the way forward. Laura questioned the boundaries she would look to put in place if she were to return to a role similar to her previous one with the firm:

I have thought if a big project came up and [ABC] asked for me to come back, what would be the parameters that would be ideal with respect to that balance because it certainly does change at every point in time, every life change (Laura, children 12-18 years).

Freeing themselves up to spend more time with their children was a key part of the discourse for the mothers in the sample.
Women with children of differing ages had concluded that it was the right time to work and travel less, in order to spend more time with family. For instance, Kerry, who had "been with [ABC] for 20 years and had a wonderful career with [ABC] and had fantastic opportunities", went on to say:

I feel a little bit like I've missed out on the first ten to twelve years of their lives and I wanted to be more involved in their teenage years than I was in their toddler years and be available to them [...] Unfortunately in my professional role in [ABC] it was not possible to have that time and flexibility to get involved in their lives to the extent that I wanted to (Kerry, children under 12 years).

Similarly, Ellen explained:

I love [ABC]. [ABC] gave me the life I have right now [...] But being around for my two children is my big role right now [...] I want to be there for this part of their life, I don't want to miss my kids growing up (Ellen, children under 12 years).

So Kerry and Ellen have availed themselves of the only option they could see within ABC, that of leaving the firm that they regard so highly, as there is little support to enable them to meet their other priorities.

Choice and the demanding role of partner

In a different way, those without children talked of the importance of spouses/partners, families and friends in their lives, and of the challenge of managing the competing demands on their time.
The demanding nature of the partner role makes it difficult to allocate significant amounts of time to others without constant distractions from the office and/or the client, as Sherri described:

My parents are aging, they're not well, and I all of a sudden decided I absolutely had to spend more time with them and that's not just on weekends, it's going and spending large blocks of quality time with them when I can just focus on them and not be on the phone or on the computer and everything else back to the work site (Sherri, no children).

The issue of travel featured strongly in the choice discourse, as it was described by these women as an inherent part of the role of partner and was a major constraining factor in the choices they experienced. The partner role requires a great deal of travelling, taking all of them away from their families and friends on a continual and relentless basis. In the quotation below, Agnes emphasizes her developing competence at creating non-work time at weekends, but her inability to address the demands placed on her by the need to travel:

I still have an awful lot of respect for [ABC], but the travelling - everything else I could manage because the work hours I could manage. I tended to work quite a bit but I controlled it myself and I got better about not working weekends and so everything else I can manage but the travelling (Agnes, children under five years).

However, Kim explained that it is not just the travel, but the whole nature of the job and the expectations of senior partners and clients, even when working locally to one's home base, described as "an in town job".
Such a working arrangement added the additional pressure of expectations from the immediate family of a greater presence in the home, because of the assumption of more normal working hours:

The client comes first and that means whatever you need, whatever hours are required, it can be just as hard to be on an in town job, so even when there is no travel, but you've got to be at the client at 7 a.m. for a meeting or there's a crisis and you're there until 11. Sometimes I found being in town can be just as demanding because your family thinks you're at home, so why aren't you coming home for dinner, and eating with them? You're still spending all your time away (Kim, children under 12 years).

About half of these women had children under the age of 18, yet the issues of motherhood were not about coordinating childcare arrangements or dividing up parenting with a partner, but about wanting to spend time with children at varying stages in their lives. Several of those without children, or with grown up children, talked of wanting the opportunity to have broader experiences in their lives, mentioning wider family and friends. Unfortunately, many tended to see the issue as very clearly defined in terms of either their role within the firm or a life outside the firm. The women in this sample were not necessarily off-ramping but seeking to work in a way that gave them control over their lives, especially with respect to minimizing the amount of travel that took them away from home on a regular basis. Although there was evidence of the pull of motherhood mentioned by Belkin (2003), these women did not express dissatisfaction with organizational life per se. On the contrary, they stressed positive elements of much of their careers, expressing loyalty and commitment to the firm, which many still held in high regard.

As is common with discourses (Watson, 1995), two dominant themes co-existed in the accounts of these successful women leavers, i.e.
loyalty and choice regarding work-life integration. Yet the hegemonic positioning of the choice discourse meant that other alternatives were suppressed. The women showed loyalty and affection towards their firm, but this was not reciprocated in the form of some temporary control over their personal lives or some flexibility so that they could manage their non-work responsibilities less stressfully. The flexible working offerings for women lower down the hierarchy were much celebrated by the firm, but there was no flexibility in the partner tier. These women perceived that the firm was expecting all or nothing, and did not seem to recognize the paradoxical nature of their loyalty to the firm and their decision to leave. Although there were combinations of factors that contributed to the women's decisions to leave the firm, the focus of this paper has been the discourse of choice, with particular emphasis on lifestyle. There was little evidence of either balance, in the sense of division of time between the work and non-work domains, or of integration, the blending of work and personal life. The discourse presented was of a forced and extreme choice.

However, it is clear that the women partners had certainly gained satisfaction and fulfillment from their jobs over a long period of time. The issue was not one of "wanting to care" versus "having to work", supporting Eikhof et al.'s (2007) point that such a claim is inaccurate. The financial package available at the point of departure, and high levels of remuneration, often over a period of many years, enabled them to leave, given that they felt unable to continue to work within the firm because of the extreme demands of the partner role. The "personal control of time" and "workplace flexibility" (Lewis et al., 2007) were both absent, highlighting "work life imbalance". These women clearly did not fit the "ideal worker" norm.
Many were single or in a relationship with another full-time worker, hence lacking the "other" adult based in the home that the ideal worker needs in order to maintain their level of dedication to the job role. These women also had an increasing desire to attend to demands from outside the work domain. This study adds to what is known about the importance of family life for women, with many of those who were not mothers still experiencing a tension between the demands of day-to-day organizational life at partnership level and the need to give attention to extended family and friends. Importantly, this study uses data from women who left the same organization within a relatively brief period of time, providing unique data and valuable insights into what is known about women in senior positions who choose to leave and yet display tremendous loyalty and affection to the firm. These findings therefore strengthen the choice discourse, which serves to neutralize and suppress feelings of discontent over the constraints imposed by the firm's cultural expectations of those in the role of partner. This effectively removes from the organization the responsibility to facilitate the work-life integration of partners.

The study has several limitations. First, we only talked to women partners who had left the firm, and who left at a particular time of economic downturn when a significant financial package was available to them. Speaking to women partners who had maintained their partner role in the firm during the same time period would have produced a useful comparison.
Similarly, the inclusion of men in the sample would allow useful gender comparisons, and this provides an opportunity for future research. Understanding the factors that led these valued women partners to leave the organization, notwithstanding the initiatives and support that had facilitated their achievement of such positions, will allow the organization to review the demands placed on partners. The extreme expectations, particularly involving excessive travel and time away from the family, should be reviewed in the light of these findings if the firm wishes to stem the flow of women partners.
- This paper is based on the experiences of 31 women who have recently left partner roles within an international management consultancy firm. The purpose of this paper is to explore discursively their perceptions of choice within their decisions to leave.
[SECTION: Method] Concern over the relatively high numbers of women managers leaving organizations has been growing. Many organizations have developed initiatives with the specific aim of supporting women's career progression to the higher echelons of corporate life, such as mentoring programmes and women's networks. The retention of valued talent is recognized as a priority, and organizations strive to brand themselves as an employer of choice. Such strategies have had some success, and higher proportions of women are found in more senior positions, arguably having broken through the glass ceiling. Despite this progress, women continue to leave organizations in higher proportions than their male counterparts at senior levels, and there is little in the literature examining this phenomenon. This paper attempts to fill this gap.

The aim of this paper is to explore discursively how women partners represent and describe their decisions to leave a professional services firm. This context is important in that the nature and structure of these services (usually project-based) place demands on senior staff, particularly partners, in return for high levels of extrinsic and some intrinsic rewards. The demands include not only high levels of professionalism but also extreme commitment, such as the ability to travel nationally and internationally on demand, to work around the clock at whichever premises whenever necessary, and to provide speedy and efficient solutions to the clients' problems. Kumra and Vinnicombe (2008) provide an account of the nature of such firms. This study begins by considering the very limited literature on women leaving organizations, followed by an examination of the discourse of choice within the work-life balance literature. After outlining the methodology for our study of 31 women partners who have left a global management consultancy firm, we present empirical evidence from the women themselves.
We next discuss the implications of such evidence and make suggestions for further research in this important area. Despite the level of concern expressed in many organizations, and Belkin's (2003) "opt-out revolution" article, which discussed the push of job dissatisfaction and the pull of motherhood, a search revealed very little extant academic literature on senior women who have left organizations. Of course, the decision to leave an organization does not necessarily mean that women wish to turn their backs on corporate life permanently. Many women take career breaks at some stage in their careers, and Hewlett and Luce (2005) point out the ease with which women can "off-ramp" and the difficulties they face when planning to return to organizational life. The kaleidoscope career model (Mainiero and Sullivan, 2005) focuses on the "fit" of work and family, and offers an explanation for the large numbers of middle managers who leave corporate life. Women talked about opportunities and possibilities, as well as the blocks they experienced, in creating their own paths that provide challenge and allow specific needs to be met.

Mallon and Cohen (2001) studied women's transitions from careers within organizations into self-employment, seeking a further understanding of how women themselves experience and make sense of changing careers. Dissatisfaction with organizational life, changes in organizations that contradicted personal values and principles, and an imbalance of personal and professional life were key factors in the decision to move to self-employment. Looking at more senior women leaving organizations, Marshall (1995, 2000) identified no single pattern, but emphasized the complex nature of such decisions, each of which is individual and multi-faceted. Factors identified by Marshall regarding women's decisions to leave employment included changed roles that had become untenable, blocked promotion prospects and wanting a more balanced life.
One particular theme was their experience of difficulties with inappropriate and often hostile interpersonal behaviour, often for the first time in their careers. It was not that they could not cope, but that they did not respect or want to work in such unproductive environments. Several such factors contributed to individual decisions in accumulating and complex ways. In particular, the choice relating to work-life integration emerged from these studies as a key explanatory factor.

Work-life balance or work-life integration?

The language used to describe the integration of work and non-work domains reflects its socially constructed evolution. Research emphasis has moved from "conflict" through "seeking balance" to "integration" (Burke, 2004). Similarly, there has been a shift away from "work-family" or "family friendly" when referring to supportive organizational policies to "work-life", in order to remove the emphasis on parents, especially mothers. "Work-life" has also received criticism for its suggestion that work and life are somehow separate (Eikhof et al., 2007), rather than work being a part of life. The term "work/personal life integration" was offered by Rapoport et al. (2002), who seek to acknowledge the importance of individual priorities and choices with the use of the word "integration", rather than "balance". They suggest that balance indicates an equal split of time between the two domains, which is an unrealistic state of affairs, whereas integration focuses on a sense of satisfaction in both the work and non-work domains. But "integration" also suggests the blending together of work and personal life, and individuals do not always want to manage the two areas by merging them; some may prefer to keep them separate. Thus, authors have begun to refer to the harmonizing of work and the rest of life (Lewis and Cooper, 2005; Gambles et al., 2006) to indicate their interaction in positive ways.
The demands from the work and non-work domains are not absolute and cannot necessarily be easily measured. The demands vary, as do individual responses to such demands. It could be argued that people self-impose expectations with regard to the performance of both work responsibilities and household and other non-work obligations (Quick et al., 2004). Managing such expectations can enable an individual to cope with conflicting priorities, and Quick et al. place more emphasis on the importance of energy in a given situation than on the amount of time spent there. So the argument moves away from a sense of balance or equality of the different domains, and acknowledges the relevance of timely emotional engagement within each domain and the ability to focus on situational requirements. But this still suggests a large element of choice, whereas Caproni (2004) argues that the language used in the work-life balance debate adds to the pressures experienced by individuals who are seeking to achieve this elusive state of satisfaction with both work and non-work domains. She describes the conceptualization of work-life balance as individualistic and achievement oriented:

[...] setting us up to strive for one more thing that we cannot achieve and, in doing so, keeping us too focused, busy and tired to explore the consequences of our thinking and actions (Caproni, 2004, p. 212).

The family context

The continual working towards balance can also imply a greater choice over life decisions than often exists. For instance, care may have to be provided for children or for elderly parents, but the demands for such care are often unpredictable, due to combinations of circumstances, thus reducing the element of choice and control (Caproni, 2004).
Additionally, high-quality regular childcare for older children (6+), specifically after-school care, is more difficult to obtain than the more routine provision sought for pre-school children (Moore et al., 2007). A study comparing single women without children with married women with and without children found that all three groups experienced similar levels of difficulty in balancing work and non-work (Hamilton et al., 2006). This under-researched group of never-married women without children experienced greater pressure to take on additional tasks late in the evening or at weekends, precisely because they were viewed as having fewer family obligations than others (Anderson et al., 1994, cited in Hamilton et al., 2006). Similarly, work was described as "all-encompassing", leaving few resources for seeking activities outside the workplace (Hamilton et al., 2006, p. 408). This study provides a different, yet important, perspective on the discussion of the choice experienced by these individuals in the decisions they make regarding work-life integration.

Negative or positive?

However, not everyone agrees that balance has been about seeking satisfaction in both domains. A different interpretation suggests that one of the flawed assumptions in the work-life balance debate is that work has been portrayed as negative and problematic, with individuals wanting to reduce the time spent at work as a result (Eikhof et al., 2007). These authors suggest that work-life balance programmes ignore the possibility that people may gain satisfaction and fulfillment from work, and state that a common, and inaccurate, premise for flexible working arrangements is that "work-life balance provisions are introduced to help employees reconcile what they want to do (care) with what they have to do (work)" (Eikhof et al., 2007, p. 327, brackets in the original). They argue that employees may want to work, and that the work-life balance debate tends to ignore this as a possibility.
However, others talk about positive spillover (Kirchmeyer, 1993) and the enrichment that takes place between work and family (Greenhaus and Powell, 2006).

Control and workplace flexibility discourses

Lewis et al. (2007, p. 361) argue that there are in fact two overlapping work-life balance discourses, "the personal control of time" and "workplace flexibility", with both including a dimension of choice. The former indicates that individuals are able to make their own decisions about the priorities in their lives around work, career, family and other aspects of life, paying little attention to the gendered assumptions about commitment and competence which underpin the concept of the ideal worker (Rapoport et al., 2002). Flexibility discourses emphasise the choice available to employees regarding where, when and how much to work, and again may not challenge the gendered constraints to the adoption of such flexible arrangements. These two discourses are evident in a study by Drew and Murtagh (2005). First, senior managers felt unable to control their work time in terms of demonstrating what might be considered "normal hours", because of the long hours culture. Similarly, the flexibility discourse was highlighted, with flexitime and home working arrangements seen as incompatible with senior management posts, particularly by the male managers.

Genuine choice or pressure of the "ideal worker"

Additionally, women may view working as a financial necessity rather than a real choice (Houston and Marks, 2003), partly because of the huge effort needed to overcome the psychological and practical barriers in order to work.
This does not, of course, preclude their experiencing some satisfaction as a result of working. Managers and professionals are particularly susceptible to the "ideal worker" norm, with its view of domesticity and the consequent doubt cast over their commitment to their employer and their career if they stray from that ideal by adopting a pattern of work that involves less face time. The ideal worker has historically been seen as someone who can give their time unstintingly and willingly to their employing organization, and have no conflicting demands on their time (Rodgers and Rodgers, 1989; Pitt-Catsouphes et al., 2006). Alongside this is the assumption of the existence of another adult based full-time in the home to attend to domestic and caring responsibilities. In the twenty-first century, many families do not have such a structure of full-time breadwinner and full-time homemaker (Marks, 2006), and households may consist of different mixes of number of adults, age and number of children or no children, and presence or absence of elderly dependents (Ransome, 2007). Other literature focuses on the issue of choice in women's careers. For instance, Lyonette and Crompton (2008) discuss the choices women accountants make in their careers, finding some indication of women choosing to remain at the level below partner because of the increased demands and pressures that would result from such a promotion, including the adverse impact on family life. They point out that choices are made from a range of "realistic" possibilities, within the bounds of constraining factors, and for women, domestic responsibilities are examples of such constraints. Hence, the language in the work-life balance arena shapes the way in which constraints are framed, choices made and outcomes achieved.
The personal control of time and the ability to work flexibly are two discourses that contain both constraints and choices, but whilst personal control of time may be easier for some senior staff, the ability to work flexibly may not always be available to those at the top of some types of organization, especially client-based services. For women at partner level in professional service firms at the peak of their careers, in the mature life stage, probably having already acquired significant financial resources, one choice may be to exit or "off-ramp", rather than continue their professional career within their firm. This study examines what happened in the case of those women who made the choice to leave. In this study, the discursive approach is viewed as an examination of how language (in the form of spoken interaction) is used to construct and change the social world, while challenging the accepted ways of looking at the world (Dick, 2004). We use Watson's (1995, p. 816) description of discourse as a: [...] connected set of statements, concepts, terms and expressions, which constitutes a way of talking or writing about a particular issue, thus framing the way people understand and act with respect to that issue. So it is a form of sense making, both on the part of the interviewees, who here make decisions about the information to include and to omit in their accounts of their decision to leave the firm (Potter and Wetherell, 1987), and also on the part of us, the interviewers and researchers. We examine the varied discursive constructions used by the participants to achieve their particular purposes, whether this is their portrayal of themselves as valued partners or of making sense of what is happening within the firm, particularly with regard to the number of women partners leaving. As Watson points out, these discursive constructions are similar to interpretative repertoires (Gilbert and Mulkay, 1984, cited in Edley, 2001, p. 197), which are "quite separate ways of talking about or constructing" a given topic. Similarly, Clarke et al. (2009) refer to "antagonistic discourses" where individuals present self-narratives which incorporate contrasting positions. Following an approach from an international management consultancy firm, we sought interviews with women partners who had left the firm over the previous three years. Access was provided by the Head of Diversity, who contacted the 47 women leavers in that period to see if they would be willing to be interviewed. The project arose as a result of the women partners leaving in higher percentages than male partners, causing concern within the firm, not least because of a range of initiatives that had been introduced over the preceding years to support women's career progression. The 36 women who responded were contacted by the project manager to set up semi-structured interviews, and 31 female partners eventually participated, a response rate of 66 per cent. Only one interview was undertaken face-to-face in the UK, as most of the women were resident overseas; the remaining interviews were therefore conducted by telephone over a period of about a month, after piloting the method to ensure that an open and frank discussion would be possible. The interview schedule was e-mailed to the respondents the day before the interview. Interviews lasted about 45 minutes and were recorded with the permission of the interviewees. Anonymity was guaranteed, and raw data were not given to the firm, although a report was delivered including recommendations for good practice. The first step in the analysis involved the reading and re-reading of the transcripts by all three researchers, followed by a discussion to identify discursive patterns within the text.
The discursive approach to analysis requires a familiarity with the data (Kelan, 2008) and this continued through the use of NVivo, organizing and categorizing the data, focusing on the similarities and differences in the ways participants talked about choice and the decisions made during their employment with the firm and their decisions to leave. Further discussion occurred, leading to a deeper understanding of the concepts invoked by the texts. In reporting the findings, some demographic detail will be supplied to provide context for the quotations used, but the aim of this paper is to understand how the women made sense of the choices they made, rather than to seek any connection between choices made and factors such as gender, parental status, etc. The sample The women ranged in age from 37 to 60 and two-thirds were married or with a partner. In total, 16 women had children under the age of 18 and the other 15 either had no children or their children were grown up. Five women were currently at home full-time with children of varying ages and a further four were retired. A total of 11 women were in paid employment undertaking a variety of roles with differing levels of responsibility. Three women had their own businesses, and nine were involved with voluntary work, sometimes alongside paid work. Length of employment with the firm ranged from four to 29 years and the women had left up to four years previously. Most recently, 21 of the women had worked in North America and the remaining ten had worked in Africa, Australia, Asia and Europe. The financial situation was a major factor for most of the women. About 18 of the 31 women had left three years prior to the interviews, at a time of general economic downturn, and many of these talked about the financial package which was made available to them.
For some, this allowed them to maintain their desired lifestyle without having to earn money in the future. The following section identifies two contradictory discursive constructions: first, loyalty to the firm as a wonderful place to work, and second, a discourse of the lack of choice and control over lifestyle with regard to meeting priorities from both the work and non-work domains. It is important to note that these discourses were not separate but were enmeshed within the interviews. We will discuss these discourses, highlighting differing elements of the discourse of choice and lifestyle as we contrast it with the loyalty discourse. Loyalty This discourse emerged through representations of the women's positive experiences during their time with the firm and their statements of the high regard in which they held the firm. The limited previous research has highlighted the negative factors that combined to push women towards the decision to leave organizations. In this study, we identify resistance to criticizing the firm through the loyalty discourse. Sherri's talk illustrates the emphasis on the firm as a good place to work, despite reservations that may have developed over time. She evidently felt that it was important to provide a context for her honesty about some of the more negative experiences that contributed towards her decision to leave: I would say that in the grand scheme of things [ABC] is probably the best place to work. I really don't want comments taken out of context because I still stayed with [ABC] for 20 years. I really did check this a number of times in my career when I was low and thinking of leaving, and I looked at the options and the other organizations I could work with, etc. and I still believe that [ABC] is a great place to work (Sherri, no children). Sherri refers to the ongoing choice she has made over a number of years to remain with the firm.
The loyalty discourse included a strong need for the interviewees to express their affection for the firm, indicating a sense of respect for the firm itself as an organization and also for the people who work there. Suzan's view represents this high regard for previous colleagues: I have a high regard for the company and the people and everything it stands for (Suzan, children 12-18 years). Part of the loyalty was evidenced through a strong sense of identity with the firm, and gratitude for the opportunities that had been made available and fully utilised: The firm was a very big part of me for a very long time (Joya, no children). I love [ABC], I really love [ABC] and I had a great career and I'm very thankful and grateful for what I learned and the people with whom I worked and what I achieved, it was a great, great career. I loved it (Megan, children 5-12 years). Despite the eventual decision to leave, these women were keen to stress their successful careers in a highly regarded firm. Joya's comment in particular offers some explanation for the emphasis on loyalty. Her social identity was enmeshed with the reputation of the firm, and to acknowledge and allow criticism of the firm would therefore involve self-criticism, with which these successful women would not be comfortable. Choice and lifestyle Issues around choice and lifestyle emerged in three ways: choice in the desire for greater integration of work and non-work, choice within a context of constraints, and choice and the demanding role of partner. Our intention is to demonstrate how the loyalty discourse contradicts the significant set of choice discourses on which the participants drew to explain their decision to leave the firm. Choice in desire for greater integration The choice discourse manifests through the representations of individual priorities when the women described the position of work within their whole lives.
There was no sense of seeking an equal division of time between the work and the non-work domains, and similarly the women did not talk of blending or integrating their work and personal lives. Aileen expressed her loyalty and affection for the firm: "ABC is a wonderful company and I think very highly of it", but she also explained how she had recently gone through a divorce and wanted to reduce her time away from home and her children at such a distressing time for them. However, she perceived herself to have no option other than to continue to work in the same way: I felt [the senior partners] were, although nice, they were very unsympathetic to the situation and I just don't feel that there were any options there [...] It was kind of like you understand what the expectations are, you've always been a good performer, you know what it takes and either accept it or don't [...] There was no discussion about part time, it was just accept it; you're going to have to travel if you want to be there. There was just no sympathy at all (Aileen, children 12-18 years). So the discourse of choice becomes polarized, and these women demonstrated their awareness of the extreme options they had to consider. Kim expressed her appreciation of the initiatives that had been introduced within the firm to support women's career progression: I know everything the firm's done and we've done so much to help retain senior women [...]
We've done so many great things, so I want to give credit to all of that but every once in a while I just came to the realisation that you want to have this great career that requires a tremendous amount of time and commitment and you just can't balance everything sometimes (Kim, children under 12 years).There was an increasing recognition by some of the women of the demands which family life placed on them and Kim went on to explain the shift in her desires away from doing whatever was required by the firm:The year prior to when I had my third child, I tried a part time schedule and that helped but I think once I was home with all three kids, getting involved with their lives more, over the year and a half that I was home on this leave of absence, it just became clearer and clearer to me that I wanted to be at home with them. The balance I had been trying to achieve was so difficult; I had to make a choice, so that's why I finally chose to resign this past January (Kim, children under 12 years).There appeared to be a gradual acknowledgement that any sort of balance between the work and non-work domains was not possible. Alexis suggested that the only practical way to achieve such a balance was sequentially, prioritizing work or family at different times:I came to the realization at one point because I was in that group of women who felt you could have it all and I ultimately came to the realization yes you can have it all but not all at once, you just have to take different stages of your life and I just had to come off the consulting career path for five years at important stages of my parenting (Alexis, grown up children).Yet, she too presented an interesting contradictory perspective in her statement: "I think very highly of ABC and I found ABC to be very, very supportive of my time and participation in the firm". 
So she had to step off the career path to attend to parenting priorities, and yet still spoke of the support she received, again illustrating the contradictory elements of this discourse. The difficulty of achieving any sort of compromise between the demands of work and home was echoed by Libby, who used particularly emotive language as she described the options which she had clearly rejected, but which would have enabled her to maintain her previous high levels of performance: I wanted to come back to work, I loved my job, but I found no matter how much I tried I just couldn't be the same top performer that I was before I had my baby. Sure, I could have made the decision to outsource my family and get a full time live-in nanny and continue to work the same hours, but I didn't want to do that. I didn't want to sell my family off to somebody else (Libby, children under five years). Libby provides an interesting contrast to many of the other women who engage in both the discourses of loyalty and of choice. Although she expresses enthusiasm, it is for her job rather than for the firm. Instead of loyalty and affection, there was a sense of resentment and anger at the lack of support and understanding, which effectively removed the option of staying with the firm.
It would have been at too great a cost. Similarly, Cassie explained the discrepancy between the wants and needs of junior colleagues and the lifestyle they observed among the existing partners: I've seen too many associate partners leave because they've said to me I don't want that lifestyle, I need to have some time to have children and deal with elder care and whatever, so I think the firm really needs to address that because they are losing a lot of talent (Cassie, children under five years). So there are some fundamental life choices within this discourse of choice, requiring women to consider their future and the investment of time needed to achieve their lifestyle goals: Do I want to take this job or do I want to go and do the things I want to do, i.e. get married, have a family and all that? Now I'm not saying you can only have one or the other, but I was thinking that being involved in work at [ABC], I have forgotten about a lot of things in life, you know what I'm saying? (Aisah, children under five years). Choice in a context of constraints The choice discourse is presented paradoxically, as a spectrum with "all work" at one end and "all family" at the other. Despite the use of various strategies over time, the women experienced a feeling of being pushed into decisions they found problematic. However, there was also uncertainty about the way forward. Laura questioned the boundaries she would look to put in place if she were to return to a role similar to her previous one with the firm: I have thought if a big project came up and [ABC] asked for me to come back, what would be the parameters that would be ideal with respect to that balance because it certainly does change at every point in time, every life change (Laura, children 12-18 years). Freeing themselves up to spend more time with their children was a key part of the discourse for the mothers in the sample.
Women with children of differing ages had concluded that it was the right time to work and travel less, in order to spend more time with family. For instance, Kerry, who had "been with [ABC] for 20 years and had a wonderful career with [ABC] and had fantastic opportunities", went on to say: I feel a little bit like I've missed out on the first ten to twelve years of their lives and I wanted to be more involved in their teenage years than I was in their toddler years and be available to them [...] Unfortunately in my professional role in [ABC] it was not possible to have that time and flexibility to get involved in their lives to the extent that I wanted to (Kerry, children under 12 years). Similarly, Ellen explained: I love [ABC]. [ABC] gave me the life I have right now [...] But being around for my two children is my big role right now [...] I want to be there for this part of their life, I don't want to miss my kids growing up (Ellen, children under 12 years). So Kerry and Ellen availed themselves of the only option they could see within ABC, that of leaving the firm they regarded so highly, as there was little support to enable them to meet their other priorities. Choice and the demanding role of partner In a different way, those without children talked of the importance of spouses/partners, families and friends in their lives, and of the challenge of managing the competing demands on their time.
The demanding nature of the partner role makes it difficult to allocate significant amounts of time to others without constant distractions from the office and/or the client, as Sherri described: My parents are aging, they're not well, and I all of a sudden decided I absolutely had to spend more time with them and that's not just on weekends, it's going and spending large blocks of quality time with them when I can just focus on them and not be on the phone or on the computer and everything else back to the work site (Sherri, no children). The issue of travel featured strongly in the choice discourse, as it was described by these women as an inherent part of the role of partner and was a major constraining factor in the choices they experienced. The partner role requires a great deal of travelling, taking all of them away from their families and friends on a continual and relentless basis. In the quotation below, Agnes emphasizes her developing competence at creating non-work time at weekends, but her inability to address the demands placed on her by the need to travel: I still have an awful lot of respect for [ABC], but the travelling - everything else I could manage because the work hours I could manage. I tended to work quite a bit but I controlled it myself and I got better about not working weekends and so everything else I can manage but the travelling (Agnes, children under five years). However, Kim explained that it is not just the travel, but the whole nature of the job and the expectations of senior partners and clients, even when working close to one's home base, described as "an in town job".
Such a working arrangement brought the additional pressure of expectations from immediate family of a greater presence in the home, because of the assumption of more normal working hours: The client comes first and that means whatever you need, whatever hours are required, it can be just as hard to be on an in town job, so even when there is no travel, but you've got to be at the client at 7 a.m. for a meeting or there's a crisis and you're there until 11. Sometimes I found being in town can be just as demanding because your family thinks you're at home, so why aren't you coming home for dinner, and eating with them? You're still spending all your time away (Kim, children under 12 years). About half of these women had children under the age of 18, yet the issues of motherhood were not about coordinating childcare arrangements or dividing up parenting with a partner, but about wanting to spend time with children at varying stages in their lives. Several of those without children, or with grown up children, talked of wanting the opportunity to have broader experiences in their lives, mentioning wider family and friends. Unfortunately, many tended to see the issue as very clearly defined in terms of either their role within the firm or a life outside the firm. The women in this sample were not necessarily off-ramping but seeking to work in a way that gave them control over their lives, especially with respect to minimizing the amount of travel that took them away from home on a regular basis. Although there was evidence of the pull of motherhood mentioned by Belkin (2003), these women did not express dissatisfaction with organizational life per se. On the contrary, they stressed positive elements of much of their careers, expressing loyalty and commitment to the firm, which many still held in high regard. As is common with discourses (Watson, 1995), two dominant themes co-existed in the accounts of these successful women leavers, i.e.
loyalty and choice regarding work-life integration. Yet the hegemonic positioning of the choice discourse meant that other alternatives were suppressed. The women showed loyalty and affection towards their firm, but this was not reciprocated in the form of some temporary control over their personal lives, or some flexibility so that they could manage their non-work responsibilities less stressfully. The flexible working offerings for women lower down the hierarchy were much celebrated by the firm, but there was no flexibility in the partner tier. These women perceived that the firm expected all or nothing, and did not seem to recognize the paradoxical nature of their loyalty to the firm and their decision to leave. Although combinations of factors contributed to the women's decisions to leave the firm, the focus of this paper has been the discourse of choice, with particular emphasis on lifestyle. There was little evidence of either balance in the sense of division of time between the work and non-work domains, or of integration or the blending of work and personal life. The discourse presented was of a forced and extreme choice. However, it is clear that the women partners had certainly gained satisfaction and fulfillment from their jobs over a long period of time. The issue was not about "wanting to care" and "having to work", supporting Eikhof et al.'s (2007) argument that such a premise is inaccurate. The financial package available at the point of departure, and high levels of remuneration, often over a period of many years, enabled them to leave, given that they felt unable to continue to work within the firm because of the extreme demands the partner role required. The "personal control of time" and "workplace flexibility" (Lewis et al., 2007) were both absent, highlighting "work-life imbalance". These women clearly did not fit the "ideal worker" norm.
Many were single or in a relationship with another full-time worker, hence lacking the "other" adult based in the home that the ideal worker needs in order to maintain their level of dedication to the job role. These women also had an increasing desire to attend to demands from outside the work domain. This study adds to what is known about the importance of family life for women, with many of those who were not mothers still experiencing a tension between the demands of day-to-day organizational life at partnership level and the need to give attention to extended family and friends. Importantly, this study uses data from women who left the same organization within a relatively brief period of time, providing unique data and valuable insights into what is known about women in senior positions who choose to leave and yet display tremendous loyalty and affection to the firm. These findings therefore strengthen the choice discourse, which serves to neutralize and suppress feelings of discontent over the constraints imposed by the firm's cultural expectations of those in the role of partner. This effectively removes from the organization the responsibility to facilitate the work-life integration of partners. The study has several limitations. First, we talked only to women partners who had left the firm, and who left at a particular time of economic downturn when a significant financial package was available to them. Speaking to women partners who had maintained their partner role in the firm during the same period would have provided a useful comparison.
Similarly, the inclusion of men in the sample would allow useful gender comparisons, and this provides an opportunity for future research. Understanding the factors that led these valued women partners to leave the organization, notwithstanding the initiatives and support which had facilitated their achievement of such positions, will allow the organization to review the demands placed on partners. The extreme expectations, particularly involving excessive travel and time away from the family, should be reviewed in the light of these findings if the firm wishes to stem the flow of women partners.
|
- Data were collected from 31 women using semi-structured telephone interviews, a 66 per cent response rate. A discursive approach to analysis was adopted.
|
[SECTION: Findings] Concern over the relatively high numbers of women managers leaving organizations has been growing. Many organizations have developed initiatives with the specific aim of supporting women's career progression to the higher echelons of corporate life, such as mentoring programmes and women's networks. The retention of valued talent is recognized as a priority, and organizations strive to brand themselves as an employer of choice. Such strategies have had some success and higher proportions of women are found in more senior positions, arguably having broken through the glass ceiling. Despite this progress, women continue to leave organizations in higher proportions than their male counterparts at senior levels, and there is little in the literature examining this phenomenon. This paper attempts to fill this gap. The aim of this paper is to explore discursively how women partners represent and describe their decisions to leave a professional services firm. This context is important in that the nature and structure of these services (usually project-based) place demands on senior staff, particularly partners, in return for high levels of extrinsic and some intrinsic rewards. The demands include not only high levels of professionalism but also extreme commitment, such as the ability to travel nationally and internationally on demand, to work around the clock at whichever premises whenever necessary, and to provide speedy and efficient solutions to clients' problems. Kumra and Vinnicombe (2008) provide an account of the nature of such firms. This study begins by considering the very limited literature on women leaving organizations, followed by an examination of the discourse of choice within the work-life balance literature. After outlining the methodology for our study of 31 women partners who have left a global management consultancy firm, we present empirical evidence from the women themselves.
We next discuss the implications of this evidence and make suggestions for further research in this important area. Despite the level of concern expressed in many organizations, and Belkin's (2003) "opt-out revolution" article, which discussed the push of job dissatisfaction and the pull of motherhood, a search revealed very little extant academic literature on senior women who have left organizations. Of course, the decision to leave an organization does not necessarily mean that women wish to permanently turn their backs on corporate life. Many women take career breaks at some stage in their careers, and Hewlett and Luce (2005) point out the ease with which women can "off-ramp" and the difficulties they face when planning to return to organizational life. The kaleidoscope career model (Mainiero and Sullivan, 2005) focuses on the "fit" of work and family, and offers an explanation for the large numbers of middle managers who leave corporate life. Women talked about opportunities and possibilities, as well as the blocks experienced, in creating their own paths which provide challenge and allow specific needs to be met. Mallon and Cohen (2001) studied women's transitions from careers within organizations into self-employment, seeking a further understanding of how women themselves experience and make sense of changing careers. Dissatisfaction with organizational life, changes in organizations that contradicted personal values and principles, and an imbalance of personal and professional life were key factors in the decision to move to self-employment. Looking at more senior women leaving organizations, Marshall (1995, 2000) identified no single pattern, but emphasized the complex nature of such decisions, each of which is individual and multi-faceted. Factors identified by Marshall regarding women's decisions to leave employment included roles that had changed and become untenable, blocked promotion prospects and wanting a more balanced life.
One particular theme was their experience of difficulties with inappropriate and often hostile interpersonal behaviour, often for the first time in their careers. It was not that they could not cope, but that they did not respect or want to work in such unproductive environments. Several such factors contributed to individual decisions in accumulating and complex ways. In particular, the choice relating to work-life integration emerged from these studies as a key explanatory factor. Work-life balance or work-life integration? The language used to describe the integration of work and non-work domains reflects its socially constructed evolution. Research emphasis has moved from "conflict" through "seeking balance" to "integration" (Burke, 2004). Similarly, there has been a shift away from "work-family" or "family friendly" when referring to supportive organizational policies to "work-life", in order to remove the emphasis on parents, especially mothers. "Work-life" has also received criticism for its suggestion that work and life are somehow separate (Eikhof et al., 2007), rather than work being a part of life. The term "work/personal life integration" was offered by Rapoport et al. (2002), who sought to acknowledge the importance of individual priorities and choices with the use of the word "integration", rather than "balance". They suggest that balance indicates an equal split of time between the two domains, which is an unrealistic state of affairs, whereas integration focuses on a sense of satisfaction in both the work and non-work domains. But "integration" also suggests the blending together of work and personal life, and individuals do not always want to manage the two areas by merging them; some may prefer to keep them separate. Thus, authors have begun to refer to the harmonizing of work and the rest of life (Lewis and Cooper, 2005; Gambles et al., 2006) to indicate their interaction in positive ways.
The demands from the work and non-work domains are not absolute and cannot necessarily be easily measured. The demands vary, as do individual responses to them. It could be argued that people self-impose expectations with regard to the performance of both work responsibilities and household and other non-work obligations (Quick et al., 2004). Managing such expectations can enable an individual to cope with conflicting priorities, and Quick et al. place more emphasis on the importance of energy in a given situation than on the amount of time spent there. So the argument moves away from a sense of balance or equality between the different domains, and acknowledges the relevance of timely emotional engagement within each domain and the ability to focus on situational requirements. But this still suggests a large element of choice, whereas Caproni (2004) argues that the language used in the work-life balance debate adds to the pressures experienced by individuals who are seeking to achieve this elusive state of satisfaction with both work and non-work domains. She describes the conceptualization of work-life balance as individualistic and achievement oriented:

[...] setting us up to strive for one more thing that we cannot achieve and, in doing so, keeping us too focused, busy and tired to explore the consequences of our thinking and actions (Caproni, 2004, p. 212).

The family context
The continual working towards balance can also imply a greater choice over life decisions than often exists. For instance, care may have to be provided for children or for elderly parents, but the demands for such care are often unpredictable, due to combinations of circumstances, thus reducing the element of choice and control (Caproni, 2004).
Additionally, high-quality regular childcare for older children (6+), specifically after-school care, is more difficult to obtain than the more routine provision sought for pre-school children (Moore et al., 2007). A study which compared single women without children with married women with and without children found that all three groups experienced similar levels of difficulty in balancing work and non-work (Hamilton et al., 2006). This under-researched group of never-married women without children experienced greater pressure to take on additional tasks late in the evening or at weekends, precisely because they were viewed as having fewer family obligations than others (Anderson et al., 1994, cited in Hamilton et al., 2006). Similarly, work was described as "all-encompassing", leaving few resources for seeking activities outside the workplace (2006, p. 408). This study provides a different, yet important, perspective on the choice these individuals experience in the decisions they make regarding work-life integration.

Negative or positive?
However, not everyone agrees that balance has been about seeking satisfaction in both domains. A different interpretation suggests that one of the flawed assumptions in the work-life balance debate is that work has been portrayed as negative and problematic, with individuals wanting to reduce the time spent at work as a result (Eikhof et al., 2007). These authors suggest that work-life balance programmes ignore the possibility that people may gain satisfaction and fulfillment from work, and state that a common, and inaccurate, premise for flexible working arrangements is that "work-life balance provisions are introduced to help employees reconcile what they want to do (care) with what they have to do (work)" (Eikhof et al., 2007, p. 327, brackets in the original). They argue that employees may want to work, and that the work-life balance debate tends to ignore this possibility.
However, others talk about positive spillover (Kirchmeyer, 1993) and the enrichment which takes place between work and family (Greenhaus and Powell, 2006).

Control and workplace flexibility discourses
Lewis et al. (2007, p. 361) argue that there are in fact two overlapping work-life balance discourses, "the personal control of time" and "workplace flexibility", with both including a dimension of choice. The former indicates that individuals are able to make their own decisions about the priorities in their lives around work, career, family and other aspects of life, paying little attention to the gendered assumptions about commitment and competence which underpin the concept of the ideal worker (Rapoport et al., 2002). Flexibility discourses emphasise the choice available to employees regarding where, when and how much to work, and again may not challenge the gendered constraints on the take-up of flexible working arrangements. These two discourses are evident in a study by Drew and Murtagh (2005). First, senior managers felt unable to control their work time in terms of working what might be considered "normal hours", because of the long-hours culture. Similarly, the flexibility discourse was highlighted, with flexitime and home-working arrangements seen as incompatible with senior management posts, particularly by the male managers.

Genuine choice or pressure of the "ideal worker"
Additionally, women may view working as a financial necessity rather than a real choice (Houston and Marks, 2003), partly because of the huge effort needed to overcome the psychological and practical barriers in order to work.
This does not, of course, preclude the experiencing of some satisfaction as a result of working. Managers and professionals are particularly susceptible to the "ideal worker" norm, with its view of domesticity and the subsequent doubt cast over their commitment to their employer and their career if they stray from that ideal by adopting a pattern of work which involves less face time. The ideal worker has historically been seen as someone who can give their time unstintingly and willingly to their employing organization, and who has no conflicting demands on their time (Rodgers and Rodgers, 1989; Pitt-Catsouphes et al., 2006). Alongside this is the assumption of the existence of another adult based full-time in the home to attend to domestic and caring responsibilities. In the twenty-first century, many families do not have such a structure of full-time breadwinner and full-time homemaker (Marks, 2006), and households may consist of different mixes of number of adults, age and number of children (or no children), and presence or absence of elderly dependents (Ransome, 2007).

Other literature focuses on the issue of choice in women's careers. For instance, Lyonette and Crompton (2008) discuss the choices women accountants make in their careers, finding some indication of women choosing to remain at the level below partner because of the increased demands and pressures which would result from such a promotion, including the adverse impact on family life. They point out that choices are made from a range of "realistic" possibilities, within the bounds of constraining factors, and that for women, domestic responsibilities are examples of such constraints. Hence, the language in the work-life balance arena shapes the way in which constraints are framed, choices made and outcomes achieved.
The personal control of time and the ability to work flexibly are two discourses that contain both constraints and choices; but whilst personal control of time may be easier for some senior staff, the ability to work flexibly may not always be available to those at the top of some types of organization, especially client-based services. For women at partner level in professional service firms at the peak of their careers, in the mature life stage, and probably having already acquired significant financial resources, one choice may be to exit or "off-ramp" rather than continue their professional career within their firm. This study examines what happened in the case of those women who made the choice to leave.

In this study, the discursive approach is viewed as an examination of how language (in the form of spoken interaction) is used to construct and change the social world, while challenging the accepted ways of looking at the world (Dick, 2004). We use Watson's (1995, p. 816) description of discourse as a:

[...] connected set of statements, concepts, terms and expressions, which constitutes a way of talking or writing about a particular issue, thus framing the way people understand and act with respect to that issue.

So it is a form of sense-making, both on the part of the interviewees, who here make decisions about the information to include and to omit in their accounts of their decision to leave the firm (Potter and Wetherell, 1987), and also on our part as the interviewers and researchers. We examine the varied discursive constructions used by the participants to achieve their particular purposes, whether this is their portrayal of themselves as valued partners or their making sense of what is happening within the firm, particularly with regard to the number of women partners leaving. As Watson points out, these discursive constructions are similar to interpretative repertoires (Gilbert and Mulkay, 1984, cited in Edley, 2001, p. 197), which are "quite separate ways of talking about or constructing" a given topic. Similarly, Clarke et al. (2009) refer to "antagonistic discourses" where individuals present self-narratives which incorporate contrasting positions.

Following an approach from an international management consultancy firm, we sought interviews with women partners who had left the firm over the previous three years. Access was provided by the Head of Diversity, who contacted the 47 women leavers in that period to see if they would be willing to be interviewed. The project arose because women partners were leaving in higher percentages than male partners, causing concern within the firm, not least because of the range of initiatives which had been introduced over the preceding years to support women's career progression. The 36 women who responded were contacted by the project manager to set up semi-structured interviews, and 31 female partners eventually participated, a response rate of 66 per cent. Only one interview was undertaken face-to-face in the UK, as most of the women were resident overseas. We therefore conducted the remaining interviews by telephone over a period of about a month, after piloting the method to ensure that an open and frank discussion would be possible. The interview schedule was e-mailed to the respondents the day before the interview. Interviews lasted about 45 minutes and were recorded with the permission of the interviewees. Anonymity was guaranteed, and raw data were not given to the firm, although a report including recommendations for good practice was delivered.

The first step in the analysis involved the reading and re-reading of the transcripts by all three researchers, followed by a discussion to identify discursive patterns within the text.
The discursive approach to analysis requires familiarity with the data (Kelan, 2008), and this continued through the use of NVivo, organizing and categorizing the data and focusing on the similarities and differences in the ways participants talked about choice and the decisions made during their employment with the firm and their decisions to leave. Further discussion occurred, leading to a deeper understanding of the concepts invoked by the texts. In reporting the findings, some demographic detail will be supplied to provide context for the quotations used, but the aim of this paper is to understand how the women made sense of the choices they made, rather than to seek any connection between choices made and factors such as gender, parental status, etc.

The sample
The women ranged in age from 37 to 60, and two-thirds were married or with a partner. In total, 16 women had children under the age of 18; the other 15 either had no children or their children were grown up. Five women were currently at home full-time with children of varying ages, and a further four were retired. A total of 11 women were in paid employment, undertaking a variety of roles with differing levels of responsibility. Three women had their own businesses, and nine were involved with voluntary work, sometimes alongside paid work. Length of employment with the firm ranged from four to 29 years, and the women had left up to four years previously. Most recently, 21 of the women had worked in North America and the remaining ten had worked in Africa, Australia, Asia and Europe. The financial situation was a major factor for most of the women. About 18 of the 31 women had left three years prior to the interviews, at a time of general economic downturn, and many of these talked about the financial package which was made available to them.
For some, this allowed them to maintain their desired lifestyle without having to earn money in the future. The following section identifies two contradictory discursive constructions: first, that of loyalty to the firm as a wonderful place to work, and second, a discourse of the lack of choice and control over lifestyle with regard to meeting priorities from both the work and non-work domains. It is important to note that these discourses were not separate but were enmeshed within the interviews. We will discuss these discourses, highlighting differing elements of the discourse of choice and lifestyle as we contrast it with the loyalty discourse.

Loyalty
This discourse emerged through representations of the women's positive experiences during their time with the firm and their statements of the high regard in which they held the firm. The previous limited research has highlighted the negative factors which combined to push women towards the decision to leave organizations. In this study, we identify resistance to criticizing the firm through the loyalty discourse. Sherri's talk illustrates the emphasis on the firm as a good place to work, despite reservations which may have developed over time. She evidently felt that it was important to provide a context for her honesty about some of the more negative experiences which contributed towards her decision to leave:

I would say that in the grand scheme of things [ABC] is probably the best place to work. I really don't want comments taken out of context because I still stayed with [ABC] for 20 years. I really did check this a number of times in my career when I was low and thinking of leaving, and I looked at the options and the other organizations I could work with, etc. and I still believe that [ABC] is a great place to work (Sherri, no children).

Sherri refers to the ongoing choice she made over a number of years to remain with the firm.
The loyalty discourse included a strong need for the interviewees to express their affection for the firm, indicating a sense of respect both for the firm itself as an organization and for the people who work there. Suzan's view represents this high regard for previous colleagues:

I have a high regard for the company and the people and everything it stands for (Suzan, children 12-18 years).

Part of the loyalty was evidenced through a strong sense of identity with the firm, and gratitude for the opportunities which had been made available and fully utilised:

The firm was a very big part of me for a very long time (Joya, no children).

I love [ABC], I really love [ABC] and I had a great career and I'm very thankful and grateful for what I learned and the people with whom I worked and what I achieved, it was a great, great career. I loved it (Megan, children 5-12 years).

Despite the eventual decision to leave, these women were keen to stress their successful careers in a highly regarded firm. Joya's comment in particular offers some explanation for the emphasis on loyalty. Her social identity was enmeshed with the reputation of the firm, and to acknowledge and allow criticism of the firm would therefore involve self-criticism, with which these successful women would not be comfortable.

Choice and lifestyle
Issues around choice and lifestyle emerged in three ways: choice in the desire for greater integration of work and non-work, choice within a context of constraints, and choice and the demanding role of partner. Our intention is to demonstrate how the loyalty discourse contradicts the significant set of choice discourses on which the participants drew to explain their decision to leave the firm.

Choice in desire for greater integration
The choice discourse manifests through the representations of individual priorities when the women described the position of work within their whole lives.
There was no sense of seeking an equal division of time between the work and the non-work domains, and similarly the women did not talk of blending or integrating their work and personal lives. Aileen expressed her loyalty and affection for the firm, "ABC is a wonderful company and I think very highly of it", but she also explained how she had recently gone through a divorce and wanted to reduce her time away from home and her children at such a distressing time for them. However, she perceived herself to have no option other than to continue to work in the same way:

I felt [the senior partners] were, although nice, they were very unsympathetic to the situation and I just don't feel that there were any options there [...] It was kind of like you understand what the expectations are, you've always been a good performer, you know what it takes and either accept it or don't [...] There was no discussion about part time, it was just accept it; you're going to have to travel if you want to be there. There was just no sympathy at all (Aileen, children 12-18 years).

So the discourse of choice becomes polarized, and these women demonstrated their awareness of the extreme options they had to consider. Kim expressed her appreciation of the initiatives within the firm which had been introduced to support women's career progression:

I know everything the firm's done and we've done so much to help retain senior women [...]
We've done so many great things, so I want to give credit to all of that but every once in a while I just came to the realisation that you want to have this great career that requires a tremendous amount of time and commitment and you just can't balance everything sometimes (Kim, children under 12 years).

There was an increasing recognition by some of the women of the demands which family life placed on them, and Kim went on to explain the shift in her desires away from doing whatever was required by the firm:

The year prior to when I had my third child, I tried a part time schedule and that helped but I think once I was home with all three kids, getting involved with their lives more, over the year and a half that I was home on this leave of absence, it just became clearer and clearer to me that I wanted to be at home with them. The balance I had been trying to achieve was so difficult; I had to make a choice, so that's why I finally chose to resign this past January (Kim, children under 12 years).

There appeared to be a gradual acknowledgement that any sort of balance between the work and non-work domains was not possible. Alexis suggested that the only practical way to achieve such a balance was sequentially, prioritizing work or family at different times:

I came to the realization at one point because I was in that group of women who felt you could have it all and I ultimately came to the realization yes you can have it all but not all at once, you just have to take different stages of your life and I just had to come off the consulting career path for five years at important stages of my parenting (Alexis, grown up children).

Yet she too presented an interesting contradictory perspective in her statement: "I think very highly of ABC and I found ABC to be very, very supportive of my time and participation in the firm".
So she had to step off the career path to attend to parenting priorities, and yet still spoke of the support she received, again illustrating the contradictory elements of this discourse. The difficulty in achieving any sort of compromise between the demands of work and home was echoed by Libby, who used particularly emotive language as she described the options which she had clearly rejected, but which would have enabled her to maintain her previous high levels of performance:

I wanted to come back to work, I loved my job, but I found no matter how much I tried I just couldn't be the same top performer that I was before I had my baby. Sure, I could have made the decision to outsource my family and get a full time live-in nanny and continue to work the same hours, but I didn't want to do that. I didn't want to sell my family off to somebody else (Libby, children under five years).

Libby provides an interesting contrast to the many other women who engage in both the discourse of loyalty and that of choice. Although she expresses enthusiasm, it is for her job rather than for the firm. Instead of loyalty and affection, there was a sense of resentment and anger at the lack of support and understanding, which effectively removed the option of staying with the firm.
It would have been at too great a cost. Similarly, Cassie explained the discrepancy between the wants and needs of junior colleagues and the lifestyle they observed among the existing partners:

I've seen too many associate partners leave because they've said to me I don't want that lifestyle, I need to have some time to have children and deal with elder care and whatever, so I think the firm really needs to address that because they are losing a lot of talent (Cassie, children under five years).

So there are some fundamental life choices within this discourse of choice, requiring women to consider their future and the investment of time needed to achieve their lifestyle goals:

Do I want to take this job or do I want to go and do the things I want to do, i.e. get married, have a family and all that? Now I'm not saying you can only have one or the other, but I was thinking that being involved in work at [ABC], I have forgotten about a lot of things in life, you know what I'm saying? (Aisah, children under five years).

Choice in a context of constraints
The choice discourse is presented paradoxically, as a spectrum with "all work" at one end and "all family" at the other. Despite the use of various strategies over time, the women experienced a feeling of being pushed into decisions they found problematic. However, there was also uncertainty about the way forward. Laura questioned the boundaries she would look to put in place if she were to return to a role similar to her previous one with the firm:

I have thought if a big project came up and [ABC] asked for me to come back, what would be the parameters that would be ideal with respect to that balance because it certainly does change at every point in time, every life change (Laura, children 12-18 years).

Freeing themselves up to spend more time with their children was a key part of the discourse for the mothers in the sample.
Women with children of differing ages had concluded that it was the right time to work and travel less, in order to spend more time with family. For instance, Kerry, who had "been with [ABC] for 20 years and had a wonderful career with [ABC] and had fantastic opportunities", went on to say:

I feel a little bit like I've missed out on the first ten to twelve years of their lives and I wanted to be more involved in their teenage years than I was in their toddler years and be available to them [...] Unfortunately in my professional role in [ABC] it was not possible to have that time and flexibility to get involved in their lives to the extent that I wanted to (Kerry, children under 12 years).

Similarly, Ellen explained:

I love [ABC]. [ABC] gave me the life I have right now [...] But being around for my two children is my big role right now [...] I want to be there for this part of their life, I don't want to miss my kids growing up (Ellen, children under 12 years).

So Kerry and Ellen have availed themselves of the only option they could see within ABC, that of leaving the firm that they regard so highly, as there is little support to enable them to meet their other priorities.

Choice and the demanding role of partner
In a different way, those without children talked of the importance of spouses/partners, families and friends in their lives, and of the challenge of managing the competing demands on their time.
The demanding nature of the partner role makes it difficult to allocate significant amounts of time to others without constant distractions from the office and/or the client, as Sherri described:

My parents are aging, they're not well, and I all of a sudden decided I absolutely had to spend more time with them and that's not just on weekends, it's going and spending large blocks of quality time with them when I can just focus on them and not be on the phone or on the computer and everything else back to the work site (Sherri, no children).

The issue of travel featured strongly in the choice discourse: it was described as an inherent part of the role of partner by these women and was a major constraining factor in the choices they experienced. The partner role requires a great deal of travelling, taking all of them away from their families and friends on a continual and relentless basis. In the quotation below, Agnes emphasizes her developing competence at creating non-work time at weekends, but her inability to address the demands placed on her by the need to travel:

I still have an awful lot of respect for [ABC], but the travelling - everything else I could manage because the work hours I could manage. I tended to work quite a bit but I controlled it myself and I got better about not working weekends and so everything else I can manage but the travelling (Agnes, children under five years).

However, Kim explained that it is not just the travel, but the whole nature of the job and the expectations of senior partners and clients, even when working locally to one's home base, described as "an in town job".
Such a working arrangement added the additional pressure of expectations from the immediate family of a greater presence in the home, because of the assumption of more normal working hours:

The client comes first and that means whatever you need, whatever hours are required, it can be just as hard to be on an in town job, so even when there is no travel, but you've got to be at the client at 7 a.m. for a meeting or there's a crisis and you're there until 11. Sometimes I found being in town can be just as demanding because your family thinks you're at home, so why aren't you coming home for dinner, and eating with them? You're still spending all your time away (Kim, children under 12 years).

About half of these women had children under the age of 18, yet the issues of motherhood were not about coordinating childcare arrangements or dividing up parenting with a partner, but about wanting to spend time with children at varying stages in their lives. Several of those without children, or with grown up children, talked of wanting the opportunity to have broader experiences in their lives, mentioning wider family and friends. Unfortunately, many tended to see the issue as very clearly defined in terms of either their role within the firm or a life outside the firm. The women in this sample were not necessarily off-ramping, but seeking to work in a way which gave them control over their lives, especially with respect to minimizing the amount of travel that took them away from home on a regular basis. Although there was evidence of the pull of motherhood mentioned by Belkin (2003), these women did not express dissatisfaction with organizational life per se. On the contrary, they stressed the positive elements of much of their careers, expressing loyalty and commitment to the firm, which many still held in high regard. As is common with discourses (Watson, 1995), two dominant themes co-existed in the accounts of these successful women leavers, i.e.
loyalty and choice regarding work-life integration. Yet the hegemonic positioning of the choice discourse meant that other alternatives were suppressed. The women showed loyalty and affection towards their firm, but this was not reciprocated in the form of some temporary control over their personal lives or some flexibility, so that they could manage their non-work responsibilities less stressfully. The flexible working offerings for women lower down the hierarchy were much celebrated by the firm, but there was no flexibility in the partner tier. These women perceived that the firm was expecting all or nothing, and did not seem to recognize the paradoxical nature of their loyalty to the firm and their decision to leave.

Although combinations of factors contributed to the women's decisions to leave the firm, the focus of this paper has been the discourse of choice, with particular emphasis on lifestyle. There was little evidence either of balance in the sense of a division of time between the work and non-work domains, or of integration, the blending of work and personal life. The discourse presented was of a forced and extreme choice. However, it is clear that the women partners had certainly gained satisfaction and fulfillment from their jobs over a long period of time. The issue was not about "wanting to care" and "having to work", thereby supporting Eikhof et al.'s (2007) point about the inaccuracy of such a claim. The financial package available at the point of departure, and high levels of remuneration often over a period of many years, enabled them to leave, given that they felt unable to continue to work within the firm because of the extreme demands that the partner role entailed. The "personal control of time" and "workplace flexibility" (Lewis et al., 2007) were both absent, highlighting "work-life imbalance". These women clearly did not fit the "ideal worker" norm.
Many were single or in a relationship with another full-time worker; hence they lacked the "other" adult based in the home that the ideal worker needs in order to maintain their level of dedication to the job role. These women also had an increasing desire to attend to demands from outside the work domain. This study adds to what is known about the importance of family life for women, with many of those who were not mothers still experiencing a tension between the demands of day-to-day organizational life at partnership level and the need to give attention to extended family and friends. Importantly, this study uses data from women who left the same organization within a relatively brief period of time, providing unique data and valuable insights into what is known about women in senior positions who choose to leave and yet display tremendous loyalty and affection towards the firm. These findings therefore strengthen the choice discourse, which serves to neutralize and suppress feelings of discontent over the constraints imposed by the firm's cultural expectations of those in the role of partner. This effectively removes from the organization the responsibility for facilitating the work-life integration of partners.

The study has several limitations. First, we talked only to women partners who had left the firm, and who had left at a particular time of economic downturn when a significant financial package was available to them. Speaking to women partners who had maintained their partner role in the firm during the same period would have provided a useful comparison.
Similarly, the inclusion of men in the sample would allow useful gender comparisons, and this provides an opportunity for future research. Understanding the factors which led these valued women partners to leave the organization, notwithstanding the initiatives and support which had facilitated their achievement of such positions, will allow the organization to review the demands placed on partners. The extreme expectations, particularly the excessive travel and time away from the family, should be reviewed in the light of these findings if the firm wishes to stem the flow of women partners.
- The decision to leave is the culmination of many interacting factors at a time when a financial incentive for resignation is available. Findings presented here focus on discourses of loyalty to and affection for the company, and on work-life integration.
[SECTION: Value] Concern over the relatively high numbers of women managers leaving organizations has been growing. Many organizations have developed initiatives with the specific aim of supporting women's career progression to the higher echelons of corporate life, such as mentoring programmes and women's networks. The retention of valued talent is recognized as a priority, and organizations strive to brand themselves as an employer of choice. Such strategies have had some success, and higher proportions of women are found in more senior positions, arguably having broken through the glass ceiling. Despite this progress, women continue to leave organizations in higher proportions than their male counterparts at senior levels, and there is little in the literature examining this phenomenon. This paper attempts to fill this gap. The aim of this paper is to explore discursively how women partners represent and describe their decisions to leave a professional services firm. This context is important in that the nature and structure of these services (usually project-based) place demands on senior staff, particularly partners, in return for high levels of extrinsic and some intrinsic rewards. The demands include not only high levels of professionalism but also extreme commitment, such as the ability to travel nationally and internationally on demand, to work around the clock at whichever premises whenever necessary, and to provide speedy and efficient solutions to the clients' problems. Kumra and Vinnicombe (2008) provide an account of the nature of such firms. This study begins by considering the very limited literature on women leaving organizations, followed by an examination of the discourse of choice within the work-life balance literature. After outlining the methodology for our study of 31 women partners who have left a global management consultancy firm, we present empirical evidence from the women themselves.
We next discuss the implications of such evidence and make suggestions for further research in this important area. Despite the level of concern expressed in many organizations, and Belkin's (2003) "opt-out revolution" article which discussed the push of job dissatisfaction and the pull of motherhood, a search revealed very little extant academic literature on senior women who have left organizations. Of course, the decision to leave an organization does not necessarily mean that women wish to permanently turn their backs on corporate life. Many women take career breaks at some stage in their careers, and Hewlett and Luce (2005) point out the ease with which women can "off-ramp" and the difficulties they face when planning to return to organizational life. The kaleidoscope career model (Mainiero and Sullivan, 2005) focuses on the "fit" of work and family, and offers an explanation for the large numbers of middle managers who leave corporate life. Women talked about opportunities and possibilities, as well as the blocks experienced in creating their own paths, which provided challenge and allowed specific needs to be met. Mallon and Cohen (2001) studied women's transitions from careers within organizations into self-employment, seeking a further understanding of how women themselves experience and make sense of changing careers. Dissatisfaction with organizational life, changes in organizations which contradicted personal values and principles, and an imbalance of personal and professional life were key factors in the decision to move to self-employment. Looking at more senior women leaving organizations, Marshall (1995, 2000) identified no single pattern, but emphasized the complex nature of such decisions, each of which is individual and multi-faceted. Factors identified by Marshall regarding women's decisions to leave employment included leaving roles that had changed and become untenable, blocked promotion prospects and wanting a more balanced life.
One particular theme was their experience of difficulties with inappropriate and often hostile interpersonal behaviour, often for the first time in their careers. It was not that they could not cope, but that they did not respect or want to work in such unproductive environments. Several such factors contributed to individual decisions in accumulating and complex ways. In particular, the choice relating to work-life integration emerged from these studies as a key explanatory factor.
Work-life balance or work-life integration?
The language used to describe the integration of work and non-work domains reflects its socially constructed evolution. Research emphasis has moved from "conflict" through "seeking balance" to "integration" (Burke, 2004). Similarly, there has been a shift away from "work-family" or "family friendly" when referring to supportive organizational policies to "work-life", in order to remove the emphasis on parents, especially mothers. "Work-life" has also received criticism with its suggestion that work and life are somehow separate (Eikhof et al., 2007), rather than work being a part of life. The term "work/personal life integration" was offered by Rapoport et al. (2002), who seek to acknowledge the importance of individual priorities and choices with the use of the word "integration", rather than "balance". They suggest that balance indicates an equal split of time between the two domains, which is an unrealistic state of affairs, whereas integration focuses on a sense of satisfaction in both the work and non-work domains. But "integration" also suggests the blending together of work and personal life; individuals do not always want to manage the two areas by merging them, and some may prefer to keep them separate. Thus, authors have begun to refer to the harmonizing of work and the rest of life (Lewis and Cooper, 2005; Gambles et al., 2006) to indicate their interaction in positive ways.
The demands from the work and non-work domains are not absolute and cannot necessarily be easily measured. The demands vary, as do individual responses to such demands. It could be argued that people self-impose expectations with regard to the performance of both work responsibilities and household and other non-work obligations (Quick et al., 2004). Managing such expectations can enable an individual to cope with conflicting priorities, and Quick et al. place more emphasis on the importance of energy in a given situation than on the amount of time spent there. So the argument moves away from a sense of balance or equality of the different domains, and acknowledges the relevance of timely emotional engagement within each domain and the ability to focus on situational requirements. But this still suggests a large element of choice, whereas Caproni (2004) argues that the language used in the work-life balance debate adds to the pressures experienced by individuals who are seeking to achieve this elusive state of satisfaction with both work and non-work domains. She describes the conceptualization of work-life balance as individualistic and achievement oriented: [...] setting us up to strive for one more thing that we cannot achieve and, in doing so, keeping us too focused, busy and tired to explore the consequences of our thinking and actions (Caproni, 2004, p. 212).
The family context
The continual working towards balance can also imply a greater choice over life decisions than often exists. For instance, care may have to be provided for children or for elderly parents, but the demands for such care are often unpredictable, due to combinations of circumstances, thus reducing the element of choice and control (Caproni, 2004).
Additionally, high-quality regular childcare for older children (6+), specifically after school care, is more difficult to obtain than the more routine requirements sought for pre-school children (Moore et al., 2007). A study which compared single women without children with married women with and without children found that all three groups experienced similar levels of difficulty in balancing work and non-work (Hamilton et al., 2006). This under-researched group of never-married women without children experienced greater pressure to take on additional tasks late in the evening or at weekends, precisely because they were viewed as having fewer family obligations than others (Anderson et al., 1994, cited in Hamilton et al., 2006). Similarly, work was described as "all-encompassing", leaving few resources for seeking activities outside the work place (2006, p. 408). This study provides a different, yet important, perspective on the discussion of the choice experienced by these individuals in the decisions they make regarding work-life integration.
Negative or positive?
However, not everyone agrees that balance has been about seeking satisfaction in both domains. A different interpretation suggests that one of the flawed assumptions in the work-life balance debate is that work has been portrayed as negative and problematic, with individuals wanting to reduce the time spent at work as a result (Eikhof et al., 2007). These authors suggest that work-life balance programmes ignore the possibility that people may gain satisfaction and fulfillment from work, and state that a common, and inaccurate, premise for flexible working arrangements is that "work-life balance provisions are introduced to help employees reconcile what they want to do (care) with what they have to do (work)" (Eikhof et al., 2007, p. 327, brackets in the original). They argue that employees may want to work and that the work-life balance debate tends to ignore this as a possibility.
However, others talk about positive spillover (Kirchmeyer, 1993) and the enrichment which takes place between work and family (Greenhaus and Powell, 2006).
Control and workplace flexibility discourses
Lewis et al. (2007, p. 361) argue that there are in fact two overlapping work-life balance discourses, "the personal control of time" and "workplace flexibility", with both including a dimension of choice. The former indicates that individuals are able to make their own decisions about the priorities in their lives around work, career, family and other aspects of life, paying little attention to the gendered assumptions about commitment and competence which underpin the concept of the ideal worker (Rapoport et al., 2002). Flexibility discourses emphasise the choice available to employees regarding where, when and how much to work, and again may not challenge the gendered constraints to the adoption of such flexible arrangements. These two discourses are evident in a study by Drew and Murtagh (2005). First, senior managers felt unable to control their work time in terms of demonstrating what might be considered "normal hours", because of the long hours culture. Similarly, the flexibility discourse was highlighted, with flexitime and home working arrangements seen as incompatible with senior management posts, particularly by the male managers.
Genuine choice or pressure of the "ideal worker"
Additionally, women may view working as a financial necessity rather than a real choice (Houston and Marks, 2003), partly because of the huge effort needed to overcome the psychological and practical barriers in order to work.
This does not, of course, preclude the experiencing of some satisfaction as a result of working. Managers and professionals are particularly susceptible to the "ideal worker" norm and its view of domesticity, and to the subsequent doubt over their commitment to their employer and their career if they stray from that ideal by adopting a pattern of work which involves less face time. The ideal worker has historically been seen as someone who can give their time unstintingly and willingly to their employing organization, and have no conflicting demands on their time (Rodgers and Rodgers, 1989; Pitt-Catsouphes et al., 2006). Alongside this is the assumption of the existence of another adult based full-time in the home to attend to domestic and caring responsibilities. In the twenty-first century, many families do not have such a structure of full-time breadwinner and full-time homemaker (Marks, 2006), and households may consist of different mixes of number of adults, age and number of children or no children, and presence or absence of elderly dependents (Ransome, 2007). Other literature focuses on the issue of choice in women's careers. For instance, Lyonette and Crompton (2008) talk about the choices women accountants make in their careers, finding some indication of women choosing to stick at the level below partner because of the increased demands and pressures which would result from such a promotion, including the adverse impact on family life. They point out that choices are made from a range of "realistic" possibilities, within the bounds of constraining factors, and for women, domestic responsibilities are examples of such constraints. Hence, the language in the work-life balance arena shapes the way in which the constraints are framed, choices made and outcomes achieved.
The personal control of time and the ability to work flexibly are two discourses that contain both constraints and choices, but whilst personal control of time may be easier for some senior staff, the ability to work flexibly may not always be available for those at the top of some types of organization, especially client-based services. For women at partner level in professional service firms at the peak of their careers, in the mature life stage, probably having already acquired significant financial resources, one choice may be to exit or "off-ramp", rather than continue their professional career within their firm. This study examines what happened in the case of those women who made the choice to leave. In this study, the discursive approach is viewed as an examination of how language (in the form of spoken interaction) is used to construct and change the social world, while challenging the accepted ways of looking at the world (Dick, 2004). We use Watson's (1995, p. 816) description of discourse as a: [...] connected set of statements, concepts, terms and expressions, which constitutes a way of talking or writing about a particular issue, thus framing the way people understand and act with respect to that issue. So it is a form of sense making, both on the part of the interviewees, who here make decisions about the information to include and to omit in their accounts of their decision to leave the firm (Potter and Wetherell, 1987), and also on the part of us, the interviewers and researchers. We examine the varied discursive constructions used by the participants to achieve their particular purposes, whether this is their portrayal of themselves as valued partners or of making sense of what is happening within the firm, particularly with regard to the number of women partners leaving. As Watson points out, these discursive constructions are similar to interpretative repertoires (Gilbert and Mulkay, 1984, cited in Edley, 2001, p.
197) which are "quite separate ways of talking about or constructing" a given topic. Similarly, Clarke et al. (2009) refer to "antagonistic discourses", where individuals present self-narratives which incorporate contrasting positions. Following an approach from an international management consultancy firm, we sought interviews with women partners who had left the firm over the previous three years. Access was provided by the Head of Diversity, who contacted the 47 women leavers in that period to see if they would be willing to be interviewed. The project arose as a result of the women partners leaving in higher percentages than male partners, causing concern within the firm, not least because of a range of initiatives which had been introduced over the preceding years to support women's career progression. The 36 women who responded were contacted by the project manager to set up semi-structured interviews, and 31 female partners eventually participated, a response rate of 66 per cent. Only one interview was undertaken face-to-face in the UK, as most of the women were resident overseas. We therefore continued using telephone interviews over a period of about a month, after piloting the method to ensure that an open and frank discussion would be possible. The interview schedule was e-mailed to the respondents the day before the interview. Interviews lasted about 45 minutes and were recorded with the permission of the interviewees. Anonymity was guaranteed, and raw data were not given to the firm, although a report was delivered including recommendations for good practice. The first step in the analysis involved the reading and re-reading of the transcripts by all three researchers, and then a discussion to identify discursive patterns within the text.
The discursive approach to analysis requires a familiarity with the data (Kelan, 2008), and this continued through the use of NVivo, organizing and categorizing the data, focusing on the similarities and differences in the ways participants talked about choice and the decisions made during their employment with the firm and their decisions to leave. Further discussion occurred, leading to a deeper understanding of the concepts invoked by the texts. In reporting the findings, some demographic detail will be supplied to provide context for the quotations used, but the aim of this paper is to understand how the women made sense of the choices they made, rather than to seek any connection between choices made and factors such as gender, parental status, etc.
The sample
The women ranged in age from 37 to 60, and two-thirds were married or with a partner. In total, 16 women had children under the age of 18, and the other 15 either had no children or their children were grown up. Five women were currently at home full-time with children of varying ages and a further four were retired. A total of 11 women were in paid employment undertaking a variety of roles with differing levels of responsibility. Three women had their own businesses, and nine were involved with voluntary work, sometimes alongside paid work. Length of employment with the firm ranged from four to 29 years, and the women had left up to four years previously. Most recently, 21 of the women had worked in North America and the remaining ten had worked in Africa, Australia, Asia and Europe. The financial situation was a major factor for most of the women. About 18 of the 31 women had left three years prior to the interviews, at a time of general economic downturn, and many of these talked about the financial package which was made available to them.
For some, this allowed them to maintain their desired lifestyle without having to earn money in the future. The following section identifies two contradictory discursive constructions: first, loyalty to the firm as a wonderful place to work, and second, a discourse of the lack of choice and control over lifestyle with regard to meeting priorities from both the work and non-work domains. It is important to note that these discourses were not separate but were enmeshed within the interviews. We will discuss these discourses, highlighting differing elements of the discourse of choice and lifestyle as we contrast it with the loyalty discourse.
Loyalty
This discourse emerged through representations of the women's positive experiences during their time with the firm and their statements of the high regard in which they held the firm. The previous limited research has highlighted the negative factors which combined to push women towards the decision to leave organizations. In this study, we identify resistance to criticizing the firm through the loyalty discourse. Sherri's talk illustrates the emphasis on the firm as a good place to work, despite reservations which may have occurred over time. She evidently felt that it was important to provide a context for her honesty about some of the more negative experiences which contributed towards her decision to leave: I would say that in the grand scheme of things [ABC] is probably the best place to work. I really don't want comments taken out of context because I still stayed with [ABC] for 20 years. I really did check this a number of times in my career when I was low and thinking of leaving, and I looked at the options and the other organizations I could work with, etc. and I still believe that [ABC] is a great place to work (Sherri, no children). Sherri refers to the ongoing choice she has made over a number of years to remain with the firm.
The loyalty discourse included a strong need for the interviewees to present their affection for the firm, indicating a sense of respect for the firm itself as an organization and also for the people who work there. Suzan's view represents this high regard for previous colleagues: I have a high regard for the company and the people and everything it stands for (Suzan, children 12-18 years). Part of the loyalty was evidenced through a strong sense of identity with the firm, and gratitude for the opportunities which had been made available and fully utilised: The firm was a very big part of me for a very long time (Joya, no children). I love [ABC], I really love [ABC] and I had a great career and I'm very thankful and grateful for what I learned and the people with whom I worked and what I achieved, it was a great, great career. I loved it (Megan, children 5-12 years). Despite the eventual decision to leave, these women were keen to stress their successful careers in a highly regarded firm. Joya's comment in particular offers some explanation for the emphasis on loyalty. Her social identity was enmeshed with the reputation of the firm, and to acknowledge and allow criticism of the firm would therefore involve self-criticism, with which these successful women would not be comfortable.
Choice and lifestyle
Issues around choice and lifestyle emerged in three ways: choice in the desire for greater integration of work and non-work, choice within a context of constraints, and choice and the demanding role of partner. Our intention is to demonstrate the contradictory nature of the loyalty discourse when set against the significant sets of choice discourses on which the participants drew to explain their decision to leave the firm.
Choice in desire for greater integration
The choice discourse manifests through the representations of individual priorities when the women described the position of work within their whole lives.
There was no sense of seeking an equal division of time between the work and the non-work domains, and similarly the women did not talk of blending or integrating their work and personal lives. Aileen expressed her loyalty and affection for the firm: "ABC is a wonderful company and I think very highly of it", but she also explained how she had recently gone through a divorce and wanted to reduce her time away from home and her children at such a distressing time for them. However, she perceived herself to have no option other than to continue to work in the same way: I felt [the senior partners] were, although nice, they were very unsympathetic to the situation and I just don't feel that there were any options there [...] It was kind of like you understand what the expectations are, you've always been a good performer, you know what it takes and either accept it or don't [...] There was no discussion about part time, it was just accept it; you're going to have to travel if you want to be there. There was just no sympathy at all (Aileen, children 12-18 years). So the discourse of choice becomes polarized, and these women demonstrated their awareness of the extreme options they had to consider. Kim expressed her appreciation of the initiatives within the firm which had been introduced to support women's career progression: I know everything the firm's done and we've done so much to help retain senior women [...]
We've done so many great things, so I want to give credit to all of that but every once in a while I just came to the realisation that you want to have this great career that requires a tremendous amount of time and commitment and you just can't balance everything sometimes (Kim, children under 12 years). There was an increasing recognition by some of the women of the demands which family life placed on them, and Kim went on to explain the shift in her desires away from doing whatever was required by the firm: The year prior to when I had my third child, I tried a part time schedule and that helped but I think once I was home with all three kids, getting involved with their lives more, over the year and a half that I was home on this leave of absence, it just became clearer and clearer to me that I wanted to be at home with them. The balance I had been trying to achieve was so difficult; I had to make a choice, so that's why I finally chose to resign this past January (Kim, children under 12 years). There appeared to be a gradual acknowledgement that any sort of balance between the work and non-work domains was not possible. Alexis suggested that the only practical way to achieve such a balance was sequentially, prioritizing work or family at different times: I came to the realization at one point because I was in that group of women who felt you could have it all and I ultimately came to the realization yes you can have it all but not all at once, you just have to take different stages of your life and I just had to come off the consulting career path for five years at important stages of my parenting (Alexis, grown up children). Yet, she too presented an interesting contradictory perspective in her statement: "I think very highly of ABC and I found ABC to be very, very supportive of my time and participation in the firm".
So she had to step off the career path to attend to parenting priorities, and yet still spoke of the support she received, again illustrating the contradictory elements of this discourse. The difficulty in achieving any sort of compromise between the demands of work and home was echoed by Libby, who used particularly emotive language as she described the options which she had clearly rejected, but which would have enabled her to maintain her previous high levels of performance: I wanted to come back to work, I loved my job, but I found no matter how much I tried I just couldn't be the same top performer that I was before I had my baby. Sure, I could have made the decision to outsource my family and get a full time live-in nanny and continue to work the same hours, but I didn't want to do that. I didn't want to sell my family off to somebody else (Libby, children under five years). Libby provides an interesting contrast to many of the other women, who engage in both the discourses of loyalty and of choice. Although she expresses enthusiasm, it is for her job rather than for the firm. Instead of loyalty and affection, there was a sense of resentment and anger at the lack of support and understanding, which effectively removed the option of staying with the firm.
It would have been at too great a cost. Similarly, Cassie explained the discrepancy between the wants and needs of junior colleagues and the lifestyle which they observed of the existing partners: I've seen too many associate partners leave because they've said to me I don't want that lifestyle, I need to have some time to have children and deal with elder care and whatever, so I think the firm really needs to address that because they are losing a lot of talent (Cassie, children under five years). So there are some fundamental life choices within this discourse of choice, requiring women to consider their future and the investment of time needed to achieve their lifestyle goals: Do I want to take this job or do I want to go and do the things I want to do, i.e. get married, have a family and all that? Now I'm not saying you can only have one or the other, but I was thinking that being involved in work at [ABC], I have forgotten about a lot of things in life, you know what I'm saying? (Aisah, children under five years).
Choice in a context of constraints
The choice discourse is presented paradoxically, as a spectrum which has "all work" at one end and "all family" at the other. Despite the use of various strategies over time, the women were experiencing a feeling of being pushed into decisions they found problematic. However, there was also uncertainty about the way forward. Laura questioned the boundaries she would be looking to put in place if she were to return to a role similar to her previous one with the firm: I have thought if a big project came up and [ABC] asked for me to come back, what would be the parameters that would be ideal with respect to that balance because it certainly does change at every point in time, every life change (Laura, children 12-18 years). Freeing themselves up to spend more time with their children was a key part of the discourse for the mothers in the sample.
Women with children of differing ages had concluded that it was the right time to work and travel less, in order to spend more time with family. For instance, Kerry, who had "been with [ABC] for 20 years and had a wonderful career with [ABC] and had fantastic opportunities", went on to say: I feel a little bit like I've missed out on the first ten to twelve years of their lives and I wanted to be more involved in their teenage years than I was in their toddler years and be available to them [...] Unfortunately in my professional role in [ABC] it was not possible to have that time and flexibility to get involved in their lives to the extent that I wanted to (Kerry, children under 12 years). Similarly, Ellen explained: I love [ABC]. [ABC] gave me the life I have right now [...] But being around for my two children is my big role right now [...] I want to be there for this part of their life, I don't want to miss my kids growing up (Ellen, children under 12 years). So Kerry and Ellen availed themselves of the only option they could see within ABC, that of leaving the firm they regard so highly, as there was little support to enable them to meet their other priorities.
Choice and the demanding role of partner
In a different way, those without children talked of the importance of spouses/partners, families and friends in their lives, and of the challenge of managing the competing demands on their time.
The demanding nature of the partner role makes it difficult to allocate significant amounts of time to others without the constant distractions from the office and/or the client, as Sherri described: My parents are aging, they're not well, and I all of a sudden decided I absolutely had to spend more time with them and that's not just on weekends, it's going and spending large blocks of quality time with them when I can just focus on them and not be on the phone or on the computer and everything else back to the work site (Sherri, no children). The issue of travel featured strongly in the choice discourse, as it was described as an inherent part of the role of partner by these women and was a major constraining factor in the choices they experienced. The partner role requires a great deal of travelling, taking all of them away from their families and friends on a continual and relentless basis. In the quotation below, Agnes emphasizes her developing competence at creating non-work time at weekends, but her inability to address the demands placed on her by the need to travel: I still have an awful lot of respect for [ABC], but the travelling - everything else I could manage because the work hours I could manage. I tended to work quite a bit but I controlled it myself and I got better about not working weekends and so everything else I can manage but the travelling (Agnes, children under five years). However, Kim explained that it is not just the travel, but the whole nature of the job and the expectations of senior partners and clients, even when working locally to one's home base, described as "an in town job".
Such a working arrangement added the additional pressure of expectations from the immediate family of a greater presence in the home, because of the assumption of more normal working hours:

The client comes first and that means whatever you need, whatever hours are required, it can be just as hard to be on an in town job, so even when there is no travel, but you've got to be at the client at 7 a.m. for a meeting or there's a crisis and you're there until 11. Sometimes I found being in town can be just as demanding because your family thinks you're at home, so why aren't you coming home for dinner, and eating with them? You're still spending all your time away (Kim, children under 12 years).

About half of these women had children under the age of 18, yet the issues of motherhood were not about coordinating childcare arrangements or dividing up parenting with a partner, but about wanting to spend time with children at varying stages in their lives. Several of those without children, or with grown-up children, talked of wanting the opportunity to have broader experiences in their lives, mentioning wider family and friends. Unfortunately, many tended to see the issue as very clearly defined in terms of either their role within the firm or a life outside the firm.

The women in this sample were not necessarily off-ramping but seeking to work in a way that gave them control over their lives, especially with respect to minimizing the travel that took them away from home on a regular basis. Although there was evidence of the pull of motherhood mentioned by Belkin (2003), these women did not express dissatisfaction with organizational life per se. On the contrary, they stressed positive elements of much of their careers, expressing loyalty and commitment to the firm, which many still held in high regard.

As is common with discourses (Watson, 1995), two dominant themes co-existed in the accounts of these successful women leavers, i.e.
loyalty and choice regarding work-life integration. Yet the hegemonic positioning of the choice discourse meant that other alternatives were suppressed. Women showed loyalty and affection towards their firm, but this was not reciprocated in the form of some temporary control over their personal lives or some flexibility that would let them manage their non-work responsibilities less stressfully. The flexible working offerings for women lower down the hierarchy were much celebrated by the firm, but there was no flexibility at the partner tier. These women perceived that the firm was expecting all or nothing, and did not seem to recognize the paradoxical nature of their loyalty to the firm and their decision to leave.

Although combinations of factors contributed to the women's decisions to leave the firm, the focus of this paper has been the discourse of choice, with particular emphasis on lifestyle. There was little evidence either of balance, in the sense of a division of time between the work and non-work domains, or of integration, the blending of work and personal life. The discourse presented was of a forced and extreme choice.

However, it is clear that the women partners had gained satisfaction and fulfillment from their jobs over a long period of time. The issue was not one of "wanting to care" versus "having to work", supporting Eikhof et al.'s (2007) argument that such a claim is inaccurate. The financial package available at the point of departure, together with high levels of remuneration often sustained over many years, enabled them to leave once they felt unable to continue working within the firm because of the extreme demands of the partner role.

Both "personal control of time" and "workplace flexibility" (Lewis et al., 2007) were absent, highlighting "work-life imbalance". These women clearly did not fit the "ideal worker" norm.
Many were single or in a relationship with another full-time worker, and so lacked the "other" adult based in the home that the ideal worker needs in order to maintain their level of dedication to the job role. These women also had an increasing desire to attend to demands from outside the work domain. This study adds to what is known about the importance of family life for women: many of those who were not mothers still experienced a tension between the demands of day-to-day organizational life at partnership level and the need to give attention to extended family and friends.

Importantly, this study uses data from women who left the same organization within a relatively brief period of time, providing unique data and valuable insights into women in senior positions who choose to leave and yet display tremendous loyalty and affection to the firm. These findings therefore strengthen the choice discourse, which serves to neutralize and suppress feelings of discontent over the constraints imposed by the firm's cultural expectations of those in the role of partner. This effectively removes from the organization the responsibility to facilitate the work-life integration of partners.

The study has several limitations. First, we only talked to women partners who had left the firm, and who left at a particular time of economic downturn when a significant financial package was available to them. Speaking to women partners who had maintained their partner role in the firm during the same period would have provided a useful comparison.
Similarly, the inclusion of men in the sample would allow useful gender comparisons, and this provides an opportunity for future research. Understanding the factors that led these valued women partners to leave the organization, notwithstanding the initiatives and support which had facilitated their achievement of such positions, will allow the organization to review the demands placed on partners. The extreme expectations, particularly the excessive travel and time away from the family, should be reviewed in the light of these findings if the firm wishes to stem the flow of women partners.