Development of Personal Selling Standard and Improvement of MSME X's Wedding Promotion Material

The wedding industry is one industry that continues to grow. The increasing number of marriages in Indonesia contributes to the demand for wedding services, and the owners of MSME X seek to capture this opportunity by providing services and various options for wedding needs. MSME X currently operates in Jakarta and Bekasi. This study reports on a business coaching process covering MSME X's personal selling and the improvement of its promotional materials, in the form of brochures, to attract consumers. The challenge in selling wedding services is that consumers cannot feel or touch the service beforehand. Preliminary analysis shows that salespersons use brochures as sales aids and that sales competency varies among marketing employees, so personal selling activities are not optimal, which affects the company's sales targets. Primary data were collected through in-depth interviews and observations, while secondary data were obtained from the owners, the internet, and management books. The collected data were analyzed using qualitative methods. This research focuses on the formulation of a personal selling standard and the improvement of promotional materials as tools for salespersons selling wedding services. The results of this business coaching process describe the steps that can be used to sell wedding services and the effect of improved promotional materials in helping to resolve the limitations in MSME X's marketing.

Keywords―Business Coaching, MSME, Personal Selling Standard, Promotional Materials, Wedding Industry.

I. INTRODUCTION

Micro, Small and Medium Enterprises (MSMEs) form the largest group of businesses and play an important role in the Indonesian economy. MSMEs were not much affected by the monetary crisis that occurred in Indonesia; after the crisis, the number of micro and small enterprises actually increased [1]. Research by Obi et al. (2018) [2] shows that small and medium enterprises significantly encourage economic growth, especially in developing countries. Given the important role of MSMEs in developing countries' economic growth, the government's role in strengthening MSMEs and making them more successful is very important. Ndiaye et al. (2018) [3] reported the importance of MSMEs in shaping the economic landscape of developing countries; this can be the basis for government programs that create new MSMEs and motivate existing ones to maintain performance and improve growth. MSME X operates four ballrooms in Jakarta and Bekasi that offer wedding services. The increasing number of marriages is an opportunity for the company's business continuity. Data from the Badan Pusat Statistik (2018) [4] show an increase in national marriage rates from 2016 to 2017: the number of marriages in DKI Jakarta province in 2017 reached 56,355, and marriages in West Java reached 400,311. The use of MSME X's wedding services has grown in recent years. Even so, a company engaged in property rentals such as MSME X has a fairly low occupancy rate: the annual occupancy rate of MSME X's wedding services has not reached 50% and does not meet the owners' target. Based on interviews with Rika as the owner, MSME X's marketing activities include sales promotions and personal selling carried out by marketing employees.
To attract attention and offer value to customers, MSME X uses advertising media, one of which is brochures. Brochures are used to attract consumers' attention and convince consumers to make purchasing decisions [5]. However, MSME X's offer brochure does not encourage consumers to purchase the services offered by marketing employees. The wedding services offered by MSME X fall into the category of people-processing services. According to Lovelock & Wirtz (2016) [6], to receive this type of service, customers must physically enter the service system. According to Antczak & Sypniewska (2017) [7], personal selling has a special role in the service provider industry: the salesperson must build relationships based on trust, because the customer cannot feel or touch the service beforehand. MSME X assigns marketing employees to interact with prospective customers in order to make presentations, answer questions, and obtain orders; these activities are carried out through marketing offices, by telephone, and via mobile messaging applications. MSME X's constraints include promotional material that is not attractive to consumers and marketing employees with limited skills in the sales process; this is because the company has neither a personal selling standard to guide marketing employees in serving, communicating with, and selling to consumers, nor promotional material that attracts consumers to make purchases. Therefore, this research, through a business coaching program, aims to develop an operational standard for conducting personal selling and to improve the appeal of the wedding service promotion materials, so as to help resolve the limited sales skills of marketing employees and the weak appeal of MSME X's promotion material.

II. LITERATURE REVIEW

A. Personal Selling

The role of communication in the service business goes beyond media advertising, social media, public relations and sales forces; it must be seen more broadly to include the location and atmosphere of service facilities, company design features such as the consistent use of color and graphic elements, employee appearance and behavior, and website design. All of these elements contribute to the impression in the customer's mind and either strengthen or conflict with the specific content of the communication message conveyed by the company. Delivering messages to consumers requires determining the content, structure, and style of the messages to be communicated, how they are presented, and the media best suited to reach the intended consumers [6]. Personal selling involves direct contact between buyers and sellers, either face to face or through some form of communication such as telephone and internet. Companies use the internet to improve customer relations; internet use is designed to be a complementary tool in personal selling to increase sales [5]. Kotler et al. (2016) [8] describe the main steps for implementing personal selling effectively in selling products as follows: prospecting and qualifying, pre-approach, presentation and demonstration, handling customer objections, closing, and follow-up to maintain relationships. According to Antczak & Sypniewska (2017) [7], the knowledge, advice and professionalism of sales staff matter more in the service industry than in industries dealing with goods.
There are several differences in the stages of effective personal selling practice in the service provider industry, including:

1. Attracting customers: at this stage the sales staff explain the benefits of the company's services and the service process that the customer will receive when using the services offered by the company.

2. Conversation/meeting: at this stage the salesperson needs to learn what the customer needs, who makes the buying decision, and how the customer's purchase process works. The approach to prospective customers can be made through a direct visit, telephone call, email or letter.

3. Product presentation: at this stage the salesperson needs to present the product to prospective customers by showing the reasons and ways in which it can help resolve customer problems; this can be done using the FABV approach, namely features, advantages, benefits and value.

4. Convincing customers: at this stage the salesperson needs to uncover the objections that customers feel. The salesperson should use a positive approach, asking customers to clarify perceived objections through questions such that customers answer their own objections or turn them into reasons to buy.

5. Transactions: at this stage the salesperson directs the customer to place an order and closes the sale. The salesperson should offer a certain incentive to close the sale, such as additional services, additional quantities, or other prizes.

6. Maintaining relationships: this step ensures customer satisfaction and clarifies any problems the customer might have. After the transaction, the salesperson follows up to ensure that the chosen service matches the needs and desires of the customer, and maintains good relations with the customer.

B. Promotional Material

According to Lovelock & Wirtz (2016) [6], corporate design elements such as the consistent use of colors and graphic elements and the appearance and design of promotional materials contribute to the impression in the customer's mind and either strengthen or conflict with the content of the communication message conveyed by the company. Delivering messages to consumers requires determining the content, structure, and style of the messages to be communicated, how they are presented, and the media best suited to reach the intended consumers. Advertising and promotional material displayed with consistent visual elements can create fluency and message preference in the minds of prospective consumers; visual themes that are in line with consumer expectations can also have a positive effect, making consumers like the appearance of advertisements and promotional materials when evaluating brands [9]. According to Belch & Belch (2018) [5], the marketing division needs promotional materials that salespersons can use to attract customers, one of which is brochures. Using brochures is part of the IMC program that is under the control of marketers. Brochures can be used to inform buyers and convince them to make their decisions. A brochure must meet design and communication criteria to be attractive to potential consumers, so that it communicates the offering effectively [10]. Developing print advertisements such as brochures requires attention to the basic components of print advertising, such as body copy, headlines, visual style and execution, and layout. The headline consists of the words that will be read first or positioned to attract the most attention.
The headline serves to entice readers to read the message the company wants to convey, and the visual part of an advertisement is clearly an important component. Visual elements are often the dominant part of print advertisements and play an important role in determining their effectiveness [5]. The visual part of an advertisement must attract attention, communicate ideas or images, and work synergistically with the headline and body copy to produce an effective message. The layout is the physical arrangement of the various parts of the advertisement, including the main headline, subheadings, body copy, illustrations, and any identifying marks. The layout shows where each part of the ad will be placed and provides guidance to the people working on the ad [5].

III. METHODOLOGY

This business coaching study is a case study using qualitative methods. There were 10 business coaching sessions at MSME X. The first session began in November 2018 in one of the ballrooms operated by MSME X, located in Grand Galaxy, Bekasi. The second to fifth sessions were conducted from December 2018 to February 2019 to analyze the company's business conditions and to identify problems. The seventh to tenth sessions were conducted from March to May to implement solutions based on the identified problems.

A. Method of Collecting Data

This study uses qualitative research methods, namely an approach that uses data in the form of written or spoken statements, phenomena, behaviors, events, knowledge and objects of study that can be observed by researchers. The business coaching implementation uses two types of data, namely primary and secondary data. Primary data collection used interviews to obtain complete data from informants, namely the business owners and MSME X employees, regarding the overall picture of the current business [11]. Interviews were conducted by asking direct, open questions to informants about problems related to the research. An interview was conducted with Rika, the owner and director, about the business conditions of MSME X as a whole. In addition, Risma was interviewed about marketing activities and Vera about the administration of MSME X. The results of these interviews form part of the process of identifying company problems. In addition, the observation method was used: data were collected by observing the object of research directly and recording the facts found in the field to complete the data relating to the problem. Observation also makes it easier to identify problems. For secondary data collection, a literature study was conducted to obtain information and knowledge about wedding services and to find the best solutions to the problems. The literature study drew on various sources such as scientific articles, journals, and other sources.

B. Data Analysis Method

The collected data were analyzed using six analytical tools: PESTLE analysis, market opportunity analysis, competitor analysis, business model canvas (BMC) analysis, marketing mix analysis and SWOT analysis. After carrying out the analysis, the final step is to draw conclusions from all the data obtained. Conclusions are reached by understanding the data that has been presented and are used as information for business coaching purposes.
1) PESTLE Analysis
The external factors that have an impact on a company can be understood using PESTLE analysis; the factors in PESTLE analysis are Political, Economic, Social, Technological, Legal, and Environmental [12].

2) Market Opportunity Analysis
Market opportunities can be identified using the Segmenting, Targeting, and Positioning (STP) framework. Careful analysis of the market should lead to market opportunities: determining a profitable target market where the company believes that customer needs are not being met and where it can compete effectively [5].

3) Competitor Analysis
According to Porter (2008) [13], the five forces analysis is a framework used to analyze the level of competition in an industry and to develop business strategies. This analysis assumes that the attractiveness of the industry in which a company operates is determined by market structure, because market structure influences the behavior of market participants [14]. The five key factors used to identify and evaluate potential opportunities and risks are the threat of new entrants, the threat of substitutes, the bargaining power of suppliers, the bargaining power of customers, and competitive rivalry.

4) Business Model Canvas (BMC) Analysis
The business model canvas (BMC) is a tool that can be used to visualize the design of an existing business model, or to redesign a potential business model, on one page. The BMC contains nine basic building blocks and visualizes the logic of how an organization creates, delivers and captures value, covering four main areas of business: customers, value offerings, infrastructure and financial viability [15].

5) Service Marketing Mix
Marketers usually use four basic strategic elements when marketing physical goods: product, price, place, and promotion. Lovelock & Wirtz (2016) [6] develop additional elements in the marketing mix to deal with problems arising from marketing services, adding three other elements, namely process, people and physical evidence; these seven elements are called the "7 Ps" of service marketing.

6) SWOT Analysis
SWOT analysis is a tool used to assess a company's strengths and weaknesses, its market opportunities, and the external threats to its future welfare. Based on the results of the SWOT analysis, the TOWS matrix can then be used to derive strategies that can be applied to find competitive advantage [12].

IV. RESULTS AND DISCUSSION

A. Analysis Results

1) PESTLE Analysis
The PESTLE analysis of the six external factors identifies factors that can become opportunities and factors that pose threats to MSME X. Political factors pose a threat: tax reduction policies imposed by the government open up the possibility of more competitors entering the wedding industry. The economic growth rate (LPE) of Bekasi City over a four-year period has tended to move dynamically. LPE is one indicator of development success from the efficiency perspective adopted by the government; the higher the LPE, the greater the improvement in the region's economic sectors. This economic condition is an opportunity for MSME X, which is located in Bekasi City. Data from the Badan Pusat Statistik (2018) [16] show that there were 16,339 marriages in Bekasi City throughout 2017.
MSME X operates in the wedding industry, so the growth in the number of marriages is an opportunity for the company's business continuity. The development of technology gives rise to a variety of new things, one of which is marketplace sites that offer a variety of wedding vendor services. MSME X can use these sites to reach a wider market, make sales, and communicate with prospective customers. Legal factors also pose a threat to the sustainability of MSME X's business: Bekasi City has the second highest minimum wage (UMK) in West Java Province, even exceeding DKI Jakarta. In 2019, the UMK of Bekasi City increased by 8% from 2018. The UMK increase is a threat to MSME X because it raises the employee salary costs borne by the company.

2) Market Opportunity Analysis
Demographically, MSME X's consumers are men and women of all ages from the middle to upper economic class; geographically, they are located in Jakarta, Bogor, Tangerang, Depok, and Bekasi. Behaviorally, they are individuals who need a ballroom for weddings with a luxurious appearance, an easily accessible location, large capacity, and services that cover the overall needs of the event. Psychographically, they are consumers who want a prestigious ballroom rental at an affordable price with convenience in organizing the event. MSME X positions itself as a ballroom for wedding services with a luxurious appearance, prioritizing excellent facilities and competitive prices. In addition, the company provides a one-stop service to make it easy for consumers to organize weddings. The appropriate strategy is the best-cost provider strategy, which satisfies buyers' desire for added value while charging a lower price than competitors with similar product offerings [12].

3) Competitor Analysis
The wedding industry in which MSME X competes has moderate to high industry attractiveness. The threat of substitute products in this industry is high, because of the large number and variety of substitutes that consumers can use to hold weddings. Competitive rivalry in this industry tends to be high, owing to the intense promotional competition between companies to win consumers. The threat of new entrants, the bargaining power of suppliers, and the bargaining power of buyers are low. MSME X has a great opportunity to develop its business by expanding promotional activities to attract consumers.

4) Business Model Canvas (BMC) Analysis
Figure 2 shows MSME X's business model canvas. The analysis shows that, based on behavior, the consumer segment consists of individuals who need a ballroom to hold large-capacity, practical, all-inclusive weddings. Based on psychographics, they are consumers who want luxury facilities at competitive prices. Based on demographics, they are male and female consumers aged over 17 years. Geographically, the company reaches customer segments in the Jakarta, Bogor, Depok, Tangerang and Bekasi areas. The company's value proposition is a luxurious ballroom with an elegant atmosphere and large capacity, with equipment that can be adjusted to consumer needs, in an easily accessible location served by various toll road connections.
MSME X provides a marketing office to reach customers, where customers can come directly to see the ballroom and experience the value offered by the company. The company has marketing employees who reach customers by telephone. In addition, MSME X has a website and uses social media to reach customer segments. MSME X builds relationships with each customer segment, motivated by customer acquisition, through a website that provides information about the products offered, location plans, prices, and marketing division contacts. MSME X's revenue stream comes from ballroom rentals to wedding service customers and from partner vendors who rent booths during events. MSME X's resources are divided into physical and human resources. Physical resources consist of offices, ballrooms, and supporting equipment for organizing events (air conditioning, carpets, sound systems), while human resources are the employees in the marketing, finance and operations divisions. MSME X's key activity is providing ballrooms: marketing employees receive messages from consumers, then communicate with them and help provide event services according to customer requests. Salespersons contact partners (vendors) to ensure that customer needs are met as desired, and professional event management services ensure that the event takes place in accordance with the customer's wishes. To maintain or enhance the value proposition, MSME X needs to optimize its resources to produce quality performance; in particular, the performance of human resources in marketing can be improved. Wedding service sales activities have not been optimal, because the sales competency of employees is low and there is no personal selling standard among marketing employees. The costs MSME X incurs to run the business consist of fixed and variable costs. The fixed costs in a given period are website costs and ballroom rental costs; the ballrooms used in the business are not owned by MSME X, so to run the wedding services business the company must pay ballroom rent each year. Variable costs, paid in proportion to business activity, include salaries, electricity, internet, advertising and ballroom operations. In its business processes, MSME X works with a variety of partners (vendors) to serve customer needs for wedding services. The range of partner choices is added value offered by the company: with a variety of partners, customers can easily fulfill their needs and wishes for the wedding event.

5) Service Marketing Mix
Product: MSME X offers ballroom rentals for weddings and professional event management services. The company's advantage is a one-stop service that makes things easy for consumers by providing a number of package options adapted to their needs and desires.
Price: MSME X offers its products in packages with options that can be tailored to the consumer's wishes and budget.
Place: The ballroom location is quite strategic, on the second floor of the Grand Galaxy Park Mall in the Grand Galaxy City residential area. MSME X is easily accessible because it is close to various toll roads.
Promotion: MSME X uses several promotional mix tools, including personal selling, sales promotions that promote products and increase sales by giving prizes, and wedding exhibitions, which have been held six times.
Process: MSME X's business process starts with receiving messages from consumers who are interested in the company's services. The salesperson receives the messages, makes the sale, and serves the consumer until the event is completed.
People: MSME X has 13 employees, divided into several divisions, namely marketing and operations, finance, and human resources. The marketing division's employees handle all sales activities, from prospecting and qualifying to presentation, closing and customer service.
Physical Evidence: MSME X has a ballroom with a capacity of 2,000-2,500 people, a luxurious look and an elegant atmosphere, equipped with carpets, air conditioning, good lighting and a powerful sound system. In addition, the equipment in the ballroom can be adjusted, with vendor assistance, to consumer needs.

6) SWOT Analysis
Based on the SWOT analysis, the TOWS matrix of MSME X was constructed. From the TOWS matrix, the solutions available to MSME X are W1-O1O2, W2-O1O2 and S1S2-O1O2, namely creating an effective personal selling standard that highlights the superiority of MSME X's services, and maximizing the appeal of attractive promotional material that explains the company's advantages.

B. Discussion of Solution Implementation

1) Standard of Personal Selling for Wedding Services
Based on the business coaching activities conducted at MSME X, the following is the implementation of solutions to the problems analyzed, in the form of an operational standard for personal selling activities in the sale of wedding services. The researcher used the effective personal selling framework of Antczak & Sypniewska (2017) [7] to create a personal selling standard for the sale of MSME X's wedding services, as follows:

a. Attracting Customers
At this stage, customer qualification is carried out by asking a number of questions covering the available budget, the authority to buy, the need for the product or service, and the intended use of the product or service [7]. The first step taken by MSME X's marketing employees is to greet the prospective customer, give their name, and introduce themselves as a wedding consultant. They then ask how they can help, the estimated date of the event, and the planned number of invited guests. Once this information is known, the wedding consultant confirms the availability of venues and packages for the selected number of invited guests. The wedding consultant explains that the available packages are all-inclusive and can be tailored to the consumer's needs and wishes, and that MSME X has a wide selection of vendors to fulfill the wishes of the prospective customer and their family. Finally, the wedding consultant asks where the prospective customer heard about MSME X.

b. Conversation / Conducting Meetings
At this stage, marketing employees need to know the characteristics and needs of prospective customers [7]. The wedding consultant asks about the wedding event that will be held for the consumer's family, in order to learn about the family's experience.
The wedding consultant also asks about the planned wedding concept (national or international) and explains that MSME X provides many vendor options (entertainment, catering, decoration, bridal studios, documentation) that can be customized to the wishes of the prospective customer and their family. For further details, the wedding consultant invites the prospective customer to visit the MSME X location to experience the wedding atmosphere there, and asks when the prospective customer plans to come to MSME X for a survey and more detailed information.

c. Product Presentation
At this stage, marketing employees need to show that the product can help solve the prospective customer's problems, using the features, advantages, benefits and value approach [7]. The wedding consultant meets the prospective customer directly to explain the company's features. The consultant also explains MSME X's advantages and benefits: MSME X is the convention hall with the best facilities in Bekasi, and its all-inclusive packages make things easy for customers. The wedding consultant reminds the prospective customer of MSME X's value: the price and condition of the ballroom offered represent the best price and best value for wedding services in Bekasi.

d. Convincing Customers
Consumers may have psychological and logical resistance [8], so the wedding consultant needs to take a positive approach to prospective customers. If a prospective customer is concerned about the price, the wedding consultant can offer a sales promotion, stating the overall value of the prizes the prospective customer will receive. At this stage, the wedding consultant must address all the concerns of the prospective customer.

e. Transactions
Antczak & Sypniewska (2017) [7] explain that at this stage it is necessary to direct prospective customers to place orders and to close the sale. The wedding consultant explains the payment flexibility offered by MSME X, such as paying in installments after a down payment (DP). The consultant explains that the sales promotion offered earlier is only valid for a limited time, to encourage the prospective customer to place an order, and informs them that the promo will lapse if they do not place an order by paying a down payment.

f. Maintaining Relationships
This stage ensures customer satisfaction and clarifies any problems the customer may have. The wedding consultant assists the customer with vendor selection and prepares the order documents for the customer to confirm. After the event takes place, the customer is given a feedback form, which is useful for gauging customer satisfaction and gathering input for MSME X.

2) Promotional Material Display
Following discussions with the owner and marketing employees and direct observation, it was found that in the personal selling process MSME X's marketing employees used promotional tools in the form of offer brochures. The display of MSME X's promotional material was considered suboptimal in attracting consumers to purchase products, because the offer brochure did not yet have the appearance and information needed to attract consumers. The improved offer brochure is designed in gold, dark blue and white, in accordance with MSME X's color guidelines.
The brochure is A4-sized (21 cm × 29.7 cm) and folds into three panels; the advantages of this size are that it offers many pages in a small form that consumers can easily carry. The layout of all components in the brochure is arranged in a design that is concise and easily understood by the recipient of the message. The first page of the offer brochure displays the main headline, which indirectly conveys elegant and pleasant wedding services; this page contains the MSME X logo, photos of couples married at the MSME X ballroom, the company slogan, and the MSME X website address. The visual display is the dominant part of the offer brochure: the selection of photos and information is used to communicate the company's strengths compared with competitors. The first page also presents information about the role of the wedding consultant and the superiority of MSME X's facilities and vendor choices, which helps produce an effective message in the minds of consumers. The second page of the brochure displays the MSME X profile, containing product strengths such as a large capacity of up to 2,500 people and an architectural design that provides an elegant and magnificent atmosphere. The phrase "One Stop Wedding Solution" shows that the services provided by MSME X cover everything consumers need to hold a wedding. Information about service components is displayed using illustrations, making it easier for consumers to read and understand the various types of service components offered. Finally, one panel presents the company's contacts: the office and marketing employees' telephone numbers, the website address, and the address and map of MSME X, making it easier for consumers to contact the company.

V. CONCLUSION

Based on the results of the business coaching research, it can be concluded that in designing a personal selling standard, a company can use theories adapted to its business processes. The personal selling standard serves as a tool to address the limited sales and marketing skills of employees; the standard is presented as a workflow to make it easier for employees to understand. The personal selling standard needs to be applied consistently and evaluated periodically to maintain the quality of the company's service and sales. Improving the display of promotional material in the form of brochures requires attention to the basic components of print advertising, such as body copy, headlines, visual style and execution, and layout. With the improved brochures, consumers perceive MSME X as more professional; the information consumers receive becomes more attractive and leaves a positive impression in their minds. Professionally designed brochures that convey information clearly can encourage consumers to contact the company and create a good impression in the minds of consumers.
A multilobulated asymptomatic umbilical nodule revealing endometriosis

Abstract
Primary umbilical endometriosis is an unusual clinical presentation of endometriosis. Its diagnosis can be challenging due to a lack of awareness. This condition should be listed in the differential diagnosis of umbilical disorders.

CASE PRESENTATION

… a multilobulated nodule of 20 × 25 mm in size protruding from her umbilicus (Figure 1). A biopsy of the nodule revealed the presence of multiple endometrial glandular tubules surrounded by endometrial-type stroma, without signs of malignancy (Figure 2A,B). Thus, the diagnosis of primary umbilical endometriosis was made, and surgical resection was planned for the patient.

DISCUSSION

Cutaneous endometriosis is the most common form of extrapelvic endometriosis. It was first described by Rokitansky in 1860 and is defined by the presence of functional extrauterine endometrial tissue in the skin. It is classified into primary and secondary forms. 1 Secondary endometriosis is the more common and often occurs after surgery, in abdominal and pelvic scars from procedures such as cesarean section, hysterectomy, and laparoscopy. Primary umbilical endometriosis (PUE) occurs spontaneously and is a very rare benign condition, accounting for about 0.5 to 1% of extrapelvic endometriosis. 2 It mostly occurs in females of reproductive age and usually presents as a discolored, painful umbilical mass with cyclic bleeding and/or swelling following the menstrual cycle. 3 In the study by Santos and coworkers, the mean age was 33 years, and all patients complained of pain and bleeding during menstruation. 1 Asymptomatic PUE, as in our patient, is much less common; it may therefore go unrecognized, leading to delayed diagnosis. In the literature, the size of the lesion ranges from 0.5 to 4 cm. The nodule may be brownish, violaceous, dark bluish, or flesh colored, depending on the amount of hemorrhage and the depth of penetration of the ectopic endometrial tissue. 1 In the review by Victory et al, the majority of lesions were brown, blue, purple, black, or red. 5 In the study by Santos, 83% of lesions were violaceous and 16% were erythematous-red. 1 Differential diagnoses should include keloid, umbilical hernia, granuloma, sebaceous cyst, and urachal anomaly. Malignant diseases such as nodular melanoma, lymphoma, and Sister Mary Joseph nodule should also be considered. 4 Histopathological examination remains the gold standard for the definitive diagnosis of PUE and for excluding malignancy; it shows dilated intradermal endometrial glands surrounded by cellular endometrial-type stroma. 3 Magnetic resonance imaging can be helpful, although no imaging tool is necessary to establish the diagnosis. Dermoscopic features are controversial but could help distinguish PUE from a malignant Sister Mary Joseph nodule. 4 Surgical excision remains the most effective treatment, reducing the risk of malignant transformation and avoiding recurrence. 1 Hormonal treatment, such as progesterone, Danazol, norethisterone, or GnRH analogs, may help reduce the size of the lesion and is often used as a diagnostic tool in itself (Anne 2017). The pathogenesis of primary cutaneous endometriosis is still unclear; endometrial cells might migrate to the umbilicus through the abdomen, the lymphatic system, and/or remnants of embryonic cells in the umbilical fold. 5 In summary, our case describes an unusual location of endometriosis.
This condition should be listed in the differential diagnosis of umbilical disorders.

ACKNOWLEDGMENT
Published with the written consent of the patient.

CONFLICT OF INTEREST
None declared.

AUTHOR CONTRIBUTIONS
… gave final approval. All authors read and approved the final manuscript and agree to be fully accountable for ensuring the integrity and accuracy of the work.

ETHICAL APPROVAL
Appropriate consent was obtained, prior to submission, for the publication of images and data.

DATA AVAILABILITY STATEMENT
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
A health-systems journey towards more people-centred care: lessons from neglected tropical disease programme integration in Liberia

Background: Neglected tropical diseases (NTDs) are associated with high levels of morbidity and disability as a result of stigma and social exclusion. To date, the management of NTDs has been largely biomedical. Consequently, ongoing policy and programme reform within the NTD community is demanding the development of more holistic disease management, disability and inclusion (DMDI) approaches. Simultaneously, integrated, people-centred health systems are increasingly viewed as essential to ensure the efficient, effective and sustainable attainment of Universal Health Coverage. Currently, there has been minimal consideration of the extent to which the development of holistic DMDI strategies is aligned to and can support the development of people-centred health systems. The Liberian NTD programme is at the forefront of trying to establish a more integrated, person-centred approach to the management of NTDs and provides a unique learning site for health systems decision makers to consider how shifts in vertical programme delivery can support overarching systems strengthening efforts designed to promote the attainment of health equity.

Methods: We use a qualitative case study approach to explore how policy and programme reform of the NTD programme in Liberia supports systems change to enable the development of integrated, people-centred services.

Results: An accumulation of factors, catalysed by the shock to the health system presented by the Ebola epidemic, created a window of opportunity for policy change. However, programmatic change aimed at achieving person-centred practice was more challenging. Deep reliance on donor funding for health service delivery in Liberia limits the availability of flexible funding, and the ongoing prioritization of funding towards specific disease conditions limits flexibility in health systems design that could shape more person-centred care.

Conclusion: Sheikh et al.'s four key aspects of people-centred health systems, that is, (1) putting people's voices and needs first; (2) people-centredness in service delivery; (3) relationships matter: health systems as social institutions; and (4) values drive people-centred health systems, illuminate the varying push and pull factors that can facilitate or hinder the alignment of DMDI interventions with the development of people-centred health systems, so as to support disease programme integration and the attainment of health equity.

Background

Neglected tropical diseases (NTDs) are associated with mortality and high levels of morbidity and disability as a result of stigma and social exclusion [1-4]. The WHO 2020 road map, 'accelerating work to overcome the global impact of neglected tropical diseases', prioritized the control, elimination and, in some cases, eradication of these diseases by 2020, through two major strategies [5]. The first strategy was innovative and intensified disease management (IDM), which supports disease management through the primary health care system. The second focused on preventive chemotherapy and transmission control (PCT) through the implementation of large-scale, population-based drug administration, usually termed mass drug administration (MDA) [5].
MDA originated from the African Programme for Onchocerciasis Control (APOC), whereby freely donated medicines were distributed by community health volunteers to at-risk populations in response to the high levels of visible suffering resulting from river blindness [6]. Despite this early focus on the alleviation of suffering, it is the people affected by NTDs who have arguably become the most forgotten throughout multiple decades of vertical NTD programme delivery focused on MDA. Furthermore, both IDM and PCT have commonly had a heavy biomedical focus, with limited acknowledgement of the social causes and consequences of disease [4]. Given that the majority of NTDs do not cause death but instead lifelong morbidity and disability, a more holistic approach to the management of NTDs is needed, one that supports affected persons to negotiate the physical, psychological and social implications [2,4]. WHO's new NTD roadmap (2021-2030), 'ending the neglect to attain the sustainable development goals', recognizes the need for revived cross-cutting action to provide integrated people-centred care for persons affected by NTDs [7]. However, the evidence base that can support the development of integrated person-centred approaches to NTD management is still emerging. People-centred health systems (PCHS) are viewed as essential by many in the health systems community to ensure the efficient, effective and sustainable attainment of universal health coverage (UHC) [8-10]. An essential value in the development of PCHS is a movement away from a system focused on health institutions or disease to one that focuses on the needs of people, while recognizing the central importance of relationships and values in driving systems change [11,12]. Thus, PCHS favour the integration of vertical disease programmes, which enables:

'health services to take the responsibility to operate specific activities designed to control a health problem…and become one of several channels for the programme to implement its activities, which then become part of the broader package of activities delivered by these multipurpose general health services' [13, p. A2]

However, within health systems discourses, the relationship between disease control programmes and health services, and the added value of disease control programme integration, has long been debated [13,14]. Tensions are perceived due to a dichotomy in the underlying value base or objectives of disease programmes in comparison with those of integrated and generalized health systems [13-15]. Criel et al. [13] and Marchal et al. [14] present comparisons of the main elements of the disease control and health systems perspectives, as shown in Table 1.

Table 1 Core elements underpinning disease control and health systems perspectives (disease control programmes vs. generalized health care systems). Source: adapted from [13,14].

The development of PCHS aligns to the priorities of integrated and generalized health systems. PCHS are often most successful when linked to other efforts or drivers for change, for example in improving health equity [16], as PCHS demand shifts in accountability away from compliance with government-defined targets (bureaucratic accountability) towards systems that enable responsiveness to the needs of service users (external social accountability) [17,18]. Health systems become empowering, and users become the stakeholders to whom services are accountable [13].
Quality of life, as opposed to quality of care, becomes the critical focus of system design, which is driven by the holistic needs of communities and the social determinants of health rather than common epidemiological profiles [8]. Co-production of services between communities, providers and policy makers is prioritized, supporting a shift from paternalistic care delivery towards enabling systems strengthening and ultimately shaping improved health and wellbeing [8,9]. Sheikh, Ranson and Gilson [12] draw this thinking together and outline four core aspects that are central to the development of integrated PCHS: (1) putting people's voices and needs first; (2) emphasizing people-centredness in service delivery; (3) viewing health systems as social institutions; and (4) understanding that values drive people-centred health systems (Fig. 1) [12].

Fig. 1 Core aspects of people-centred health systems [12]:
1) Putting people's voices and needs first: relies on the creation of spaces where individuals and communities can influence the health system that seeks to serve their interests. This may include the establishment of participatory governance mechanisms that challenge power imbalances and hold systems accountable.
2) People-centredness in service delivery: services must be designed and delivered in a way that places people at the centre, as opposed to being structured around disease or for health worker convenience. This relies on services being high-quality, lifelong, accessible and adaptable.
3) Relationships matter: health systems as social institutions: acknowledges that health systems are made up of interconnected actors who interact through a web of complex social, economic and organisational infrastructure, itself shaped by societal norms and structures. The effectiveness of systems is intrinsically linked to the strength of relationships and to the change brought about by actors' abilities to navigate these processes.
4) Values drive people-centred health systems: decision making becomes informed by values of justice, rights, respect and equality, and a drive for high-quality primary health care. Just as the values of systems actors drive decision making, changes within the system shape values.

To date, a focus on the biomedical management of NTDs, driven by the priorities of large pharmaceutical corporations and international donors [19], has dominated academic literature, policy and programming related to NTDs, arguably in opposition to integrated people-centred approaches. Terms such as morbidity management and disability prevention (MMDP), which focus on the cure, prevention or medical management of a disease condition, reflect this and have translated into little importance being placed on the lived experiences and values of affected populations [3,4], thus compromising the delivery of services that place people at the centre. However, as reflected in WHO's 2021-2030 NTD roadmap, there is a shift in thinking amongst NTD practitioners and in associated policy dialogues towards strategies that promote the more holistic concept of disease management, disability and inclusion (DMDI), as further described in Fig. 2 [4]. The prioritization of DMDI is aimed at ensuring a fully integrated, person-centred continuum of care for individuals affected by NTDs, rather than one which is dominated by biomedical approaches and marginalizes lived experiences [4]. Criel et al. [13] suggest that decisions about what and how to integrate disease control programmes are complex. Firstly, it is necessary to understand how desirable integration is, i.e.
what is the added value in asking health care systems to add disease-focused activities into routine service provision? Secondly, is integration possible, based on the ability to standardize tasks and the necessity of specialized services? Thirdly, what is the opportunity of integration, i.e. can it help or hinder health systems development [13]? Despite integration being perceived as desirable for DMDI service delivery, little is currently known about how to effectively shift from NTD programmes that run parallel to routine health service delivery and focus on MDA, to those which focus on integrated, people-centred, longitudinal and lifelong care. Furthermore, in the development of more integrated, people-centred approaches in the response to NTDs, there is currently minimal synthesis of evidence from the broader health systems literature regarding the factors that promote such approaches. The aim of this paper is to explore the opportunity and possibility that the development of integrated DMDI strategies for NTDs presents for the development of PCHS, and to articulate how this learning can support the attainment of health equity for affected populations. The literature on health systems strengthening emphasizes that systems are highly context dependent and shaped by complex social dynamics [12]. It is critical to understand and address such dynamics in the development and implementation of new interventions, as it is these complex and locally constituted relationships that shape how different processes of systems integration occur [11]. Current narratives surrounding the development of integrated, people-centred responses to NTDs are largely framed in global rather than local terms, which is potentially problematic for the development of context-specific solutions that are responsive to locally constituted relationships [11]. The Liberian NTD programme is at the forefront of trying to establish a more integrated, person-centred approach to the management of NTDs through the development of its 'Integrated Case Management Strategy' [20]. This strategy focuses on DMDI for a number of endemic NTDs and their associated morbidities, including Buruli ulcer, lymphoedema, hydrocele, leprosy and yaws [20]. The four core pillars of integration in this approach were: (1) government ownership and partnership across divisions to enhance resource management; (2) resource mobilization, by integrating NTDs within all relevant national policies and increasing community awareness; (3) scaling up access to interventions, with a specific focus on active case searching through MDA campaigns, establishing a centre of excellence for NTDs, strengthening the supply chain, and ensuring better community access to treatment and management interventions; and (4) improving surveillance through the integration of NTDs within data management systems.

Fig. 2 The DMDI approach: DMDI as a term was developed by the Neglected Tropical Disease NGO Network (NNN) working group for morbidity management, in consultation with NTD practitioners within the NNN and based on their tacit knowledge. The purpose of the DMDI approach is to foster a more holistic terminology that aligns with interactional approaches to disability and the creation of person-centred health systems. 'Disease management' recognises the need for medical approaches to the morbidity associated with NTDs.
'Disability' is included to emphasise that disability is a consequence of an impairment or condition within a particular context, and to ensure that social manifestations and other often non-medicalised consequences, such as stigma and mental ill health, are not ignored. Finally, 'inclusion' is intended to reflect the need to include people living with the consequences of NTDs in programme design and in society more generally (Mieras, Anand et al. 2016).

Prior to the development and launch of this plan in October 2016, there was no clear DMDI strategy; disease management associated with NTDs was completed on an ad hoc basis [1]. The context of programme and policy reform in Liberia is therefore used as a case study within this paper. Drawing on the varying experiences of national programme implementers and non-governmental development organization (NGDO) partners, we explore the creation and roll-out of a national integrated DMDI policy for NTDs. We consider how far the key aspects of NTD programme reform align with the discourse around the development of PCHS, and to what extent social relationships influence the successes and failings within the process.

Study design
We use a qualitative case study approach to explore how policy and programmatic reform of a vertical NTD programme supports systems change towards the development of people-centred systems and services. Stake [21] describes our single case study approach as 'instrumental', as it is designed to facilitate thinking within the NTD and health systems communities [21] regarding the specific issue of DMDI service integration and the development of PCHS, as opposed to being thought of as 'typical of other cases' [21,22]. Our case study approach allows for an intense focus on a single phenomenon (policy and programme reform) within a real-life context (Liberia; see Fig. 3). Through the use of multiple data sources, our exploration acknowledges that 'cases' and contexts are constantly changing and that multiple variables and considerations bring complexity to our analyses [22,23].

Fig. 3 The Liberian health system context (excerpt): … However, many challenges still remained, including a rural health delivery gap (at the end of the policy, 41% of households (15% urban and 66% rural) had no ready access to a primary health care facility), weak information and data management systems, and under-resourced responses to several chronic conditions including mental health disorders (Lee et al., 2011). The National Health Policy and Plan followed, designed to last for 10 years, with a focus on systems reform to effectively and efficiently deliver comprehensive, quality health and social welfare services. The Essential Package of Health Services (2011-2021) was also created and included new services such as NTDs and mental health. However, the 2014-2016 Ebola epidemic in West Africa had devastating consequences for Liberia's health system and led to a breakdown in trust between communities and service providers, leading to another period of rapid policy reform and reflection (Dean et al., 2019). The 'Investment Plan for Building a Resilient Health System' (2015) became a critical guiding document for the health system and prioritised the integration of vertical disease programmes to help address underlying systems weaknesses. It was during this period of reform that the NTD programme strategy for 'Integrated Case Management of Neglected Tropical Diseases' was developed.
Data collection
Data collection took place between December 2016 and December 2018 and involved interviews with key informants and ethnographic observations of meetings at national and international level. Data were collected by L.D. (MSc) and G.N. (MSc), both of whom have experience collecting data with stakeholders across all levels of the health system.

Key informant interviews
We conducted 13 individual and one paired semi-structured interview with purposively selected key informants at the national and county level. Key informants were selected due to their role in NTD programme delivery or associated activities and included: civil society organization, NGDO or donor representatives (n = 4); national Ministry of Health staff (n = 6); and members of county health teams (n = 4) from three counties where integrated case management activities are currently being implemented (Bong, Nimba and Maryland). Interviews explored the generation and content of the integrated case management plan; implementation of integrated disease management; and informants' perceptions of key strengths and challenges for disease management, disability and inclusion.

Data analysis
We recorded all interviews and transcribed them verbatim. Data were stored and analysed using NVIVO 10. Notes from participant observations were also typed up and, where required, points of clarity discussed with G.N. (local field assistant) and the NTD programme team. We analysed all data thematically. Initially, we coded data inductively to explore core factors related to the interface between NTD programmes and the health system in relation to (a) policy development and (b) policy or programme implementation. Subsequently, higher-level analysis was guided by Sheikh, Ranson and Gilson's [12] core aspects of people-centred health systems (Fig. 1). Data analysis was completed as a collaborative process between L.D., R.T., G.N. and S.T.; K.K. and A.B. were consulted to support with data interpretation.

Reflexive diary
To enhance the trustworthiness of the key informant interview analysis, this manuscript also draws on the experiences of the lead author (L.D.) as documented in a reflexive diary. This included critical reflections from key meetings, discussions and county supervision activities that were relevant to the development, adaptation and implementation of the integrated case management strategy. Detailed field notes and critical reflections were taken throughout the data collection period.

Results
Our results are organized into three key sections, with emergent themes linked to each subsection also presented. The first theme, policy development, focuses specifically on NTD policy reform in Liberia in the wake of the Ebola epidemic. The second theme, policy and programme implementation, is concerned with how policy change translates to change within the NTD programme. Finally, theme 3, reflections and the road ahead, explores challenges and the way forward for the NTD programme in Liberia as it aims to develop more person-centred responses to NTDs.

Policy development
Maximizing a window of opportunity for policy and programme change
The creation of the integrated case management plan in Liberia was shaped by the convergence of multiple factors that created a clear window of opportunity for policy and programme change.
Informants described how integration of various disease programmes, specifically leprosy and Buruli ulcer, had been a key national NTD programme priority for many years, with co-implementation, primarily of disease mapping activities, beginning just prior to the Ebola outbreak. Ebola interrupted the progression of such activities and limited the establishment of a fully integrated NTD programme, whilst also emphasizing clear health systems weaknesses. In the period immediately after the Ebola outbreak, health systems priorities were observed to change, with a range of actors within the health system coming together to work out the best way forward to be more responsive and resilient to the population's health needs. National health policy reform, including the establishment of the 'Investment Plan for Building a Resilient Health System in Liberia', prioritized a push towards programme integration [24]. The merging of programmes, particularly those focused on NTDs requiring case management, was therefore seen as essential 'to save resources and time' (National MoH Staff). It was during this time that the NTD programme was able to use adjustments within national policy that prioritized a shift towards vertical programme integration to lobby support and political will from WHO, NGDO partners and the Ministry of Health to make a critical change to NTD policy and programme implementation structures.

'prior to Ebola, the voices of the Ministry of Health were absent in designing programmes. No funding…'

NGDO partners who were engaged in the process also described the need to prioritize the viewpoint of Liberian NTD programme staff and other health systems actors, with a key focus of policy reform on 'capacity strengthening of the system' and a hope that the development of the new strategy would 'minimise the disease focus of NTD programmes and emphasise people' (NGDO Partner Representative). Informants emphasized the importance of 'designing the integrated case management plan around the four pillars of the existing NTD master plan to encourage support [for programme and policy reform] from WHO' (National MoH Staff), as it was a policy or programme format with which they were familiar.

Prioritizing the view of affected persons
Programme implementers from all levels of the health system were described as key in shaping the way that the integrated case management policy was designed, developed and implemented. However, no consensus was reached on involving persons affected by NTDs in programme design and review meetings, and they were therefore excluded. Despite this, it was apparent from interactions with multiple programme implementers that care for the improved health and wellbeing of people affected by NTDs was at the forefront of their efforts and decision making. For example, we observed that some programme implementers would pay from their own pockets for surgical costs, school fees, food and transportation of affected persons. Reflections from key informants also emphasized a desire for a change in focus away from the biomedical construction of disease and associated interventions towards more holistic responses that aligned to their value base and experiences. For example, many informants described feeling like they needed to expand service delivery to include the provision of psycho-social support.
However, they felt restricted in their ability to do this within the parameters of a fragile health system, when they had limited evidence of what would work and where they should target resources. Many described that day-to-day interaction with affected persons made decisions about what should or should not be included within integrated programme delivery more challenging. Implementers felt compromised in their attempts to establish an integrated programme that worked whilst still understanding the broader needs of people affected, as addressing everything at once felt 'too big'.

'psycho-social elements aren't included at the moment…but it becomes a real wrestle. One reason why integrated case management is not being implemented and adopted by the NTD community is because it feels so big…livelihoods…psycho-social support…we lose the person at the beginning who really just wants to give the pill for these diseases… we had to have an element of compromise…think through what can we do that will have the best impact…' (National MoH Staff).

'I was called to go and confirm whether this client is a confirm lymphoedema. And when I got there a young girl 26 years got this lymphoedema leg and I talk to her it was confirmed that lymphoedema. We taught her to take care of the lymphoedema leg… You wouldn't believe this girl was bold to express her heart that she is too young to live with condition… she said she is going to take her life. Right there I realized that mental complication it has…I immediately came and I went through the mental health department and I say look I got a case you guys have to get involve this is the situation, this is a declaration' (National MoH Staff).

Consequently, long discussions with programme staff often revealed personal distress based on multiple interactions with affected persons whom they felt they could not support adequately.

Policy and programme implementation: as strong as the system you represent
All were committed to case management being 'part of the regular health service delivery system of the country' (National MoH Staff) and described seeking to maximize avenues for integration, which was often highlighted as easier at lower levels of the health system. Informants described some parts of integrated implementation working well, whereas others were seen to be limited. Many emphasized that earlier case detection by community health assistants was working particularly well, due to integrated training, supervision and motivation processes that were aligned to the community health division's policy and programme delivery [25].

'unlike the past where we used to go out to actively find cases…There is a curriculum formulated to train…community health assistants, and we train the community health surveillance supervisor which is CHSS in all medical related cases. The curriculum was developed by the community health department and the NTD department' (National MoH Staff).

This was thought to be further enhanced by the programme's ability to fulfil a motivation gap for some community health volunteers through the introduction of the active case search incentive policy, whereby community health volunteers (those not currently formally incentivized by the health system) receive 5 US dollars per target NTD case identified and confirmed.
This strategy was designed to reduce the demotivation of community health volunteers who are not part of the national community health assistant (CHA) programme, which provides 70 US dollars per month motivation to CHAs who have undergone a 4-month training programme [25]. A rough comparison of the two incentive streams is sketched after the quote below.

'We have introduced another method called active case search incentive base and it is really for the community health volunteers…from our experience some of them were left out of the community health assistant programme so they are like demotivated and you don't find the community health assistants in all of the communities…So we communicated with the focal points and told them to inform the community health division that every case confirm gets 5 dollars' (National MoH Staff).

Operational integration at the county level was also described as having been relatively straightforward; two-monthly supportive supervision visits from national programme staff to the county level had enabled the addition of NTDs as an agenda item within weekly county medical meetings. However, it was observed that this process was smoother in some counties than others, often dependent on the capacity of the county-level NTD focal point and the personal relationship between this post and the national NTD team. Integration of the multiple indicators necessary for the effective inclusion of various NTDs within health monitoring and information systems was described as a more laborious but essential process to enable the NTD programme to become part of routine county planning activities. Additionally, one of the biggest challenges in establishing integrated service delivery was described as the supply chain, with many implementers asking 'how do you put something into something that is already broken?' (National MoH Staff). This was particularly problematic for many programme implementers who found it challenging that 'we are creating the demand and we don't have the drugs. We don't have the medical supply. So, in order to mitigate that we must have the drugs' (National MoH Staff).

'The challenges are that some of them are lacking of this support, lacking of drugs, sometimes the drugs are not on time… counties don't have the capacity to procure easily, so its national, national should be able to purchase more drugs' (County MoH Staff).

Reflections and the road ahead: challenging deep-rooted verticalization, disease silos and donor control
Despite being presented with a clear window of opportunity for change in policy, programmatic change aimed at achieving person-centred practice was viewed as more challenging. Decisions about which diseases to include as part of integrated case management approaches appeared to be based on a biomedical view of disease condition and the historic dichotomy between PCT and IDM diseases, with a specific focus on addressing 'reversible' NTD-associated morbidity. However, over the duration of the study this viewpoint began to shift, with programme implementers becoming increasingly reflective about the inclusion of additional disease conditions and the need to link with other sectors to address the wider support needs of affected persons.

'I want to believe initially we are looking at conditions that are with a burden, onchocerciasis is one of the burden conditions, but we focused on conditions that we could respond to and bring relief to the client. Like for onchocerciasis once you have gone blind it is difficult like that particular condition. For example, with hydrocele you can do the correction.
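As a back-of-the-envelope illustration of the motivation gap described above (our own arithmetic, using only the figures quoted in the text):

```python
# Rough comparison of the two incentive streams described above (USD).
CHA_MONTHLY = 70         # monthly motivation for trained community health assistants
PER_CASE_INCENTIVE = 5   # per confirmed target NTD case for volunteers

# Confirmed cases per month a volunteer would need to match CHA pay
break_even_cases = CHA_MONTHLY / PER_CASE_INCENTIVE
print(break_even_cases)  # 14.0
```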
With Buruli ulcer it can be some level of correction. With lymphoedema stage one if it is diagnosed early you can interrupt the progression. But when you diagnose at a later stage definitely you cannot help the situation. It is just to give some home base health care and health education. I want to believe that in the nearby future onchocerciasis will be included because it causes disadvantages once you cannot see' (National MoH Staff).

Funding of integrated approaches was, however, the most critical barrier to effective implementation of integrated programme delivery. The funding problem was described as two-fold: (1) the long-term and deep reliance on donor funding for health service delivery by the government of Liberia limits the availability of flexible funding, and (2) there is ongoing funding prioritization by donors and NGDO partners towards specific disease conditions. Informants described the resultant precarious nature of integrated programme delivery, and the additional workload, stress and ongoing negotiation that such structures enforced on the national programme team were frequently observed. Many described that a change to funding flows and partnership approaches was essential to allow for the sustainability of integrated approaches.

'The challenges there is that we have only one partner that is actually supporting and limited funding from government to support these programmes…so most of the funding that come is through the partner so those are the challenges that are actually face with the program' (County MoH Staff).

Some NGDO partners described that there is limited opportunity for national NTD programmes to provide feedback to international donors regarding the rigidity of funding flows and their associated impacts on programme responses. This limits the ability of NGDO partners to work with programmes in a way that is mutually responsive to national priorities, and can compromise the development of equitable partnerships between NGDO partners and national programmes. Furthermore, where NGDO partners are unwilling or unable to collaborate effectively to facilitate integrated approaches and move outside of disease silos, this was observed to be likely to limit progression towards integrated service delivery.

'the way things are structured internationally is the biggest reason these programmes have been implemented vertically for such a long time… Funding can be a problem, disease focused funding can be the most frustrating thing…it doesn't really focus on the human…health workers end up having to go and do one thing for five days and then another thing for the next five days, just because we [NGDO partners] aren't willing to work together…there is such a missed opportunity for people to work together. NTD programmes should be given more opportunities to report back…there is a lack of reflection by partners on the impact their own goals and priorities have…could NTD programmes come together to present a framework for good partnership agreement…we shouldn't be ignorant to the fact that there is a co-dependency between NTD programmes and donors…Getting people to leave their disease silos is challenging… I don't know how many more generations of leprosy experts we will need…but hopefully not that many…you win some you lose some…some people are open to new approaches…others are not' (NGDO Partner).
Discussion
This article aims to consider the extent to which NTD policy and programme reform can contribute towards the development of integrated PCHS through the alignment of value bases and core principles. The findings above have explored the interface between the health system and NTD policy and programme reform in relation to DMDI in Liberia. Drawing on the core elements of disease control programmes and integrated generalized health systems presented in Table 1, and adapting them to be of relevance to NTD programming in Liberia, we suggest that DMDI serves as a bridge between NTD programmes conceptualized around disease control and the development of more integrated PCHS [14,15] (Fig. 4). Our findings illuminate multiple push and pull factors that can facilitate or hinder the alignment of DMDI interventions to the development of PCHS. Within our discussion we consider these push and pull factors in relation to Sheikh, Ranson and Gilson's [12] four key aspects of PCHS (Fig. 1).

Putting people's voices and needs first
A central tenet of putting people's voices and needs first is the way in which health systems are governed [12,17]. Effective approaches to systems governance require consideration of the roles and relations of all systems actors, including international NGDO partners and affected persons, not just national governments [17]. Multiple accountability relationships exist within NTD programme governance (international NGDOs and donors to national government; and national government to affected people) that need to adapt to promote the development of person-centred approaches; these relationships are discussed in turn in this subsection. Within our study, we found that the 'window of opportunity' or 'push' for the development of an integrated PCHS in Liberia presented a critical moment for national actors to hold international NGDO partners more accountable to the provision of an NTD service that responded to their needs and priorities. This saw a long-awaited shift in the core programme objective of the Liberian NTD programme away from a sole focus on the control and elimination of NTDs, to an equally important focus on disease and health systems integration for the provision of longitudinal care for affected populations, a core value of people-centred services [26]. Thus, following a moment of health system crisis, the Liberian NTD team was able to carefully navigate the deep-rooted and historical bureaucratic accountability of the national system (towards international targets and priorities due to chronic aid dependency [27]), and shape the redirection of their programme towards more person-centred approaches. Capacity-strengthening activities that enable a clear role and function of national actors in health governance and priority setting have been described as essential in establishing PCHS [12,28]. Our study findings support this and highlight the need for a key moment of reflection for the NTD community as we strive to establish person-centred approaches to DMDI. We must consider how to support the full and equitable participation of national systems actors in international agenda setting, and support the adaptation of international agendas to local contexts. Despite these achievements at the national level, at lower levels of the health system the lack of true participation in decision making by affected persons represented a 'pull' for NTD programme implementers towards disease-control-centric approaches that see beneficiaries as the (passive) target of health interventions [14].
Participatory governance is essential within PCHS [17] to improve equity and ensure that those with the greatest health needs have the best ability to direct resources [12], and there is increasing recognition of the capability of beneficiaries to contribute towards effective priority setting and governance processes [17,29]. By failing to incorporate mechanisms for these contributions, the NTD programme limits advancement towards a person-centred focus. The use of patient advocates to support the participation of affected persons in priority setting and resource mobilization is increasingly prioritized within the NTD community through networks such as the NTD Non-Governmental Organisation Network (NNN) [3,30,31]. However, our results suggest that a critical challenge remains in ensuring that these actors are given a seat at the table in national policy and programme reform. Limited active engagement of affected persons at national and subnational levels perpetuates paternalistic approaches to care delivery [9], which can hinder quality care experiences and the associated quality of life for individuals and communities [32]. Supporting the health system to understand the problems of people affected by NTDs from their own vantage point is a key and critical step in supporting health practitioners and policy implementers to design strategies that enable the delivery of high-quality care [32].

Relationships matter: health systems as social institutions
As is described in the PCHS literature [33], our findings emphasize that trust and the ability of national programme staff to manage relationships with external (NGDO partner) and internal health systems actors were critical in shaping how far systems could respond and adapt. For example, interpersonal relationships mattered at implementation levels of the health system, where integration of service delivery seemed most permissible. Supportive supervision that established effective working relationships with county health teams enabled national actors to be responsive to the priorities of staff who are the backbone of NTD service delivery; this was seen to be an essential factor in supporting a 'push' towards the development of PCHS [34]. However, regardless of the strengths of these relationships and the ability of programme actors to lobby political will and shape the generation of a new NTD programme vision in Liberia, restrictions within NGDO funding flows were still observed to stall integrated programme delivery. Donor restrictions currently render some partners unable to move outside of disease-specific funding silos, thus reinforcing a 'pull' towards disease- or issue-centric service delivery [14,35]. This limits the responsiveness of NGDO partners and programmes to national health systems priorities and stalls the proposed paradigm shift within the NTD community towards more person-centred approaches. Furthermore, rigid funding flows can limit the ability of national programme implementers to fulfil their leadership and innovation potential, as they are held accountable to the implementation parameters of international organizations which are frequently governed by a one-size-fits-all approach [36].
People-centredness in service delivery
Chronic programme verticalization, which has led to the establishment of parallel NTD programmes shaped by the priorities of international disease experts and funding bodies [14,35], contributed to multiple 'pull' factors which limit the ability of service delivery to become fully people-centred. Health systems strengthening has seldom been prioritized by the NTD community, on the rationale that NTD programmes reach areas where there have been previous health systems failings and so reliance on community resourcefulness is essential [14]. Parallel provision of NTD services has therefore failed to support and address systems weaknesses, thus limiting the absorptive capacity of an already overburdened health system [14]. For example, as is the case in Liberia, weak supply chains and scarce human resources often render systems unable to respond to the needs of affected persons at primary or secondary level due to an absence of medicines and psycho-social support services. Immediate and longitudinal support becomes compromised and the provision of continuous support for affected persons difficult [32]. Engagement with community health structures is essential to improve interconnectedness between service users and providers and is critically important for improving the external accountability of the health system [18]. However, it is important to reflect on how this engagement may contribute to or undermine the people-centredness of service delivery. Incentivization of health workers based on disease case finding reinforces the important surveillance element of their role, but can be seen as at odds with ensuring longitudinal person-centred care [26]. Furthermore, when effectiveness is measured on the basis of disease identification counts, the equity of service delivery can become compromised and/or distorted [32]. Thus, a critical dilemma for any vertical disease programme hoping to support the strengthening of PCHS is how best to support and motivate community health volunteers when they are not adequately or equitably supported within the generalized health system. Establishing quality measures for performance-based financing within DMDI programmes that extend beyond case detection could support the development of a more comprehensive service [32]. Community-based comprehensive services should also seek to move beyond patient- or disease-centred interactions towards approaches that see the person as a whole [32]. Programme implementers undoubtedly evidenced empathy towards the holistic needs of affected persons as a key 'push' factor towards people-centred approaches. However, this is likely to be an ongoing and key test for the NTD community, as a shift in focus away from disease challenges their unit of identity.

Values drive people-centred health systems
Perhaps the strongest principle within the development of PCHS is that values are critical and important drivers within health systems reform [12]. Justice and a focus on people, not diseases, are key principles underlying the proposed paradigm shift within the NTD community [4] and the main reason cited for the increased inclusion of DMDI within the NTD 2021-2030 roadmap. Care for and a desire to support people affected by NTDs were unquestionably at the centre of key informants' motivation for the development of the case management strategy in Liberia, and represent a key 'push' factor towards a person-centred response.
However, we found that NTD programme delivery in Liberia is still orientated, or 'pulled', towards diseases and patients. In making a shift towards the development of integrated person-centred services, a key and ongoing challenge for the NTD community emerges in terms of adjusting from biomedical constructs of disease prevention, diagnosis and treatment to consider the holistic needs of affected persons and their families. Our study showed increasing recognition amongst programme implementers of the broader social impacts of NTDs, specifically in relation to mental ill health. However, the challenges implementers faced in having the resources or knowledge to respond emphasize that there is a need for further evidence generation on how to make best use of scarce resources to support systems strengthening whilst meeting the holistic needs of affected persons.

Study limitations
We have only considered policy and programme reform from the perspective of key decision makers at national and county level within this paper. Engagement with stakeholders at lower systems levels, for example facility staff and community health workers, who are often at the interface of implementing such reforms, would have provided useful and additional critical insights and should be considered as an area of future research. We have not explored the perspectives of people affected by NTDs in this manuscript; however, these insights are essential and are prioritized within other publications from the same study [1]. Reflections on this process from other countries may have further supported the generalizability of these findings. However, we used a case study methodology as an instrument to facilitate thinking within the NTD and health systems community, rather than arguing that Liberia's situation is representative of all countries embarking on the development of more integrated, people-centred DMDI service delivery. Nevertheless, the processes that are in operation in Liberia have the potential to provide learning to other settings that are embarking on the delivery of the WHO's 2021-2030 NTD roadmap and the mainstreaming of NTD services. Our findings showcase that, by prioritizing the development of strong interpersonal relationships across levels and teams within the health system; valuing the needs and priorities of affected persons through their inclusion in health governance; recognizing the broader social impact of NTDs; and maximizing opportunities for policy change, shifts towards more integrated approaches to the management of NTDs provide great opportunity for the development of more person-centred services. However, for country efforts to be successful, this must be accompanied by donor flexibility and responsiveness to the realities of those at the forefront of service delivery.

Conclusion
The case of Liberia illustrates the opportunities and challenges in implementing a policy and programme shift towards integrated PCHS within NTD programme reform. Assessing policy and practice against Sheikh, Ranson and Gilson's [12] core pillars of PCHS should be considered by the NTD community as it seeks to contribute to the development of PCHS through the provision of integrated DMDI interventions.
Thermal density functional theory: Time-dependent linear response and approximate functionals from the fluctuation-dissipation theorem

The van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) is generalized to thermal ensembles. This allows generalization to finite temperatures of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. This produces a natural method for generating new thermal exchange-correlation (XC) approximations.

Density functional theory (DFT) is a popular and well-established approach to electronic structure problems in many areas, especially materials science and chemistry [1]. The Kohn-Sham method imagines a fictitious system of non-interacting fermions with the same density as the real system [2] and from which the ground-state energy can be extracted. Only a small fraction of the total energy, called the exchange-correlation (XC) energy, need be approximated to solve any ground-state electronic problem [1], and modern approximations usually produce sufficient accuracy to be useful [3]. The advent of TDDFT generalized this method to time-dependent problems [4]. Limiting TDDFT to linear response yields a method for extracting electronic excitations [5,6], once another functional, the XC kernel, is also approximated.

But there is growing interest in systems in which the electrons are not close to zero temperature. Warm dense matter (WDM) is partially ionized, solid-density matter having a temperature near the Fermi energy. It has wide-ranging applications, including the astrophysics of giant planets and white dwarf atmospheres [7-14], cheap and ultra-compact particle accelerators and radiation sources [15-17], and the eventual production of clean, abundant energy via inertial confinement fusion [18,19]. One of the most successful methods for simulating equilibrium warm dense matter combines DFT [2,20] and molecular dynamics [21] to capture the quantum mechanical effects of WDM electrons and the classical behavior of ions [7-14,22-24]. Such simulations use the Mermin theorem [25] to generate a KS scheme at finite temperature, defined to generate the equilibrium density and free energy. In practice, the XC free energy is almost always approximated with a ground-state approximation, but formulas for thermal corrections are being developed [26-30].

Many processes of interest involve perturbing an equilibrium system with some time-dependent (TD) perturbation, such as a laser field [31] or a rapidly moving nucleus, as in stopping power [32-34]. Of great interest within the WDM community are calculations of spectra, dynamic structure factors, and the flow of energy between electrons and ions [35-38]. Spectra expose a material's response to excitation by electromagnetic radiation, which would facilitate experimental design and analysis. Dynamic structure factors can be related to the x-ray scattering response, which is being developed as a temperature and structural diagnostic tool for WDM [39]. Thus it would appear that a TD version of the Mermin formalism is required. A theorem is proven in Li et al. [40,41], but the formalism assumes the temperature is fixed throughout the process, and so cannot describe, e.g., equilibration between electrons and ions. Moreover, the proof requires Taylor expansion of the perturbing potential as a function of time, just as in the Runge-Gross (RG) theorem [4].
This can be problematic for initial states with cusps [42], such as at the nuclear centers. (Recent efforts [43,44] have focused on avoiding these complications at zero temperature.) Finally, the RG proof requires invocation of a boundary condition to complete the one-to-one correspondence between density and potential [45], which creates subtleties when applied to extended systems [46].

In the present work, we prove the RG theorem at finite temperature within linear response by generalizing the elegant linear-response proof of van Leeuwen [43] to thermal ensembles. Our proof avoids several of the drawbacks mentioned above, while still providing a solid grounding to much of WDM theoretical work. We then define the exchange-correlation kernel at finite temperature and generalize the Gross-Kohn equation. Finally, we extend the fluctuation-dissipation theorem of ground-state DFT to finite temperatures, and show how this provides a route to equilibrium free-energy XC approximations.

Consider a system of electrons in thermal and particle equilibrium with a bath at some temperature, τ, and with static equilibrium density n^τ(r). The system extends throughout space with a finite average density, i.e., the thermodynamic limit has been taken. The limit of isolated atoms or molecules is achieved by then taking the separation between certain nuclei to infinity. In this sense, no surface boundary condition need be invoked [45], as the density never quite vanishes, while the average particle number per atom or molecule is finite. These electrons are perturbed at t = 0 by a potential δv(r, t) that is Laplace-transformable. To avoid complex questions of equilibration, we consider only the linear response of the system, so that the perturbation does not affect the temperature of the system, as, e.g., Joule heating is a higher-order effect [47]. The Kubo response formula for the density change in response to δv is

\[ \delta n(\mathbf{r}, s) = \int d^3r'\, \chi^\tau(\mathbf{r}, \mathbf{r}', s)\, \delta v(\mathbf{r}', s), \qquad (1) \]

where the Laplace transform is assumed to exist for all s > 0. Within the grand canonical ensemble [48,49], the equilibrium density-density response function is [50]

\[ \chi^\tau(\mathbf{r}, \mathbf{r}', s) = -i \sum_{i,j} (w_i - w_j)\, \frac{\Delta n^\tau_{ij}(\mathbf{r})\, \Delta n^\tau_{ji}(\mathbf{r}')}{s + i\omega_{ji}}, \qquad (2) \]

where

\[ \Delta n^\tau_{ij}(\mathbf{r}) = \langle \Psi_i |\, \hat n(\mathbf{r})\, | \Psi_j \rangle - \delta_{ij}\, n^\tau(\mathbf{r}) \]

are matrix elements of the density fluctuation operator. The energy-ordered indices i, j run over all many-body states (both bound and continuum [51]) with all particle numbers, but Δn^τ_ij vanishes unless N_i = N_j. The transition frequencies are ω_ji = E_j − E_i, and the statistical weights w_i are thermal occupations for the equilibrium statistical operator Γ̂^τ = Σ_i w_i |Ψ_i⟩⟨Ψ_i| and obey w_i < w_j if E_i > E_j and N_i = N_j. This condition is satisfied by the grand canonical ensemble of common interest, with

\[ w_i = \frac{e^{-(E_i - \mu N_i)/\tau}}{\sum_j e^{-(E_j - \mu N_j)/\tau}}. \]

We also need the (Laplace-transformed) one-body potential operator,

\[ \delta \hat V(s) = \int d^3r\, \hat n(\mathbf{r})\, \delta v(\mathbf{r}, s), \]

and its matrix elements,

\[ \delta V_{ij}(s) = \langle \Psi_i |\, \delta \hat V(s)\, | \Psi_j \rangle. \]

Its expectation value is

\[ \overline{\delta V}^{\,\tau}(s) = \sum_i w_i\, \delta V_{ii}(s) = \int d^3r\, n^\tau(\mathbf{r})\, \delta v(\mathbf{r}, s), \]

so that matrix elements of its fluctuations are

\[ \Delta V^\tau_{ij}(s) = \delta V_{ij}(s) - \delta_{ij}\, \overline{\delta V}^{\,\tau}(s) = \int d^3r\, \Delta n^\tau_{ij}(\mathbf{r})\, \delta v(\mathbf{r}, s). \]

Then consider the expectation value

\[ m^\tau(s) = \int d^3r\, \delta n(\mathbf{r}, s)\, \delta v(\mathbf{r}, s). \]

Inserting Eq. (1) and using the definitions, we find

\[ m^\tau(s) = -i \sum_{i,j} (w_i - w_j)\, \frac{\Delta V^\tau_{ij}(s)\, \Delta V^\tau_{ji}(s)}{s + i\omega_{ji}}. \]

This is rearranged as

\[ m^\tau(s) = -2 \sum_{i<j} (w_i - w_j)\, \omega_{ji}\, \frac{|\Delta V^\tau_{ij}(s)|^2}{s^2 + \omega_{ji}^2}. \]

We have ordered all states by energy regardless of particle number here for simplicity, though this is not strictly necessary, since different particle-number subsystems do not interact. For now, we assume no degeneracies. Then the above expression, m^τ(s), vanishes only if every ΔV^τ_ij(s) does for i ≠ j, because of our assumption that the weights decrease monotonically with increasing energy.

The usual statement of the RG theorem is that no two potentials that differ by more than an inconsequential function of time alone can give rise to the same density (for fixed statistics, interparticle interaction, and initial state [4]). Imagine two such perturbations exist, yielding the same density response.
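For readers who want the intermediate step, the following sketch (our own algebra, consistent with the reconstructed Lehmann form above, and not taken verbatim from the published paper) shows how pairing the (i, j) and (j, i) terms produces the sign-definite form:

```latex
% Pair the (i,j) and (j,i) terms, using Delta V_{ji} = Delta V_{ij}^*
% for real s (real delta v) and omega_{ij} = -omega_{ji}:
\[
  -i\,(w_i - w_j)\,\lvert \Delta V^{\tau}_{ij}(s) \rvert^{2}
  \left[ \frac{1}{s + i\omega_{ji}} - \frac{1}{s - i\omega_{ji}} \right]
  \;=\; -\,2\,(w_i - w_j)\,\omega_{ji}\,
        \frac{\lvert \Delta V^{\tau}_{ij}(s)\rvert^{2}}{s^{2} + \omega_{ji}^{2}} .
\]
% For E_j > E_i (omega_ji > 0), the monotonicity assumption gives w_i > w_j,
% so every term is strictly negative unless Delta V_ij(s) vanishes; hence
% m^tau(s) = 0 forces all off-diagonal Delta V_ij(s) to zero.
```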
Since, in linear response, the density response is proportional to the perturbation, we can subtract one from the other, and the statement to be proved is that there is no non-trivial perturbation with zero density response. If it did exist, then m^τ(s) would vanish, and our algebra shows that every ΔV^τ_ij(s) with i ≠ j would also. Finally,

\[ \sum_{k=1}^{N_j} \delta v(\mathbf{r}_k, s)\, \Psi_j(\mathbf{r}_1 \ldots \mathbf{r}_{N_j}) = \sum_i \delta V_{ij}(s)\, \Psi_i(\mathbf{r}_1 \ldots \mathbf{r}_{N_j}), \qquad (12) \]

which can be proven by integrating over all coordinates with Ψ*_k. Then, as ΔV^τ_ij(s) = δV_ij(s) for i ≠ j, and these must vanish if there is no density response, the sum on the right of Eq. (12) collapses to just the j-th term, showing that δv(r, s) must be spatially independent.

We can also include a finite number (M) of degenerate excited eigenstates. (For the complications involved when the ground state is degenerate, see Ref. [52].) For such states, ω_ij = 0 and the argument above no longer implies that δV_ij(s) vanishes, as the perturbation couples degenerate states within the same subspace. But simply choose at least M points in the 3N-dimensional coordinate space that are not on any nodal hypersurface of the degenerate subspace. Then the only solution to Eq. (12) is again that δv(r, s) must be independent of r. Thus we have generalized the van Leeuwen proof to thermal ensembles, even with finite degeneracies among excited states. Our proof applies to any ensemble with weights that monotonically decrease with increasing energy for each particle number [53,54]. This avoids complications caused by cusps in initial wavefunctions [42,55]. Extension to spatially periodic potentials is straightforward, as no boundary condition [45] was invoked [46].

In order for the above result to be of practical use, we consider the KS scheme for finite-temperature, time-dependent systems and provide a method for generating XC approximations. We assume the equilibrium Mermin-Kohn-Sham (MKS) [2,25] potential exists. At this point, we switch to using the more familiar Fourier-transform notation, but in fact all results and definitions apply only to Laplace-transformable perturbations. (In practice, this distinction rarely matters, but occasional formal difficulties arise if this restriction is not made; see Ref. [56] and Sec. 3.2 of Ref. [43].)

First we generalize the Gross-Kohn response formula [57] to thermal ensembles. Define

\[ \chi^\tau(\mathbf{r}, \mathbf{r}', \omega) = \sum_{i,j} (w_i - w_j)\, \frac{\Delta n^\tau_{ij}(\mathbf{r})\, \Delta n^\tau_{ji}(\mathbf{r}')}{\omega - \omega_{ji} + i\eta}, \qquad (13) \]

where η → 0+ [58]. Because of our proof of one-to-one correspondence, we can invert the response function (excluding a constant) and write

\[ \delta v(1) = \int d2\; \chi^{\tau\,-1}(1, 2)\, \delta n(2), \qquad (14) \]

where 1 denotes the coordinates r, t, and 2 another pair [59]. The standard definition of XC is

\[ v_{\rm XC}(1) = v_{\rm S}(1) - v(1) - v_{\rm H}(1), \qquad (15) \]

where v_S is the one-body potential of the non-interacting KS system and v_H is the Hartree potential [60]. Differentiating with respect to n(2), this yields

\[ f^\tau_{\rm XC}(1, 2) = \frac{\delta v_{\rm XC}(1)}{\delta n(2)} = \chi^{\tau\,-1}_{\rm S}(1, 2) - \chi^{\tau\,-1}(1, 2) - f_{\rm H}(1, 2), \qquad (16) \]

which defines the XC kernel at finite temperature, where χ^τ_S is the KS response function [58] and the traditionally defined Hartree contribution is simply

\[ f_{\rm H}(1, 2) = \frac{\delta(t_1 - t_2)}{|\mathbf{r}_1 - \mathbf{r}_2|}. \qquad (17) \]

This follows the definition within the Mermin formalism [25] (but see Refs. [48] and [53] for alternative choices and their consequences). Inverting yields the thermal Gross-Kohn equation [57]:

\[ \chi^\tau(\mathbf{r}, \mathbf{r}', \omega) = \chi^\tau_{\rm S}(\mathbf{r}, \mathbf{r}', \omega) + \int d^3x \int d^3x'\, \chi^\tau_{\rm S}(\mathbf{r}, \mathbf{x}, \omega) \left[ f_{\rm H}(\mathbf{x}, \mathbf{x}') + f^\tau_{\rm XC}(\mathbf{x}, \mathbf{x}', \omega) \right] \chi^\tau(\mathbf{x}', \mathbf{r}', \omega). \qquad (18) \]

A simple approximation is then the thermal adiabatic local density approximation (thALDA), in which the thermal XC kernel is approximated using the XC free energy density per particle of the finite-temperature uniform gas, a^{τ,unif}_XC(n):

\[ f^{\tau,{\rm thALDA}}_{\rm XC}(\mathbf{r}, \mathbf{r}', \omega) = \delta(\mathbf{r} - \mathbf{r}')\, \left. \frac{d^2}{dn^2}\!\left[ n\, a^{\tau,{\rm unif}}_{\rm XC}(n) \right] \right|_{n = n^\tau(\mathbf{r})}, \qquad (19) \]

which ignores its nonlocality in space and time, and could be used to generalize ALDA calculations of excitations in metals and their surfaces [61].
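To make the Dyson-like structure of the reconstructed Gross-Kohn equation (18) concrete, here is a minimal numerical sketch of solving it on a discretized spatial grid at a single frequency. This is our own illustration, not code from the paper; `chi_s`, `f_hxc`, and the grid spacing are placeholder inputs.

```python
import numpy as np

def gross_kohn_chi(chi_s: np.ndarray, f_hxc: np.ndarray, dV: float) -> np.ndarray:
    """Solve chi = chi_s + chi_s (f_H + f_XC) chi, Eq. (18), in matrix form.

    chi_s : (N, N) KS response matrix at one frequency (may be complex)
    f_hxc : (N, N) Hartree-plus-XC kernel at the same frequency
    dV    : grid volume element (turns the double integral into sums)
    """
    N = chi_s.shape[0]
    # (I - chi_s f_hxc dV^2) chi = chi_s  =>  chi = A^{-1} chi_s
    A = np.eye(N) - chi_s @ f_hxc * dV**2
    return np.linalg.solve(A, chi_s)

# Setting f_hxc to the Hartree kernel alone yields the (thermal) RPA response.
```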
We next deduce the fluctuation-dissipation theorem for MKS thermal DFT calculations. This allows us to connect the response function and the Coulomb interaction through the dynamical structure factor [62]. In the MKS scheme, the XC contributions to the free energy are defined by subtracting the corresponding KS and Hartree quantities from those of the interacting system; by subtraction,

\[ A^\tau_{\rm XC} = (T^\tau - T^\tau_{\rm S}) - \tau\, (S^\tau - S^\tau_{\rm S}) + (U^\tau - U_{\rm H}), \]

where T denotes kinetic, U potential, and S entropic components. Using many-body theory, the density-density response function determines the potential contribution to correlation [63,64], just as in the ground state [65], in terms of Δχ^τ = χ^τ − χ^τ_S. By introducing a coupling constant λ while keeping the density fixed, the thermal connection formula [66] yields the XC free energy, Eq. (25), in terms of the response function evaluated at scaled densities and temperatures, where the scaled density is n_γ(r) = γ³ n(γr) and γ = √(τ′/τ). This is exact, but only if the exact thermal XC kernel is used, as defined by Eq. (16). If the kernel is omitted, the result is the thermal random-phase approximation [67].

Next, we discuss the many applications of Eq. (25). There has been tremendous progress in implementing and testing the random phase approximation for calculating the XC energy in ground-state calculations, and such calculations, while more expensive than standard DFT, are becoming routine [68-70]. Our results provide a thermal generalization that could likewise be used to generate new thermal XC approximations for equilibrium WDM calculations. At finite temperature, the XC hole fails to satisfy the simple sum rules [71] that have proven so powerful in constructing ground-state approximations [72]. But our formula uses instead the XC kernel. Inserting Eqs. (18) and (19) into Eq. (25) yields thALDA-RPA, a new approximation to the equilibrium correlation energy that can be applied to any system. Another, simpler approximation is ALDA, in which only the zero-temperature XC energy is used in the kernel. Both can be relatively easily evaluated for a uniform gas, and the resulting a^τ_XC(r_s) found from Eq. (25) can be compared with an accurate parametrization [27]. Even in the uniform gas, thALDA is an approximation because both the q- and ω-dependence of the true f^τ_XC are missing; thus the efficacy of these approximations can be tested on the uniform case.

Next we discuss which known exact conditions on the zero-temperature kernel apply to the thermal kernel, and which do not. Because the equilibrium solution is a minimum of the thermal free-energy functional, the zero-force theorem [64],

\[ \int d^3r \int d^3r'\; n^\tau(\mathbf{r})\, n^\tau(\mathbf{r}')\, f^\tau_{\rm XC}(\mathbf{r}, \mathbf{r}', \omega) = 0, \qquad (26) \]

should be satisfied, and the kernel should be symmetric in its spatial arguments. However, any simple formula for a one-electron system [73] is not true at finite temperature, as the particle number is only an average in the grand canonical ensemble [49,71]. A last set of conditions is found by considering the coupling-constant dependence in DFT. A parameter λ is introduced that multiplies the electron-electron interaction, while keeping the density constant. Because of simple scaling relations, the λ-dependence can be shown to be determined entirely by coordinate scaling of the density, as in Eq. (25), i.e., determined by the functional itself, evaluated at different densities. This is used in both ground-state DFT [74] and time-dependent DFT [75], and has been generalized to the thermal case [66,76]. Although the thermal connection formula does not require this relation for the response function, it is useful in many contexts.
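As an illustration of how the thALDA kernel of Eq. (19) could be evaluated in practice, here is a small sketch using a numerical second derivative. This is our own example: the uniform-gas free energy per particle `a_xc` is a placeholder for a parametrization such as that of Ref. [27].

```python
def f_xc_thalda_local(n, a_xc, tau, dn=1e-6):
    """Local coefficient of the thALDA kernel, Eq. (19): the second
    density-derivative of the uniform-gas XC free energy density
    n * a_xc(n, tau), evaluated at the local density n.

    The full kernel carries an additional delta(r - r') factor.
    a_xc : callable (n, tau) -> XC free energy per particle (placeholder).
    """
    f = lambda x: x * a_xc(x, tau)  # XC free energy density of the uniform gas
    # Central finite difference for d^2 f / dn^2
    return (f(n + dn) - 2.0 * f(n) + f(n - dn)) / dn**2
```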
From the Lehmann representation [50] of χ^τ [63], we find the λ-dependent response function satisfies

\[ \chi^{\tau,\lambda}[n](\mathbf{r}, \mathbf{r}', \omega) = \lambda^4\, \chi^{\tau/\lambda^2}[n_{1/\lambda}](\lambda \mathbf{r}, \lambda \mathbf{r}', \omega/\lambda^2). \qquad (27) \]

Insertion into the definition of f_XC yields

\[ f^{\tau,\lambda}_{\rm XC}[n](\mathbf{r}, \mathbf{r}', \omega) = \lambda^2\, f^{\tau/\lambda^2}_{\rm XC}[n_{1/\lambda}](\lambda \mathbf{r}, \lambda \mathbf{r}', \omega/\lambda^2), \qquad (28) \]

and the potential perturbation obeys an analogous scaling relation. Insertion of the scaling relation for the kernel into the thermal connection formula yields a more familiar analog of the ground-state formula. The exchange kernel must scale linearly with the coupling constant, so Eq. (28) produces a rule for the scaling of the exchange kernel:

\[ f^{\tau}_{\rm X}[n](\mathbf{r}, \mathbf{r}', \omega) = \lambda\, f^{\tau/\lambda^2}_{\rm X}[n_{1/\lambda}](\lambda \mathbf{r}, \lambda \mathbf{r}', \omega/\lambda^2). \]

Because the poles in f_XC are λ-dependent, we expect pathologies similar to those in zero-temperature TDDFT if the exact frequency-dependent f^τ_X is used in Eq. (25) [77]. But adiabatic EXX (AEXX), not including frequency dependence, produces a well-defined approximation to the thermal free energy in which the kernel is non-local. This and the other approximations proposed above could prove useful in WDM simulations when thermal XC effects are relevant (but see [78] for discussion of the subtleties involved in thermal XC approximations).

In conclusion, we have generalized the proofs and constructions of TDDFT within the linear-response formalism to thermal ensembles, including those containing a finite number of degeneracies. We have avoided ambiguities about the relative perturbative and thermal equilibration time scales, allowed for degenerate excited states more common in finite-temperature ensembles, avoided invoking boundary conditions and the requirement of Taylor expandability, and provided firm footing for finite-temperature, time-dependent KS-DFT in the linear-response regime. Definition of relevant KS quantities led to a description of their properties under scaling. Further, we have shown that these quantities, in combination with the thermal connection formula, produce new routes to thermal DFT approximations for use in equilibrium MKS calculations. Implementation and tests of these approximations are ongoing.

APJ acknowledges support from DE-FG02-97ER25308 and the University of California President's Postdoctoral Fellowship, PG from DE14-017426, and KB from CHE-1464795 NSF. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
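As a consistency check on the reconstructed scaling relations (our own algebra, not part of the original text; frequency and temperature arguments suppressed for brevity), one can verify that the λ² scaling of Eq. (28) follows from Eq. (27) and the kernel definition in Eq. (16):

```latex
% If chi scales as Eq. (27), its inverse must carry a factor lambda^2:
% take K(x, r') = lambda^2 chi^{-1}(lambda x, lambda r') and check
\[
\int d^3x\; \lambda^{4}\chi(\lambda\mathbf{r},\lambda\mathbf{x})\;
            \lambda^{2}\chi^{-1}(\lambda\mathbf{x},\lambda\mathbf{r}')
 = \lambda^{3}\!\int d^3u\;\chi(\lambda\mathbf{r},\mathbf{u})\,
      \chi^{-1}(\mathbf{u},\lambda\mathbf{r}')
 = \lambda^{3}\,\delta^{(3)}(\lambda\mathbf{r}-\lambda\mathbf{r}')
 = \delta^{(3)}(\mathbf{r}-\mathbf{r}') .
\]
% The same holds for chi_S^{-1}, and the Hartree kernel at coupling lambda
% obeys lambda/|r - r'| = lambda^2 / |lambda r - lambda r'|, so every term
% in Eq. (16) -- and hence f_XC itself -- scales as lambda^2, as in Eq. (28).
```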
Cervical cancer analysis: From burden to treatment

Address for Correspondence: Dr. Disha Tiwari, Senior Resident, Department of Radiation Oncology, Delhi State Cancer Institute, Delhi, India. E-mail: tiwaridisha@Gmail.com

Background: Cancer has emerged as a huge epidemic over the past two decades. There is a declining trend in the incidence of cervical cancer in the western world; on the contrary, in India we have observed that this disease is a great menace to women's health and shows a rising trend. Within the country there is a wide range of variation in demography and epidemiology. There is a lack of published data on epidemiology, such as the age, incidence, burden and compliance of patients with cervical cancer in Uttar Pradesh. Aims and Objectives: The current study aims to gather evidence to understand the pattern of cervical cancer burden in the community and to find the lacunae at both the treatment delivery and receiving ends. Materials and Methods: A retrospective study was conducted on patients with cancer cervix visiting the OPD of the department of radiation oncology. The study comprised retrieval of medical records for different variables such as age, stage and district, and analysis of these to understand the presentation and burden of disease. Results: A total of 470 patients were studied for their epidemiology. The majority of patients were from Lucknow (78) and its nearby districts. Out of the 75 districts of Uttar Pradesh, patients from 38 different districts had come to seek radiation treatment, which is almost half of Uttar Pradesh. The majority of patients in the study were women in their 5th and 6th decades. The most common stage at diagnosis was IIB. On histopathological evaluation, the most common variant found was squamous cell carcinoma with moderate differentiation (26.38%). Conclusion: Our study highlights the lack of resources in Uttar Pradesh, as patients had travelled from peripheral districts to seek treatment, which led to frequent treatment breaks and poor compliance with follow-up.

INTRODUCTION
Cancer has emerged as a huge epidemic over the past two decades.1 Within the domain of cancer, cervical cancer predominates among women of low socio-economic background, especially in developing countries like India.2 There is a declining trend in the incidence of cervical cancer in the western world; on the contrary, in India we have observed that this disease is a great menace to women's health and shows a rising trend. As per Ferlay J et al,3 a woman's cumulative risk of developing cervical cancer by age 74 is 0.9% in developed countries compared to 1.9% in developing countries. As per population-based registries from 2012-2014 in the National Cancer Registry Programme (NCRP),4 the incidence of cervical cancer holds the top position among females of rural areas, and it is the second most common cancer among urban females. The GLOBOCAN 20125 report for India for cervix uteri cancer reveals an estimated 123,000 new cases and 67,000 deaths. The Indian Council of Medical Research (ICMR) has projected that cervical cancer will be the second most common gynecological malignancy, with an estimated 1,23,291 new cases by 2020.6 As per SEER data, the median age at diagnosis is 49 years, whereas in India the peak age for cervical cancer incidence is 55-59 years.7
Cervical cancer is almost always associated with human papilloma virus (HPV) infection. The development of carcinoma cervix is a multistep process initiated by persistent infection with high-risk HPV, which in a limited number of cases progresses via cervical intraepithelial neoplasia to invasive cervical cancer.8 Cervical cancer arises most commonly from the squamous epithelial cells of the uterine cervix (squamous cell carcinoma) or the glandular epithelial cells of the endocervix (adenocarcinoma). These two types make up around 90% of cases of cervical cancer, and squamous cell carcinoma is the most common, accounting for about 80% of cases. Human papilloma virus is the single most important cause of invasive cervical cancers, and the most common strains harboured by women with cancer cervix are HPV16 and HPV18.9 HPV infection is seen in sexually active women and mostly resolves by itself without causing obvious signs of disease, though changes may be evident in cervical smears. However, in a few women, infection persists and may progress to invasive cervical cancer. This progression generally takes many years, which forms the basis of cervical cancer screening.

In the western world, the incidence of adenocarcinoma is rising compared to squamous cell carcinoma because of extensive cervical cancer screening programs, as adenocarcinoma is liable to be missed on screening.

On retrospective analysis, we found that the two main reasons which unfold the baffling mystery of the rising trend in our part of the world are lack of awareness and lack of education, especially in women of low socio-economic strata. Issues related to sex and diseases transmitted this way, such as human papilloma virus, are still taboo in India, so it is difficult to educate and motivate the masses for screening. Apart from cancer awareness campaigns, screening facilities are not yet available for rural populations. On account of the above-mentioned reasons, the majority of rural females present with advanced disease. In developing countries like India, a major reason we face the problem of cervical cancer-related death is the lack of widespread use of vaccines and of implementation of screening programs in peripheral hospitals. Government and health professionals have to work in collaboration to bridge this huge gap. Mass education is required to achieve a breakthrough.

Within the country there is a wide range of variation in demography and epidemiology. There is a lack of published data on epidemiology, such as the age, incidence, burden and compliance of patients with cervical cancer in Uttar Pradesh. In this retrospective analysis, we have tried to gather evidence to understand the pattern of cervical cancer burden in the community.

MATERIALS AND METHODS
This study was carried out in the Department of Radiotherapy, KGMU, Lucknow, a tertiary care center which provides cancer diagnosis and treatment facilities not only to the people of Uttar Pradesh but also to neighbouring states such as Bihar and countries such as Nepal. The study was conducted for the period January 2013 to December 2015. Medical records of included patients were retrieved, and data were gathered for different variables such as age, histopathology, stage, operative details, treatment taken and follow-up. Keeping in view that many patients default after undergoing a full course of treatment, we tried to obtain their status via telephone. Data were entered and analyzed using SPSS 21.0.

Ethical considerations
This study was conducted after approval from the Institutional Ethics Committee of King George's Medical University, Lucknow.
RESULTS
A total of 11,068 patients were registered in the Department of Radiotherapy from January 2013 till December 2015, as shown in Table 1. Of these, 1678 (15%) patients were registered as cancer cervix in different units of the department of radiotherapy. To rule out any heterogeneity in treatment policy prevailing in different units, we chose a single unit for our study purpose.

A total of 470 patients were studied for their epidemiology. In our study, we found that the majority of patients were from Lucknow (78) and its nearby districts such as Sitapur (40), Hardoi (31) and Barabanki (30), as shown in Figure 1. Out of the 75 districts of Uttar Pradesh, patients from 38 different districts had come to seek radiation treatment. For 28% of patients, no information regarding stage was found. On further analysis of this subset, these were patients who had received treatment at other centers; the majority were post-operated cases with no prior examination or surgical details, as they had been operated on in peripheral centers.

On histopathological evaluation, the most common variant found was squamous cell carcinoma with moderate differentiation (26.38%). Out of 470 patients, 16 were registered as either palliative or metastatic cases. 323 (69%) patients were registered with intact cervix and 131 (28%) as post-operated cases. 31% of patients defaulted either after registration or during the initial work-up, as shown in Table 3.

DISCUSSION
The present study was undertaken in an effort to find the pattern of cervical cancer burden coming to our institution and patients' compliance with treatment and follow-up, and to understand the issues regarding loss to follow-up. The most common age group at presentation in our study is 41-60 years, which is in accordance with the National Cancer Registry Programme (NCRP). In a retrospective analysis of cancer cervix patients, Saibishkumar et al.10 reported the most common stage at diagnosis as IIB and the most common histopathology as squamous cell carcinoma followed by adenocarcinoma; likewise, in our study we found similar results for both stage and histopathology. A study done in South India11 mentioned poor socioeconomic status as a major factor in delayed diagnosis, leading to an advanced stage at presentation. A study undertaken by Mandal and Roy12 reported that poor compliance in patients with cancer cervix is due to financial constraints.

When we analysed our population to understand poor compliance, the major reasons that we can enumerate are, first, financial issues, as the majority of our patients were from a poor socio-economic background. Secondly, we would like to emphasize that, due to the lack of transportation from the periphery to our center, a large number of patients default at various steps of treatment. Thirdly, lack of education and awareness in the family is a big social challenge which, like a spider's web, entangles women and takes a toll on their lives.

Poor family support and other social responsibilities, which women have to bear while keeping their own health aside, are causes of advanced presentation among the rural population.

CONCLUSION
Our study highlights the lack of resources in the peripheral districts of Uttar Pradesh, as patients had travelled from the periphery to the capital to seek treatment, which led to frequent treatment breaks and poor compliance with follow-up. As the majority of patients were diagnosed at an advanced stage, we would also like to highlight the lack of implementation of screening programs in the community.
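The reported proportions can be verified directly from the counts given above (a quick arithmetic check, using only figures quoted in the text):

```python
# Quick check of the proportions reported above.
total_registered = 11068   # all registrations, Jan 2013 - Dec 2015
cervix_cases = 1678        # registered as cancer cervix
studied = 470              # patients analysed in the study unit

print(f"Cervix share of registrations: {cervix_cases / total_registered:.1%}")  # ~15.2%
print(f"Intact cervix: {323 / studied:.0%}")   # ~69%
print(f"Post-operated: {131 / studied:.0%}")   # ~28%
```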
There are massive lacunae between prevention strategies and the facilities availed by the population. Education and awareness regarding vaccination and the Pap test are the need of the hour to fight this problem from its inception. This will decrease the load on the present health infrastructure. To conclude, we require more hospital- and population-based registries to get the real picture and to improve our infrastructure accordingly. An integrated management approach, involving collaboration between hospital-based workers and community workers, is required to combat the current Indian scenario of cancer cervix.

Figure 1: District-wise distribution of cervical cancer patients

Table 1: Annual registration of cervical cancer patients

The majority of patients in the study were women in their 5th and 6th decades. The most common stage at diagnosis was IIB (31.70%), followed by IIIB (27.65%), as shown in Table 2.
2019-01-31T14:07:36.403Z
2018-12-11T00:00:00.000
{ "year": 2018, "sha1": "e876227d3fa257dd12a7cd9eef67bb9756189303", "oa_license": "CCBYNC", "oa_url": "https://www.nepjol.info/index.php/AJMS/article/download/21058/18830", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e876227d3fa257dd12a7cd9eef67bb9756189303", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269482516
pes2o/s2orc
v3-fos-license
Financial incentive interventions for smoking cessation among Chinese smokers: study protocol for a cluster randomised controlled trial

Introduction: There is an urgent need to relieve the burdens caused by tobacco use through feasible and effective smoking cessation interventions, particularly in a middle-income country with less accessible smoking cessation services and a high demand for quitting smoking. Financial incentives have been shown to be effective in changing health behaviours; their portability to wider implementation and their effectiveness in increasing smoking cessation rates therefore need to be tested.

Methods and analysis: This is a three-arm cluster randomised controlled trial. 462 eligible participants will be assigned to two financial incentive groups (rewards or deposits) or the control group. All participants, including those in the control group, will receive text messages developed by the US National Cancer Institute to help them quit smoking over a 3-month intervention period. In addition to the text messages, reward group participants will be rewarded with CNY200 and CNY400 (CNY100 is approximately US$15) for sustained smoking abstinence at the 1-month and 3-month follow-up assessments; participants in the deposit group will accumulate CNY200 and CNY600 in deposit accounts after verified smoking abstinence at the 1-month and 3-month follow-up assessments, and all deposits will be paid out at once right after the 3-month follow-up visit. The primary outcome is the biochemically verified smoking abstinence rate sustained for 6 months after enrolment.

Ethics and dissemination: This trial protocol has been approved by the Ethics Committee of Peking University Health Science Centre (date: 23 February 2023; ethical approval number: IRB00001052-22172). Results and findings of this trial will be disseminated in peer-reviewed journals and at professional conferences.

Trial registration number: ChiCTR-IOR-2300069631.

- The phrase "the ambiguity in the function mechanisms" is not very clear, could you rephrase? In "the deposit scheme, different from rewards, is featured in that individuals receive cumulative rewards for short-term desired behaviors," could you rephrase the underlined part? Please consider using the word "effective" in the introduction instead of "efficacious".

- Other comments: On page 5 you mention the Trans-theoretical Model. However, I did not understand how you incorporate it in the design of the intervention. Could you please give further details on how it is related to your intervention?

- Figure 1 / inclusion and exclusion criteria: Your trial being a cluster randomised trial, shouldn't "participant inclusion" and informed consent happen after random allocation? Please modify the figure to specify that the worksite inclusion criteria are verified before randomisation.

- Could you give more details on the types of worksites you are expecting to recruit?

- Can you please specify that money can be sent via the WeChat app, for international readers who are not familiar with this app? (I had to google it to be sure.)

- Can you give more detail on who will conduct follow-up visits? How will the visits be planned? How many people could you hire to do that? Could you give a preliminary calendar of the recruitment process? When will you start? I can't seem to access your trial on https://www.chictr.org.cn/ Did you start recruitment already?

- Could you give more details about the economic evaluation?
In the introduction you state that you expect the intervention to be "cost-saving and cost-effective in the long term". How will you measure that?

- You mention that you wish to examine the behaviour change process; how is it measured and evaluated? What are the "behavioral economics variables and cognitive factors related to smoking" you plan on studying?

- Sample size: In the Halpern et al. study you cite to justify your sample size calculation, the highest cessation rate was 2.9% (in the redeemable deposit group). Please provide another reference to justify the 18-times-higher cessation rate that you expect in the intervention group. Please provide more clarification and explanation for the phrase "The design effect is 1+(m-1)ICC=1.528". An ICC = 0.022 is extremely low for me. Did you mean 0.22?

REVIEWER: Hughes, Jane; University of Sheffield, Section of Public Health
REVIEW RETURNED: 08-Jan-2024

GENERAL COMMENTS
This is an interesting piece of work and appears well thought out. It would have been useful perhaps to have a bit more information about the secondary outcomes described as decisional balance and self-efficacy and how these were measured. There are quite a few points in the manuscript where "trail" has been written instead of "trial", so these need to be changed. I would be interested to see the results of this study once published.

VERSION 1 - AUTHOR RESPONSE

Response to Reviewer 1

- The standard of written English should be improved throughout the manuscript, which contains many errors and writing issues. This makes the text complicated to understand in some passages. Even as a non-native English speaker myself, I can still detect many writing issues. For example, on the first page alone (between two '*'): "disproportional *to high* demand for quitting."; "*despite that* most studies included were conducted"; "However, this study took place in Hong Kong *that is classified as the high-income economy*, differing from mainland China *as the middle-income economy*".

Reply: Thank you for your comment. We have carefully proofread the manuscript to minimize grammatical and bibliographical errors. "*despite that* most studies included were conducted" has been modified to "despite the fact that most……"; "disproportional *to high* demand for quitting." has been modified to "disproportional to the high demand for quitting"; "the middle/high-income economy" has been modified to "this study took place in Hong Kong, a city that is classified as a high-income economy, in contrast to mainland China, which is classified as a middle-income economy".

- On page 2, in the Ethics and dissemination section, please modify "trail" to "trial". On page 9 please change "where the trail is conducted." to "where the trial is conducted."

Reply: We are sorry for the mistakes and we have corrected them. Thank you for your comment.

- The phrase "the ambiguity in the function mechanisms" is not very clear, could you rephrase? In "the deposit scheme, different from rewards, is featured in that individuals receive cumulative rewards for short-term desired behaviors," could you rephrase the underlined part?
Reply: Thank you for your comment. We have rephrased these phrases to make them clearer. The modified versions are: "…the unclear operational mechanisms further complicate the transferability of financial incentive frameworks to smokers in mainland China." and "Unlike traditional rewards, a deposit scheme is featured in that it incentivizes individuals to earn incremental rewards for desired behaviors, with the accumulated bonus being forfeited if the set target is not achieved. This strategy is informed by the concept of present bias, a phenomenon in behavioral economics that proposes that individuals are generally more motivated to avoid losses than to seek gains."

- Please consider using the word "effective" in the background instead of "efficacious".

Reply: Thank you for your comment. We have modified this both in the abstract and in the background section.

- Other comments: On page 5 you mention the Trans-theoretical Model. However, I did not understand how you incorporate it in the design of the intervention. Could you please give further details on how it is related to your intervention?

Reply: Thank you for your valuable feedback. In response to your comment, we have further expanded the background section to provide a more comprehensive explanation of how the Trans-theoretical Model has been integrated into our intervention methodology. Specifically, we have outlined how text messages are utilized across all study arms, including the control group, as a means to enhance intrinsic motivation and to facilitate a deeper understanding of the intervention's underlying mechanisms and impact within a theoretical framework. The paragraph commencing with "Previous research on financial incentives has highlighted challenges..." contains the additional details.

- Figure 1 / inclusion and exclusion criteria: Your trial being a cluster randomised trial, shouldn't "participant inclusion" and informed consent happen after random allocation? Please modify the figure to specify that the worksite inclusion criteria are verified before randomisation.

Reply: Thank you for your valuable feedback. We have made adjustments to the figure to provide a clearer representation of the assessment of worksite eligibility. Regarding the sequence of participant inclusion and random allocation, our approach is influenced by whether interventions are applied at the cluster level. As outlined in the CONSORT 2010 statement: extension to cluster randomised trials,1 the methodology may vary depending on whether interventions are implemented at the cluster level, the individual participant level, or a combination of both. In our study, worksites are initially evaluated for eligibility, followed by participant recruitment and screening. Randomization of clusters occurs once participant eligibility and consent have been confirmed.

- Could you give more details on the types of worksites you are expecting to recruit?

Reply: Thank you for your comment. We have added this point in the Study setting and recruitment section: "We expect to recruit worksites primarily in the mining, manufacturing or energy industries, which are characterized by high labour intensity and high smoking rates."
- Can you please specify that money can be sent via the WeChat app, for international readers who are not familiar with this app? (I had to google it to be sure.)

Reply: Thank you for your feedback. It is important to clarify that the funds are not transferred directly through the WeChat app but are provided in cash or via transfer by the study personnel during the follow-up visits. Additionally, we have included the exchange rate (CNY100 is approximately US$15) in the sections detailing specific amounts of financial incentives.

- Can you give more detail on who will conduct follow-up visits? How will the visits be planned? How many people could you hire to do that?

Reply: Thank you for your question regarding the follow-up visits in our study. We have modified the follow-up assessments section. The follow-up visits will be conducted by study personnel from our research team, all of whom are trained postgraduate research students or research assistants. The planning of the visits will involve scheduling appointments in advance to accommodate participants' availability and ensure minimal disruption to their workday. In terms of staffing, we have a team of 6 postgraduate students and 2 research assistants who will be responsible for conducting the follow-up visits, allowing for thorough support and attention to each participant throughout the study period. Given the space constraints, we did not include everything mentioned above in the manuscript, but we believe the necessary details have been included.

- Could you give a preliminary calendar of the recruitment process? When will you start? I can't seem to access your trial on https://www.chictr.org.cn/ Did you start recruitment already?

Reply: Thank you for your comment. We have added the survey period in the methods section. We have registered on the website and the registration number is ChiCTR2300069631. You can also search for "Financial incentives for smoking cessation: a cluster randomized control trial", which was our registration title. We have completed the recruitment; the recruiting period was between June 2023 and July 2023, as stated in the registry.

- Could you give more details about the economic evaluation? In the introduction you state that you expect the intervention to be "cost-saving and cost-effective in the long term". How will you measure that?

Reply: Thank you for your comment. After careful consideration, we have decided to remove the statement regarding the cost-benefit analysis from the background. We acknowledge the challenges of extrapolating results from a randomized controlled trial to a societal level. While the economic evaluation is significant in the long term, we will focus on analyses and results that align closely with our objectives. We appreciate your input.

- You mention that you wish to examine the behaviour change process; how is it measured and evaluated? What are the "behavioral economics variables and cognitive factors related to smoking" you plan on studying?

Reply: Thank you for your valuable comment. In response, we have made the necessary modifications to enhance the clarity of the statements in the Baseline assessment section. As stated in the background section, the behavioral economics variables encompass loss aversion and delayed discounting, while the cognitive factors related to smoking include decisional balance and self-efficacy. We have also included more detailed measurements in the outcome section.

- Sample size:
In the Halpern et al. study you cite to justify your sample size calculation, the highest cessation rate was 2.9% (in the redeemable deposit group). Please provide another reference to justify the 18-times-higher cessation rate that you expect in the intervention group. Please provide more clarification and explanation for the phrase "The design effect is 1+(m-1)ICC=1.528". An ICC = 0.022 is extremely low for me. Did you mean 0.22?

Reply: Thank you for your comment. The study by Halpern et al. adopted a special consent form: participants who did not opt out by notifying trial staff before the enrollment date were automatically enrolled (a design known as "opt-out consent"). Besides, the control group in our study is a positive control. Considering these differences, we estimated the abstinence rates from the intervention groups of the engaged cohort of the study conducted by Halpern et al.

Thank you for your advice regarding the design effect, but in a cluster randomised trial the concept of the design effect should be generally understandable. As demonstrated in the CONSORT 2010 statement: extension to cluster randomised trials:1 "(Box 2) The reduction in effective sample size depends on average cluster size and the degree of correlation within clusters, ρ, also known as the intracluster (or intraclass) correlation coefficient (ICC). If m is the cluster size (assumed to be the same for all clusters), then the inflation factor, or 'design effect,' associated with cluster randomisation is 1+(m−1)ρ. Although typically ρ is small (often <0.05) and it is often not known when a trial is planned (and only estimated with error after a trial is completed), its impact on the inflation factor can be considerable if the clusters are large. In general, the power is increased more easily by increasing the number of clusters rather than the cluster size."

We have verified that the ICC we calculated in the study was correct. The study we cited demonstrated that "The crude ICCs were generally small, with a mean of .0163 and values ranging from 0 to .0650."2 Still, thank you for your comment.

Response to Reviewer 2

- This is an interesting piece of work and appears well thought out. It would have been useful perhaps to have a bit more information about the secondary outcomes described as decisional balance and self-efficacy and how these were measured. There are quite a few points in the manuscript where "trail" has been written instead of "trial", so these need to be changed. I would be interested to see the results of this study once published.

Reply: Thank you for pointing this out. We have added a detailed description in the outcome section: "Decisional balance and self-efficacy are measured using 6-item and 3-item 5-point Likert scales, respectively. Each item is rated from strongly disagree to strongly agree. Examples of pros items include 'My health will improve after quitting smoking,' while cons items include 'My social life will be affected after quitting smoking.' An example of a self-efficacy item is 'I am confident that I can successfully quit smoking.' The scale's reliability and validity have been verified among adult smokers in China." We have also carefully revised the manuscript to minimize typographical, grammatical and bibliographical errors.

All the corrections indicated above are in the revised manuscript. Thank you and all the reviewers for the kind advice. We look forward to hearing from you regarding our submission. We would be glad to respond to any further questions and comments that you may have.
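The design-effect arithmetic discussed in the sample-size exchange above can be checked directly. The following sketch is illustrative only: the average cluster size m = 25 is inferred from the quoted ICC of 0.022 and design effect of 1.528, and the unadjusted sample size is a hypothetical value, not a figure from the protocol.

```python
# Design effect for a cluster randomised trial: DE = 1 + (m - 1) * ICC,
# where m is the average cluster size and ICC is the intracluster correlation.

def design_effect(m: float, icc: float) -> float:
    """Inflation factor applied to an individually randomised sample size."""
    return 1.0 + (m - 1.0) * icc

icc = 0.022   # intracluster correlation reported by the authors
m = 25        # average cluster size (inferred so that DE matches 1.528)

de = design_effect(m, icc)
print(f"design effect = {de:.3f}")                # -> 1.528

# Inflating a hypothetical unadjusted sample size by the design effect:
n_individual = 300                                # illustrative value only
print(f"inflated sample size = {n_individual * de:.0f}")  # -> 458
```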
GENERAL COMMENTS
Thank you for your response to my review and the revised manuscript. I appreciate you addressing all the points previously raised. I still have some minor comments:

- Thank you for your clarification on the Halpern et al. study (cited to justify the sample size calculation) and its differences from your study. However, I believe a reference specifically addressing an expected cessation rate of 18% in the intervention group would still be beneficial.

- It is still not clear to me how you will specifically incorporate the Trans-theoretical Model (TTM) in your intervention. While you mention the TTM and some of its core constructs like processes of change, decisional balance, and self-efficacy (which are also present in many other models of change, not just the TTM), you do not explicitly mention tailoring the intervention to specific stages of change (precontemplation, contemplation, preparation, action, maintenance). The TTM emphasizes meeting individuals at their current stage and using targeted strategies accordingly.

- You say "The execution period of this trial will between April 2023 to February 2024." But later on you say "The recruiting time is between June 2023 to July 2023." Do you mean by execution the recruitment and follow-up period? Or did you finalise study recruitment and follow-up?

- Won't removing "having a smartphone" as an inclusion criterion risk bias by recruiting participants who won't be able to fully participate?

GENERAL COMMENTS
I am happy that the requested changes have been accepted.

VERSION 2 - AUTHOR RESPONSE

Response to Reviewer 1

- I am happy that the requested changes have been accepted.

Reply: Thank you for your time and comment.

Response to Reviewer 2

- Thank you for your clarification on the Halpern et al. study (cited to justify the sample size calculation) and its differences from your study. However, I believe a reference specifically addressing an expected cessation rate of 18% in the intervention group would still be beneficial.

Reply: Thank you for your valuable comment. The expected cessation rate in the intervention arm is actually 12.7% in our sample size calculation. In order to further support and strengthen this expected rate, we have included an additional reference citing a cessation rate of 15.4% in the individual-reward incentive-based group.1 With the current sample size, we are confident in our ability to detect the differences in cessation rates effectively. Your suggestion has been duly noted and we appreciate your attention to detail.

- It is still not clear to me how you will specifically incorporate the Trans-theoretical Model (TTM) in your intervention. While you mention the TTM and some of its core constructs like processes of change, decisional balance, and self-efficacy (which are also present in many other models of change, not just the TTM), you do not explicitly mention tailoring the intervention to specific stages of change (precontemplation, contemplation, preparation, action, maintenance). The TTM emphasizes meeting individuals at their current stage and using targeted strategies accordingly.
Reply: Thank you for your feedback. Since our initial statement may have been misleading, we have revised it and deleted the statement that we "incorporate the Trans-theoretical Model into intervention design". While we will not modify interventions based on specific stages of change, we recognize the value of utilizing the TTM framework to explore the effects of these constructs on behavior change processes. We will measure decisional balance and self-efficacy at baseline and follow-up visits to understand their influence on smokers' behavior change and successful abstinence. The stages of change will also be measured at visits and considered as part of smokers' behavior change process. The emphasis of this trial is the financial incentive, not tailored text messages. Furthermore, a Cochrane review suggested that stage-based interventions may not necessarily be more effective than non-stage-based interventions.2 We focused on utilizing the TTM to explore the underlying mechanisms of behavior change in the context of financial incentives for smoking cessation. Your input has been invaluable in refining our statement, and we are grateful for your feedback.

- You say "The execution period of this trial will between April 2023 to February 2024." But later on you say "The recruiting time is between June 2023 to July 2023." Do you mean by execution the recruitment and follow-up period? Or did you finalise study recruitment and follow-up?

Reply: Thank you for your comment. Following your feedback, we have revised "the execution period" to "the recruitment and follow-up period" in the methods section. We have completed the recruitment, and the recruiting period was between June 2023 and July 2023, the same as that stated in the registry. We have completed the recruitment and all follow-up visits. However, data collection is still ongoing, and we have not commenced data analysis at this time. Thank you once again for highlighting this point; we are committed to ensuring accuracy and transparency in our study reporting.

- Won't removing "having a smartphone" as an inclusion criterion risk bias by recruiting participants who won't be able to fully participate?

Reply: Thank you for your comment. We deleted this inclusion criterion to align the information in the protocol article with that in the trial registry. Since many worksites in China already use WeChat for internal communication and attendance tracking, we considered the inclusion of this criterion unnecessary for participants from worksites. This decision was made to avoid potential bias and to ensure that all eligible participants could fully engage in the study activities using commonly available resources.

All the corrections indicated above are in the revised manuscript. Thank you and all the reviewers for the kind advice. We look forward to hearing from you regarding our submission. We would be glad to respond to any further questions and comments that you may have.
2024-05-02T06:17:08.278Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "d54ecbfdc9bb82769c68cfb168ef5cec33185d1e", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "53afacbe4aea13e4d0da4bb87bd858bd1efa4411", "s2fieldsofstudy": [ "Medicine", "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
117780674
pes2o/s2orc
v3-fos-license
Measuring Noise Temperatures of Phased-Array Antennas for Astronomy at CSIRO

We describe the development of a noise-temperature testing capability for phased-array antennas operating in receive mode from 0.7 GHz to 1.8 GHz. Sampled voltages from each array port were recorded digitally as the zenith-pointing array under test was presented with three scenes: (1) a large microwave absorber at ambient temperature, (2) the unobstructed radio sky, and (3) broadband noise transmitted from a reference antenna centred over and pointed at the array under test. The recorded voltages were processed in software to calculate the beam equivalent noise temperature for a maximum signal-to-noise ratio beam steered at the zenith. We introduced the reference-antenna measurement to make noise measurements with reproducible, well-defined beams directed at the zenith and thereby at the centre of the absorber target. We applied a detailed model of cosmic and atmospheric contributions to the radio sky emission that we used as a noise-temperature reference. We also present a comprehensive analysis of measurement uncertainty including random and systematic effects. The key systematic effect was due to uncertainty in the beamformed antenna pattern and how efficiently it illuminates the absorber load. We achieved a combined uncertainty as low as 4 K for a 40 K measurement of beam equivalent noise temperature. The measurement and analysis techniques described in this paper were pursued to support noise-performance verification of prototype phased-array feeds for the Australian Square Kilometre Array Pathfinder telescope.

INTRODUCTION

Developing low-noise, wideband, receive-only array antennas is crucial to delivering the Square Kilometre Array (SKA) telescope. Using aperture arrays and phased-array feeds (PAFs) allows more information to be collected from more of the sky in parallel. This increases instantaneous field of view, increases survey speed, and allows more agile observing strategies as electronic beam steering can be immediate. Array antennas enable telescope designers to spend more money on digital signal processing and less on mechanical signal processing via telescope dishes for a fixed performance goal. This trade-off becomes more effective with time because digital signal processing is becoming exponentially cheaper while the cost of dishes is not. The SKA project explored this trade-off (Schilizzi et al. 2007; Chippendale et al. 2007; Alexander et al. 2007) and settled on significant deployments of both PAFs and aperture arrays in SKA phase 1 (Dewdney 2013).

Important to the development of low-noise array antennas is the ability to make accurate and reproducible measurements of their noise performance after beamforming. A common approach for measuring array noise temperature is to apply the same Y-factor method used for single-antenna astronomy receivers (Sinclair & Gough 1991) to the beamformed power from an array antenna (Woestenburg & Dijkstra 2003). The Y-factor is the ratio of beamformed power between observations of "hot" and "cold" loads. At decimetre wavelengths, the hot load is often provided by microwave absorber at ambient temperature (Figure 1) and the cold load by cosmic radio emission from the unobstructed sky. A number of groups have reported recent developments in test facilities and measurement techniques for low-noise arrays.
Some experiments have positioned the absorber at the end of a tapered metal funnel or ground shield (Warnick 2009; Woestenburg et al. 2011). Others have used more open structures, like CSIRO's in Figure 1, to support the absorber over the array under test (Woestenburg et al. 2012). Previous work with a ground shield has indicated small differences between measured system noise temperatures with and without the shield (Woestenburg et al. 2011). The differences generally decrease with increasing beamformed directivity of the array under test. At low directivity, where the shield is significant, the shield only partly decreased the effect of the terrestrial environment.

In this paper we describe CSIRO's development of an aperture-array noise-temperature testing capability at Parkes Observatory. We develop a Y-factor approach similar to Woestenburg et al. (2012) but introduce a reference-antenna (Figure 2) measurement to constrain the pointing of the beam towards the zenith and therefore the centre of the absorber.

PARKES TEST FACILITY

The aperture-array test facility at CSIRO Parkes Observatory (32°59'56"S, 148°16'3"E) uses a large rectangular microwave absorber supported by an open frame. Figure 1 shows that this absorber may be easily rolled over or away from the array under test via a wheel-on-track arrangement. The aperture-array test pad is serviced by power, radio-frequency (RF) cabling, and a digital receiver and beamformer in a neighbouring hut. A nearby 12 m parabolic reflector has been used to test arrays at its focal plane using the same digital receiver as the aperture-array measurements. Correlated measurements against signals from the 64 m Parkes radio telescope have also been used to boost testing capability in signal-to-noise ratio and the ability to measure phase (Chippendale et al. 2010). The 64 m dish is located approximately 400 m west of the 12 m dish and aperture-array test pad.

MEASUREMENT SYSTEM

Noise measurements of the 5 × 4 prototype were made with a purpose-built 48-port dual-conversion superheterodyne receiver followed by a 48-port digitiser and field-programmable gate array (FPGA) based signal processor. This initial measurement system was based on the same generation of technology as the New Technology Demonstrator (Hayman et al. 2008). The test facility has since been updated to use the same hardware that is deployed on the first six ASKAP antennas that form the Boolardy Engineering Test Array (BETA) (Schinckel et al. 2011; Bunton et al. 2011).

Figure 3 shows the measurement configuration for this paper. Forty ports of the receiver were connected to the prototype array. One of the spare receiver ports was connected to a directly coupled sample of the radiated noise source used to constrain beam direction. The system was used to record baseband voltages with 0.875 MHz bandwidth to disk for each of the receiver's 48 ports. Each 0.5 s packet of data was time-stamped with a precise measure of Universal Coordinated Time (UTC) from an atomic clock reference. Although the system is capable of online beamforming, offline beamforming on recorded data allowed exploration of different beamforming and radio-frequency interference (RFI) removal strategies. Each LNA output was filtered, amplified, upconverted to an intermediate frequency (IF) of 2.484 GHz, and then down-converted to an IF of 70 MHz.

Figure 3: Block diagram of beamformed noise performance measurement setup for a 5 × 4 prototype phased-array antenna.
The 26 MHz bandwidth IF at 70 MHz was sampled at 56 MSPS then separated into 32 × 0.875 MHz channels by a digital polyphase filter bank (PFB) implemented in an FPGA-based digital signal processing board (a Compact Array Broadband, CABB, board). The complex (I/Q) output of a single 0.875 MHz channel, fractionally oversampled by 8/7, was streamed via 10 Gbit Ethernet to a data recording computer attached to a RAID disk storage array. The data recorder stored 0.5 s of contiguous I/Q data for each capture and was capable of approximately one capture every three seconds. Oversampling by 8/7 meant that the sampling period was 1 µs for the 0.875 MHz channel.

The absorber load, shown in Figure 1, is a 2,440 mm × 2,900 mm sheet of 610 mm (24 in) pyramidal foam absorber. The manufacturer quotes a normal-incidence reflectivity of −40 dB at 1 GHz. The foam is mounted tips-down in an upside-down sheet-metal box as shown in Figure 2. This mounting located the tips of the pyramids 2,590 mm above the ground and 1,270 mm above the surface of the array under test. As the metal sides of the box come to just below the array tips, the effective height of the load above the array for calculating the region of sky blocked by the load is approximately 1,200 mm.

A log-periodic dipole array antenna (LPDA) is located at the centre of the absorber load as shown in Figure 2. This is for radiating broadband noise into the array under test so that a beam may be steered towards the centre of the absorber load in a reproducible manner. This antenna (Aaronia HyperLOG 7025) has a typical gain of 4 dBi from 0.7 GHz to 2.5 GHz. The radiation patterns published by the manufacturer indicate that the illumination falls by approximately 0.3 dB from the centre to the edge of the array under test.

MEASUREMENT OVERVIEW

We adapted the Y-factor method to measure the equivalent noise temperature of a receive-only beamformed antenna array. Over the 0.7 GHz to 1.8 GHz measurement band, the background radio sky has a "cold" brightness temperature of approximately 5 K away from the galactic plane, compared to a "hot" microwave absorber at ambient temperature near 300 K. We deduce the noise contribution of the array from the Y-factor power ratio between beamformed measurements of the "hot" absorber and "cold" sky scenes.

For each state, the RF measurement frequency was swept from 0.6 GHz to 1.9 GHz in 100 MHz steps by tuning the variable local oscillator (LO). Three 0.5 s recordings were made at each frequency for each measurement state. Measurements at each state were separated by approximately seven minutes. This consisted of three minutes to sweep the measurement frequency and record voltages for a given state, and four minutes to move the absorber and/or toggle the radiated noise source in preparation for recording the next state. The measurements spanned the local Australian Eastern Standard Time (AEST) range at Parkes from 14:35 AEST to 15:11 AEST, which corresponded to the local sidereal time (LST) range from 17:21 LST to 17:57 LST. The centre of this time range, 17:39 LST (14:53 AEST), corresponded closely to the transit of the galactic centre, which occurs at 17:45 LST. In fact, 17:39 LST corresponds exactly to the epoch at which maximum antenna temperature is expected during a zenith drift-scan with a low-gain antenna from a latitude near 30°S.
At the midpoint of observations the Sun was at azimuth 281.8° and elevation 44.9° and was therefore just blocked by the absorber when it was rolled over the array under test. The physical temperature of the absorber $T_{abs}$ was taken as the mean ambient temperature measured by the observatory's weather station over the measurement period. This resulted in $T_{abs} = 294.2 \pm 1$ K, where the uncertainty was estimated by the standard deviation of the temperature measurements. The air pressure used for the atmospheric emissivity calculation was 973 hPa as measured by the same weather station.

The beamformed antenna temperature when observing the absorber was calculated by convolving the array factor pattern with the model sky brightness masked by an ideal model of the absorber with uniform brightness equal to its physical temperature. Diffraction about the edges of the absorber and scattering from its supporting frame were not considered.

BEAMFORMING METHOD

We introduced a technique to ensure noise measurements were made with well-defined and reproducible beams directed at the centre of the absorber. Beam direction and polarisation were constrained by measurements of a radiated noise source located at the centre of the absorber as shown in Figure 2.

Beamforming was performed offline in software using maximum signal-to-noise ratio (S/N) weights (Lo et al. 1966). These were calculated by the method of direct matrix inversion developed by Reed et al. (1974) and summarised by Monzingo et al. (2011). First, the receiver-output sample correlation matrix was calculated by

$$\mathbf{R}_{xx} = \frac{1}{L}\sum_{n=1}^{L}\mathbf{x}(n)\,\mathbf{x}^{H}(n) \tag{1}$$

where $\mathbf{x}(n)$ is the $n$th time sample of the column vector of 40 complex array-port voltages $\mathbf{x}(t)$. Second, beamformed power $P$ for weight vector $\mathbf{w}$ was calculated by

$$P = \mathbf{w}^{H}\mathbf{R}_{xx}\mathbf{w}. \tag{2}$$

For this work we used maximum S/N weights estimated via direct inversion of the sample noise correlation matrix $\mathbf{R}_{nn}$. This noise correlation matrix is calculated according to (1) from data recorded when the array observed the unobstructed sky. The maximum S/N weights are given by (Lo et al. 1966; Widrow et al. 1967)

$$\mathbf{w} = \mathbf{R}_{nn}^{-1}\,\mathbf{r}_{xd} \tag{3}$$

where $\mathbf{r}_{xd} = \frac{1}{L}\sum_{n=1}^{L}\mathbf{x}(n)\,d^{*}(n)$ is the sample cross-correlation vector and $d(t)$ is a reference signal provided as a template of the desired signal.

Figures 2 and 3 show how we generated a reference signal by radiating broadband noise from an LPDA antenna located directly above the array. The noise source was fed through a coupler so that a copy of the radiated noise could be recorded directly via a spare port of the receiver. This allowed high S/N measurement of $\mathbf{r}_{xd}$ while keeping the radiated noise source weak enough that it increased the noise power measured at individual array ports by just 3 dB. The plane of polarisation of the LPDA was oriented at 45° to the plane of polarisation of the array elements.

The desired reference signal for well-defined aperture-array noise measurements is a plane wave from boresight. Although the radiator used as the source for beamforming is only 1.27 m from the array, the near-field effect is expected to be small. Electromagnetic modelling of the experiment indicates less than 3 K variation in beamformed noise temperatures due to the near-field effect.

The maximum S/N weights calculated from the sample cross-correlation $\mathbf{r}_{xd}$ with the reference antenna signal are in fact equivalent to least-mean-square (LMS) beamforming (Widrow et al. 1967; Compton 1988). The LMS algorithm minimises the square of the difference between the beamformed phased-array voltage and the directly coupled copy of the broadband noise voltage transmitted from the reference radiator.
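As an illustration of (1)-(3), the weight computation reduces to a few lines of linear algebra. The sketch below is not the CSIRO software: the port voltages are simulated, and the coupling of the reference signal into the array is a toy model.

```python
import numpy as np

rng = np.random.default_rng(42)
M, L = 40, 2000   # number of array ports and snapshots (values from the text)

# Simulated complex port voltages: x_cold for the noise-only (sky) scene and
# x_ref for the scene with the reference radiator on; d is the directly
# coupled copy of the transmitted reference noise.
x_cold = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
d = rng.standard_normal(L) + 1j * rng.standard_normal(L)
steer = np.exp(2j * np.pi * rng.random(M))       # toy propagation to each port
x_ref = x_cold + 0.5 * np.outer(steer, d)

# Sample noise correlation matrix from the sky-only data, eq. (1).
R_nn = x_cold @ x_cold.conj().T / L

# Sample cross-correlation with the coupled reference copy.
r_xd = x_ref @ d.conj() / L

# Maximum S/N (equivalently LMS) weights by direct matrix inversion, eq. (3).
w = np.linalg.solve(R_nn, r_xd)

# Beamformed noise power for the sky scene, eq. (2).
P = np.real(w.conj() @ R_nn @ w)
print(f"beamformed noise power: {P:.3f}")
```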
We used L = 500,000 samples to calculate $\mathbf{R}_{nn}$ and $\mathbf{r}_{xd}$ for making weights via (3) in each 0.875 MHz channel for beamformed noise measurements. We also verified the convergence of these weights by inspecting plots of weight amplitude, phase, and beamformed noise temperature versus the number of samples L used. We believe this probes the convergence of $\mathbf{R}_{nn}$, as we measured $\mathbf{r}_{xd}$ with much higher S/N due to correlation against the coupled copy of the reference noise. Verifying convergence times against theory boosted our confidence that the measurement system operated as expected, and that the maximum S/N weight solution was not being perturbed by gain fluctuations or non-stationary RFI. The measured noise temperature converged to within a factor of two of its minimum after 50 samples and to within 2% of its minimum after 2,000 samples. Both of these convergence checks agree well with the theoretical expectation for relative excess output residue power given by Monzingo et al. (2011) and Reed et al. (1974) as

$$\frac{E\{P_{out}\}}{P_{min}} = \frac{L}{L-M}. \tag{4}$$

This predicts convergence to within a factor of two after $2M = 80$ samples and to within 2% after $51M = 2{,}040$ samples, where $M = 40$ is the number of array ports.

DATA SELECTION

Having observed that the weights converge sufficiently after 2,000 samples, we reduced all available data by calculating $\mathbf{R}_{xx}$ and $\mathbf{r}_{xd}$ with L = 2,000 samples. This generated 250 × 2 ms measurement points from each 0.5 s baseband data file. Before further processing, each 2 ms measurement was analysed for positive outliers in total power that are expected due to transient radio-frequency interference (RFI). Data from all array ports at a particular sampling time were ignored in further analysis when a sample in a single port at that time was judged to be an outlier.

Algorithm 1 detected positive outliers by applying an iterative normality test to each array port's total-power time series. This test compared the sample skewness $g_1$ and sample kurtosis $g_2$ statistics to the respective values of 0 and 3 expected for a Gaussian distribution. The rationale for this normality test is that we expect the 2 ms resolution total-power time series for the "hot" load and "cold" sky signals to have a near-Gaussian distribution. Further, we expect that most potential RFI signals do not have Gaussian distributed total power. Such use of higher-order statistics to detect RFI has been surveyed by Fridman (2001).

Algorithm 1: Detecting outliers in the total-power time series of a single port.
1: for i = 1 → M array ports do
2:   calculate $g_1$ and $g_2$ for the port power time series
3:   while $|g_1| > 0.51$ and $|g_2 - 3| > 1.3$ do
4:     remove sample with largest magnitude
5:   end while
6: end for

The thresholds at step 3 for limiting excess skewness and kurtosis above their expected values for normality were manually tuned to remove less than 1% of data from time series judged to contain no RFI on visual inspection. In the future we could generate a kurtosis threshold for RFI based on a desired false-trigger rate by applying the more rigorously derived spectral kurtosis estimator and associated statistical analysis of Nita et al. (2007) and Nita & Gary (2010).
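A direct transcription of Algorithm 1 is sketched below. The thresholds are those quoted above; the use of SciPy's sample statistics, the minimum-samples guard, and the synthetic test data are assumptions added for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def flag_outliers(power, skew_limit=0.51, kurt_limit=1.3, min_keep=10):
    """Algorithm 1: iteratively remove the largest total-power sample of a
    single port until the series passes the skewness/kurtosis normality
    test. Returns a boolean mask of samples judged RFI-free."""
    keep = np.ones(power.size, dtype=bool)
    while keep.sum() > min_keep:
        g1 = skew(power[keep])
        g2 = kurtosis(power[keep], fisher=False)  # Pearson kurtosis, 3 for Gaussian
        if not (abs(g1) > skew_limit and abs(g2 - 3.0) > kurt_limit):
            break                                 # now consistent with Gaussian
        kept = np.flatnonzero(keep)
        keep[kept[np.argmax(power[kept])]] = False  # drop largest remaining sample
    return keep

# Simulated 2 ms total-power points: each integrates many raw samples, so
# RFI-free points are near-Gaussian; one transient "RFI" spike is injected.
rng = np.random.default_rng(0)
powers = (rng.standard_normal((40, 250, 500)) ** 2).sum(axis=2)
powers[5, 100] *= 1.5                             # synthetic transient RFI

# A time sample is discarded if any of the 40 ports flags it.
keep_all = np.all([flag_outliers(p) for p in powers], axis=0)
print(f"discarded {(~keep_all).sum()} of {keep_all.size} time samples")
```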
RFI strongly affected measurements at 0.8 GHz, 0.9 GHz and 1.1 GHz, at which 6%, 35% and 22% of data were discarded respectively. Less than 1% of data were discarded at all other frequencies, and there were numerous 0.5 s intervals at particular frequencies where no data were discarded at all. This highlights an advantage of Algorithm 1: it will not discard any data that are consistent with a Gaussian distribution. Thresholding the data at 2.58σ would have resulted in typically discarding 1% of data, in all measurement intervals, that were consistent with a Gaussian distribution.

We checked for potential bias introduced by Algorithm 1 by comparing overall noise-temperature results with and without the application of Algorithm 1. At all frequencies where less than 10% of data were discarded by Algorithm 1 (i.e. all except 0.9 GHz and 1.1 GHz), the difference in measured noise temperature with and without Algorithm 1 was less than 0.022 K. The corresponding difference in uncertainty estimates was less than 0.023 K. These differences are at least one order of magnitude smaller than the smallest uncertainties in the current measurement procedure (see Figure 9).

Visual inspection of the 1.1 GHz data suggested that it contained low-duty-cycle transient RFI, likely to be from aviation transponders. This was removed effectively by Algorithm 1. Inspection of the 0.9 GHz data suggested more continuous RFI, likely to be from mobile telephony. This was poorly removed by Algorithm 1. Our experience was consistent with Nita et al. (2007), who found that RFI detection based on kurtosis was most effective for low-duty-cycle transient RFI and less effective for continuous RFI.

BEAMFORMED NOISE MEASUREMENT

We deduce the noise contribution of the array from the Y-factor power ratio between beamformed measurements of the "hot" absorber and "cold" sky scenes. We use the notation and unified definitions of efficiencies and system noise temperature for receiving antenna arrays put forward by Warnick et al. (2010). Measurements of the receiver-output sample correlation matrix $\mathbf{R}_{xx}$ were made with the array observing a large microwave absorber at ambient temperature, giving

$$\mathbf{R}_{xx,hot} = \mathbf{R}_{ext,abs(A)} + \mathbf{R}_{ext,sky(B)} + \mathbf{R}_{ext,gnd} + \mathbf{R}_{loss} + \mathbf{R}_{rec}. \tag{5}$$

Correlation matrix $\mathbf{R}_{ext,abs(A)}$ measures the thermal noise coupled into the array from the microwave absorber, which subtends solid angle A as seen by the array under test. $\mathbf{R}_{ext,sky(B)}$ measures the stray emission from the sky from solid angle B that is not blocked by the absorber when it is in position, and $\mathbf{R}_{ext,gnd}$ measures stray radiation from the ground, which subtends the entire backward hemisphere. $\mathbf{R}_{loss}$ is the noise correlation matrix due to ohmic losses in the array and $\mathbf{R}_{rec}$ is the receiver electronics noise correlation matrix. A second measurement was made with the array observing the unobstructed radio sky:

$$\mathbf{R}_{xx,cold} = \mathbf{R}_{ext,sky(A)} + \mathbf{R}_{ext,sky(B)} + \mathbf{R}_{ext,gnd} + \mathbf{R}_{loss} + \mathbf{R}_{rec}. \tag{6}$$

The beamformed Y-factor was then taken as the ratio of beamformed powers for these two measurements, giving

$$Y = \frac{P_{hot}}{P_{cold}} = \frac{\mathbf{w}^{H}\mathbf{R}_{xx,hot}\mathbf{w}}{\mathbf{w}^{H}\mathbf{R}_{xx,cold}\mathbf{w}}. \tag{7}$$

Here $P_{hot} = G^{av}_{rec}\,k\,B\,T_{sys,hot}$, where $G^{av}_{rec}$ is the available receiver gain, k is Boltzmann's constant, B is the system noise-equivalent bandwidth, and $T_{sys,hot}$ is the beam equivalent system noise temperature of the array under test illuminated by the "hot" absorber load. Writing $P_{cold}$ similarly gives

$$Y = \frac{T_{sys,hot}}{T_{sys,cold}}. \tag{8}$$

When using the definitions of efficiencies and system noise temperature for receiving arrays in Warnick et al. (2010), the beam equivalent system noise temperature $T_{sys}$ may be written in the same form as the single-port system noise temperature formula

$$T_{sys} = \eta_{rad}\,T_{ext} + T_{loss} + T_{rec}. \tag{9}$$
Here $\eta_{rad}$ is the beam radiation efficiency, $T_{loss} = (1 - \eta_{rad})T_p$ is the beam equivalent noise temperature due to antenna losses, and $T_p$ is the physical temperature of the antenna. Warnick et al. (2010) define the beam equivalent system noise temperature $T_{sys}$ of a receiving antenna array as "...the temperature of an isotropic thermal noise environment such that the isotropic noise response is equal to the noise power at the antenna output per unit bandwidth at a specified frequency."

The components of beam equivalent noise temperature due to antenna losses and receiver electronics are both referenced to the antenna ports after antenna losses. For example, the receiver electronics component of the beam equivalent noise temperature is given by (Warnick et al. 2010)

$$T_{rec} = T_{iso}\,\frac{P_{rec}}{P_{t,iso}}. \tag{10}$$

Here we have normalised by the beam isotropic noise response $P_{t,iso} = \mathbf{w}^{H}\mathbf{R}_{t,iso}\mathbf{w}$, which is the beamformed power response of the array to an isotropic thermal noise environment with brightness temperature $T_{iso}$ when the array itself is in thermal equilibrium at temperature $T_{iso}$. Under these conditions $\mathbf{R}_{t,iso} = \mathbf{R}_{ext,iso} + \mathbf{R}_{loss}$.

The external contributions from the absorber load, radio sky, and ground are referenced to an antenna temperature before losses, that is, "to the sky". For example, the component of the beam equivalent noise temperature due to sky emission from the region of sky blocked by the absorber load is given by (Warnick et al. 2010)

$$T_{ext,sky(A)} = T_{iso}\,\frac{P_{ext,sky(A)}}{P_{ext,iso}} \tag{11}$$

where we have normalised by the beam isotropic noise response $P_{ext,iso} = \mathbf{w}^{H}\mathbf{R}_{ext,iso}\mathbf{w}$ before losses. The pre- and post-loss reference planes are referred to each other via the beam radiation efficiency (Warnick et al. 2010)

$$\eta_{rad} = \frac{P_{ext,iso}}{P_{t,iso}} = \frac{P_{ext,iso}}{P_{ext,iso} + P_{loss}}. \tag{12}$$

Combining all of these definitions allows (8) to be rewritten as

$$\frac{T_{sys,hot}}{T_{sys,cold}} = \frac{\eta_{rad}\,(T_{ext,abs(A)} + T_{ext,sky(B)} + T_{ext,gnd}) + T_{loss} + T_{rec}}{\eta_{rad}\,(T_{ext,sky(A)} + T_{ext,sky(B)} + T_{ext,gnd}) + T_{loss} + T_{rec}}. \tag{13}$$

We define a measurable partial beam equivalent noise temperature

$$T_n = \eta_{rad}\,(T_{ext,sky(B)} + T_{ext,gnd}) + T_{loss} + T_{rec} \tag{14}$$

that includes external noise from the sky solid angle B that is not blocked by the absorber and from the ground, and internal noise from antenna losses and receiver electronics. This is essentially $T_{sys}$ less the external sky noise $T_{ext,sky(A)}$ from the solid angle A blocked by the absorber. This is a step towards the receiver engineer's goal of isolating $T_{loss}$ and $T_{rec}$, which are the basic receiver noise performance parameters that should be measured to validate the array design.

We reference the partial beam equivalent noise temperature $T_n$ "to the sky" by dividing through by the beam radiation efficiency $\eta_{rad}$. The sky-referenced partial beam equivalent noise temperature $\tilde{T}_n$ is a quantity that can be determined by inverting (13) to give

$$\tilde{T}_n = \frac{\alpha\,T_{abs} - Y\,T_{ext,sky(A)}}{Y - 1}. \tag{15}$$

Here we have made the substitution $T_{ext,abs(A)} = \alpha T_{abs}$, where α is a beam efficiency factor indicating how well the absorber load fills the beamformed beam and $T_{abs}$ is the physical temperature of the absorber. We calculate α from the array pattern and absorber geometry in §10.1. We calculate $T_{ext,sky(A)}$ from well-established models of the radio sky brightness in §10.2.

The ideal case of an infinite absorber (α = 1), zero sky emission ($T_{ext,sky} = 0$ K), and fixed ambient temperature ($T_{abs} = 295$ K) reduces (15) to

$$\tilde{T}_n = \frac{295\ \mathrm{K}}{Y - 1}. \tag{16}$$

We have often used (16) when order 10 K relative accuracy is acceptable for initial comparison of arrays with identical geometry and test configuration. When order 1 K absolute accuracy is desired, we use (15). This is equivalent to making the following systematic corrections to (16):

$$\tilde{T}_n = \frac{T_{abs}}{Y - 1} + \frac{(\alpha - 1)\,T_{abs} - Y\,T_{ext,sky(A)}}{Y - 1}. \tag{17}$$

Of interest to astronomers wishing to use the array as an aperture array is the system temperature $T_{sys,cold}$ when the array observes the unobstructed radio sky. This is given by

$$\tilde{T}_{sys,cold} = \tilde{T}_n + T_{ext,sky(A)}. \tag{18}$$

The beam equivalent receiver sensitivity can be expressed as (Warnick & Jeffs 2008; Warnick et al. 2010)

$$\frac{A_e}{T_{sys}} = \frac{\eta_{ap}\,A_p}{T_{sys}} \tag{19}$$

where $A_e$ is the beam effective area, $\eta_{ap}$ is the aperture efficiency, and $A_p$ is the physical area of the antenna array projected in a plane transverse to the signal arrival direction.
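As a worked example of (15) and (16), the systematic corrections can be applied numerically. Only the 294.2 K absorber temperature and the α = 0.9 assessed later in §10.1 are taken from this paper; the Y-factor and blocked-sky temperature below are illustrative round numbers.

```python
def tn_sky_referenced(Y: float, T_abs: float,
                      alpha: float = 1.0, T_sky_A: float = 0.0) -> float:
    """Sky-referenced partial beam equivalent noise temperature, eq. (15).
    With alpha = 1 and T_sky_A = 0 this reduces to the ideal form, eq. (16)."""
    return (alpha * T_abs - Y * T_sky_A) / (Y - 1.0)

Y = 7.5         # illustrative beamformed Y-factor (hot/cold power ratio)
T_abs = 294.2   # measured absorber temperature in kelvin
alpha = 0.9     # absorber illumination efficiency assessed in Section 10.1
T_sky_A = 5.0   # illustrative beam equivalent sky temperature over solid angle A

print(f"ideal, eq. (16):     {tn_sky_referenced(Y, 295.0):.1f} K")
print(f"corrected, eq. (15): {tn_sky_referenced(Y, T_abs, alpha, T_sky_A):.1f} K")
```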
RESULTS

Figure 4: Partial beam equivalent noise temperature referenced to the sky, $\tilde{T}_n = T_{ext,sky(B)} + T_{ext,gnd} + (T_{loss} + T_{rec})/\eta_{rad}$, of the 5 × 4 connected-element "chequerboard" array. Maximum S/N weights for a beam directed to zenith were used. Thick, red error bars show uncertainty due to random effects only. Longer, thin, black error bars show uncertainty due to both random and systematic effects. The intervals defined by the error bars are believed to contain the unknown values of $\tilde{T}_n$ with a level of confidence of approximately 68 percent.

Figure 4 presents the partial beam equivalent noise temperature referenced to the sky, $\tilde{T}_n = T_n/\eta_{rad}$, for the prototype 5 × 4 array with error bars showing combined standard uncertainty $u_c(\tilde{T}_n)$ (i.e. the estimated standard deviation in $\tilde{T}_n$). Since it can be assumed that the possible estimated values of $\tilde{T}_n$ are approximately normally distributed with approximate standard deviation $u_c(\tilde{T}_n)$, the unknown value of $\tilde{T}_n$ is believed to lie in the interval $\tilde{T}_n \pm u_c(\tilde{T}_n)$ with a level of confidence of approximately 68 percent. The uncertainty analysis is presented in §12 and follows the framework of Taylor & Kuyatt (1994). It applies standard methods for propagating uncertainty in linearly combined variables to the first-order Taylor-series expansion of (15).

The thick red error bars show a combined standard uncertainty that only includes components of uncertainty arising from random effects. These are the uncertainties $u(P_{hot})$ and $u(P_{cold})$ and estimated covariance $u(P_{hot}, P_{cold})$ in measurements of the "hot" and "cold" beamformed powers, and the uncertainty $u(T_{abs})$ in measurements of the physical temperature of the absorber. These uncertainties were estimated via statistical methods and are therefore Type A evaluations of uncertainty in the framework of Taylor & Kuyatt (1994).

The thin black error bars show the combined standard uncertainty $u_c(\tilde{T}_n)$ that includes components of uncertainty arising from both random and systematic effects. The systematic effects included uncertainty in the absorber illumination efficiency $u(\alpha)$ and uncertainty in the beam equivalent external noise temperature due to the radio sky, $u(T_{ext,sky(A)})$, over the solid angle A that is blocked by the absorber.

Figure 5: Beam equivalent system noise temperature $\tilde{T}_{sys} = T_{ext,sky(A)} + T_{ext,sky(B)} + T_{ext,gnd} + (T_{loss} + T_{rec})/\eta_{rad}$ of the 5 × 4 connected-element "chequerboard" array referenced to the sky. Maximum S/N weights for a beam directed to zenith were used. The data with error bars show the system noise temperature for the measurement configuration of this paper, where the array observed the galactic centre. The intervals defined by the error bars are believed to contain the unknown values of $\tilde{T}_{sys}$ with a level of confidence of approximately 68 percent. The circles without error bars show an estimate of the system noise temperature for the array observing out of the galactic plane towards the coldest region of radio sky that transits at the zenith at Parkes (at 3:51 LST).
For clarity of presentation, error bars are not plotted for this second series, although they will be very close to a scaled copy of the error bars for the measurement towards the galactic centre.

Both of these uncertainties are functions of the beamformed antenna pattern. They are evaluated via assessments of the range of plausible beam patterns defined by the uniform and optimised weights discussed in §10. These assessments are Type B (non-statistical) evaluations of uncertainty according to Taylor & Kuyatt (1994).

The dominant component of uncertainty was the systematic effect characterised by $u(\alpha)$. This arises from the fact that the beamformed antenna pattern is not measured and so is estimated from theory. Uncertainty due to random effects was dominated at most frequencies by the contribution of $u(P_{cold})$. At most frequencies $u(P_{cold})$ characterised noise in measured beamformed power associated with the beam equivalent system temperature. This could be reduced by increasing measurement bandwidths and/or integration times. However, external RFI was the dominant effect contributing to $u(P_{cold})$, and therefore to uncertainty due to random effects, at 0.9 GHz.

For the results in Figure 4 we estimated $T_{ext,sky(A)}$ using weights with uniform amplitudes and phases that are conjugate matched to the expected spherical wave from the reference radiator. These same weights are used in §10.1 to estimate the lower plausible limit of α. Under the approximate assumption of a direction-independent sky brightness, $T_{ext,sky(A)}$ will be directly proportional to α. Therefore we expect that the uniform amplitude weights should yield an approximate lower bound for $T_{ext,sky(A)}$. This should result in a conservative overestimate of $\tilde{T}_n$ via (15).

Figure 5 shows the beam equivalent system noise temperature referenced to the sky, $\tilde{T}_{sys} = T_{sys}/\eta_{rad}$. This is a key factor that determines the receiver sensitivity for an observation towards a particular part of the sky via (19). It is a property of both the receiver and the receiver's orientation with respect to the sky and surrounding environment. This is in contrast to $\tilde{T}_n$, which is controlled to be as close as practical to a property of the receiver in isolation. The error bars in Figure 5 show combined standard uncertainty $u_c(\tilde{T}_{sys})$. A second trace (blue circles) shows the expected reduction in $\tilde{T}_{sys}$ if one of the coldest regions of the sky were used for the "cold" scene instead of the hotter galactic centre that was used in this work. The value of $\tilde{T}_n$, on the other hand, is significantly less dependent on the region of sky used as a reference, although it becomes clear in §12 that using a cold region of sky would reduce the uncertainty in $\tilde{T}_n$.

Absorber Illumination Efficiency

We define the absorber illumination efficiency α as a dimensionless metric of the beamformed antenna pattern $D(\theta, \phi)$ according to

$$\alpha = \frac{\int_{A} D(\theta,\phi)\,d\Omega}{\int_{4\pi} D(\theta,\phi)\,d\Omega}. \tag{20}$$

This follows the beam efficiency definition of Nash (1964) but with the numerator evaluated over the solid angle A subtended by the absorber load instead of the solid angle of the main beam. This is equivalent to evaluating the solid-beam efficiency, defined in the IEEE Standard Definitions of Terms for Antennas (IEEE Std 145-1993), for solid angle A but ignoring antenna losses.
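Equation (20) can be evaluated numerically on a (θ, φ) grid. The sketch below assumes an array factor of 20 isotropic elements on a 5 × 4 grid with a guessed element pitch, an idealised geometric mask for the absorber, and normalisation over the forward hemisphere only (i.e. a ground plane is assumed to confine radiation forward); it is a sketch of the method, not a reproduction of Figure 6.

```python
import numpy as np

# Grid over the forward hemisphere at 0.5 degree resolution.
theta = np.radians(np.arange(0.25, 90.0, 0.5))
phi = np.radians(np.arange(0.25, 360.0, 0.5))
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * np.radians(0.5) ** 2        # solid-angle element

# Array factor for a 5 x 4 grid of isotropic elements with uniform weights.
f, c = 1.2e9, 3e8                                 # illustrative frequency
k = 2 * np.pi * f / c
dx = dy = 0.09                                    # assumed element pitch (m)
xs, ys = np.meshgrid(np.arange(5) * dx, np.arange(4) * dy)
ux = np.sin(TH) * np.cos(PH)
uy = np.sin(TH) * np.sin(PH)
af = np.zeros_like(TH, dtype=complex)
for x0, y0 in zip(xs.ravel(), ys.ravel()):
    af += np.exp(1j * k * (x0 * ux + y0 * uy))
D = np.abs(af) ** 2                               # unnormalised power pattern

# Idealised mask for the 2.44 m x 2.90 m absorber 1.2 m above the array.
tan_t = np.tan(TH)
in_A = (np.abs(1.2 * tan_t * np.cos(PH)) < 2.44 / 2) & \
       (np.abs(1.2 * tan_t * np.sin(PH)) < 2.90 / 2)

alpha = np.sum(D * in_A * dOmega) / np.sum(D * dOmega)
print(f"absorber illumination efficiency alpha ~ {alpha:.2f}")
```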
Figure 6 shows values of α calculated for array patterns formed by two different weightings of 40 isotropic elements arranged with the same geometry as the 5 × 4 prototype. The weight phases were conjugate matched to a supposed spherical wavefront emanating from the reference antenna used to constrain beam pointing. Weight amplitudes were assigned according to two different methods: uniform amplitudes and an optimised amplitude taper. These choices are thought to encompass the plausible range of amplitude tapers imposed by the maximum S/N weights of (3) with direction and polarisation constrained by the reference antenna measurement. We explored a third amplitude taper function matched to the expected illumination from the LPDA reference antenna according to pattern measurements provided by its manufacturer. The LPDA α result was not plotted, as it was within 1% of that obtained by the uniform amplitude weights.

Based on the range of α exhibited in Figure 6, we assessed that the value of α was highly likely (near 100% probability) to lie in the range α = 0.9 ± 0.1. Uncertainty in α was modelled by a uniform distribution over this range. We divided the half-range of this distribution of 0.1 by √3, according to Taylor & Kuyatt (1994), to estimate the standard uncertainty in the absorber illumination efficiency u(α) = 0.0577.

The uniform-weight pattern is easy to calculate and we expect it to give the narrowest main beam but with high side lobes. This should perform well at lower frequencies, where the 5 × 4 array is too small to form a main beam that falls entirely within the area blocked by the absorber load. In fact, the uniform-weight α turned out to be consistent with the optimised-taper α below 1 GHz.

The optimised taper was calculated by parameterising an amplitude taper for an ideal boresight beam with the following taper function, which is separable in the x and y coordinates (Nash 1964):

$$f(x) = K_x + (1 - K_x)\left[1 - \left(\frac{2x}{L_x}\right)^{2}\right]^{n_x}. \tag{21}$$

Here $L_x$ is the linear size of the array along the x-axis. Parameters $K_x$ and $n_x$ determine the shape of the taper function factor that separates along the x-axis. We tried two constrained optimisation techniques to find the parameters of (21) that lead to an array pattern that maximised α as calculated by (20), subject to the constraints 0.1 < K < 0.999 and 0.5 < n < 4. Both the SNOPT implementation (Gill 2013; Gill et al. 2005) of sequential quadratic programming (SQP) optimisation and Standard Particle Swarm Optimisation SPSO-2011 (Clerc 2012; Zambrano-Bigiarini et al. 2013) yielded the same optimal value of α to within 0.04%.

We calculated the array pattern assuming isotropic elements (array factor) because we wanted a measurement and analysis method that does not require special knowledge of the array design beyond its geometry. This allows measurement and comparison of arrays provided as complete "black-box" systems. Our technique will become even more accurate for larger arrays, such as the ASKAP 188-port PAFs, that can form narrower beams with lower side lobes and therefore more efficiently illuminate the load. A better estimate of the partial beam equivalent noise temperature may be formed by including simulated or measured element patterns, but this is beyond the scope of the current work. Our technique is fair for the current array, which has element patterns with relatively low gain.
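The taper optimisation can be reproduced with an off-the-shelf bounded optimiser standing in for SNOPT or SPSO-2011. This sketch assumes the pedestal form of (21) and repeats the pattern machinery of the previous sketch, at coarser resolution, so that it runs standalone; pitch and frequency remain assumptions.

```python
import numpy as np
from scipy.optimize import minimize

theta = np.radians(np.arange(0.5, 90.0, 1.0))
phi = np.radians(np.arange(0.5, 360.0, 1.0))
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * np.radians(1.0) ** 2
ex, ey = [c.ravel() - c.ravel().mean() for c in
          np.meshgrid(np.arange(5) * 0.09, np.arange(4) * 0.09, indexing="ij")]
k = 2 * np.pi * 1.2e9 / 3e8
steer = np.exp(1j * k * (ex[:, None, None] * (np.sin(TH) * np.cos(PH))
                         + ey[:, None, None] * (np.sin(TH) * np.sin(PH))))
in_A = (np.abs(1.2 * np.tan(TH) * np.cos(PH)) < 1.22) & \
       (np.abs(1.2 * np.tan(TH) * np.sin(PH)) < 1.45)

def alpha_of(w):
    """Absorber illumination efficiency, eq. (20), for amplitude weights w;
    normalisation over the forward hemisphere (ground plane assumed)."""
    D = np.abs(np.tensordot(w, steer, axes=1)) ** 2
    return np.sum(D * in_A * dOmega) / np.sum(D * dOmega)

def pedestal(c, L, K, n):
    """Separable pedestal taper of eq. (21) along one axis."""
    return K + (1.0 - K) * np.clip(1.0 - (2.0 * c / L) ** 2, 0.0, None) ** n

def neg_alpha(params):
    Kx, nx, Ky, ny = params
    w = (pedestal(ex, ex.ptp() + 0.09, Kx, nx)
         * pedestal(ey, ey.ptp() + 0.09, Ky, ny))
    return -alpha_of(w)

# Constraints quoted in the text: 0.1 < K < 0.999 and 0.5 < n < 4.
res = minimize(neg_alpha, x0=[0.5, 1.0, 0.5, 1.0],
               bounds=[(0.1, 0.999), (0.5, 4.0)] * 2, method="L-BFGS-B")
print(f"uniform-weight alpha  = {alpha_of(np.ones(20)):.3f}")
print(f"optimised-taper alpha = {-res.fun:.3f} at (Kx, nx, Ky, ny) = {np.round(res.x, 2)}")
```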
Alternatively to refining the pattern estimate, we could force α closer to unity by employing a ground shield to reflect as much of the array pattern as possible onto the load, by reducing the vertical spacing between the load and the array under test, or by using a larger load. We are building a ground shield and a larger load for future experiments.

Sky Brightness

Calculating T̄_n via (15) or T̄_sys via (18) requires an estimate of T_ext,sky(A). This is the component of pre-loss beam equivalent noise temperature due to radio emission from the region of sky A blocked by the absorber when in position for the "hot" measurement. Figure 7 shows a breakdown of contributions to T_ext,sky(A) for measurements made from the Parkes Test Facility towards both the hottest and coolest regions of the radio sky observed during zenith-pointing drift scans. These have been calculated with the same uniform amplitude weights used to estimate α in §10.1.

We estimated T_ext,sky(A) by convolving the sky brightness T_bsky(θ, φ) with the beamformed antenna pattern D(θ, φ) to give

    T_ext,sky(A) = ∫∫_A T_bsky(θ, φ) D(θ, φ) dΩ / ∫∫_{4π} D(θ, φ) dΩ.    (22)

The numerator is evaluated over the solid angle A subtended by the absorber load when in position over the array under test as shown in Figure 1. The denominator normalises by the beam solid angle.

Figure 7. Breakdown of contributions to T_ext,sky(A), the beam equivalent external noise temperature due to radio emission from the area of sky blocked by the absorber load. The Global Sky Model (GSM) contribution is calculated at 17:39 LST, near transit of the galactic centre, when the measurements for this paper were made. It is also calculated at 03:51 LST, when zenith observations from latitudes near 30°S point out of the galactic plane towards one of the coldest patches of radio sky, as deduced by measured and modelled drift scans in §7. The thick lines show total T_ext,sky(A) for these two limiting observation epochs. The thin lines show the breakdown of these totals into contributions from the GSM, cosmic microwave background (CMB), atmosphere, and Sun. The curves for the diffuse backgrounds (i.e. all but the Sun curve) would be directly proportional to α if the sky brightness were direction independent.

The sky brightness is modelled as the background radio sky brightness T_b0, attenuated by a dry atmosphere with air mass X(θ) as a function of zenith angle θ and transmissivity e^{−τX(θ)}, plus atmospheric noise emission represented by an equivalent physical temperature T_atm:

    T_bsky(θ, φ) = T_b0(θ, φ) e^{−τX(θ)} + T_atm (1 − e^{−τX(θ)}).    (23)

Background sky brightness T_b0 was estimated at each measurement frequency using the global radio sky model (GSM) of de Oliveira-Costa et al. (2008) plus an isotropic cosmic microwave background (CMB) contribution of 2.725 K (Fixsen 2009). The integral in (22) was evaluated with 0.5° resolution in θ and φ, which exceeds the 1° resolution of the GSM evaluated at the frequencies of interest with principal component amplitudes locked to the 408 MHz map of Haslam et al. (1982).
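A minimal sketch of the discretised evaluation of (22) with the atmosphere model (23) follows. The sky map T_b0 and pattern D are assumed to be supplied as arrays (e.g. from a GSM evaluation, which is not reproduced here), and a simple sec(z) air-mass model stands in for the Young (1994) fit.

```python
import numpy as np

def t_ext_sky_A(T_b0, D, theta, phi, theta_abs_deg=30.0,
                tau=0.01, T_atm=275.0):
    """Discretised (22) with the radiative-transfer model (23).
    T_b0 and D are 2-D arrays sampled on the (theta, phi) grid."""
    T, _ = np.meshgrid(theta, phi, indexing="ij")
    X = 1.0 / np.cos(np.minimum(T, np.radians(85.0)))  # crude air mass
    T_bsky = T_b0 * np.exp(-tau * X) + T_atm * (1.0 - np.exp(-tau * X))
    dO = np.sin(T)                                     # solid-angle Jacobian
    mask = T <= np.radians(theta_abs_deg)              # solid angle A
    num = np.trapz(np.trapz(np.where(mask, T_bsky * D * dO, 0.0),
                            phi, axis=1), theta)
    den = np.trapz(np.trapz(D * dO, phi, axis=1), theta)
    return num / den
```

With a direction-independent T_b0 this reduces to a term proportional to α plus the atmospheric contribution, which is the proportionality noted in the Figure 7 caption.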
The Sun's contribution is considered by adding a single pixel to T_b0(θ, φ) with brightness temperature T̄_⊙ Ω̄_⊙/Ω_pixel. Here T̄_⊙ is the sum of the steady-state component of solar emission plus the mean of the slowly changing component, all normalised to the mean visible solid angle of the Sun, Ω̄_⊙ = 0.22 deg².

Table 1. Mean solar brightness temperature T̄_⊙ versus frequency, referenced to the mean visible solid angle of the Sun Ω̄_⊙ = 0.22 deg². The table gives the constant component and the mean value of the slowly varying component. (Note b: the numerator corresponds to a period of maximum solar activity and the denominator corresponds to a period of minimum solar activity.)

The spectrum of T̄_⊙ was interpolated by fitting a power law to the single-frequency values tabulated by Kuz'min & Salomonovich (1966) and reproduced here in Table 1 for convenience. Dry atmosphere transmissivity e^{−τ} at zenith was calculated according to Annex 2 of ITU Recommendation ITU-R P.676-9; a typical equivalent physical temperature of the atmosphere, T_atm = 275 K, was taken from ITU-R P.372-10; and the air-mass versus zenith-angle model X(θ) fit by Young (1994) was used.

STRAY EXTERNAL NOISE

The "stray" beam equivalent external noise can be broken into sky and ground components. The stray-sky noise T_ext,sky(B) may be estimated via the same method as T_ext,sky(A) in (22), but evaluating the integral in the numerator over the area of sky not blocked by the absorber, which we label solid angle B. We assumed T_ext,sky(B) was unchanged between "hot" and "cold" measurements, and therefore neglected scattering from the sparse metal frame that supports the absorber. Figure 8 shows the resulting T_ext,sky(B) estimate for the experiment presented in this paper. It will be highest when the galactic centre is almost but not quite blocked by the absorber. It will be lowest when the galactic centre is near or below the horizon, or completely blocked by the absorber.

The stray-ground radiation T_ext,gnd can also be estimated via (22), but evaluating the integral in the numerator over the backward hemisphere and substituting the sky-brightness model with a ground-brightness model T_g(θ, φ). An order-of-magnitude estimate of T_ext,gnd may be made by assuming that the ground brightness takes on the direction-independent value T_g. We would then estimate T_ext,gnd = (1 − e_f)T_g. Here e_f is the forward efficiency of the beamformed antenna pattern. This may be calculated via (20), but with the numerator evaluated over the full forward hemisphere instead of just the solid angle blocked by the absorber. Stray radiation could be measured by making beamformed Y-factor measurements with and without a ground shield. As mentioned above, such a ground shield is being manufactured for ongoing noise measurements of array receivers at the Parkes Test Facility.

MEASUREMENT UNCERTAINTY

Uncertainty in the measured noise temperature was estimated via the framework of Taylor & Kuyatt (1994). The combined standard uncertainty u_c(T̄_n) is an estimate of the standard deviation in T̄_n. This estimate is made via the linear combination of uncertainties in the first-order Taylor series expansion of (15). This gives

    u_c²(T̄_n) = Σ_i (∂T̄_n/∂x_i)² u²(x_i) + 2 Σ_i Σ_{j>i} (∂T̄_n/∂x_i)(∂T̄_n/∂x_j) u(x_i, x_j),    (24)

where u(x) is an estimate of the standard deviation associated with input estimate x and u(x, y) is an estimate of the covariance associated with input estimates x and y. Evaluation of the partial derivatives of (15) and substitution of (8) and (18) yields the square of the relative combined standard uncertainty as a function of the input estimates (25). Applying the same uncertainty analysis to T̄_sys as defined by (18) yields the corresponding expression (26).
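The propagation in (24) can be sketched numerically without the closed-form partial derivatives of (25): the code below propagates input uncertainties, including a hot/cold covariance, through a generic Y-factor-style function by finite differences. The function t_n here is a simple stand-in, not the paper's (15).

```python
import numpy as np

def combined_uncertainty(f, x, u, cov=None, h=1e-6):
    """First-order (GUM-style) propagation as in (24): f maps the input
    vector x to the measurand; u holds standard uncertainties; cov holds
    any off-diagonal covariances u(x_i, x_j)."""
    x = np.asarray(x, float)
    grad = np.empty_like(x)
    for i in range(x.size):                  # finite-difference gradient
        dx = np.zeros_like(x)
        dx[i] = h * max(abs(x[i]), 1.0)
        grad[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])
    C = np.diag(np.asarray(u, float) ** 2)   # input covariance matrix
    if cov:
        for (i, j), uij in cov.items():
            C[i, j] = C[j, i] = uij
    return float(np.sqrt(grad @ C @ grad))   # includes the 2*u(x_i,x_j) terms

# Stand-in measurand: a simple Y-factor noise temperature,
# x = (P_hot, P_cold, T_abs) and T ~ T_abs / (Y - 1) with Y = P_hot/P_cold.
t_n = lambda x: x[2] / (x[0] / x[1] - 1.0)
u_c = combined_uncertainty(t_n, x=[2.0, 1.2, 295.0], u=[0.01, 0.01, 1.0],
                           cov={(0, 1): 0.5 * 0.01 * 0.01})  # correlated drift
print(f"u_c = {u_c:.2f} K")
```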
In both (25) and (26), the first term's dependence on u(P_hot) and u(P_cold) suggests that beamformed power measurements should be made with adequate integration time and/or measurement bandwidth to reduce measurement variance via averaging. The first term's dependence on u(P_hot, P_cold) highlights the necessity to minimise gain and/or noise performance drift between "hot" and "cold" measurements. The second term highlights the importance of knowing the absorber illumination efficiency α and accurately measuring the ambient temperature of the absorber. Good knowledge of the beam pattern of the array under test is required to accurately estimate α. The third term highlights the importance of estimating the beam equivalent antenna temperature, which is a function of the background sky brightness and the beamformed antenna pattern. Comparing (25) and (26) shows that u(T_ext,sky(A)) contributes less uncertainty to T̄_sys than to T̄_n, particularly for low-noise arrays under test with high Y-factors.

Figure 9 shows the contribution of each input uncertainty, via (25), to the combined standard uncertainty u_c(T̄_n) in partial beam equivalent noise temperature. This shows that the combined uncertainty is largely dominated by the uncertainty u(α) in the illumination of the absorber, which is in turn due to uncertainty in the beamformed antenna pattern.

Figure 9. Breakdown of contributions to combined uncertainty u_c(T̄_n) in partial beam equivalent noise temperature due to each input uncertainty in (25).

In the above uncertainty analysis we have not included the correlation between α and T_ext,sky(A). This correlation arises because they are both direct functions of the beamformed antenna pattern. In the future, we could take advantage of the fact that the stray external noise T_ext,sky(B) is also a function of the antenna pattern and correlated with α and T_ext,sky(A). Including these correlations in the uncertainty analysis at the same time as subtracting an estimate of T_ext,sky(B) from T̄_n may lead to some cancellation in uncertainty terms that depend on the antenna pattern. This could reduce uncertainty at the same time as moving us closer to extracting T_loss and T_rec from T̄_n.

CONCLUSION

We have demonstrated the measurement of partial beam equivalent noise temperature as low as T̄_n = 40 K with a combined standard uncertainty (estimated standard deviation) as low as u_c(T̄_n) = 4 K. This combined uncertainty was dominated by the uncertainty u(α) in the efficiency with which the beamformed array pattern illuminates the absorber. The prioritised action list for reducing uncertainty further is:

1. Reduce uncertainty in the absorber illumination efficiency α by: increasing the solid angle subtended by the absorber (by increasing its size or moving it closer to the array under test), adding a ground shield to reflect more of the antenna pattern onto the absorber, or accurately measuring or modelling the beamformed antenna pattern.
2. Move the reference radiator into the far field of the array under test.
3. Increase the integration time for the "cold" sky measurement.
4. Reduce uncertainty in the beam equivalent noise temperature due to radio emission from the sky T_ext,sky(A) by: using the coldest possible region of the sky for the "cold" sky measurement, accurately measuring or modelling the beamformed antenna pattern, and improving the accuracy of the global sky model.

Addressing items (1) to (3) would reduce the median combined standard uncertainty to just u_c(T̄_n) = 2 K over 0.7 GHz to 1.8 GHz.

ONGOING DEVELOPMENT

After the measurements were made for this paper, the Parkes Test Facility was upgraded to include a 192-port down-conversion and digital receiver system. This supports 304 MHz instantaneous bandwidth tunable over 0.7 GHz to 1.8 GHz with 1 MHz spectral resolution.
It is capable of online measurement of the full 192 × 192 correlation matrix and online beamforming for nine simultaneous dual-polarisation beams. This was achieved by installing the electronics that are normally found in the pedestal of ASKAP's first six "BETA" antennas (Schinckel et al. 2011; Bunton et al. 2011) into a hut near the test pad. This receiver can be connected to test arrays mounted on the aperture-array test pad or at the focus of the nearby 12 m dish via RF coaxial cables in trenches. The upgraded facility was recently used to verify an enhanced ASKAP LNA and chequerboard array design that has low-noise performance over the full 0.7 GHz to 1.8 GHz band. This improvement will be included in the ASKAP Design Enhancements (ADE) PAF (Hampson et al. 2012). The facility was also used to characterise the astronomical performance of a 188-port BETA PAF that is currently installed at the focus of the 12 m dish.

In the future, better estimates of the array noise properties may be obtained by electromagnetic modelling of test setups and of the external environmental contribution, with or without a shield. Work is underway to build a ground shield to both reduce u(α) and allow estimation of the "stray" beam equivalent external noise T_ext,sky(B) + T_ext,gnd. We are also building a larger absorber load for use at our radio-quiet site at the Murchison Radio-astronomy Observatory (MRO), where ASKAP is sited. This radio-quiet site will allow more repeatable noise measurements below 1 GHz, where the RFI situation at Parkes becomes challenging.
A Computationally Efficient, Robust Methodology for Evaluating Chemical Timescales with Detailed Chemical Kinetics

Turbulent reacting flows occur in a variety of engineering applications such as chemical reactors and power-generating equipment (gas turbines and internal combustion engines). Turbulent reacting flows are characterized by two main timescales, namely, flow timescales and chemical (or reaction) timescales. Understanding the relative timescales of flow and reaction kinetics plays an important role, not only in the choice of models required for the accurate simulation of these devices but also in their design/optimization studies. There are several definitions of chemical timescales, which can largely be classified as algebraic or eigenvalue-based methods. The computational complexity (and hence cost) depends on the method of evaluation of the chemical timescales and the size of the chemical reaction mechanism. The computational cost and robustness of the methodology for evaluating the reaction timescales are important considerations in large-scale multi-dimensional simulations using detailed chemical mechanisms. In this work, we present a computationally efficient and robust methodology to evaluate chemical timescales based on the algebraic method. A comparison of this novel methodology with other traditional methods is presented for a range of fuel-air mixtures, pressures, and temperature conditions. Additionally, chemical timescales are presented for fuel-air mixtures at conditions of relevance to power-generating equipment. The proposed method showed the same temporal characteristics as the eigenvalue-based methods with no additional computational cost for all the cases studied. The proposed method thus has the potential for use with multidimensional turbulent reacting flow simulations which require the computation of the Damköhler number.

Turbulence-chemistry interaction depends on two main timescales, namely, the flow timescales and the chemistry timescales. The Damköhler number (Da), defined as the ratio of mixing/flow timescales to chemical timescales (Da = t_f/t_c), is an important parameter that characterizes the behavior of the reacting flow system based on flow/turbulence and chemical kinetics [1]. While the definitions of the flow timescales are well established, namely the integral timescale and the Kolmogorov timescale, there are several different definitions of chemical timescales with varying degrees of complexity. The chemical timescale can be computed using two classes of methods: (i) algebraic methods and (ii) eigenvalue-based methods. Algebraic methods define chemical timescales based on reaction-rate constants, the net production rate of a species, and species mass fractions [2][3][4]. Eigenvalue-based methods define the chemical timescales based on the Jacobian describing the reacting flow system [5][6][7][8][9][10]. Reference [11] has a detailed discussion of both the algebraic and eigenvalue-based methods.

The main objective of this paper is to present a novel, robust, and computationally efficient algebraic method to compute chemical timescales for complex chemical mechanisms. In addition to providing insight into the chemical kinetic timescales, this approach can be used in multidimensional reacting flow simulations where the turbulence-chemistry interactions are modeled using the Eddy Dissipation Concept (EDC) model as in [11]. This paper is organized as follows.
Section 2 discusses the governing equations describing a constant pressure, adiabatic combustion system and reviews common methods used to compute chemical timescales for such systems. Section 3 discusses the proposed method and its advantages compared to currently used methods. Section 4 presents validation of the method and the importance of tight numerical tolerances in computing the species mass fractions. Section 4 further presents the application of this method to various case studies under different thermodynamic constraints (isothermal and adiabatic), fuel-air mixtures, and initial conditions of temperature and pressure. Section 5 briefly summarizes the main findings of this work.

Governing equations

A constant pressure, adiabatic combustion system can be described by the coupled solution of the species and energy conservation equations shown in Eq (1) and (2), respectively:

    dY_k/dt = (ω̇_k W_k)/ρ,    (1)

    dT/dt = −(1/(ρ c_p)) Σ_k h_k ω̇_k W_k,    (2)

where Y_k, W_k, and h_k are the mass fraction, molecular weight, and specific enthalpy of species k, ρ is the mixture density, and c_p is the mixture specific heat. The net production rate of species k is the sum of the contributions from all reactions,

    ω̇_k = Σ_i ν_ki q_i,    (3)

where 'i' is the reaction index and q_i is the rate of progress of reaction i, a product of the reaction rate constants (k_f/k_r) and the species concentrations:

    q_i = k_f,i Π_k [X_k]^{ν'_ki} − k_r,i Π_k [X_k]^{ν''_ki}.    (4)

If the reacting system has K species, the time evolution of the mass fractions can be expressed in matrix form as dY/dt = f(Y), and the Jacobian J representing the system can be written as J = ∂f/∂Y.

Chemical timescale definitions

As stated above, algebraic methods define timescales as functions of the net production rate (the RHS of Eq (1)) along with the species mass fractions or reaction rate constants. Some of the common definitions of timescales based on the algebraic method are shown below and discussed in this work.

The Inverse Reaction Rate Time Scale (IRRTS) is defined as

    τ_IRRTS = [ max_{i ≤ I} ( q_i W̄/ρ ) ]^{−1},    (9)

where I is the maximum number of reactions in the mechanism. The rate of progress of a reaction has units of moles/(volume·time) and hence must be multiplied by W̄/ρ to yield the dimension of 1/time. Thus, the time constant of the system is defined based on the fastest reaction rate. This definition places equal importance on all reactions in a system. Since the reaction rate is a product of the reaction rate constants (k_f/k_r) and the species concentrations, as shown in Eq (4), the timescales of a system can vary by several orders of magnitude from the initial to the final state. In combustion systems, the concentrations of certain species, such as the fuel and/or oxidizer, change from high initial values to near zero at the final stages, leading to large temporal variations in the timescales of the system.

The Ren Time Scale (RTS) is defined as

    τ_k = ρ Y_k / (ω̇_k W_k)    (10)

for all species with ω̇_k < 0. Since Y_k > 0 and ω̇_k < 0, the values of τ_k computed from Eq (10) are always negative. The absolute value of the computed maximum value of τ_k (the least negative value) is reported as the chemical timescale for the RTS method. Similar to the RTS timescale, the Ren Product Time Scale (RPTS) is defined as

    τ_k = ρ Y_k / (ω̇_k W_k)    (11)

for ω̇_k > 0.

As shown in Eq (3), the net rate of production of species k, ω̇_k, is based on the production/depletion of the species due to all reactions in a mechanism. The net rate of production of a species depends on the reaction rate constants and molar concentrations of various species, which change continuously with time as the system proceeds from the initial state (temperature and composition) to the final state. The temporal variation of temperature computed using Eq (2) is used to compute the reaction rate constants as shown in Eq (6). One of the main drawbacks of using timescales based on algebraic methods is that they rely on the mass fractions and ω̇_k, both of which tend to zero as the system approaches steady state. Eigenvalue-based methods, which derive timescales from the Jacobian of the system, are common alternatives; Reference [11] describes these methods in detail.
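As a concrete illustration of the RTS/RPTS definitions (10) and (11), the sketch below evaluates the per-species time constants from Cantera's net production rates for an arbitrary state; the mechanism file and mixture state are illustrative, and the choice of the fastest produced-species value for RPTS is an assumption.

```python
import numpy as np
import cantera as ct

gas = ct.Solution("gri30.yaml")                     # illustrative mechanism
gas.TPX = 1500.0, ct.one_atm, "CH4:1, O2:2, N2:7.52"

wdot = gas.net_production_rates                     # kmol/(m^3 s)
dYdt = wdot * gas.molecular_weights / gas.density   # dY_k/dt of Eq (1)

tau = np.full(gas.n_species, np.nan)
nz = dYdt != 0.0
tau[nz] = gas.Y[nz] / dYdt[nz]                      # tau_k = Y_k/(dY_k/dt)

# RTS, Eq (10): depleted species, least negative tau (absolute value)
rts = abs(np.max(tau[(dYdt < 0) & (gas.Y > 0)]))
# RPTS, Eq (11): produced species (fastest value taken here, assumed)
rpts = np.min(tau[dYdt > 0])
print(f"RTS = {rts:.3e} s, RPTS = {rpts:.3e} s")
```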
The large computational cost of computing the Jacobian matrix with sufficient numerical accuracy is the main drawback of eigenvalue-based methods for computing timescales. While it may seem that the Jacobian computed for implicit time-marching might be used to compute the chemical timescales, this is not the case. Most time-marching schemes compute the Jacobian matrix numerically using methods such as finite differencing. The accuracy of the Jacobian matrix is not of high importance for time-marching: using an approximate Jacobian may affect the convergence rate but not the accuracy of the solution. Many numerical schemes exploit this consideration and compute/update the Jacobian matrix only once every few time-steps to reduce the overall time to solution. Using an approximate Jacobian will, however, not yield accurate chemical timescales in eigenvalue-based methods. The Jacobian matrix can be computed using analytic expressions to avoid loss of accuracy due to numerical differentiation, but this process can be tedious and time-consuming. The computational cost will also be prohibitive for multi-dimensional flow simulations, where it is necessary to compute the timescale for each grid point/cell at each timestep. This problem is further exacerbated for multidimensional simulations using detailed chemical kinetics with tens to hundreds of species. Hence, eigenvalue-based timescale computations are not practical for multidimensional CFD simulations. In this work, we will discuss the IETS method as a representative eigenvalue-based method for comparison with the algebraic methods. The IETS method defines the timescale as

    τ_IETS = 1/|λ|_max,

where |λ|_max is the largest magnitude among the real parts of the eigenvalues of the Jacobian J.

Proposed method

It has been pointed out that defining chemical timescales based on the species concentrations and net production rate can lead to non-physical values and/or behavior. Reference [11] discusses a case where a single-step reaction yields two distinct timescales, which is non-physical since a single-step reaction is characterized by a single timescale. Drawbacks associated with the timescales computed using various algebraic methods can be remedied if the RHS of Eq (1) can be written as

    (ω̇_k W_k)/ρ = C_k − Y_k/τ_k.

In the above equation, C_k represents a collection of all terms in ω̇_k that do not include Y_k. Based on this description of the RHS, Eq (1) can then be written as

    dY_k/dt = C_k − Y_k/τ_k.

The term τ_k has time units (seconds), and the absolute value of τ_k can be considered a time constant of species k. Thus, a reaction mechanism with K species will have K different chemical timescales. The extremum values of |τ_k| represent the species with the fastest and slowest kinetics of a system as it proceeds towards steady state. The fastest timescale (lowest value of |τ_k|) is considered to be the chemical timescale. Computation of the RHS of Eq (1), needed to describe the time evolution of the species mass fractions, also yields the time evolution of the time constants describing each species, at no extra computational cost. The numerical issues associated with the net production rate tending to zero at steady state do not affect this timescale, since neither τ_k nor C_k approaches zero even when the net production rate tends to zero. Thus, the proposed method is computationally inexpensive and numerically robust.
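The paper extracts C_k and τ_k analytically inside the rate evaluation at no extra cost; since that source-term splitting is not reproduced here, the sketch below approximates the same species time constants from the diagonal of the Jacobian by finite differences, which is cheaper than a full Jacobian (K extra rate evaluations) but, unlike the proposed method, not free.

```python
import numpy as np
import cantera as ct

def species_time_constants(gas, rel=1e-6):
    """Approximate |tau_k| of the decomposition dY_k/dt = C_k - Y_k/tau_k
    from the diagonal Jacobian entries d(dY_k/dt)/dY_k."""
    T, P = gas.T, gas.P
    Y0 = gas.Y.copy()

    def rhs(Y):
        gas.TPY = T, P, Y   # note: Cantera renormalizes Y; fine for a sketch
        return gas.net_production_rates * gas.molecular_weights / gas.density

    f0 = rhs(Y0)
    tau = np.full(gas.n_species, np.inf)
    for k in range(gas.n_species):
        dY = rel * max(Y0[k], 1e-10)
        Yp = Y0.copy()
        Yp[k] += dY
        Dk = (rhs(Yp)[k] - f0[k]) / dY      # diagonal Jacobian entry
        if Dk != 0.0:
            tau[k] = 1.0 / abs(Dk)          # |tau_k| = 1/|D_k|
    gas.TPY = T, P, Y0                      # restore the original state
    return tau

gas = ct.Solution("gri30.yaml")
gas.TPX = 1500.0, ct.one_atm, "CH4:1, O2:2, N2:7.52"
print("fastest chemical timescale:", species_time_constants(gas).min(), "s")
```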
Results and discussions

In this section we present validation of the in-house solver developed to compute the net production rate as discussed in Section 3. The results of the in-house solver are compared to those predicted by Cantera (version 2.5) [12]. We also include the effect of numerical tolerances on the algebraic timescales. The validated solver with the correct numerical tolerances is then used to examine the timescales for a series of case studies under perfectly stirred reactor (PSR) conditions.

Code validation

We present validation results for the case of oxidation of CO to CO2 under isothermal conditions reported in Ref. [11]. The initial mixture consists of 2 moles of CO, 1 mole of O2 (stoichiometric mixture), and 0.5 moles of H2O at a pressure of 1 atm and 1500 K. The simulations are time-marched to a final time of 10 milliseconds (10⁻² s) using the in-house solver and Cantera (version 2.5.1) [12] with the GRI 3.0 mechanism [13]. Figure 1 shows that the temporal variations of the mass fractions of CO and CO2 obtained using Cantera and the in-house solver are in very good agreement. Figure 1 also shows that CO is rapidly oxidized to CO2 within about 100 microseconds (10⁻⁵ < t < 10⁻⁴ s). For 2×10⁻⁴ < t < 10⁻² s, the change in the CO2 mass fraction is negligible.

Effect of numerical tolerances

Algebraic methods such as RPTS and RTS are defined based on species concentrations and the net production rate (ω̇_k), as discussed in Equations (9)-(11) in Section 2. As the system approaches steady state, the net production term tends to zero. The species concentrations and net production rates of trace species and radicals, which are formed and destroyed quickly during the combustion process, can be very small (10⁻⁵⁰ < ω̇_k < 10⁻²⁰); hence very high numerical accuracy of the species concentrations is required during the time-marching to accurately compute the chemical time constants using the RTS and RPTS methods. To ensure that the net production rates and the species mass fractions are computed with sufficient accuracy, very low values of relative and absolute tolerances are required. In this work, we have used the most stringent values of relative and absolute tolerances allowable, namely, an absolute tolerance of 10⁻²¹ and a relative tolerance of 10⁻¹⁶, in both the in-house solver and Cantera. Less stringent tolerances (such as an absolute tolerance of 10⁻⁶) do not impact the concentrations of major species but can impact the chemical timescale evaluations.

Figure 3 and Figure 4 show the RTS and RPTS timescale computations conducted with the most stringent tolerance values stated above (10⁻¹⁶ and 10⁻²¹), along with the same computations conducted with a less stringent tolerance criterion (absolute tolerance set to 10⁻⁶). It can be seen in Figure 3 and Figure 4 that the chemical timescales obtained using an absolute tolerance of 10⁻⁶ are oscillatory in nature. For the RPTS method (blue line), after t ≳ 5×10⁻⁵ s, when CO is rapidly oxidized to CO2, the chemical timescale is determined by CO2, since it is the species with the largest net production rate. The RTS method (black line), which determines the chemical timescales based on species that are depleted (ω̇_k < 0), shows that after the oxidation of CO to CO2 is complete, the timescale-determining species are C3H7 and C3H8. Parametric studies of simulations conducted with relative tolerance ≥ 10⁻¹⁵ and absolute tolerance ≥ 10⁻¹⁸ showed that the chemical timescales were oscillatory in nature for both the RTS and RPTS methods. For these simulations with less stringent tolerance criteria, the instantaneous chemical timescales were determined by short-lived trace species such as CH, CH2CHO, CH3OH, C, CH3O, etc., whose species concentrations varied between 10⁻²⁰ and 10⁻⁴⁰. Since different trace species with low values of Y_k and ω̇_k determined the timescales at various time instants, the corresponding chemical timescales were highly oscillatory in nature, as shown in Figure 3 and Figure 4 (and as reported in Ref. [11]). From this study, it is clear that the oscillatory nature of the chemical timescales of the RTS and RPTS methods is a numerical artifact due to inadequate convergence tolerances. Hence, it is very important to use very stringent numerical tolerances to obtain accurate (non-oscillatory) chemical timescales with the RTS and RPTS methods.
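A minimal Cantera sketch of this validation setup follows, assuming the same GRI 3.0 mechanism; the isothermal constraint is imposed here by disabling the reactor's energy equation, which is one plausible way to realize it.

```python
import cantera as ct

gas = ct.Solution("gri30.yaml")
# 2 mol CO, 1 mol O2, 0.5 mol H2O at 1 atm and 1500 K (isothermal case)
gas.TPX = 1500.0, ct.one_atm, "CO:2, O2:1, H2O:0.5"

r = ct.IdealGasConstPressureReactor(gas, energy="off")  # isothermal
net = ct.ReactorNet([r])
# Stringent tolerances reported in the paper; note that solvers may
# clamp rtol near machine precision.
net.rtol, net.atol = 1e-16, 1e-21

t = 0.0
while t < 1e-2:          # march to 10 ms
    t = net.step()       # adaptive internal steps
print("Y(CO) =", r.thermo["CO"].Y[0], " Y(CO2) =", r.thermo["CO2"].Y[0])
```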
Case studies:

In this section we present a comparison of the proposed new method of computing chemical timescales with other traditional methods, namely, RTS, RPTS, and IRRTS (algebraic) and IETS (eigenvalue-based), for several cases. In addition to the simple system of isothermal oxidation of CO discussed in Ref. [11], we present the combustion of practical fuels such as hydrogen and methane. Since most engineering applications do not involve combustion under isothermal conditions, we present the combustion of these fuels under adiabatic conditions as well. The following cases are discussed for both isothermal and adiabatic conditions, including near-ignition conditions in engines fueled with natural gas. In a typical reciprocating engine using natural gas as a fuel, the cylinder pressure at SOI is about 25 atm and the fuel-air mixture is at about 750 K at the time of sparking. For these conditions, we study a stoichiometric CH4/O2 mixture and a lean CH4/O2 mixture with φ = 0.8.

Isothermal cases:

The time evolution of a chemical system under constant temperature (isothermal) conditions is accomplished by the coupled solution of the system of equations for the species evolution described by Eq (1), while setting the RHS of Eq (2) to zero (implying no change in temperature with time). Figure 6 shows the four case studies for 0 < t < 10⁻² s under isothermal conditions. For all the cases shown in Figure 6, there are some important common characteristics. The RTS, RPTS, and IRRTS methods show that the timescales are at a minimum just near ignition of the fuel-oxidizer mixture. As the system approaches steady state, the chemical timescales increase monotonically. As explained earlier, algebraic methods such as RTS and RPTS use the net production rate, while the IRRTS method uses the net rate of progress to evaluate the timescales. As the system approaches steady state, these terms approach zero and hence the timescales tend to infinity. It is seen that for these methods, the temporal variation of the chemical timescales spans almost three to four orders of magnitude as the system progresses from the initial condition to the final state.

Adiabatic cases:

The time evolution of a chemical system under constant pressure, adiabatic conditions is accomplished by the coupled solution of the system of equations describing the temporal variation of the species, described by Eq (1), and of the temperature, using Eq (2). Figure 7 shows the temporal variation of the chemical timescales for the various methods for the four cases discussed above, under adiabatic conditions. The temporal variation of the temperature of the system is also shown (solid red line) to depict the transition of the mixture from its initial to its final state (temperature and species composition). It is seen that the timescales for the various methods under adiabatic conditions share the same qualitative characteristics as the timescales under isothermal conditions.
The algebraic methods show a large temporal variation in the timescales (orders of magnitude), whereas the IETS method and the new method proposed in this work show that the timescales vary by less than a factor of two from the initial mixture state to the final state after complete combustion. The timescales predicted by the IETS method and the proposed method also differ by less than a factor of two at any given time during the combustion process for all the cases considered. Table 1 shows the chemical timescales for CH4-air mixtures (stoichiometric and lean conditions) at steady state for the various methods. The timescales predicted by the different algebraic methods differ widely, as shown in Table 1. It was also seen that N2 had the longest chemical timescale throughout the combustion process (initial to final state). It is well known that N2 kinetics are orders of magnitude slower than C/H kinetics in combustion reactions. The N2 chemical timescales were on the order of tens of milliseconds as the combustion process proceeded for 1500 K < T < 1700 K. At steady state, when the temperatures were in excess of 2600 K, the N2 chemical timescales were a few milliseconds, as expected. The fastest chemical timescale during the initial stages of combustion was associated with the species NNH, whereas the fastest chemical timescale corresponded to H2O2 after the combustion process was complete and steady-state temperatures (> 2600 K) were reached.

Near-ignition conditions in natural gas engines:

Hundreds of thousands of vehicles with natural gas engines are operating all over the world due to their economic and environmental benefits. Natural gas engines generate almost no emissions of nitrogen oxides, particulate matter, volatile organic compounds, or carbon monoxide, and have thus been widely used in a variety of medium- and heavy-duty engine applications. Additionally, engines powered by natural gas cost significantly less than their gasoline and diesel counterparts and provide a pathway to a hydrogen economy. Given these benefits, there have been several modeling efforts for natural gas-powered engines [14][15][16][17]. We discuss the application of the proposed method in computing chemical timescales for the fuel-air mixture at conditions just prior to the spark (ignition). Typical values of the gas temperature and pressure prior to ignition are T = 750 K and P = 25 atm. As with the earlier cases, the algebraic methods show a large variation (orders of magnitude) in the timescales during the combustion process and a monotonically increasing timescale after the temperature reaches steady state. It is also seen that at t > 10⁻³ s, when steady-state temperatures have been reached, the chemical timescales predicted by the algebraic methods under lean conditions are about three orders of magnitude lower than those for the stoichiometric CH4/air mixtures (see Table 1). This large variation in chemical timescales with composition predicted by the algebraic methods can lead to serious numerical instabilities in multi-dimensional reacting flow simulations, where the equivalence ratios are expected to vary both spatially and temporally during the simulations. It is also seen that the proposed method shares the same characteristics as the IETS method, with minimal temporal variation in the timescales from the initial to the final (post-combustion) conditions. The timescales of both the IETS method and the proposed method under near-ignition conditions are about an order of magnitude lower than the chemical timescales at atmospheric pressure.
This decrease in chemical timescale is expected because, although the mixture is at a lower initial temperature (750 K compared to 1500 K), the pressure is twenty-five times higher. It is also noted that both the IETS method and the proposed method show that the chemical timescales are a weak function of mixture composition (equivalence ratio), differing by about 20% at steady state even at elevated pressures. Furthermore, it was noted that N2 had the longest chemical time constant throughout the combustion process (initial to final state), with chemical timescales on the order of tens of milliseconds in the initial stages of the combustion process and a few milliseconds after steady state was reached (as with the case where the initial mixture was at 1 atm and 1500 K). It was also noted that at higher pressures and lower initial temperatures (25 atm/750 K), the fastest chemical timescales during the initial stages of combustion were due to CH2(s), as opposed to the species NNH for the lower initial pressure and higher initial temperature (1 atm/1500 K).

Table 1: Chemical timescales (in sec) for CH4-air mixtures (stoichiometric and lean conditions) at steady state for the various methods under adiabatic and engine pre-ignition conditions.

Conclusions

A new computationally efficient and numerically robust methodology to compute chemical timescales using detailed chemical kinetics was proposed in this work. The temporal variation of chemical timescales under a range of thermodynamic conditions (isothermal and adiabatic), fuel-air mixtures, and initial conditions was studied using three different algebraic methods (IRRTS, RTS, RPTS) and the IETS eigenvalue method. The temporal variation of the timescales predicted by these traditional methods was compared with the new algebraic method, which addresses the deficiencies of both the algebraic methods and the eigenvalue-based methods. The effect of tight numerical tolerances on the predicted timescales was also studied. It was shown that very tight numerical tolerances are needed in algebraic methods, failing which the predicted timescales are oscillatory in nature. The proposed method is as computationally efficient as the algebraic methods but shows the same robust numerical and physical characteristics as the eigenvalue-based IETS method. All the algebraic methods studied in this work showed large temporal variations in the timescales, with a monotonic increase in the chemical timescales even after the system had reached a steady-state temperature. In contrast, the eigenvalue-based IETS method and the proposed method showed that the timescale of the system varied by less than a factor of two as the system evolved from the initial composition and temperature to the post-combustion temperature and composition. The IETS method and the proposed method also showed that the chemical timescale of the system was constant after the system had reached steady-state temperatures. Quantitatively, the timescale predicted by the proposed method was always within a factor of two of the IETS method. The proposed method also showed that the slowest time constant of the system was due to N2 and that it was on the order of a few milliseconds at steady state for all the cases studied. The proposed method also showed that the fastest chemical time constant corresponded to NNH in the early ignition stages (1500 K < T < 1700 K) at a pressure of 1 atmosphere, and to CH2(s) at elevated pressures and lower initial temperatures.
The fastest transient was associated with H2O2 at steady-state temperatures for both elevated and atmospheric pressures. After steady-state temperatures were reached, the algebraic methods predicted timescales that were a strong function of mixture composition (equivalence ratio) and pressure. At atmospheric pressure, the lean mixtures showed timescales an order of magnitude lower than the stoichiometric mixture, and lower by as much as three orders of magnitude at elevated pressures. The IETS method and the proposed method both showed a weak dependence on mixture composition, with predicted timescales differing by about 20% both at atmospheric pressure and at the elevated pressures seen in power-generating equipment such as natural gas-powered engines. Use of the proposed method would enable the computation of chemical timescales for numerically robust multidimensional turbulent reacting flow simulations using detailed kinetics, without the burdensome computational cost associated with eigenvalue-based methods and without the numerical instability and non-physical results associated with the other algebraic methods.
ARNS: Adaptive Relay-Node Selection Method for Message Broadcasting in the Internet of Vehicles

The proper utilization of road information can improve the performance of relay-node selection methods. However, the existing schemes are only applicable to a specific road structure, and this limits their application in real-world scenarios, where more than one road structure usually exists in the Region of Interest (RoI), even within the communication range of a sender. In this paper, we propose an adaptive relay-node selection (ARNS) method based on the exponential partition to implement message broadcasting in complex scenarios. First, we improved a relay-node selection method for curved road scenarios through the re-definition of the optimal position considering the distribution of obstacles. Then, we proposed a criterion for classifying road structures based on their broadcast characteristics. Finally, ARNS is designed to adaptively apply the appropriate relay-node selection method based on the exponential partition in realistic scenarios. Simulation results on a real-world map show that the end-to-end broadcast delay of ARNS is reduced by at least 13.8% compared to the beacon-based relay-node selection method, and by at least 14.0% compared to the trinary partitioned black-burst-based broadcast protocol (3P3B)-based relay-node selection method. The broadcast coverage is increased by 3.6-7% in curved road scenarios with obstacles, benefitting from the consideration of the distribution of obstacles. Moreover, ARNS achieves a higher and more stable packet delivery ratio (PDR) than existing methods, profiting from the adaptive selection mechanism.

Keywords: Business Coaching is not applicable here; the paper's keywords concern ARNS, relay-node selection, and the Internet of Vehicles.

Introduction

The Internet of Vehicles (IoV) can play an important role in reducing traffic pressure and improving driving safety. Relay-node selection is the basis of IoV and has attracted significant attention from researchers in recent years. By appropriately selecting relay-nodes to forward messages, we can expand the coverage of messages with high time efficiency. Such methods aim to select a relay-node quickly and cover more range in one hop. Based on the difference in how information about neighbor nodes is obtained, relay-node selection methods can be classified into beacon-based relay-node selection methods (called beacon-based methods) and black-burst-based relay-node selection methods (called black-burst-based methods). The main contributions of this paper are as follows:

• According to the specific distribution of obstacles in the real world, the optimal position selection is redefined, and a curved road relay-node selection method suitable for actual situations is proposed.
• A criterion for classifying road structures is proposed to judge the road structure in complex scenarios.
• Based on the above work, an adaptive relay-node selection method is designed to suit two real-world situations: differences in the road structures within the communication ranges of different senders, and multiple road structures within the communication range of one sender.

The rest of the paper is organized as follows: Section 2 briefly introduces related work on relay-node selection methods. The problems of message broadcasting in the RoI, which include complex road structures and the impact of obstacles, are analyzed in Section 3. An adaptive relay-node selection method based on the exponential partition is presented in Section 4. Section 5 demonstrates the performance of ARNS compared to other methods, and finally, we draw conclusions in Section 6.

Related Work

Several methods have been proposed for relay-node selection in IoV, as discussed in the following.
Greedy perimeter stateless routing (GPSR) [19] obtains the locations of neighbor nodes through periodically flooded beacons and selects the relay-node in each hop using a greedy algorithm. When the greedy algorithm fails, the relay-node is selected with the right-hand rule. The advantage of GPSR is that it can be applied to all road structures. However, the information about neighbor nodes in GPSR is not updated in real time, and this limits its performance. Moreover, GPSR mainly considers end-to-end message propagation and does not fully consider message broadcasting. In order to improve the performance of message broadcasting, a real-time adaptive dissemination system (RTAD) is proposed in [20]; it defines two metrics (informed vehicles and messages received) and selects the most suitable beacon-based method for different RoIs based on the simulation results for the two metrics. Its advantage is that message broadcasting in urban scenarios is achieved with better overall performance. However, it still lacks real-time information, the same problem as GPSR, and it is only suitable for urban scenarios.

The urban multi-hop broadcast protocol (UMB) [21] is a black-burst-based relay-node selection method, which solves the problem of lacking real-time information in the beacon-based methods. It aims to maximize message progress by selecting the farthest vehicle as the relay-node. The sender broadcasts a Request-To-Broadcast (RTB) packet in its communication range. Upon reception of the RTB, nodes, i.e., vehicles, broadcast a channel jamming signal, i.e., a black-burst, for a duration that is proportional to the node's distance from the sender. Then, the farthest node transmits the longest black-burst and performs forwarding. The disadvantage of UMB is that it has a relatively high communication delay, since it spends the longest black-burst to select the farthest node to perform forwarding.

The binary-partition-assisted broadcast protocol (BPAB) [22] is a binary partitioning broadcast method based on the black-burst, and it solves the problem of UMB. It deploys a binary partitioning scheme and a novel contention mechanism. The binary partitioning scheme iteratively divides the range, which is the communication range in the first iteration and the selected segment in subsequent iterations, into multiple segments. The farthest segment that contains nodes is selected with the aid of the black-bursts. Then, through a novel contention mechanism, a node is randomly selected as the relay-node in the farthest segment. Compared with the previous methods, BPAB achieves a lower and more stable delay, but it only works on straight roads or at junctions.

The trinary partitioned black-burst-based broadcast protocol (3P3B) [23] is a trinary partitioning broadcast method. Improving on BPAB, 3P3B uses trinary partitioning instead of binary partitioning, and introduces mini-DIFS in the channel access period before the start of relay-node selection to reduce the channel access delay. With these improvements, it achieves a lower delay than BPAB, but it only considers relay-node selection in straight road scenarios.

The exponent-based partitioning broadcast protocol (EPBP) [24] is an exponential partitioning broadcast method. Improving on 3P3B, it divides the communication range of the sender into N_part segments for N_iter iterations. The width of a segment increases exponentially with its distance from the relay-node's optimal position. Then, a non-empty segment closest to the optimal position is selected as the final segment. Finally, a node in the final segment is randomly selected as the relay-node through an exponential back-off method. The delay of the partitioning process is called the partition delay, and the delay of the exponential back-off process is called the contention delay. Due to the exponential partition, EPBP has a lower and more stable delay than 3P3B. However, EPBP is still only suitable for straight road scenarios.
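To illustrate the exponential partitioning idea behind EPBP, the sketch below computes segment boundaries whose widths grow exponentially with distance from the optimal position and narrows the search over a few iterations. The growth factor and segment/iteration counts are illustrative assumptions, not the parameters of [24], and the black-burst signalling used to detect non-empty segments is abstracted into a simple membership test.

```python
def exponential_segments(start, end, n_part, growth=2.0):
    """Split [start, end] into n_part segments whose widths grow
    exponentially with distance from `start` (the optimal position)."""
    widths = [growth ** i for i in range(n_part)]
    scale = (end - start) / sum(widths)
    bounds, x = [], start
    for w in widths:
        bounds.append((x, x + w * scale))
        x += w * scale
    return bounds

def select_segment(node_positions, start, end, n_part=4, n_iter=3):
    """Iteratively narrow to a non-empty segment closest to `start`."""
    for _ in range(n_iter):
        for lo, hi in exponential_segments(start, end, n_part):
            if any(lo <= p <= hi for p in node_positions):
                start, end = lo, hi   # recurse into this segment
                break
        else:
            return None               # no nodes in range
    return (start, end)

# Example: node distances (m) from the optimal position along the road
print(select_segment([120.0, 240.0, 310.0], start=0.0, end=400.0))
```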
In order to solve this problem, a complete EPBP-based curved road relay-node selection method is proposed in [25]. It implements relay-node selection in curved road scenarios through three modes: the normal selection, the reverse selection, and the double-direction selection. When a vacant appears in the normal selection, it enters the double-direction selection. At this time, the reverse selection and the normal selection are performed simultaneously, with the farthest point from the sender in a vacant serving as the end point of the reverse selection. Through the three modes, it achieves a high broadcast coverage. However, it has the disadvantage that it does not consider the influence of obstacles. An EPBP-based junction relay-node selection method is proposed in [26]. Improving on EPBP, it implements relay-node selection in junction scenarios with obstacles through two phases: the junction phase and the branch phase. It selects the node close to the center of the junction as the relay-node in the junction phase, and selects the farthest node on each branch as the relay-node in the branch phase. Compared to BPAB, it achieves a lower delay. However, it does not consider the situation where a branch is not a straight road.

Though these black-burst-based methods [21][22][23][24][25][26], including our EPBP-based work [24][25][26], show better performance compared to the beacon-based methods [19,20], they are only suitable for a certain road structure; e.g., the methods in [24,27] apply only to straight roads, that in [26] only to junctions, and that in [25] only to curved roads. However, in the real world, varied road structures may exist in an RoI, and multiple road structures may exist within the communication range of a sender. Moreover, the distribution of obstacles can affect the relay-node selection. Therefore, in this paper, we have designed ARNS by fully considering the above situations to achieve better robustness. In the next section, we will describe the scenarios and state the problems.

Scenario Description and Problem Statement

In real-world IoV, the selection of relay-nodes needs to consider the high mobility of vehicles, the diversity of road structures, and the existence of obstacles in the RoI to achieve higher coverage with lower delay. EPBP and its derived methods can well solve the real-time problem caused by the high mobility of vehicles, but they fail to completely solve the problem of broadcasting in an RoI with various road structures and obstacles. For example, Figure 1 shows an area where various road structures and obstacles exist, and it is assumed to be the RoI of a message generated by Node S_0 at Point H. The road structure on the west of the road section HI (HI indicates the road section connecting Point H and Point I) is a curved road with a junction J_1 and is surrounded randomly by green woods. Additionally, the road structures on the east of HI are straight roads with junctions, and there are buildings around these junctions.
Woods and buildings are obstacles that can prevent the dissemination of messages. The message is expected to cover the RoI, so the ends of the roads at the RoI boundary are the termination positions of the broadcast.

A process of message broadcasting is illustrated in Figure 1. Node S_0 is the original sender, and a message broadcasted by S_0 is expected to cover the region shown in the map, i.e., the RoI of the message. Obviously, the road structures in the communication ranges of Nodes S_1, S_2, and S_4 are the straight road, the junction, and the curved road, respectively, so the corresponding relay-node selection methods, i.e., the method in [24] for straight road scenarios, that in [26] for junction scenarios, and that in [25] for curved road scenarios, are adopted according to the road structure. However, one problem needs to be solved, and that is how to distinguish road structures. Moreover, the road section in the communication range of one sender may consist of two or more road structures, not one typical road structure as discussed in the existing works. This scenario is given as an example in Figure 1 as the road section in the communication range of Node S_3. The range covers a junction and three curved road sections, neither the typical junction with several straight branches nor the typical curve including only the curved road section. Thus, in order to realize node selection in real-world scenarios, the first problem to be resolved is as follows.

Problem 1: how to classify the road structure?
The farthest position is defined as the optimal position [24]. In the real world, the obstacles will affect the location of the optimal position. The line-of-sight condition in straight road scenarios is good because no obstacle affects the communication range of the sender, thus existing relay-node selection methods [21][22][23][24] use the point farthest from the sender as the optimal position on the straight road scenarios. In junction scenarios, obstacles such as buildings generally exist near junctions, and the existing relay-node selection methods [21,22,26] applicable for junction scenarios select a node close to the center of the junction as the relay-node of the first hop, and achieve the maximum coverage of all branches with the second hop to complete message broadcasting. In curve scenarios, the general relay-node selection methods [13,14] consider that obstacles are generally around road corners, so the corner of the curved road is marked as the optimal position to eliminate the impact of obstacles on the message broadcasting. However, in the specific scenarios, the effect of obstacles on the location of the optimal position needs to be analyzed differently. As shown in Figure 1, the road section BF is out of the sight of Point A due to the blocking by Obstacle O 1 , so the sender at Point A can only use corner Point B as the optimal position to realize the relay-node selection in this curved road scenario. However, the road section EG has a good line-of-sight condition because of no blocks, so the sender at Point E can directly select the farthest Point G in its coverage area as the optimal position. Therefore, by considering the specific distribution of obstacles within the communication range, we can select the proper optimal position to achieve the maximum coverage of one-hop and reduce the delay of the relay-node selection. Thus, the second problem to be resolved is described as follows. Problem 2: how to determine the optimal position? As shown in Figure 1, there are two road sections that are not covered by the broadcast: one is road section 1 indicated by the blue solid line, which is within the communication range of Node S 4 , but not covered by the signal of Node S 4 because of the obstruction of Obstacle O 1 ; another is road section 2 indicated by the black solid line, which is outside the communication range of Node S 3 and S 4 . As we aim to achieve full coverage of RoI, the location of the optimal position ensures that the broadcasting message can cover these road sections, i.e., road section 1 and 2 . It should be noted that we only consider relay-node selection in vehicle to vehicle (V2V), and nodes can obtain not only their own position by using GPS, but also the local information about roads and obstacles by using GIS. To solve the problems of relay-node selection in the scenarios described above, in the next section, we propose an adaptive relay-node selection method that adaptively selects a relay-node selection method suitable for the current scenario according to the road structures and obstacles within the communication range of the sender. Method Design In this section, we will propose ARNS to solve the problems described in Section 3, but before that, we need to improve the EPBP-based methods to make them suitable for real-world scenarios. 
Therefore, the content of this section is organized as follows: we first propose an EPBP-based relay-node selection method suitable for curved road scenarios with obstacles and then develop a criterion for classifying road structures. Moreover, we improve the EPBP-based junction relay-node selection method [26] to resolve the problem of multiple road structures existing within the communication range of the sender. Finally, an adaptive relay-node selection method based on the above works is proposed. The goal of this method is to achieve full coverage of the RoI with the lowest delay.

EPBP-Based Relay-Node Selection Method Suitable for Curved Road Scenarios with Obstacles

Based on the analysis in Section 3, we first define Optimal Position and Vacant to facilitate the description of the relay-node selection method in curved road scenarios with obstacles.

Definition 1. Optimal Position P_opt ∈ {Node_1} ∪ {Node_2} is the point that is closest to the terminal point of the curved road in the direction of message broadcasting, where {Node_1} is the set of the intersections of the sender's communication boundary and the curved roads that are not blocked by obstacles, and {Node_2} is the set of the intersections of the curved road and the tangents to the profiles of the obstacles from the sender.

Definition 2. Vacant is a segment of the curved road that is not covered by the communication ranges of the sender and the relay-node because of the high curving rate of the curved road and the blocking by obstacles.

Taking Figure 1 as an example, road sections 1 and 2 are both vacants, because road section 1 is not covered by the signal of Node S_4 due to the obstruction of Obstacle O_1, and road section 2 is not within the communication ranges of Nodes S_3 and S_4.

Next, we improved the reverse selection [25] to solve the problem of the vacant-caused reduction of broadcast coverage. When a sender finds that there is a vacant between itself and the sender in the previous hop, it enters the reverse selection. At this time, it serves as the initial sender of the reverse selection and broadcasts an RTB packet to start the normal selection and the reverse selection simultaneously. The reverse selection chooses the nearest corner to the initial sender in the reverse direction as the optimal position, and the endpoint of the vacant closest to the previous sender as the termination of the reverse selection. In the reverse direction, only the reverse selection continues until it completely covers the vacant. To distinguish the three states of relay-node selection (only the normal selection, only the reverse selection, and the concurrence of the normal selection and the reverse selection), we added a mode flag to the RTB packet. Moreover, we assigned black-bursts with different frequencies to avoid interference between nodes in different states.

Based on the above definitions and descriptions, we propose an EPBP-based relay-node selection method suitable for curved road scenarios with obstacles. The pseudocode is given in Algorithm 1 (S: the sender; S_pre: the sender of the previous hop; N: the set of nodes in the communication range of S).

Algorithm 1. EPBP-based relay-node selection in curved road scenarios with obstacles.
Phase 1. Vacant Detection Phase:
  if there is an area between S and S_pre that is blocked by obstacles or out of the communication ranges of S and S_pre:
    Determine the area as a vacant.
Phase 2. RTB Packet Broadcast Phase:
  if there is a vacant between S and S_pre:
    Set the mode flag of the RTB packet to 3 (simultaneously start the normal selection and the reverse selection).
    Determine the optimal position P_opt_norm in the message propagation direction according to Definition 1.
    Choose the nearest corner as the optimal position P_opt_rev in the reverse direction.
    Determine the endpoint of the vacant closest to S_pre as the termination of the reverse selection, P_rev_end.
    Add P_opt_norm, P_opt_rev, and P_rev_end to the RTB packet.
  else if S is on the road between the vacant and S_pre (i.e., S continues the reverse selection):
    Set the mode flag of the RTB packet to 2 (start the reverse selection).
    Choose the next corner as the optimal position P_opt_rev in the reverse direction.
    Update P_opt_rev in the RTB packet.
  else:
    Set the mode flag of the RTB packet to 1 (start the normal selection).
    Determine the optimal position P_opt_norm in the message propagation direction according to Definition 1.
    Update P_opt_norm in the RTB packet.
  Broadcast the RTB packet.
Phase 3. Relay-Node Selection Phase:
  if the mode flag of the RTB packet is 3:
    Start EPBP with P_opt_norm as the optimal position; a node n_norm ∈ N that is not blocked by obstacles is selected as the relay-node in the message propagation direction.
    Simultaneously, start EPBP with P_opt_rev as the optimal position; a node n_rev ∈ N that is not blocked by obstacles is selected as the relay-node in the reverse direction.
  else if the mode flag of the RTB packet is 2:
    Start EPBP with P_opt_rev as the optimal position; a node n_rev ∈ N that is not blocked by obstacles is selected as the relay-node.
  else:
    Start EPBP with P_opt_norm as the optimal position; a node n_norm ∈ N that is not blocked by obstacles is selected as the relay-node.
  Relay-node selection finished.

Next, we take Nodes S_3 and S_4 in Figure 1 as an example to illustrate our proposed method, and assume that Node S_3 is used as a sender to start the message broadcasting. According to Definition 1, Node S_3 determines Point E as P_opt_norm. Then, a circle was made with Point E as the center and the distance between Node S_3 and Point E as the radius; EPBP was performed on this circle as shown in Figure 1, and Node S_4 was selected as the relay-node.
After Node S4 receives the message from Node S3, it becomes a new sender and determines that road sections 1 and 2 are both vacants according to Definition 2. It then starts both the normal selection and the reverse selection: as the initial sender of the reverse selection, Node S4 chooses Point A as the terminal point of the reverse selection, chooses the corner (Point B) as the optimal position for the reverse selection, and selects Point G as the optimal position for the normal selection according to Definition 1. An RTB packet was then broadcast by Node S4 to inform the nodes within its communication range that the reverse selection and the normal selection had started at the same time. Finally, Node S7 was selected as the relay-node of the reverse selection and Node S8 as the relay-node of the normal selection. After that, Node S7 as a sender performs only the reverse selection, and Node S8 as a sender performs only the normal selection.

Algorithm 1. EPBP-based relay-node selection for curved road scenarios with obstacles (S: current sender; S_pre: sender of the previous hop; N: candidate nodes).
Phase 1. Vacant Detection Phase:
  if there is an area between S and S_pre that is blocked by obstacles or out of the communication range of S and S_pre:
    determine the area as a vacant.
Phase 2. RTB Packet Broadcast Phase:
  if there is a vacant between S and S_pre:
    set the mode flag of the RTB packet to 3 (start the normal selection and the reverse selection simultaneously).
    determine the optimal position P_opt_norm in the message propagation direction according to Definition 1.
    choose the nearest corner as the optimal position P_opt_rev in the reverse direction.
    determine the endpoint of the vacant closest to S_pre as the termination of the reverse selection, P_rev_end.
    add P_opt_norm, P_opt_rev, and P_rev_end to the RTB packet.
  else if S lies on the road between the vacant and S_pre (i.e., S was selected by the reverse selection):
    set the mode flag of the RTB packet to 2 (continue only the reverse selection).
    choose the next corner as the optimal position P_opt_rev in the reverse direction.
    update P_opt_rev in the RTB packet.
  else:
    set the mode flag of the RTB packet to 1 (start only the normal selection).
    determine the optimal position P_opt_norm in the message propagation direction according to Definition 1.
    update P_opt_norm in the RTB packet.
  broadcast the RTB packet.
Phase 3. Relay-Node Selection Phase:
  if the mode flag of the RTB packet is 3:
    start EPBP with P_opt_norm as the optimal position; a node n_norm ∈ N that is not blocked by obstacles is selected as the relay-node in the message propagation direction.
    simultaneously, start EPBP with P_opt_rev as the optimal position; a node n_rev ∈ N that is not blocked by obstacles is selected as the relay-node in the reverse direction.
  else if the mode flag of the RTB packet is 2:
    start EPBP with P_opt_rev as the optimal position; a node n_rev ∈ N that is not blocked by obstacles is selected as the relay-node.
  else:
    start EPBP with P_opt_norm as the optimal position; a node n_norm ∈ N that is not blocked by obstacles is selected as the relay-node.
Relay-node selection finished.
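Phase 2 of Algorithm 1 is essentially a three-way state machine keyed by the mode flag. The following Python sketch mirrors that logic; the sender_state object and its query methods are hypothetical stand-ins for the vacant detection, corner lookup, and Definition 1 computations described above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]

@dataclass
class RTBPacket:
    mode: int                          # 1: normal, 2: reverse only, 3: both
    p_opt_norm: Optional[Point] = None
    p_opt_rev: Optional[Point] = None
    p_rev_end: Optional[Point] = None

def build_rtb(sender_state) -> RTBPacket:
    """Phase 2 of Algorithm 1: pick the mode flag and optimal positions.
    Every sender_state method is a hypothetical query wrapping one
    geometric step of the pseudo-code."""
    if sender_state.has_vacant_to_previous_sender():
        # Mode 3: start the normal and reverse selections simultaneously.
        return RTBPacket(
            mode=3,
            p_opt_norm=sender_state.optimal_position_normal(),   # Definition 1
            p_opt_rev=sender_state.nearest_corner_reverse(),
            p_rev_end=sender_state.vacant_endpoint_near_previous(),
        )
    if sender_state.selected_by_reverse_selection():
        # Mode 2: only the reverse selection continues toward P_rev_end.
        return RTBPacket(mode=2, p_opt_rev=sender_state.next_corner_reverse())
    # Mode 1: ordinary normal selection.
    return RTBPacket(mode=1, p_opt_norm=sender_state.optimal_position_normal())
```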
Criterion of Classifying Road Structures

In this subsection, we define a criterion to classify three typical road structures: junction, straight road, and curved road. In previous works [25,26], broadcasting in junction scenarios is completed through two-hop relay-node selection, and the message propagation direction is multidirectional. In curved road scenarios, both normal and reverse relay-node selection may run for broadcasting, so the message propagation is bidirectional. To achieve full coverage of RoI, the priority of judgment for the road-structure criterion is junction, then curved road, then straight road.

It is widely accepted that the criterion for judging whether a road structure is a straight road is whether the line-of-sight condition holds. To facilitate the definitions of the curved road and the straight road, we define a curving rate as follows.

Definition 4. The curving rate β is expressed as β = l/R, where l is the length of the road within the communication range of the sender in the message propagation direction, and R is the communication radius.

Based on the definition of the curving rate, we give the definitions of curved road and straight road.

Definition 5. A curved road scenario is a scenario with β > β_ε when no junction exists in the communication range of the sender in the message propagation direction, where β_ε is a threshold.

Definition 6. A straight road scenario is a scenario with β ≤ β_ε when no junction exists in the communication range of the sender in the message propagation direction.

We determine the value of the threshold β_ε from whether obstacles on the roadside affect line-of-sight propagation. When roadside obstacles affect the line-of-sight propagation of the message, the road must have at least one bend; in that case, the road length within the communication range must exceed the communication radius by more than twice the road width w, that is, l_ε > R + 2w, which gives β_ε = (R + 2w)/R.

Adaptive Relay-Node Selection Method

In this subsection, we design an adaptive relay-node selection method based on the criterion of road structures, combining the relay-node selection method for curved road scenarios with obstacles proposed in Section 4.1 with an improved EPBP-based junction relay-node selection method introduced here. The termination condition of message broadcasting is complete coverage of RoI, that is, all ends of the roads at the RoI boundary are covered by the broadcast message. Moreover, in order to avoid multiple coverage of one road section by the same message, the termination condition in junction scenarios is that the message reaches the RoI boundary or that the branch has already been covered by the same message.

The EPBP-based junction relay-node selection method [26] includes a junction phase and a branch phase. It is suitable for urban scenarios in which each branch of a junction is a straight road: two types of nodes are selected successively as relay-nodes in the junction phase and the branch phase, namely the node closest to the center point of the junction and the node closest to the farthest point in each branch. However, in the real world, a branch of a junction, e.g., Junction J1 in Figure 1, may not be a straight road. Therefore, we improve the EPBP-based junction relay-node selection method as follows. In the branch phase, the sender of the branch phase, i.e., the relay-node at the center of the junction, first uses GIS information and the criterion of road structures to determine the road structure of each branch. Then, according to the judgment result, a method suitable for the structure of the branch is selected to complete the relay-node selection in the branch phase. The flow diagram of the improved method is shown in Figure 2. The improved method realizes adaptive relay-node selection in the branch phase; compared with the original method [26], it has stronger robustness in real-world scenarios.
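In the branch phase above, as in the top-level method, each decision rests on the road-structure criterion, which reduces to a small decision function. The sketch below encodes the priority order (junction, then curved, then straight) and the threshold derived above; the function signature is ours, and the geometric quantities are assumed to be precomputed from GIS data.

```python
def curving_rate(road_length_in_range: float, radius: float) -> float:
    """Definition 4: beta = l / R, with l measured along the road within
    the sender's communication range in the propagation direction."""
    return road_length_in_range / radius

def classify_road(has_junction: bool, road_length_in_range: float,
                  radius: float, road_width: float) -> str:
    """Priority order from the criterion: junction > curved > straight.
    beta_eps = (R + 2w) / R follows from the threshold l_eps > R + 2w."""
    if has_junction:
        return "junction"
    beta_eps = (radius + 2.0 * road_width) / radius
    if curving_rate(road_length_in_range, radius) > beta_eps:
        return "curved"
    return "straight"
```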
The adaptive relay-node selection mechanism is shown in the flowchart in Figure 3. First, ARNS determines whether the broadcast completely covers RoI. If full coverage of RoI has not been achieved, the criterion of road structures is applied to judge the road structure within the current communication scenario. If the road structure is judged to be a junction scenario, we adopt the improved EPBP-based junction relay-node selection method to realize relay-node selection in the current scenario; if the judgment result is a curved road scenario, we use the method proposed in Section 4.1 to select a relay-node; and if the judgment result is a straight road scenario, we directly adopt the intersection of the sender's communication boundary with the road in the message propagation direction as the optimal position and implement straight-road relay-node selection through EPBP. Furthermore, to ensure the security of message transmission, a caching optimization method [41] is used on each vehicle.
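Read as pseudocode, the Figure 3 flowchart is a dispatch loop over road structures. The sketch below reuses classify_road from the previous sketch; the roi and sender objects and the per-structure handler functions are hypothetical placeholders for the mechanisms described in the text, not a full implementation.

```python
def arns_broadcast(start_sender, roi, handlers):
    """Dispatch loop corresponding to the Figure 3 flowchart: while RoI is
    not fully covered, classify the road structure around the current
    sender and run the matching relay-node selection method."""
    frontier = [start_sender]
    while frontier and not roi.fully_covered():
        sender = frontier.pop()
        structure = classify_road(sender.sees_junction(),
                                  sender.road_length_in_range(),
                                  sender.radius, sender.road_width)
        # handlers maps "junction" to the improved junction method,
        # "curved" to the Section 4.1 method, "straight" to plain EPBP.
        relays = handlers[structure](sender)
        roi.mark_covered(sender)
        frontier.extend(relays)
```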
Results and Analysis

To prove its effectiveness, simulations were conducted on the real-world map shown in Figure 1, which is part of the urban map of Zhangjiajie city, Hunan Province, China. To reflect the real-time advantage brought by the black-burst, ARNS was compared with a beacon-based method that uses RTAD [20] to select relay-nodes in urban scenarios and with the GPSR method [19] on the curved road, both combined with the adaptive mechanism proposed in this paper. Additionally, a black-burst-based method that substitutes 3P3B for EPBP in ARNS (called the 3P3B-based method) was used for comparison to verify the improvement of ARNS. These results and their analysis are presented in Section 5.2. Furthermore, to demonstrate the advantage of considering obstacles in curved road scenarios, we compared ARNS with the complete relay-node selection method [25], an EPBP-based relay-node selection method well qualified for curved road scenarios that does not consider obstacles. These results are discussed in Section 5.3.

Introduction of Evaluation

We simulated the above approaches in VANET using MATLAB with the Monte Carlo method [42]. Because we focus on relay selection at the link level, the simulation environment includes only the 802.11p MAC layer. The major simulation parameters of the VANET are given in Table 1 and are identical to those used in [20,23,25]. In each simulation, Node S0 was used as the original sender, and the intersections of each road with the RoI boundary were used as the terminal points of the broadcast on that road. Since the roads in Figure 1 have different widths, for ease of expression we classify them by the number of lanes n_lane in both directions (n_lane = 2, 4, 6), and the vehicle density λ in this paper is defined as the vehicle density on a single lane. To assess the performance of ARNS over a wide range of vehicle densities, we set the minimum interval between vehicles to 4 m and the minimum number of vehicles within communication range to two. Thus, with the communication range set to 200 m, the lowest vehicle density was 0.01 vehicles/meter and the highest 0.25 vehicles/meter. Vehicles were placed randomly following a Poisson distribution with parameter λ·n_lane. The maximum speed v_max of vehicles complies with the rule relating speed to safe inter-vehicle distance [43,44]; note that the inter-vehicle distance is defined as the distance between the heads of adjacent vehicles. Each vehicle chose a random speed following a uniform distribution on [v_max/2, v_max] at the beginning of the simulation and kept that speed throughout. Lane changes and overtaking were not modeled.
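As a rough illustration of this setup, the following sketch samples one road configuration: exponential headways (a Poisson point process along the lane) clamped to the 4 m minimum spacing, and speeds drawn uniformly from [v_max/2, v_max]. The clamping slightly lowers the effective density relative to the nominal λ, and the parameter names are ours.

```python
import random

def place_vehicles(road_length_m: float, lam: float, n_lanes: int,
                   min_gap_m: float = 4.0, v_max_mps: float = 33.3):
    """One Monte Carlo draw of the vehicle layout described above:
    exponential headways (rate lam per meter) clamped to the minimum
    spacing, and per-vehicle speeds uniform in [v_max/2, v_max] (m/s)."""
    lanes = []
    for _ in range(n_lanes):
        vehicles, pos = [], 0.0
        while True:
            pos += max(random.expovariate(lam), min_gap_m)  # next headway
            if pos > road_length_m:
                break
            vehicles.append((pos, random.uniform(0.5 * v_max_mps, v_max_mps)))
        lanes.append(vehicles)
    return lanes

# Example: a 2 km road with 4 lanes at 0.05 vehicles/meter per lane.
lanes = place_vehicles(2000.0, 0.05, 4)
```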
From the simulation results shown in Figure 4, a single simulation duration, i.e., the end-to-end delay, is less than 6.2 ms, and the maximum movement distance of a node during that time is 0.21 m, corresponding to a vehicle speed of 120 km/h; the above assumptions about vehicle motion are therefore reasonable. The experimental environment was simulated in MATLAB, the same as in [25], because the conclusion in [45] pointed out that vehicle movement has little influence on relay-node selection.

End-to-end delay and packet delivery ratio (PDR) are metrics widely used to evaluate the efficiency and reliability of message broadcasting in IoV [21-27]. In addition, a metric called maximum hops is used to evaluate the reliability of the end-to-end delay, and the metrics of broadcast coverage, partition delay, and contention delay are used to measure the improvement gained by considering obstacles. In this section, we compare all schemes in terms of six metrics: end-to-end delay, partition delay, contention delay, PDR, maximum hops, and broadcast coverage. The definitions of the metrics are as follows. End-to-end delay T_end is the total delay from the instant when Node S0 starts broadcasting to the instant when RoI is completely covered; T_end is the sum of the one-hop delays. In the black-burst-based methods, the partition delay T_part and the contention delay T_cont dominate the one-hop delay; thus, in the results of Section 5.3, T_part and T_cont are used to demonstrate the improvement of ARNS in curved road scenarios. T_part is expressed as the average partition delay per hop, and T_cont is expressed in the same way. PDR is the ratio of the number of successfully broadcast messages to the total number of simulations, where successful broadcasting means that no packet loss occurs during the entire broadcasting process. Maximum hops N_maxhops is the maximum number of hops over which a message is broadcast from Node S0 to the terminations of RoI. Broadcast coverage γ_cov is the ratio of the length of road covered by the broadcast to the length of the entire road.
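Given per-run simulation logs, the four headline metrics can be computed as below; the log structure is a hypothetical stand-in, but the formulas follow the definitions just given.

```python
def compute_metrics(runs):
    """Aggregate the metrics defined above from per-run logs. Each run is
    a hypothetical dict: {'hop_delays': [...], 'success': bool,
    'hops_to_terminals': [...], 'covered_len': float, 'road_len': float}."""
    n = len(runs)
    pdr = sum(1 for r in runs if r["success"]) / n
    t_end = [sum(r["hop_delays"]) for r in runs]                 # T_end per run
    n_maxhops = [max(r["hops_to_terminals"]) for r in runs]      # N_maxhops
    gamma_cov = [r["covered_len"] / r["road_len"] for r in runs] # coverage
    return {"PDR": pdr,
            "mean_T_end": sum(t_end) / n,
            "mean_N_maxhops": sum(n_maxhops) / n,
            "mean_gamma_cov": sum(gamma_cov) / n}
```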
Evaluations of ARNS

In this subsection, we compare ARNS with RTAD and the 3P3B-based method in the same environment and show the advantages of ARNS in three aspects: end-to-end delay, maximum hops, and PDR. Figure 4 shows the end-to-end delay obtained by each method under varying vehicle density. RTAD has the largest delay, as it needs more hops to complete message broadcasting; in contrast, ARNS has the lowest delay, as it costs the fewest hops by adaptively selecting the relay-nodes. Furthermore, as vehicle density increases, the end-to-end delay first decreases and then increases: the decrease occurs because message broadcasting can be implemented with fewer hops when vehicle density gets higher, while the increase is due to the larger contention delay caused by more nodes joining the contention process.

In Figure 5, the maximum hops of the three methods are depicted to indicate the reliability of the end-to-end delay shown in Figure 4. RTAD has the most hops, as it always selects corners as the optimal positions in curved road scenarios; in contrast, ARNS has the fewest hops, since it improves the location of the optimal position. Moreover, as the vehicle density increases, the maximum hops of ARNS decline in a stable trend, while the maximum hops of the beacon-based method are already saturated.

Figure 6 presents the PDR of the three methods. It can be clearly seen that PDR declines as vehicle density rises. The PDR of ARNS is better than that of both the 3P3B-based method and the beacon-based method, and it is also more stable than the other two. The reasons are as follows. First, for the beacon-based method, nodes in its routing table may travel out of the communication range during the beacon interval, resulting in the loss of message packets; in this case the message is re-transmitted, and if the packet is still missing after the maximum number of re-transmissions, the broadcast is considered a failure. The relay-node selection of ARNS, by contrast, is performed in real time, so ARNS is more stable than the beacon-based method. Second, compared with the 3P3B-based method, the partition phase of ARNS selects a smaller segment, so fewer nodes participate in the random contention phase.
This results in a gain in the PDR of ARNS; therefore, the PDR of ARNS is the most stable among the three methods.

Evaluations of ARNS in the Scenario with Obstacles

In this subsection, we simulated ARNS and the complete relay-node selection method on the curved road [25], which does not consider obstacles, to show the advantages of considering obstacles in three aspects: broadcast coverage, partition delay, and contention delay. The simulation results for partition delay and contention delay indicate that the proposed method ARNS significantly reduces the delay of relay-node selection.

As shown in Figure 7, the broadcast coverage of the curved road method decreases as vehicle density increases. This is because, at low vehicle density, the curved road method selects relay-nodes along the curved road to achieve broadcast coverage; as vehicle density increases, it selects relay-nodes along the curved road less often, while the number of selections across the curved road grows (in Figure 1, for example, Node S3 selects Node S4 as a relay-node). Thus, the broadcast coverage of the curved road method gradually decreases.

Choosing different optimal positions in the same scenario causes different partition delays and contention delays. Thus, as shown in Figures 8 and 9, ARNS has obvious advantages in partition delay and contention delay, and these advantages become more apparent as vehicle density increases.
At a high density of 0.25 vehicles/meter, the partition delay of ARNS was reduced by 16.4% compared with the complete method, while the contention delay of ARNS was reduced by 52.2%. These results are reflected in the end-to-end delay shown in Figure 10: compared with the complete method, ARNS reduces the end-to-end delay on a curved road by up to 16.3%.
Figure 10. End-to-end delay on the curved road.

Conclusions

In this paper, we proposed the ARNS method for relay-node selection in complex road scenarios. To the best of our knowledge, this is the first adaptive relay-node selection mechanism that considers the road structure within the communication range of the sender at each hop. ARNS adopts the most favorable relay-node selection method according to the road structure, and the effect of obstacles is taken into account. Simulations demonstrated that ARNS is superior to the methods based on 3P3B [23] and RTAD [20] in terms of end-to-end delay and PDR, and superior to the complete method [25] in terms of broadcast coverage and one-hop delay. In a real-world road scenario, ARNS reduced the end-to-end delay by at least 13.8% compared with the beacon-based method, and its broadcast coverage increased by 3.6-7% compared with the complete method. In the future, we plan to extend our work to relay-node selection on 3D road structures, such as overpasses and parking lots, and to utilize AI [46-51] to optimize the method [52,53] in complex 3D scenarios.
Gintonin Isolated from Ginseng Inhibits the Epithelial-Mesenchymal Transition Induced by TGF-β in A549 Lung Cancer Cells

Epithelial-to-mesenchymal transition (EM transition) is a process wherein epithelial cells lose their intrinsic characteristics and cell-cell junctions and differentiate into a mesenchymal phenotype. EM transition is an important feature of cancer invasion and metastasis. In this study, we aimed to investigate the inhibitory effect of gintonin (GT), an ingredient of ginseng, on EM transition using A549 cells. The proliferation of A549 cells was enhanced following treatment with 50, 75, and 100 μg/mL of GT. GT affected EM transition-induced gene and protein expression, specifically that of vimentin (Vim), N-cadherin (N-cad), zinc finger E-box-binding homeobox 1 (ZEB1), and Twist in A549 cells. Furthermore, the transforming growth factor beta 1 (TGF-β1)-induced phosphorylation of Smad2 and Smad3 was suppressed by GT treatment. Immunofluorescence staining also showed that GT treatment decreased the TGF-β1-induced expression of Vim and N-cad in A549 cells. Therefore, GT may be used to suppress cancer cell metastasis by maintaining the integrity of cell-cell junctions. However, further studies are required to pave the way for its translation into clinical application in cancer therapeutics.

Introduction

Lung cancer is a devastating disease and one of the leading causes of cancer-related deaths worldwide [1]. It is highly invasive and able to metastasize by invading other tissues and spreading throughout the body [2]. Metastasis is a complex process that involves a series of molecular events, including the epithelial-mesenchymal transition (EM transition), which plays a crucial role in tumor invasion and dissemination [3]. During the EM transition, the intrinsic characteristics and cell-cell junctions of epithelial cells are lost, causing them to differentiate into a mesenchymal phenotype, which ultimately increases the invasion and migration capabilities of cancer cells [4]. A major characteristic of EM transition is a decrease in the levels of junctional proteins such as E-cadherin (E-cad), claudin, and occludin, which are epithelial markers; in contrast, the expression of mesenchymal markers such as N-cadherin (N-cad), vimentin (Vim), and matrix metalloproteinases increases during EM transition, indicating the acquisition of mesenchymal characteristics [3]. E-cad maintains the integrity of intercellular junctions, thus preventing cell migration, invasion, and metastasis and suppressing tumor progression. E-cad gene expression is reduced in most cancer cells [5] and is regulated by the E-cad transcriptional repressors Snail-1/2, zinc finger E-box-binding homeobox (ZEB)-1/2, and Twist; Snail, a zinc finger transcription factor, induces EM transition by suppressing the expression of E-cad [6]. Another important regulator of cancer development is transforming growth factor beta 1 (TGF-β1). The effect of TGF-β1 on tumor progression can be either suppressive or promotive, depending on the cancer stage [7]. In the early stages of cancer, TGF-β1 inhibits tumor growth by inducing cell cycle arrest and apoptosis; however, in the later stages, when tumor growth has reached or surpassed a certain size, cancer cells become resistant to growth suppression by TGF-β1.
TGF-β1 then induces EM transition and contributes to tumor growth and metastasis, angiogenesis, and immune evasion, thereby promoting cancer progression [8,9].

Panax ginseng C.A. Meyer is a natural product that is extensively used in Korea, China, and Japan [10]. It is composed of saponins and non-saponins: ginsenosides are the major components of the saponin group, whereas polysaccharides, proteins, peptides, and gintonins (GTs) belong to the non-saponin group [11]. GT is a glycolipoprotein fraction of ginseng composed mainly of carbohydrates, together with lipids and proteins containing several hydrophobic and acidic amino acids, as well as glucose [12]. The major active components of GT are lysophospholipids, including lysophosphatidic acids (LPAs), whose receptors serve as specific targets for GT [13-16]. GT exhibits various health benefits, such as anti-Alzheimer's disease effects through the LPA receptor-mediated non-amyloidogenic pathway [17]. GT also improves cognitive function in older patients with Alzheimer's disease [18,19], increases hippocampal neurogenesis [20], possesses anti-depressant activity [21], exerts in vivo anti-metastatic effects [22], and protects against cardiovascular diseases [23]. However, to the best of our knowledge, detailed studies on the inhibition of the intracellular signaling pathways of cancer metastasis by GT have yet to be performed. Therefore, in this study, we investigated, for the first time, the mechanism of action of GT in EM transition using A549 lung cancer cells.

Effect of GT on A549 Cell Proliferation

First, we investigated the cytotoxicity of GT in A549 lung cancer cells. After treating A549 cells with GT at concentrations of 5, 10, 20, 40, 80, 100, and 200 µg/mL for 24 and 48 h, cell viability was evaluated using the EZ-Cytox method. As shown in Figure 1A,B, cell proliferation tended to increase with the GT concentration, without cytotoxicity in A549 cells. Therefore, subsequent experiments were conducted within this concentration range.
Figure 1. A549 cells were treated with GT for 24 (A) and 48 h (B). Cell viability was determined using an EZ-Cytox cell viability assay kit. Data are presented as the mean ± standard deviation (SD) of three independent experiments.

GT Suppressed the Expression of Transcription Factors Associated with TGF-β1-Induced EM Transition in A549 Cells

Epidermal growth factor, hepatocyte growth factor, bone morphogenetic proteins, and TGF-β1 are signaling inducers well known for their ability to activate EM transition-inducing transcription factors [7,8]. Among these, TGF-β1 is the most extensively studied EM transition-inducing protein [10]. The Smad signaling pathway is a key mediator of the EM transition, and TGF-β1 plays a critical role in regulating this pathway: upon activation, TGF-β1 binds TGF-βR1 and forms a complex that activates Smad signaling by phosphorylating Smad2 and Smad3 [24,25]. This activation regulates EMT transcription, contributing to the acquisition of mesenchymal properties [24,26]. Additionally, TGF-β acts on extracellular signal-regulated kinase (ERK), phosphoinositide 3-kinase, and p38, which affect the differentiation stages of cells undergoing EM transition [27]. Therefore, we examined the effect of GT on the phosphorylation of Smad2 and Smad3, which lie downstream of the TGF-β1 receptor signaling pathway. A549 cells were treated with 50, 75, and 100 µg/mL of GT for 4 h and then incubated with TGF-β1 for 30 min. As shown in Figure 2A, Smad2 and Smad3 phosphorylation decreased in a concentration-dependent manner after GT treatment. We also analyzed whether GT alone affected Smad2/3 phosphorylation in A549 cells; as shown in Figure 2C, GT treatment did not affect Smad2/3 phosphorylation. Moreover, the expression of the epithelial cell marker protein E-cad and the mesenchymal cell marker protein Vim did not change after 48 h of GT treatment. Based on these results, we speculate that the inhibition of Smad2 and Smad3 phosphorylation by GT treatment may affect TGF-β1-induced signaling pathways, such as EM transition, in A549 lung cancer cells.
Figure 2. Gintonin (GT) inhibits Smad2/3 phosphorylation in transforming growth factor beta 1 (TGF-β1)-treated A549 cells. A549 cells (5 × 10^5 cells/well, six-well plate) were incubated overnight, and the medium was changed to serum-free RPMI 1640 to synchronize the cells. After 6 h, cells were pre-treated with GT (50, 75, and 100 µg/mL) for 4 h and then stimulated with TGF-β1 (5 ng/mL) for 30 min (A). A549 cells were treated with GT for 48 h (C). Cell lysates were analyzed via Western blotting using specific antibodies for E-cad, Vim, pSmad2, pSmad3, and pSmad2/3. Protein expression was quantified using ImageJ software (B,D). All data are presented as mean ± standard deviation (n = 3). # p < 0.0001 compared to the control group. *** p < 0.0001 compared to the TGF-β1 group.

GT Suppressed the Expression of TGF-β1-Induced N-cad, Vim, and ZEB1 in A549 Cells

EM transition is an essential cellular process in embryogenesis, wound healing, fibrosis, and cancer progression [28]. During metastasis, TGF-β1 signaling induces EM transition by activating various intracellular pathways [29,30]. EM transition destabilizes epithelial junction proteins, such as E-cad, occludin, and claudin, leading to their cleavage and subsequent degradation at the plasma membrane [3,28,30] and thus to a loss of cell-to-cell adhesion. During EM transition, E-cad expression decreases while N-cadherin expression increases; furthermore, the expression of Vim, an intermediate filament protein in the cytoplasm, also increases [28].
Vim is responsible for maintaining the cytoskeleton and tissue structure of cells. EM transition involves several transcription factors, including ZEB1 and Twist. ZEB1, one of the regulators of cancer progression, downregulates E-cad gene expression and disrupts the adhesive junctions between epithelial cells [22,24]. To investigate whether GT inhibits EM transition in A549 cells, cells were pretreated with GT for 6 h, and EM transition was induced by stimulating the cells with TGF-β1 for 24 or 48 h. As shown in Figure 3A, the mesenchymal cell markers N-cad and Vim, as well as ZEB1, were slightly increased by TGF-β1 treatment for 24 h, and GT pretreatment substantially decreased the expression of these TGF-β1-induced mesenchymal cell markers.
While the expression of the epithelial cell marker E-cad markedly decreased upon TGF-β1 treatment, GT treatment did not restore E-cad expression in A549 cells (Figure 3A, left panel). When A549 cells were treated with TGF-β1 for 48 h, N-cad, Vim, and ZEB1 expression was strongly induced; however, pre-treatment with 50, 75, and 100 µg/mL of GT for 4 h decreased the expression of N-cad, Vim, and ZEB1 in a concentration-dependent manner. In addition, E-cad expression slightly decreased after TGF-β1 treatment for 48 h, and GT did not restore it (Figure 3A, right panel). Based on these results, it can be confirmed that GT has anti-metastatic potential, as it suppresses EM transition in lung cancer cells primarily by suppressing the expression of mesenchymal markers; however, GT was unable to reverse the TGF-β1-induced downregulation of E-cad.

Figure 3. Gintonin (GT) suppresses the expression of transforming growth factor beta 1 (TGF-β1)-induced EM transition markers. A549 cells (5 × 10^5 cells/well, six-well plate) were incubated overnight, then transferred to serum-free RPMI to synchronize the cells. Next, cells were treated with 50, 75, and 100 µg/mL of GT for 6 h and then incubated with TGF-β1 (5 ng/mL) for 24 and 48 h (A). The cell lysates were analyzed using Western blotting with specific antibodies for N-cad, Vim, and ZEB1. Protein expression at 24 h (B) and 48 h (C) was quantified using ImageJ software. All data are presented as means ± standard deviation (n = 3). # p < 0.0001 compared to the control group. *** p < 0.0001 compared to the TGF-β1 group.

GT Suppressed the mRNA Expression of TGF-β1-Induced Mesenchymal Markers in A549 Cells

Next, we investigated the effect of GT on EM transition-related gene expression by analyzing four representative mesenchymal genes: N-cad, Vim, ZEB1, and Twist. A549 cells were treated with GT for 4 h and then with TGF-β1 for 18 h. As shown in Figure 4, the mRNA expression of these genes was significantly induced by TGF-β1 treatment. The upregulated mRNA expression of N-cad, Vim, ZEB1, and Twist decreased as the GT concentration increased, whereas E-cad mRNA expression increased with GT treatment without reaching statistical significance; GT treatment therefore does not appear to directly affect E-cad expression. The mRNA expression results correlated with the protein expression results shown in Figure 3A. Based on the results in Figure 4, we confirmed that GT inhibits EM transition by regulating the mRNA expression of N-cad, Vim, ZEB1, and Twist, which are essential for EM transition.
Figure 4. GT inhibited the transforming growth factor beta 1 (TGF-β1)-induced increase in mRNA levels during EM transition. A549 cells (5 × 10^5 cells/well, six-well plate) were incubated overnight, and the medium was changed to serum-free RPMI 1640 to synchronize the cells. Next, cells were pre-treated with GT (50, 75, and 100 µg/mL) for 6 h and then stimulated with TGF-β1 (5 ng/mL) for 18 h. mRNA expression was determined using qPCR. # p < 0.0001 compared to the control group, and *** p < 0.0001 and ** p < 0.001 compared to the TGF-β1 group.

Effect of GT on Morphological Changes and Mesenchymal Cell Markers in TGF-β1-Induced EM Transition

As previously alluded to, the morphological changes that occur during the EM transition process enable cancer progression [7,25,28]. Therefore, we observed the inhibition of EM transition by GT using confocal microscopy. As depicted in the bright-field images of Figure 5A,B, control cells (not treated with TGF-β1) had an epithelial appearance and adhered to each other, whereas TGF-β1-treated cells became mesenchymal, elongated, and spindle-shaped. Immunofluorescence staining of Vim (red dye) showed that GT treatment inhibited TGF-β1-induced Vim expression in a concentration-dependent manner, and GT-treated cells displayed an epithelial morphology compared with those in the TGF-β1 group (Figure 5A). Similar results were observed following N-cad immunofluorescence staining (Figure 5B): the expression of N-cad (green dye) increased with TGF-β1 treatment, whereas GT treatment strongly inhibited N-cad expression in A549 cells. These results correlate with those shown in Figure 2. The expression of Vim is induced by TGF-β1 activation, Snail expression, and ERK phosphorylation [26]. The suppression of Vim and N-cad not only reduces motility during cancer metastasis but also partially restores the epithelial cell phenotype in numerous cancer cell lines [23,26]. Therefore, GT can successfully suppress EM transition, preventing cancer cell migration and invasion.
Figure 5. Gintonin (GT) suppressed the expression of vimentin and N-cadherin in transforming growth factor beta 1 (TGF-β1)-induced EM transition. A549 cells were seeded on a coverslip and incubated in RPMI 1640 medium. Thereafter, the medium was changed to serum-free RPMI 1640 medium for 6 h. Cells were pre-treated with GT (50 µg/mL and 100 µg/mL) for 4 h and then stimulated with TGF-β1 (5 ng/mL) for 42 h. Immunofluorescence staining was performed as described in the Materials and Methods section, and vimentin and N-cadherin expression was visualized using confocal microscopy. Red and green fluorescence indicate the expression of vimentin (A) and N-cadherin (B), respectively. The nuclei were stained using DAPI.

Preparation of GT from Ginseng

The GT-enriched fraction of P. ginseng was prepared following our previous protocols [14,23]. Four-year-old ginseng was extracted using ethanol (yield: 35%). Ethanol extraction and centrifugation were performed, followed by lyophilization of the precipitate to produce the GT-enriched fraction. Our previous study identified the GT-enriched fraction as containing fatty acids (7.53% linoleic acid, 2.82% palmitic acid, and 1.46% oleic acid), 0.6% lysophospholipids and phospholipids, and 1.75% phosphatidic acids, as determined using liquid chromatography-tandem mass spectrometry [14,23,31]. The GT-enriched fraction was dissolved in dimethyl sulfoxide (DMSO) and diluted with media before treatment of A549 cells.
Cell Culture

The human lung cancer cell line A549 was obtained from the Korean Cell Line Bank (KCLB, Seoul, Republic of Korea) and cultured in RPMI medium supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin, under standard conditions of 37 °C and 5% CO2.

A549 Cell Viability Analysis

To assess the effect of GT on A549 cell viability, we used the EZ-Cytox cell assay kit (DoGenbio, Seoul, Republic of Korea). Briefly, cells were seeded in a 96-well plate (2.5 × 10^4 cells/well) and treated with different concentrations of GT for 24 or 48 h. GT was prepared in DMSO and diluted to 5-200 µg/mL in the medium. After incubation, EZ-Cytox solution was added to the cells and incubated for 1 h, and the absorbance was measured at 450 nm using a 96-well plate reader (Emax, Molecular Devices, San Jose, CA, USA).

Induction of EM Transition via TGF-β1 Treatment

A549 cells (5 × 10^5 cells/well) were seeded in RPMI medium in six-well plates and incubated overnight. The medium was then replaced with serum-free RPMI medium containing 1% penicillin-streptomycin, and the cells were treated with different concentrations of GT (50, 75, and 100 µg/mL) for 4 h. Subsequently, the cells were incubated with TGF-β1 (5 ng/mL) for 48 h, as previously reported [32,33].

Immunoblotting

A549 cells (5 × 10^5 cells/well) were cultured in six-well plates and treated with GT for 48 h to assess the expression of E-cad, Vim, p-Smad2/3, and Smad2/3. For phosphorylation analysis of Smad2 and Smad3, cells were treated with GT for 4 h before TGF-β1 exposure for 30 min. N-cad, Vim, ZEB1, and E-cad expression levels were evaluated after 48 h of GT treatment. Cell lysates were prepared in RIPA buffer (T&I, Chuncheon, Republic of Korea) supplemented with protease inhibitor, dithiothreitol, and phosphatase inhibitor, followed by centrifugation and separation of protein samples using TGX gel electrophoresis. PVDF membranes were incubated overnight in 5% skim milk/TBS-T buffer, probed with primary antibodies, and then incubated with secondary antibodies. Protein bands were visualized and quantified using ImageJ software and normalized to GAPDH levels. Phosphorylation levels were determined by comparing the samples with their non-phosphorylated counterparts.

Real-Time Quantitative Reverse Transcription Polymerase Chain Reaction (RT-qPCR)

For mRNA expression analysis, A549 cells were seeded at a density of 5 × 10^5 cells/well in a six-well plate and incubated overnight. After changing the medium to serum-free medium, cells were treated with GT at concentrations of 50, 75, and 100 µg/mL for 6 h, followed by treatment with TGF-β1 (5 ng/mL) for 18 h. Total RNA was extracted using an AccuPrep Universal RNA Extraction Kit (Bioneer, Daejeon, Republic of Korea) and reverse-transcribed using an AccuPower RT premix (Bioneer). Real-time PCR was performed using the TaqMan gene expression assay kit (Applied Biosystems, Foster City, CA, USA) or SYBR Green Master Mix (Applied Biosystems), with specific primers for E-cad (sense and anti-sense primers 5′-CCACCAAAGTCACGCTGAATAC-3′ and 5′-GAAGAAGAGGACAGCACTG-3′), Vim (Hs_00958111_m1), N-cad (Hs_00983056_m1), ZEB1 (Hs_01566408_m1), Twist (Hs00361186_m1), and GAPDH (Hs_02786624_m1). qPCR results were determined from triplicate reactions, and mRNA levels were determined using a QuantStudio 3 Real-Time PCR System (Applied Biosystems).
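The relative quantification behind the fold changes in Figure 4 can be illustrated with the standard 2^(-ddCt) method, assuming GAPDH as the reference gene (a GAPDH assay is listed among the primers); the exact analysis pipeline is not specified in the text, and the Ct values below are hypothetical:

```python
import statistics

# A minimal sketch of relative mRNA quantification by the 2^(-ddCt) method.
# GAPDH is assumed to be the reference gene; the triplicate Ct values below
# are hypothetical placeholders, not the study's data.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return 2^(-ddCt) relative to the control condition."""
    d_ct = statistics.mean(ct_target) - statistics.mean(ct_ref)
    d_ct_ctrl = statistics.mean(ct_target_ctrl) - statistics.mean(ct_ref_ctrl)
    return 2.0 ** (-(d_ct - d_ct_ctrl))

# Hypothetical triplicates for Vim after TGF-beta1 treatment vs. control:
fc = fold_change(ct_target=[22.1, 22.3, 22.0], ct_ref=[17.5, 17.6, 17.4],
                 ct_target_ctrl=[24.9, 25.1, 25.0], ct_ref_ctrl=[17.5, 17.4, 17.6])
print(f"Vim fold change vs. control: {fc:.2f}")  # > 1 indicates induction
```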
Immunofluorescence Staining for Confocal Microscopy

A549 cells (3.0 × 10^4 cells/well) were cultured on an eight-well chamber slide and incubated overnight. After treatment with GT (50 and 100 µg/mL) for 6 h, the cells were further incubated with TGF-β1 for 42 h. The cells were then fixed with 4% paraformaldehyde, permeabilized with 1% Triton X-100, and blocked with 1% normal goat serum. Texas red-conjugated Vim or Alexa Fluor 488-conjugated N-cad antibody treatment was conducted at room temperature, followed by DAPI staining. The expression of Vim and N-cad was observed using an LSM700 confocal microscope (Carl Zeiss, Oberkochen, Germany).

Statistical Analysis

All graphs were constructed using mean ± standard deviation values calculated from triplicate analyses. Data were analyzed via one-way analysis of variance followed by Tukey's post-hoc test using GraphPad Prism 8 software (GraphPad, Inc., San Diego, CA, USA).

Conclusions

Cancer metastasis, which is characterized by the spread of cancer cells throughout the body to create secondary tumors, is responsible for 90% of cancer-related deaths [34]. Lung cancer, which is the most prevalent type of cancer worldwide, has the highest mortality rate among all cancers [35]. Despite recent advances in lung cancer diagnosis and surgical techniques, and an increase in the efficiency of chemotherapy, the survival rate of patients with lung cancer has not improved substantially. Therefore, cancer treatment strategies using natural plant-derived ingredients have recently received considerable research attention. GT activates LPA receptors to regulate various intracellular activities that suppress inflammation and perform various functions in cells, including those related to cell proliferation, migration, and vascular development. In the present study, we evaluated the inhibitory effect of GT on TGF-β1-induced EM transition in A549 cells and identified the regulated signaling pathways. GT decreased protein expression (Vim, N-cad, and ZEB1) and mRNA levels (Vim, N-cad, ZEB1, and Twist) in A549 cells. In addition, GT substantially suppressed TGF-β1-regulated Smad-dependent signaling, such as phosphorylation of Smad2 and Smad3 (Figure 6). Previous studies have demonstrated the inhibitory effects of epigallocatechin gallate (EGCG), geraniin, and sanguiin H6, which are found in Camellia sinensis, Phyllanthus amarus, and Sanguisorbae Radix, respectively, on TGF-β1-induced EM transition in lung cancer by targeting the Smad signaling pathway [36-38]. Although GT, a compound extracted from ginseng, has shown promising anticancer potential against various types of cancer in vitro, predicting its efficacy in vivo remains a challenge. Therefore, further extensive research and clinical studies are necessary to provide robust evidence of its anti-cancer mechanisms and clinical applications [39,40]. In conclusion, GT warrants further investigation as a potential therapeutic agent for inhibiting early metastasis in lung cancer.

Conflicts of Interest: The authors declare no conflicts of interest.
2023-05-20T15:03:12.101Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "7b0747383ce6341d62925d38ee7dfe1f2de6c6cc", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/plants12102013", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0df31a23efa10bd5731e66539079becf358c2509", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
250592913
pes2o/s2orc
v3-fos-license
Central precocious puberty occurring in Bardet-Biedl syndrome-10 as a method for self-protection of human reproductive function: A case report

Hypogonadism and obesity are primary features of Bardet-Biedl syndrome (BBS). Obesity is also an associated factor of central precocious puberty (CPP). The present report describes the case of a girl (age, 7 years and 6 months) with clinical manifestations of precocious puberty, progressive obesity, postaxial polydactyly, retinal degeneration and intellectual disability. The patient visited the clinic for the first time due to early breast development and progressive obesity. After 8 months of follow-up, the bone age had advanced almost 3 years, and the gonadotropin-releasing hormone (GnRH) stimulation test results had changed from indicating pseudo-precocious puberty to CPP. Whole-exome gene sequencing showed that there were two heterozygous mutations in the BBS type 10 (BBS10) gene, chr12:76739816(c.1949del) and chr12:76740374(c.1391C>G). The final diagnosis was of BBS10 and CPP. In order to protect the reproductive capacity of the patient, GnRH analogs were used for CPP treatment. After 15 months of treatment and follow-up, a physical examination revealed Tanner breast stage 1. Ultrasonography showed that the uterus and ovaries had reduced to their prepubertal size. In conclusion, the present report describes a case of CPP that occurred in a young girl with BBS10. We hypothesize that this was a prelude to gonad dysplasia, acting as a method for the self-protection of human reproductive function. However, more clinical data and molecular biological evidence are required to confirm the etiology and mechanism of this case.

Introduction

Bardet-Biedl syndrome (BBS), a rare autosomal recessive genetic disease with clinical manifestations that can affect multiple systems throughout the body, was first reported by Georges Bardet (1). The incidence of BBS is nearly 1 in 125,000-160,000 in North America and Europe (2). Beales et al (3) analyzed the clinical symptoms of 109 families with BBS and summarized and revised the diagnostic criteria of the disease. The proposed diagnostic criteria for BBS are the presence of either four primary features or three primary and two secondary features, which differentiate this syndrome from other syndromes with overlapping phenotypes. The incidence rate of retinal degeneration/dystrophy in patients with BBS is >90% (4), and it mostly presents as night blindness at the beginning of the disease (3). Obesity is the second primary feature of BBS, with an incidence rate of 72-92% (3-5); it often starts in childhood, gradually worsens with age and can further develop into type 2 diabetes (4). The average body mass index (BMI) of adult men and adult women with BBS is 36.6 and 31.5 kg/m^2, respectively (normal range: 18-24 kg/m^2) (5). Postaxial polydactyly is the only symptom that can be observed at birth, with both upper and lower extremities simultaneously involved in 21% of patients, lower limb involvement in 21% of patients and upper limb involvement in 9% of patients (4,6). The incidence rate of hypogonadism in BBS populations is 59-98% (6). The indicators of hypogonadism range from late sexual maturity to hypogenitalism in males (7). Most individuals with this condition have a micropenis and/or low testicular volume at birth, and 9% have cryptorchidism (3).
Females with hypogonadism exhibit features such as hypoplastic fallopian tubes, uterus and ovaries, partial or complete vaginal atresia, a septate vagina, a duplex uterus, hydrocolpos, hydrometrocolpos and a persistent urogenital sinus (8). Renal abnormality is a major cause of morbidity and mortality in patients with BBS, with an incidence rate ranging between 20 and 53% (3), including cystic tubular disease and anatomical deformities. Intellectual disability occurs in 50-61% of patients with BBS (7), and a previous study has shown that the volume of the hippocampus is decreased in patients with BBS (9). In total, >500 cases of BBS have been reported globally; only 80 cases have been accurately diagnosed in China (10). To date, 21 BBS genes have been identified and mapped on various chromosomes (11), and ~80% of the clinically examined cases can be explained by the identified BBS genes (6). BBS has both a high degree of genetic heterogeneity and extensive clinical heterogeneity, and the association between genotype and phenotype is not significant (12). Novel interventions are developing at a rapid pace, including genetic therapeutics such as gene therapy, exon skipping therapy, nonsense suppression therapy and gene editing. Other non-genetic approaches, such as drug repurposing, targeted therapies and non-pharmacological interventions, are also ongoing. A major challenge in developing genetic therapies for BBS is the generation of a long-lasting therapy. A successful example of this is the retinal gene therapy (Luxturna), which has been developed for RPE65-associated Leber congenital amaurosis (13). The present report describes the case of a girl (age, 7 years and 6 months) who was initially diagnosed with BBS due to early breast development and obesity, and who gradually developed central precocious puberty (CPP) during follow-up. Whole-exome gene sequencing revealed new heterozygous mutations in the BBS type 10 (BBS10) gene, which, to the best of our knowledge, have not yet been reported.

Case report

Patient. The patient was a girl (age, 7 years and 6 months), with a height of 127.8 cm [+0.4 standard deviation score (SDS) girls, i.e. 0.4 SDs above average for girls this age] (14), a weight of 38.0 kg [+3.0 SDS girls (14)] and a BMI of 23.3 kg/m^2. The patient visited the Department of Children's Health Care, Fifth People's Hospital of Foshan City (Foshan, China) for the first time in July 2020 due to a rapid increase in body weight from the age of 6 years and breast development for a month prior to the visit. The patient was the only daughter in the family and was born at full-term after spontaneous labor (measuring 49 cm in length and 3.0 kg in weight, with a 43.5-cm head circumference at birth). At 1 year and 3 months of age, the parents of the patient noticed rapid weight increase, with a height of 76.0 cm [-1 SDS girls (14)], a weight of 13.0 kg [+3.1 SDS girls (14)] and a BMI of 22.5 kg/m^2 being reached. The patient experienced retinal degeneration in both eyes. Sixth finger/toe deformities in both hands and the left foot were treated surgically when she was 2 years old. At the first visit (age, 7 years and 6 months), motor functions and speech development were delayed, and the patient had attention problems and a poor academic performance. The father and mother were aged 44 and 42 years, respectively, and both were healthy; the family history revealed a non-consanguineous marriage and no notable genetic findings.
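The bracketed SDS values used throughout this report are age- and sex-specific z-scores: SDS = (measured value - reference mean) / reference SD. A minimal sketch, with a hypothetical reference mean and SD (the actual values come from the growth references cited as (14)):

```python
# Standard deviation score (z-score) as used in the bracketed notation.
# The reference mean and SD below are hypothetical placeholders for the
# age- and sex-specific values in the cited growth references (14).

def sds(value: float, ref_mean: float, ref_sd: float) -> float:
    return (value - ref_mean) / ref_sd

# Hypothetical reference for girls aged 7.5 years: height 125.6 +/- 5.4 cm.
print(f"height SDS: {sds(127.8, 125.6, 5.4):+.1f}")  # ~ +0.4, as reported
```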
Clinical findings. At the first visit to the Department of Children's Health Care, Fifth People's Hospital of Foshan City in July 2020, the patient was 7 years and 6 months of age. A physical examination indicated the following: blood pressure, 92/60 mmHg; height, 127.8 cm [+0.4 SDS girls (14)]; weight, 38.0 kg [+3.0 SDS girls (14)]; and BMI, 23.3 kg/m^2. The abdominal circumference was 73.5 cm. Facial features revealed a low hairline, crowding of the teeth and malocclusion, with no other abnormal facial features or limb malformations. The patient had small hands and feet, with surgical scars of 1.0-1.5 cm in length on the outside of the little fingers of both hands and the little toe of the left foot. The breasts were at Tanner stage 2 (15) of development and female genitalia were present. The vaginal opening was normal and located below the urethral opening. The patient's father and mother were 167 cm [-0.9 SDS (14)] and 163 cm [+0.4 SDS (14)] in height, respectively. Both parents had normal sexual characteristics and both were at Tanner stage 5. After this visit, the patient was referred to other hospitals; therefore, no other examination or treatment was provided. The patient returned for another visit in January 2021, at 8 years and 2 months of age. The patient had experienced breast development and intermittent pain for 3 months. The following measurements were recorded: height, 130.8 cm [+0.3 SDS girls (14)]; weight, 40.5 kg [+2.8 SDS girls (14)]; and BMI, 23.6 kg/m^2. The abdominal circumference was 75.8 cm and the patient was at Tanner breast stage 2.

Diagnostic assessment. In July 2020, at the time of the first visit, the patient's intelligence test score (Chinese-Wechsler Intelligence Scale for Children) was 75 (normal range: 85-115) (16). Routine blood and urine tests were within the normal ranges. Hormone and biochemical data were normal for a Tanner stage 2 female, and ovarian function was also normal (Table I). Ultrasonography indicated that the patient's uterus and ovaries were in the prepubertal stage, and there was no adrenal or celiac ectopic hyperplastic disease. Additionally, the bone age was 10 years (17), which was 2 years and 6 months in advance of chronological age. Magnetic resonance imaging of the pituitary gland was normal. The gonadotropin-releasing hormone (GnRH) stimulation test showed that the peaks of FSH and LH, which were 5.66 IU/l and 1.23 IU/l, respectively (reference range for CPP: peak LH >5.0 IU/l and LH/FSH >0.6), appeared at 60 min post-administration. The ratio of LH to FSH was 0.22. The peripheral blood lymphocyte karyotype was 46,XX (Fig. 1). According to all the aforementioned results, the patient was initially diagnosed with BBS and pseudo-precocious puberty. However, whole-exome gene sequencing was not performed at this stage, as it was too expensive for the parents. After 3 months of follow-up, the patient's breasts had returned to Tanner stage B1. In January 2021, at 8 years and 2 months old, the patient returned for another visit. This time, hormone and biochemical data were again normal for a Tanner stage 2 female, and ovarian and adrenal function were also normal (Table II). Furthermore, the bone age was 11 years (17), which was almost 3 years in advance of chronological age. Ultrasonography revealed the following results: uterine volume, 26x10x17 mm; endometrial thickness, 4 mm; left ovary volume, 21x13x15 mm or ~2.1 ml; and right ovary volume, 17x11x12 mm or ~1.2 ml. No early antral follicles were observed. The GnRH stimulation test showed that the peaks of FSH and LH, which were 9.29 and 6.31 IU/l, respectively, appeared 60 min after administration. The ratio of LH to FSH was 0.68.
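A short sketch of the stated reference criteria for a CPP-consistent GnRH stimulation test (peak LH > 5.0 IU/l and LH/FSH > 0.6), applied to the two tests reported above:

```python
# Decision rule from the reference range quoted in the text: a pubertal
# (CPP-consistent) response requires peak LH > 5.0 IU/L and a peak
# LH/FSH ratio > 0.6. Values are those reported for the two tests.

def cpp_consistent(peak_lh: float, peak_fsh: float) -> bool:
    return peak_lh > 5.0 and (peak_lh / peak_fsh) > 0.6

for label, lh, fsh in [("first visit", 1.23, 5.66),
                       ("second visit", 6.31, 9.29)]:
    verdict = "CPP-consistent" if cpp_consistent(lh, fsh) else "not CPP-consistent"
    print(f"{label}: LH/FSH = {lh / fsh:.2f}, {verdict}")
```

Run on the reported peaks, the first test fails both criteria (consistent with the initial diagnosis of pseudo-precocious puberty), while the second test meets both, matching the progression to CPP described in the text.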
Whole-exome gene sequencing was performed by the Guangzhou Daan Clinical Laboratory Center (Guangzhou, China). Genomic DNA from peripheral blood leukocytes derived from the proband was extracted using a QIAamp DNA Blood Mini kit (cat. no. 51185; Qiagen, GmbH).

Therapeutic intervention. To avoid premature depletion of gonadal function, the patient's parents agreed to the use of a GnRH analog (GnRHa) for CPP treatment. The initial dose was a 3.75-mg subcutaneous injection, and the maintenance dose was a subcutaneous injection of 50-100 µg/kg every 4 weeks. The patient's height and sexual development were fully assessed every 3 months. This treatment plan was to last until the age of 11 or until the precocious puberty was well controlled.

Follow-up and outcomes. The patient showed good compliance with and tolerance of the intervention. To date, no unfavorable or unanticipated events have been observed. After 3 months of treatment with GnRHa, height and weight had increased to 132.0 cm [+0.2 SDS girls (14)] and 42.0 kg [+2.8 SDS girls (14)], respectively. The Tanner breast stage was now 1. The GnRH stimulation test showed that the peaks of FSH and LH, which were 7.15 and 3.42 IU/l, respectively, appeared 60 min after administration. The ratio of LH to FSH was 0.48. Ultrasonography showed that the uterus and ovaries had reduced to their prepubertal size. In May 2022, the patient was 9 years and 6 months old, and after receiving 15 months of treatment with GnRHa, height and weight had increased to 138.6 cm [+0.26 SDS girls (14)] and 46.8 kg [+2.4 SDS girls (14)], respectively. The BMI was 24.4 kg/m^2 and the abdominal circumference was 76.8 cm. The patient was at Tanner breast stage 1. The GnRH stimulation test showed that the peaks of FSH and LH, which were 9.66 and 3.15 IU/l, respectively, appeared 90 min after administration. The ratio of LH to FSH was 0.33. The level of anti-Müllerian hormone was 3.9 ng/ml (reference range for a 0 to 10-year-old girl, 0.05-10.40 ng/ml). Ultrasonography showed that the uterus and ovaries remained their prepubertal size. The patient's bone age was 11 years and 6 months (17), which was 2 years in advance of chronological age.

Discussion

The final diagnosis in the present case was BBS10 and CPP. This is the first patient with BBS10 and CPP encountered at the Fifth People's Hospital of Foshan City. The first visit of the patient was due to a rapid increase in body weight, but in fact, weight and BMI measurements did not change significantly from the onset of pseudo-precocious puberty to CPP. A recent study showed that early-onset obesity enhanced paraventricular nucleus expression of serine palmitoyltransferase long chain base subunit 1 and advanced the maturation of the ovarian noradrenergic system (22). Although the age of thelarche decreased from 1977 to 2013 (23), it is questionable whether this type of obesity is sufficient to cause precocious puberty in BBS, which is characterized by hypogonadism. Patients with precocious puberty often have secondary sex characteristics mismatched with gonadal development. However, the Tanner stage of the breasts, and the uterus and ovaries, of the present patient markedly lagged behind the advancement of bone age. We hypothesized that this may be associated with the clinical features of BBS.
Clinical manifestations included retinal degeneration, obesity, postaxial polydactyly and intellectual disability, which were in line with the characteristics of BBS10, except for the absence of renal abnormality. No adrenal gland diseases or germ cell tumors were found, and there was no chronic steroid use. The final whole-exome gene sequencing revealed that the c.1949del and c.1391C>G heterozygous mutations associated with the patient's clinical phenotype were located in the BBS10 gene, which is not among the 59 genes covered by the ACMG reporting recommendations. A literature search revealed no clinical studies reporting these two mutations; therefore, no previously reported case can explain the occurrence of CPP here. U-shaped gonadotrophin levels are present from birth to puberty in normal males, while the same pattern, but at markedly higher levels, is present in anorchid boys, indicating that the gonads serve a role in the negative feedback of gonadotrophins in childhood (24). In addition, patients with Turner syndrome and triple X syndrome show premature activation of the GnRH pulse generator, even without signs of puberty (25). Both of these chromosomal aneuploidies feature increased gonadotropin levels as compensation for restricted ovarian function, to the extent that they can manifest as CPP, but they eventually progress to premature ovarian failure. Gonadal dysplasia may reduce the negative feedback of gonadotrophins, resulting in earlier activation of the hypothalamic-pituitary-gonadal axis (25). These theories seem to reasonably explain the occurrence of precocious puberty in hypergonadotropic hypogonadism. Another question concerns the manner in which precocious puberty occurs in hypogonadotropic hypogonadism. No pathogenic allelic variants of genes known to cause monogenic CPP (KISS1 receptor, KiSS-1 metastasis suppressor, makorin ring finger protein 3 and δ-like non-canonical Notch ligand 1) (26-28) were found in the present case. Perhaps hints can be taken from other biological studies; for example, adult Drosophila accelerate their mating behavior to defend against the threat of certain parasitic wasps (29). It is unclear whether CPP occurring in BBS10 is a prelude to gonad dysplasia (30) or a self-protection mechanism of human reproductive function; here, central precocious puberty occurred in a case of hypogonadotropic hypogonadism. Therefore, further clinical data and molecular biological evidence are required to confirm the etiology and mechanism of the present case.
2022-07-17T15:05:28.337Z
2022-07-15T00:00:00.000
{ "year": 2022, "sha1": "ff22e4875badb1ddafd2af1310db4c8cebf16042", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "063daaf21fcff68192526ece15d49b4c00183d3a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
236292432
pes2o/s2orc
v3-fos-license
Evaluation of Management of Pre-Eclamptic Cases Admitted to Elshatby Maternity University Hospital

Hypertensive diseases of pregnancy, mainly PET, comprise several conditions associated with increased maternal and fetal morbidity and mortality. The incidence among pregnant females is about 3%-10%. This study included 500 pregnant women with PET, categorized into mild and severe forms, after exclusion of cases with essential hypertension, renal disease and SLE. All pregnant women underwent history taking, abdominal examination (general and local), obstetric ultrasound, Doppler ultrasound and laboratory investigations; the treatment received (antihypertensive medications, magnesium sulphate (MgSO4) and steroids) was recorded, with follow-up until delivery. Regarding complications, PET can result in serious fetal/neonatal or maternal complications. At the beginning of this study we started with 380 mild cases and 120 severe cases; by the end of the study, 50 mild PET cases had been lost to follow-up and 70 mild PET cases had converted to the severe form. The aim of this study is to evaluate the management protocols for pre-eclamptic cases admitted to Elshatby Maternity Hospital in order to minimize maternal and fetal morbidity and mortality. RESULTS: The results showed that the incidence of PET is significant at both young and older ages, that the incidence of IUGR is higher in severe PET, and that the use of corticosteroids and MgSO4 was significantly higher in severe cases. CONCLUSION: It is important to diagnose PET by repeated blood pressure measurement. The use of corticosteroids and MgSO4 significantly improves maternal and fetal outcomes. The incidence of C.S. was higher, with a tendency toward prematurity in severe cases.

Epidemiology

Pregnancy hypertensive diseases, including PET, can have different presentations, which increase maternal and fetal morbidity and mortality rates. The incidence in pregnant cases is about 3%-10% [1] [2]. PET is one of the major causes of maternal and fetal deaths worldwide, although it is less common in developed countries. It is the main cause of maternal ICU admission due to high maternal morbidity [3] [4]. Approximately 12% to 25% of cases of fetal intrauterine growth restriction (IUGR) and small-for-gestational-age (SGA) infants, as well as 15% to 20% of all preterm births, are attributable to PET [3] [5].

Preeclampsia/Eclampsia

Preeclampsia is a main cause of maternal and fetal or neonatal mortality and morbidity [6]. The disorder complicates 5%-7% of all pregnancies [7]. It occurs after 20 weeks of pregnancy, most often near term, and can be superimposed on another hypertensive disorder. It is defined by the occurrence of hypertension plus proteinuria not previously existing. Although hypertension associated with proteinuria is the classical presentation of PET, some cases present with hypertension and multi-end-organ failure, especially of the liver and kidney [8] [9]. Eclampsia is the severest form of PET, with convulsions.
It is defined as new-onset grand mal seizures in a pregnant woman with severe preeclampsia. It can occur antepartum, intrapartum or postpartum. It is often preceded by warning signs, such as severe headache and hyperreflexia, but it can occur suddenly without warning signs or symptoms [9].

Role of Smoking in Preeclampsia

Smoking has been suggested to be a protective factor against preeclampsia. During pregnancy, smoking has been implicated in lowering the circulating levels of anti-angiogenic proteins such as soluble fms-like tyrosine kinase-1 (sFlt1) and soluble endoglin (sEng), and in raising the concentration of the pro-angiogenic placental growth factor protein (PGF) [10] [11]. Moreover, the protective property of smoking can be explained by the role of carbon monoxide produced during smoking. It acts by inhibiting placental production of anti-angiogenic proteins such as sFlt1 and by inhibiting placental necrosis and programmed cell death (apoptosis). Also, nitric oxide (NO), which is present in cigarette smoke, may cause vasodilation and hence protect against PET [10] [12]. However, evidence suggests that carbon monoxide may be the main critical mediator. In addition, carbon monoxide protects the vessels from many vascular insults such as ischemia and reperfusion injuries [10] [13].

Abnormal Remodeling of Uterine Vessels by Trophoblasts

While the main pathophysiology of PET is abnormal placentation, a clear understanding of the underlying pathology is still lacking. The means by which abnormal placentation results in systemic dysfunction is an area of ongoing research. In normal pregnancy, cytotrophoblasts invade the uterine arteries, the tunica media is destroyed and the maternal endothelium is replaced by 16-18 weeks of gestation. The fetal requirements for oxygen and nutrients are then met by low-resistance, high-capacity uterine vessels, which were previously high-resistance, low-capacity vessels [14]. In preeclampsia this process is impaired owing to abnormal cytotrophoblastic invasion, resulting in high-resistance uterine vessels with placental ischemia and multisystem dysfunction. Preeclampsia can therefore start with abnormal placentation leading to placental ischemia and, later on, the development of hypertension and proteinuria [15] [16]. However, the precise pathophysiology of preeclamptic toxemia remains unclear.

Haemodynamic Changes

Placental hypoxia and ischemia are considered key features in the pathogenesis of preeclampsia. Following ischemia, the placenta releases soluble products into the maternal serum which lead to endothelial dysfunction and an anti-angiogenic state. These products include sFlt1 and sEng, which have a major role in the inhibition of PGF and VEGF (vascular endothelial growth factor) by binding to these molecules in the maternal blood and in target tissues such as the kidneys. Moreover, increased angiotensin-1 receptor activity leads to a rise in endothelin-1 and a drop in nitric oxide, resulting in maternal hypertension, proteinuria, endothelial dysfunction and oxidative stress [17] [18].

Subjects

This observational study was conducted on 500 pregnant women admitted to Shatby Maternity University Hospital from July 2019 to July 2020. All pregnant cases received full information about the study and informed consent was taken from each of them. The inclusion criteria were as follows.

Mild Pre-Eclampsia

1) Systolic BP ≥ 140 mmHg and/or diastolic BP ≥ 90 mmHg on two occasions at least 6 hours apart. 2) Proteinuria of 300 mg in a 24-hour urine collection or >1+ on two random-sample urine dipsticks at least 6 hours apart.

Severe Pre-Eclampsia

1) Systolic BP ≥ 160 mmHg and/or diastolic BP > 110 mmHg on two occasions at least 6 hours apart. 2) Proteinuria of 5 g or higher in a 24-hour urine specimen, or 3+ or greater on two random urine samples collected at least 4 hours apart.
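A minimal classifier sketch reflecting the criteria above. It assumes that both the blood-pressure and the proteinuria criterion must be met within a severity grade, as the numbered lists suggest, and it simplifies away the requirement for two timed measurements:

```python
# Simplified severity classification following the study's stated criteria.
# Assumes a single representative BP reading and a 24-hour urine protein
# value; the actual protocol requires two measurements 6 hours apart and
# also accepts dipstick thresholds.

def classify_pet(systolic: int, diastolic: int, protein_g_24h: float) -> str:
    if (systolic >= 160 or diastolic > 110) and protein_g_24h >= 5.0:
        return "severe PET"
    if (systolic >= 140 or diastolic >= 90) and protein_g_24h >= 0.3:
        return "mild PET"
    return "criteria not met"

print(classify_pet(150, 95, 0.4))   # mild PET
print(classify_pet(170, 115, 6.2))  # severe PET
```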
Methods

For all cases, the following was done: 1) Complete history taking (obstetric, medical and surgical). 2) Complete general examination, especially general appearance, sites of edema and vital signs (blood pressure, pulse rate, respiratory rate). 3) Complete obstetric examination. 4) Ultrasound examination: number of gestations, fetal biometry, amount of liquor and placenta. 5) Doppler findings: uterine arteries, umbilical artery, MCA Doppler and ductus venosus Doppler. 6) Laboratory findings: a) routine investigations: CBC, creatinine, liver enzymes and uric acid; b) urinary dipsticks, 24-hour urine protein or albumin/creatinine ratio. 7) Recording system: a) treatment given (antihypertensive drugs, magnesium sulphate, steroids and others); b) duration from admission until delivery; c) maternal and fetal outcome; d) other needed interventions; e) follow-up until delivery.

Statistical analysis of the data [19]: Data were collected and analyzed using the IBM SPSS software package, version 20.0 (IBM Corp., Armonk, NY) [20].

Results

Description of the studied cases according to maternal age and gestational age at admission: Severe PET was significantly more common among both young and older women, and severe cases presented at an earlier gestational age (Table 1).

Description of the studied cases according to number of gestations and liquor: There was a significantly higher incidence of PET in singleton than in multiple pregnancies. A normal amount of liquor was significantly more frequent in mild cases, whereas oligohydramnios was significantly more frequent in severe cases (Table 2).

Descriptive analysis of the studied cases according to viability and placenta: The occurrence of IUFD (intrauterine fetal death) and abruptio placentae was significantly higher in severe PET (Table 3).

Description of the studied cases according to IUGR (intrauterine growth retardation): IUGR was significantly more frequent in severe cases (Table 4).

Description of the studied cases according to treatment received: Only severe cases received MgSO4 (190 cases): 120 cases that were severe from the start and 70 mild cases that converted to the severe form during follow-up. The number of severe cases that received steroids was significantly higher (Table 5).

Description of the studied cases according to mode of delivery: The proportion of cases delivered by cesarean section (C.S.) was significantly higher in both groups. In this analysis, 50 cases were excluded as they were lost to follow-up (Table 6).

Description of the studied cases according to maternal outcome: Among the 450 cases of PET with complete follow-up, there were 20 cases of eclampsia, 93 cases of HELLP syndrome, 9 cases with neurological complications, 8 cases that developed DIC, 68 cases admitted to the ICU, 12 cases with pulmonary edema, 10 cases with AKI and 7 cases of maternal mortality (Table 7).

Descriptive analysis of the studied cases according to NICU admission: There was a significantly higher incidence of NICU admission among severe PET cases (Table 8).

Distribution of the studied cases according to severity: By the end of this study, 50 cases of mild PET had been lost to follow-up and 70 cases of mild PET had converted to the severe form (Table 9).
Table 9. Distribution of the studied cases according to severity (n = 500).

Discussion

In agreement with our study, Siveska and Jasovic [21] conducted a prospective study of 400 pregnant women, divided into three groups: 300 normotensive pregnancies (controls), 67 pregnancies with mild PET and 33 pregnancies with severe PET. They reported that severe cases occurred at an earlier gestational age and were more common among primigravidae, but did not find a significant difference in maternal age between the groups. In contrast to our study, Sharma et al. [22] studied the incidence of abruptio placentae among PET cases complicated by SGA in a retrospective cohort study of 8927 singleton pregnancies; they found that women with preeclampsia and SGA infants were more likely to experience abruption than preeclamptic women with appropriate-for-gestational-age (AGA) infants (5.3% versus 3.0%), while we had a higher overall incidence of abruptio placentae of about 16%. This can be attributed to a lack of good antenatal care and delay in seeking medical advice. In contrast to our study, Weiler et al. [23], in a retrospective cohort study of 176 cases of severe pre-eclampsia, showed that 39% (n = 68) were complicated by fetal growth restriction (IUGR), whereas in our study the percentage was 79.2%; this higher percentage of IUGR cases may be attributable to our larger sample size. Duley et al. [24] conducted a study on 397 women and reported that magnesium sulphate was associated with fewer maternal deaths and was better at preventing further seizures than a lytic agents mixture (usually promethazine, chlorpromazine and pethidine), which agrees with our study, in which MgSO4 was given to all severe cases. Haram et al. [25], using clinical reports and reviews published between 2000 and 2008 and the PubMed and Cochrane databases, reported that in cases of HELLP syndrome delivery is mandatory to guard against maternal and fetal morbidity and mortality, with vaginal delivery preferred over C.S. if the cervix is favourable. In contrast, in our study the C.S. rate was significantly higher in both groups, at about 418 cases; only the 32 cases that occurred before the age of medicolegal viability were candidates for vaginal delivery where possible. At 24-34 weeks of gestation, most studies prefer a single course of corticosteroid therapy for fetal lung maturity, preferring dexamethasone; this agrees with our study, in which only dexamethasone was used for lung maturity. Ramos Amorim et al. [26] found that of 325 women with severe PET, only 55 (16.9%) had one or more complications, and there were no maternal deaths. In contrast, in our study there were 20 cases of eclampsia (4.4%), 93 cases of HELLP syndrome (20.6%), 9 cases with neurological complications (2.0%), 8 cases that developed DIC (1.7%), 68 cases admitted to the ICU (15.1%), 12 cases with pulmonary edema (2.6%), 10 cases with AKI (2.2%) and 7 cases of maternal mortality (1.6%). This may be due to the large number of cases in our study and also to the fact that all severe and/or complicated cases were referred to our hospital, possibly at a late stage.
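The maternal complication rates quoted above follow directly from the raw counts among the 450 cases with complete follow-up; a short sketch recomputing them (small 0.1% discrepancies against the quoted figures arise where the source appears to truncate rather than round):

```python
# Recomputing the maternal complication rates from the raw counts among
# the 450 cases with complete follow-up. Values are printed to one decimal
# place with standard rounding; the source text appears to truncate a few
# of its percentages instead.

counts = {"eclampsia": 20, "HELLP syndrome": 93, "neurological": 9,
          "DIC": 8, "ICU admission": 68, "pulmonary edema": 12,
          "AKI": 10, "maternal mortality": 7}
n = 450
for name, k in counts.items():
    print(f"{name}: {k}/{n} = {100 * k / n:.1f}%")
```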
Aabidha et al. [27], in a descriptive study conducted in a rural secondary referral center from August 2010 to July 2011, reported that 93 of 1900 women were found to have PET and that the most common neonatal complications were prematurity (23.65%), intrauterine growth restriction (9.67%) and intrauterine fetal demise (8.6%). In contrast, in our study NICU admission and prematurity occurred in 32.6%, intrauterine growth restriction in 19% and intrauterine fetal demise in 13.4% of cases. This higher incidence of fetal complications may be due to a lack of good antenatal follow-up of PET cases.

Conclusions

1) Regarding diagnosis, a single blood pressure reading is not reliable, as cases may improve or even be discharged. 2) Recent studies have demonstrated minimal to no influence of the severity of proteinuria on pregnancy outcome in preeclampsia; management of fetal growth restriction (FGR) is similar in pregnant women with or without preeclampsia [12] [13]. 3) According to ACOG guidelines, methyldopa, labetalol, beta blockers (other than atenolol) and slow-release nifedipine are considered appropriate treatments. 4) Regarding corticosteroids, ACOG recommends that women with severe preeclampsia receiving expectant management at 34 0/7 weeks of gestation or less should receive corticosteroids for fetal lung maturity. The same was done in our study and was associated with good neonatal outcomes. 5) Regarding HELLP syndrome, the same approach to management was adopted. 6) Eclampsia should be treated with intravenous magnesium sulfate as a first-line agent. A loading dose of 4 g should be given by infusion pump over 5-10 minutes, followed by an infusion of 1 g/h maintained for 24 hours after the last seizure. In our study, magnesium sulfate was likewise given only to severe cases, not to mild cases. 7) When delivery is indicated, vaginal delivery can often be accomplished, but this is less likely with decreasing gestational age. 8) In our study, the complications of C.S. itself were wound infection in 53 cases and parietal wall hematoma in 10 cases, only 2 of which required reoperation.
2021-07-26T00:06:27.474Z
2021-06-04T00:00:00.000
{ "year": 2021, "sha1": "0063e851399928c5530926db5e108404bae0dff4", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=109798", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9f9c158015dfb5da4b7b1f020d935fc52a3dea7b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
110596807
pes2o/s2orc
v3-fos-license
Multifunctional thick-film structures based on spinel ceramics for environment sensors

Temperature-sensitive thick films based on spinel-type NiMn2O4-CuMn2O4-MnCo2O4 manganites with p- and p+-types of electrical conductivity, and their multilayer p+-p structures, were studied. These thick-film elements possess good electrophysical characteristics before and after a long-term ageing test at 170 °C. It is shown that degradation processes connected with the diffusion of metallic Ag into film grain boundaries occur in one-layer p- and p+-conductive films. Some of the p+-p structures showed high stability, the relative electrical drift being no more than 1%.

Introduction

Spinel ceramics based on mixed transition-metal manganites and/or magnesium aluminates are known to be widely used for temperature measurement, in-rush current limiting, liquid and gas sensing, flow rate monitoring and indication, etc. [1-5]. But their sensing functionality is significantly restricted by their bulk performance, which as a rule allows no more than one kind of application. At the present time, a number of important problems connected with hybrid microelectronic circuits, multilayer ceramic circuits, temperature sensors, thermal stabilizers, etc. require solutions in which electrical components are realized not in bulk form (i.e., sintered as typical bulk ceramics) but as thick films, which are compatible with a group-technology route [5]. The well-known advantages of screen-printing technology, revealed in high reproducibility, flexibility and the attainment of high reliability by glass coating, as well as excellent accuracy, yield and interchangeability through functional trimming, are expected to be very attractive for new-generation sensing electronics [6]. No less important is the miniaturization of the developed thick-film elements and systems, realized in a variety of possible geometrical configurations. Thus, the development of highly reliable nanostructured thick films and their multilayers based on spinel-type compounds for multifunctional environment sensors, operating simultaneously as negative temperature coefficient thermistors and integrated temperature-humidity sensors, is a very important task [6-8]. To fabricate integrated temperature-humidity thick-film sensors, only two principal approaches have been utilized, grounded on the temperature dependence of electrical resistance for humidity-sensitive thick films and/or on the humidity dependence of electrical resistance for temperature-sensitive thick films. The first approach was typically applied to perovskite-type thick films such as BaTiO3 [9]. Within the second approach, grounded on spinel-type ceramics of the mixed Mn-Co-Ni system with RuO2 additives, it was shown that temperature-sensitive elements in thick-film performance additionally attain good humidity sensitivity [10]. Despite improved long-term stability and temperature-sensitive properties, with a characteristic material B constant at the level of 3000 K, such thick-film elements possess only small humidity sensitivity. This disadvantage occurs because of the relatively poor intrinsic pore topology of semiconducting mixed transition-metal manganites, in contrast to dielectric aluminates with the same spinel-type structure. Thick-film performance of mixed spinel-type manganites within the NiMn2O4-CuMn2O4-MnCo2O4 concentration triangle has a number of essential advantages not available to other ceramic composites.
Within the above system, fine-grained semiconductor materials can be prepared, including p+-conductive Cu0.1Ni0.1Mn1.2Co1.6O4 and p-conductive Cu0.1Ni0.8Mn1.9Co0.2O4. Thus, there is a real possibility of preparing multilayer thick-film spinel-type structures for principally new device applications, such as thermoelectric transformers in power supplies, high-accuracy temperature sensors and compensators exploiting current-voltage dependence, temperature-difference detecting elements utilizing thermoelectromotive force, etc. In addition, the prepared multilayer thick-film structures involving semiconducting NiMn2O4-CuMn2O4-MnCo2O4 and dielectric MgAl2O4 spinels can potentially be used as simultaneous thermistors and integrated temperature-humidity sensors with an extremely rich range of exploitation properties. The aim of this work is the development of highly reliable temperature- and humidity-sensitive thick films and multilayered structures based on spinel-type ceramics for multifunctional application in integrated temperature/humidity sensors.

Experimental

Bulk temperature-sensitive ceramics were prepared by a conventional ceramic processing route using reagent-grade copper carbonate hydroxide and nickel (cobalt) carbonate hydroxide hydrates [11]. The chemical compositions of these ceramics and the main points of their sintering schedules are presented in Table 1. The bulk MgAl2O4 ceramics were prepared via a conventional sintering route, as described in more detail elsewhere [12]. The pellets were sintered in a special regime with a maximum temperature of 1300 °C for 5 h. Temperature-sensitive pastes were prepared from the Cu0.1Ni0.1Mn1.2Co1.6O4 and Cu0.1Ni0.8Mn1.9Co0.2O4 ceramics. The prepared pastes were printed on alumina substrates (Rubalit 708 S) with Ag-Pt electrodes using a manual screen-printing device equipped with a steel screen. The thick films were then fired in a PEO-601-084 furnace. To prepare the multifunctional temperature/humidity-sensitive elements, we used a typical design corresponding to the scheme shown in Fig. 1. In the case under consideration, the main advantages proper to bulk transition-metal manganite ceramics (a wide range of electrical resistance with high temperature sensitivity) and to humidity-sensitive MgAl2O4 ceramics were transferred into thick-film multilayers, resulting in a principally new and more extended functionality. The spinel-type Cu0.1Ni0.1Mn1.2Co1.6O4 compound with p+-type electrical conductivity, the Cu0.1Ni0.8Mn1.9Co0.2O4 compound with p-type electrical conductivity and the dielectric magnesium aluminate d-MgAl2O4 were designed as overall integrated p+-p and p-d structures, as shown in the topological scheme (Fig. 1).

Figure 1. Topological scheme of integrated humidity/temperature sensing functionality in the developed spinel-type thick films.

The topology of the obtained thick films was investigated using a Rodenstock RM600 3D profilograph. The electrical resistance of the temperature-sensitive thick films was measured using an HPS 222 temperature chamber. The temperature constant B for these thick films was calculated according to the equation

B = ln(R1/R2) / (1/T1 - 1/T2), (1)

where R1 and R2 are the corresponding resistances at T1 = 298 K (25 °C) and T2 = 358 K (85 °C), respectively.
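A worked example of Eq. (1), using hypothetical resistances chosen to reproduce a B value close to the ~3589 K reported below for the p-conductive films:

```python
import math

# Worked example of the thermistor B-constant, Eq. (1), with T1 = 298 K
# (25 C) and T2 = 358 K (85 C) as stated in the text. The two resistance
# values are hypothetical, picked so that B comes out near the reported
# p-conductive film value of 3589 K.

T1, T2 = 298.0, 358.0  # K

def b_constant(r1_ohm: float, r2_ohm: float) -> float:
    """B = ln(R1/R2) / (1/T1 - 1/T2), per Eq. (1)."""
    return math.log(r1_ohm / r2_ohm) / (1.0 / T1 - 1.0 / T2)

print(f"B = {b_constant(100_000.0, 13_286.0):.0f} K")  # ~3589 K
```

Note that for a negative temperature coefficient thermistor R1 > R2, so both the numerator and denominator are positive and B is positive.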
The current-voltage (I-V) characteristics were measured at 50, 25 and 5 °C (±0.1 °C) using a precise digital multimeter. The humidity sensitivity of the thick-film elements based on MgAl2O4 ceramics was evaluated from the dependence of electrical resistance on relative humidity (RH). These measurements were performed at 20 °C and a frequency of 1000 Hz, in the direction of increasing RH and in the reverse direction. A long-term ageing test at 170 °C was carried out on the p-, p+-type thick films and p+-p structures to study their reliability. The relative change of electrical resistance (ΔR/R0) was used as the controlled parameter (R0 is the initial value of the electrical resistance; ΔR is the absolute change of electrical resistance caused by the ageing test).

Results and Discussion

According to the obtained 3D-profilograph data, the thicknesses of the temperature-sensitive p+-conductive thick films based on Cu0.1Ni0.1Mn1.2Co1.6O4 were determined (Fig. 2). The temperature-sensitive p+/p-conductive thick films and their p+-p structures based on spinel-type NiMn2O4-CuMn2O4-MnCo2O4 ceramics possess good linear electrophysical characteristics in the region from 298 to 358 K on a semi-logarithmic scale (Fig. 3). The values of the B constant were 3589, 3630 and 3615 K for the p-conductive films, the p+-conductive films and the p+-p structure, respectively. As was shown earlier [12], bulk humidity-sensitive ceramics are characterized by hysteresis in desorption cycles, which is connected with peculiarities of the pore-grain structure and the quantity of additional phases localized near the grain boundaries. These shortcomings were overcome in thick films based on MgAl2O4 ceramics by using an optimal amount of Bi2O3, organic solvent, organic binder and pine oil. As a result, the studied d-type MgAl2O4 thick films possess a linear dependence of electrical resistance on relative humidity, without hysteresis, in the range of RH ≈ 40-99% (see Fig. 4). It is shown that the electrical resistance of the p- and p+-conductive thick films gradually decreases in the course of the degradation test (see Fig. 5). This effect is supposed to be connected with thermally induced compression of the thick films and diffusion of metallic Ag into the grain boundaries. The value of ΔR/R0 reaches -(2-5)%. However, the p+-p thick-film structures show high reliability after the long-term ageing test at 170 °C, the relative electrical drift being no more than 1%. The values of the temperature constant B decrease by 20-50 K after the degradation test for both the p/p+-conductive thick films and the p+-p structures, while the activation energy of electrical conductivity does not change significantly, remaining at the level of 0.31 eV. Typical current-voltage characteristics of the p-conductive thick films are presented in Fig. 6.

Conclusions

Separate temperature- and humidity-sensitive thick-film elements based on spinel-type NiMn2O4-CuMn2O4-MnCo2O4 manganites with p+/p-types of electrical conductivity, their p+-p structures and dielectric magnesium aluminate MgAl2O4 were prepared using ecological glass constituents. These thick films can be used to produce multifunctional, highly reliable integrated temperature/humidity sensors for effective environment monitoring and control.
2019-04-13T13:04:54.131Z
2011-04-01T00:00:00.000
{ "year": 2011, "sha1": "6d10c26b6ad870f87e8c12cdf2e92d82c0140cb0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/289/1/012011", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5f8c7ca6a6d8efff0fc9396eb8f2f83c1d73c49b", "s2fieldsofstudy": [ "Materials Science", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
14204123
pes2o/s2orc
v3-fos-license
Extra-Abdominal Desmoid Tumors Associated with Familial Adenomatous Polyposis

Extra-abdominal desmoid tumors are a significant cause of morbidity in patients with familial adenomatous polyposis syndrome. Understanding of the basic biology and natural history of these tumors has increased substantially over the past decade. Accordingly, the medical and surgical management of desmoid tumors has also evolved. This paper analyzes recent evidence pertaining to the epidemiology, molecular biology, histopathology, screening, and treatment of extra-abdominal desmoid tumors associated with familial adenomatous polyposis syndrome.

Introduction

Desmoid tumors (DTs), also known as aggressive fibromatosis, are fibroblastic neoplasms which are often locally aggressive but lack metastatic potential. They may occur sporadically or in association with familial adenomatous polyposis (FAP) syndrome. Among individuals with FAP, desmoids most frequently occur in intra-abdominal and abdominal wall locations, with most arising from the peritoneum. These abdominal desmoids range in severity from indolent, asymptomatic lesions to highly invasive, sometimes fatal tumors. Although less common than abdominal desmoids and very rarely fatal, extra-abdominal desmoids are also a significant cause of morbidity in this population. This paper will review recent developments in the diagnosis, screening, treatment, and prognosis of FAP-associated extra-abdominal DTs.

Epidemiology of FAP-Associated Desmoid Tumors

The overall incidence of DTs has frequently been quoted at 2-4 per million people per year [1,2]. This estimate is derived from a 1986 Finnish study which used the pathologic records of several regional hospitals and their known catchment area populations to calculate an incidence figure [3]. Recently, the Dutch national pathology database was analyzed, and 519 total desmoid cases in patients over the age of ten were identified from 1999 to 2009. There were 480 sporadic DTs and 39 FAP-DTs. The annual incidence was 3.7 per million overall [4], consistent with the earlier Finnish study. The same nationwide study from The Netherlands identified 1400 patients over the age of ten with FAP during the 1999 to 2009 period. FAP-associated DTs (FAP-DTs) made up 7.5% of all DTs, and the relative risk of an FAP patient developing a DT was over 800-fold higher than that of the general population [4]. The Dutch study was limited by its use of pathologic specimens, as many DTs may be identified based upon history, physical exam, and imaging but never biopsied or surgically excised, especially in the FAP cohort. Additionally, some individuals with sporadic DTs may have had as yet undiagnosed FAP. Therefore, FAP-DTs likely constitute more than 7.5% of all DTs. A 1994 study of the Johns Hopkins Polyposis Registry found that 10% (83/825) of FAP patients had desmoids, and their relative risk of DTs was 852-fold higher than that of the general population [5]. A study of Mayo Clinic data from 1976 to 1999 identified 447 desmoid patients, of whom 70 (15.7%) had FAP [6]. In all of the previously mentioned studies, intra-abdominal and abdominal wall desmoids predominated in the FAP cohorts, whereas extra-abdominal desmoids were most common among sporadic cases. The sites of extra-abdominal DTs (head and neck, trunk exclusive of the abdominal wall, and extremity) do not appear to vary between the sporadic and FAP-associated desmoid cohorts.
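The >800-fold figure can be checked with a back-of-envelope calculation from the Dutch data, assuming the 1400 FAP patients were each observed for the full ten-year window and that the sporadic incidence is the overall rate scaled by the sporadic share of cases (both are assumptions, not statements from the study):

```python
# Back-of-envelope relative-risk check using the Dutch figures quoted
# above. Assumes ~10 person-years of observation per FAP patient and
# approximates the non-FAP incidence as the overall 3.7 per million scaled
# by the sporadic fraction of cases (480/519).

fap_cases, fap_patients, years = 39, 1400, 10
fap_incidence = fap_cases / (fap_patients * years)  # per person-year
sporadic_incidence = 3.7e-6 * (480 / 519)           # per person-year
print(f"FAP incidence: {fap_incidence * 1e6:.0f} per million per year")
print(f"relative risk: ~{fap_incidence / sporadic_incidence:.0f}-fold")
```

Under these assumptions the estimate lands in the low 800s, consistent with the >800-fold relative risk reported in the study.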
Other consistent demographic findings include a younger age at DT presentation among FAP patients, a history of abdominal surgery in abdominal DTs, and a reduced female predominance of DTs among individuals with FAP [4-7]. Although females develop DTs more frequently than males in both FAP- and non-FAP-associated disease, the sex predominance is smaller in the FAP cohort. Table 1 summarizes the known risk factors for DT development in FAP patients based upon the previously cited studies.

Desmoid Histology, Cytogenetics, and Immunohistochemistry

Desmoids usually present grossly as firm, white tumors with a coarse, trabeculated surface. They may appear to be scar-like and encapsulated, which belies their infiltrative behavior at the microscopic level. Histologic analysis reveals bland spindle-shaped cells in a collagenous stroma containing blood vessels [8]. The cells lack atypia, but the mitotic rate is variable [8]. Sporadic and FAP-DTs are indistinguishable at the gross and microscopic levels. Cytogenetic analyses of DTs (both sporadic and FAP-associated) have shown trisomies of chromosomes 8 and 20 to be recurrent abnormalities [9]. Trisomy 8 was found to correlate with recurrence in two separate studies [9,10]. Immunohistological staining of DTs is positive for vimentin and variably positive for muscle and smooth muscle markers [8]. A study of 116 DT samples (both sporadic and FAP specimens) found only 7 estrogen receptor-beta-positive tumors, one C-KIT-positive tumor, and no HER2- or estrogen receptor-alpha-positive tumors [11]. A subsequent study of 40 desmoids using different immunohistological techniques found some degree of estrogen receptor beta expression in all samples, whereas estrogen receptor alpha expression was absent in all samples [12].

Desmoids and the APC Gene Pathway

Mutation of the tumor suppressor Adenomatous Polyposis Coli (APC) gene was identified as the cause of FAP in 1991 by two different groups working independently [13-16]. The APC gene is located on the long arm of chromosome 5 (5q21); its product has been implicated in a wide variety of cellular processes including cell migration, cell adhesion, chromosome segregation, spindle assembly, apoptosis, and neuronal differentiation [17]. Despite these many roles, the classical function of APC in neoplasia is inhibition of the WNT signaling pathway. WNTs are a family of secreted glycoproteins which act as short-range ligands in cell signaling [7]. Binding of WNT at the cell surface upregulates the accumulation of beta-catenin in the cytoplasm, and the beta-catenin molecules subsequently move to the nucleus and activate WNT pathway transcription factors [18]. The APC gene product, located in the cytoplasm, forms a molecular complex with Glycogen Synthase Kinase 3 (GSK3) and Axin which in turn binds beta-catenin, leading to its subsequent degradation [19]. The APC pathway is summarized in Figure 1. Both sporadic and FAP-DTs have been analyzed for APC and beta-catenin mutations. As expected, most FAP-DTs show a second somatic mutation of the APC gene [20]. However, the secondary somatic mutations of FAP-DTs have been shown to differ consistently from the secondary somatic mutations in the colonic polyps of the same individuals [21]. APC mutations are infrequently found in sporadic DTs [22], which more frequently demonstrate beta-catenin mutations [23,24].
Genotype-Phenotype Correlations in FAP-Associated Desmoids

The correlation of genotype with phenotype in FAP-DTs may permit more efficient screening strategies, improved treatment regimens, and ultimately targeted therapy of the disease. A variant of FAP, termed hereditary desmoid disease, was first described by Eccles et al. in 1996 [25]. They reported 100% penetrance of desmoid tumors in a three-generation kindred with a mutation in the extreme 3′ end of the APC gene [25]. DTs in this kindred had both extra- and intra-abdominal involvement. Subsequently, Couture et al. reported a large French-Canadian kindred with a similar phenotype and an extreme 3′ mutation of the APC gene [26]. This kindred had extensive desmoid disease and attenuated colonic polyp formation, in contrast to classic FAP. These authors further demonstrated that desmoid tissue from a member of the kindred had elevated beta-catenin levels [26]. Prior studies of the secondary somatic mutations which occur in FAP colon polyps revealed that the type and location of the somatic mutation were nonrandom and at least partially determined by the location of the germ-line mutation [21,27]. The APC gene product contains seven 20-amino-acid beta-catenin degradation repeats (AARs). These repeat segments permit binding of beta-catenin, leading to its ultimate degradation. The "just right" model of FAP tumorigenesis proposes that there is an ideal level of beta-catenin binding suitable for polyp progression to colon cancer, and selective pressure results in nonrandom selection of somatic mutations with the appropriate number of AARs [27]. Analysis of FAP-DTs by Latchford et al. revealed that 87% (26/30) of tumors had one allele with no AARs and preferentially retained a total of two AARs (57%, 17/30) [28]. These authors suggested that specific levels of beta-catenin activity are required by the different tumor types, with desmoids preferentially requiring two AAR segments. A large Japanese study (86 colorectal tumors, 40 extracolonic tumors) identified similar associations between AARs and phenotype. With respect to FAP-DTs, 5 of 6 were found to have two AARs in the Japanese study [29]. Development of desmoids among individuals with FAP has been correlated with specific mutations. Early studies with small numbers of FAP-DTs suggested that mutations in these patients tended to occur at the 3′ end of the gene [30,31]. A 2001 study from the Hereditary Colorectal Tumor Registry in Milan analyzed 809 FAP patients, of which 107 (11.9%) developed DTs, including 59 extra-abdominal cases [32]. These authors found a 12-fold increased risk of DT when the APC mutation occurred beyond codon 1444 as compared with upstream mutations [32]. In a multivariate analysis, these authors determined that genotype was the strongest predictor of desmoid development [32]. A 2007 review of the world literature on APC genotype/phenotype correlation identified ten articles with data on FAP-DTs. The reviewers concluded that patients with APC mutations downstream of codon 1400 were at increased risk of desmoid development [33]. More recently, genotype data have been incorporated into a desmoid risk scoring system for FAP patients. Female sex, presence of other extracolonic manifestations, a relative with a DT, and genotype were the risk factors considered [34]. The authors utilized the risks identified using this system to guide surgical management.
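The cited scoring system combines the four named factors into a single risk estimate. The sketch below only illustrates the shape of such a score; the point values are hypothetical placeholders, not the published weights from [34].

```python
# Illustrative sketch only: the scoring system in [34] weights these four
# factors, but the point values below are hypothetical placeholders.
def desmoid_risk_score(female: bool, extracolonic: bool,
                       relative_with_dt: bool, mutation_3prime: bool) -> int:
    score = 0
    score += 1 if female else 0              # female sex
    score += 1 if extracolonic else 0        # other extracolonic manifestations
    score += 2 if relative_with_dt else 0    # relative with a DT
    score += 3 if mutation_3prime else 0     # APC mutation 3' of ~codon 1400
    return score

print(desmoid_risk_score(True, False, True, True))  # higher score -> higher risk
```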
They advocated use of antiadhesion material, sulindac prophylaxis, and minimally invasive techniques in patients at increased risk of desmoid formation [34].

Gene Expression Profiles of FAP-Associated Desmoids

APC is a large protein with numerous binding sites and multiple putative functions. Gene expression profiling is one strategy which has been used to better understand the complex downstream effects of APC mutations. A critical factor in gene expression profiling is determination of which tissues should be compared, because genes can only be up- or downregulated with respect to a reference specimen. With reference to DTs, numerous tissue samples have been studied, including FAP-DTs, sporadic DTs, banked reference fibrous tissue, fibrous tissue from the same patient, adenomatous tissue from the same FAP patient, and many other banked histologic specimens. The technical aspects of each study are beyond the scope of this paper, but some notable findings merit discussion. The first desmoid gene expression profile study (2004) compared 12 sporadic DTs with banked normal fibrous tissue. Notably, the study identified two distinct groups within the 12 patients based upon gene expression, but no obvious clinical correlations were evident [35]. A 2006 study analyzed four tumors (2 with APC mutations, 2 with beta-catenin mutations) using normal fibrous tissue from the same patients as control. Sixty-nine differentially expressed genes were identified, of which 33 were upregulated and 36 were downregulated [36]. Interestingly, no differences in the profiles of the APC and beta-catenin tissues were identified. The authors additionally confirmed consistent downregulation of insulin-like growth factor-binding protein 6 using reverse transcriptase PCR and Northern blot assays [36]. A study comparing desmoid samples (both sporadic and FAP-associated) with nodular fasciitis was performed using 33 DTs and 11 nodular fasciitis specimens. Hierarchical clustering revealed distinct gene expression signatures between the two groups [37]. The authors concluded that this technology may be useful in diagnostically challenging cases. Gene expression profiling may also be of prognostic value, as demonstrated by a 2007 study which found that elevated beta-catenin and p53 expression correlated with local recurrence in a retrospective analysis of 37 DTs (sporadic versus FAP not specified) [38]. A recent study reported the results of array comparative genomic hybridization analysis of 196 DTs (only 5% were FAP-DTs) [39]. Four recurrent chromosomal abnormalities were identified: loss of 6q, loss of 5q, gain of 20q, and gain of chromosome 8 [39]. Loss of 5q is likely explained by APC localization to this region. The other gains and losses suggest avenues of future investigation. A 2011 study compared sporadic and FAP-DTs using array comparative genomic hybridization analysis [40]. The authors analyzed 17 FAP-DTs and 38 sporadic DTs. They found more copy number abnormalities among the FAP-DTs than the sporadic DTs. Loss of 6q was common to both sporadic and FAP-DTs, and the authors believed that further study of genes in this region may help elucidate desmoid tumorigenesis [40]. They noted that several known or putative tumor suppressor genes including ANKRD6, BACH2, MAP3K7/TAK1, EPHA7, and NLBP/KIAA0776 reside in this region. As yet, none of these putative tumor suppressors have been correlated with the downregulated genes identified in the previously discussed gene expression profile studies.
Another application of gene expression profiling is analysis of treatment response. A 2010 report compared a FAP-DT human cell line with a sporadic DT human cell line using microarray analysis [41]. Doxorubicin-treated cells from each line were compared with each other and their untreated controls. Separate in vitro assays had already shown that the FAP-DT cell line demonstrated greater doxorubicin resistance than the sporadic DT cell line [41]. The gene expression profiles of the treated cells differed in that the pro-survival genes netrin 1 and tumor necrosis factor receptor superfamily member 10c were upregulated in the treated FAP-DT line, and the proapoptotic gene forkhead box L2 was upregulated in the treated sporadic DT line [41]. Although this study was preliminary and in vitro, gene expression profiling may ultimately be applicable to prediction of response to treatment in humans.

Desmoid Cell of Origin

As recently as 2000, debate existed as to whether desmoids were neoplastic or reactive. A 2000 study by Middleton et al. demonstrated that FAP-DTs were monoclonal [42]. The authors derived a clonality ratio by assessing X chromosome inactivation in desmoid samples from 12 female patients. Although it is now generally agreed that desmoids, both sporadic and FAP-associated, are neoplastic, the cell of origin has yet to be identified. Recent animal studies suggest that mesenchymal stem cells (MSCs) are likely candidates and at minimum contribute to tumor development. Wu et al. recently demonstrated that MSCs and desmoids had similar gene expression profiles, and mice deficient in MSCs but prone to desmoids (mice with an APC mutation and deficient MSC production) developed fewer desmoid tumors while colonic tumor rates were unaffected [43]. In fact, desmoid development was directly proportional to the number of MSCs present. Additionally, MSCs with the APC mutation from heterozygote APC wt/1638N mice produced DTs when transplanted to immunodeficient mice, but MSCs without the mutation did not. Furthermore, they found that MSCs from mice with inducible expression of beta-catenin (Catnb tm2kem mice) could also induce desmoid-like tumors when transplanted to immunodeficient mice. Finally, they showed that these tumors were clonally derived from the donor MSCs with use of a green fluorescent protein tag [43]. A 2012 study has further defined the role of mesenchymal stem cells in FAP-DTs using human tissue. Carothers et al. analyzed 16 human desmoid specimens and, using immunohistochemistry, found that desmoid tissue expressed MSC markers but surrounding normal tissue did not [44]. They next developed a primary desmoid cell line from the human desmoid tissue. These cells had an immunohistochemical profile consistent with MSC, and the cells were able to differentiate into chondrocytes, osteocytes, and adipocytes, confirming that they are MSCs [44]. These human desmoid-derived MSCs were found to have elevated beta-catenin in their nuclei (similar to desmoid tissue) and demonstrated upregulation of the Notch and Hedgehog pathways [44]. The aforementioned studies do not definitively prove that MSCs are the cell of origin in FAP-DTs, but they at a minimum demonstrate the importance of MSCs in desmoid development. The association between desmoid development and surgical wound healing in patients with FAP has long been established [45]. Presence of extra-abdominal and abdominal wall DTs increases the risk of intra-abdominal DT development at the time of prophylactic colon resection [46].
A recent case report analyzed the individual tumor mutations of a FAP patient with multiple recurrences at the same surgical site. Interestingly, different APC mutations were identified in the "recurrent" tumors, suggesting that these were in fact new clonal populations and not true recurrences [47]. Based upon the previously noted findings, one can postulate a model in which secondary somatic mutations develop in the MSC-rich wound healing environment of FAP patients. This model fits well with the known development of desmoids after surgical or incidental trauma in the FAP population.

FAP Screening and Treatment Guidelines in Relation to Desmoid Treatment

Physicians specializing in the treatment of sarcomas will rarely be the first to diagnose FAP, because desmoids in these patients most frequently occur after gastrointestinal manifestations of the disease are evident. Additionally, many kindreds have been extensively tested, and affected family members are frequently diagnosed early in childhood. However, de novo mutations may occur, and individuals with FAP may still initially present with extracolonic manifestations such as desmoids. A meta-analysis of desmoid risk among FAP patients identified family history of DT, APC mutation 3′ to codon 1399, previous abdominal surgery, and female sex to be significant risk factors for DTs [48]. The same analysis found that 80% of FAP-DTs occur before age 40 [48]. Two other studies have noted that FAP-DTs present at a younger age in females than males [45,49]. Practitioners should therefore suspect FAP in patients with a family history of desmoids and in young patients presenting with desmoids. Referral to gastroenterologists, geneticists, and colon and rectal surgeons experienced in FAP care is critical if the diagnosis is suspected. Many cancer centers have well-established multidisciplinary groups and polyposis registries. A 2006 review of screening guidelines recommended careful postcolectomy follow-up to assess for desmoids, as early intervention has anecdotally improved outcomes for some [50]. Practical surveillance measures for all FAP patients include asking them about new masses and examining their body surface for tumors at each visit. Other extracolonic manifestations of FAP should be considered by the clinician treating FAP-DTs. Gastric polyps were found in 88% of FAP patients in a 2008 study of 75 consecutive FAP patients, and gastric cancer rates are increased in this population [51]. Duodenal and papillary adenomas occur in 50-90% of FAP patients, and there is an overall 5% lifetime risk of duodenal cancer in FAP patients [52,53]. Routine surveillance of the upper gastrointestinal tract with endoscopy is therefore recommended [53]. APC is a tumor suppressor gene and is associated with other cancers including papillary thyroid carcinoma, hepatoblastoma, medulloblastoma and other brain tumors, and pancreatic cancer [54]. The associated cancer risks are low (1-2% for each diagnosis) compared with the 100% risk of colon cancer in untreated FAP [33,54]. However, these associated tumors (except pancreatic cancer) tend to occur at a young age, often before gastrointestinal manifestations develop. This fact further emphasizes the importance of genetic testing of at-risk individuals. Nonmalignant FAP associations include adrenal tumors, osteomas, congenital hypertrophy of the retinal pigment epithelium (CHRPE), and dental abnormalities [33,54].
Most of these nonmalignant entities do not cause significant morbidity, and as previously noted, DTs are the most clinically significant nonmalignant extracolonic manifestation of the disease. Table 2 summarizes the extracolonic manifestations of FAP.

Evolving Trends in the Surgical Management of FAP-DTs

The surgical treatment paradigm for DTs in general has changed substantially over the past decade. Overall, a less aggressive surgical approach has been adopted by many. This represents a shift from earlier recommendations: the authors of one frequently cited series concluded that "aggressive resection in an effort to obtain as wide a margin as possible is clearly the single most important determinant of successful outcome" [55]. A Mayo Clinic series reporting extra-abdominal desmoid cases from 1981 to 1989 similarly found a high local recurrence rate (9/19) in patients with microscopic residual disease [56]. In 1999, another report (105 patients with primary desmoid disease, both sporadic and FAP-DT) from Memorial Sloan-Kettering covering the years 1982-1997 did not find a positive microscopic margin to be predictive of local recurrence [57]. These later authors recommended against excessively morbid resections in an effort to obtain wide margins. In 2003, Gronchi et al. reported a series of 203 consecutive desmoid patients treated over 35 years at a single institution. They found that microscopically positive margins did not adversely affect recurrence rates for primary disease [58]. They recommended function-sparing surgery and resection of all macroscopic disease, but avoidance of heroic attempts at obtaining negative microscopic margins. A smaller series from the United Kingdom reported the results of surgery for 32 FAP-DTs, including 16 intra-abdominal, 12 abdominal, and 4 extra-abdominal tumors treated from 1994 to 2004. In contrast to some prior reports of abdominal desmoids in FAP patients, they had no desmoid-related mortalities, and only one patient required long-term parenteral nutrition [59]. These authors noted that they had a high threshold for surgery, and that most intra-abdominal desmoids at their institution were treated conservatively. Even more recently, several authors have begun advocating a wait-and-see approach to DTs, as it has been recognized that many DTs undergo a prolonged stable phase or even spontaneous regression. A 1998 article from this journal reported a series of 17 patients treated nonoperatively, all of whom had an interval of at least six months without disease progression [60]. Subsequently, a French report identified a subgroup of patients who did well with a wait-and-see approach. Only 23 patients were included in the nonoperative group, and there were no strict inclusion criteria [61]. A subsequent, larger study analyzed the results of a routine front-line conservative approach used to treat both primary and recurrent desmoids at two institutions [63]. Seventy-four primary and 68 recurrent tumors were studied. Eighty-three received no intervention, and 59 received medical therapy. Overall progression-free survival was 64% at 3 years and 53% at 5 years. There was not a statistically significant difference in progression-free survival between the no-intervention and the medically treated groups [63]. The authors did not believe that subsequent surgery was compromised by delay in the patients who progressed. More recently, a study was performed to identify factors associated with progression-free survival.
In a multivariate analysis of 426 sporadic desmoid tumors, age less than 37, extremity location, and size greater than 7 cm were associated with progression [65]. Notably, the authors could not determine how to use this information with respect to surgery versus wait-and-see. One could argue that DTs at high risk of progression should be resected early because conservative treatment is more likely to fail. On the contrary, perhaps the high-risk group should be observed, because they may be more biologically aggressive and therefore more likely to recur after surgery. This question cannot be answered without prospective data. Most of the aforementioned studies included few if any FAP-DTs. There are no studies which show that FAP-associated extra-abdominal desmoids behave differently than their sporadic counterparts with respect to surgical management of primary disease. As previously discussed, FAP-DTs may occur after surgery and trauma. This phenomenon is presumably related to the wound healing environment in the setting of germ-line APC mutations. A conservative approach to intra-abdominal desmoids has long been recommended due to the high morbidity and even mortality noted in many early studies [64,66]. Modern studies of FAP-DTs have shown that resection is surgically safe, but recurrence rates remain high. Consensus for first-line conservative management is growing [63][64][65]. The studies referenced in this section are summarized in Table 3.

Medical Treatment of FAP-Associated Extra-Abdominal Desmoids

Current first-line medical management includes antihormonal therapy (specifically tamoxifen) and nonsteroidal anti-inflammatory drugs (NSAIDs, specifically sulindac, indomethacin, and more recently celecoxib) [62]. A recent review of antiestrogen therapy for DTs found that approximately half of patients respond, and response does not appear to correlate with estrogen receptor status [67]. Furthermore, the desmoid location and FAP status of the patient do not appear to influence the response [67]. NSAIDs have shown efficacy against desmoids in numerous studies, but the mechanism of action of these agents is even less clear than that of antiestrogen therapies [68]. A mouse model of APC-associated desmoid tumors was found to have elevated levels of cyclooxygenase-2, and mice treated with a cyclooxygenase-2 inhibitor had decreased desmoid tumor size [69]. There are limited human data corroborating the effects of prostaglandins and prostaglandin inhibition on DTs. Multiple chemotherapeutic agents have shown efficacy against desmoids, including doxorubicin, methotrexate plus vinblastine, cyclophosphamide plus doxorubicin, and VAC (vincristine, actinomycin-D, cyclophosphamide) [68,70]. Interferon alpha has also been used singly and in combination with some of the aforementioned cytotoxic agents [68]. More recently, targeted biologic agents have been added to the desmoid treatment armamentarium. Two phase 2 trials have reported efficacy of imatinib, a tyrosine kinase inhibitor, in the treatment of desmoids [71,72]. As previously mentioned, C-KIT expression is lacking in most DTs. Analysis of 124 DTs from 85 patients found that PDGF alpha and PDGF receptor alpha were expressed in all tumors, but PDGF beta and PDGF receptor beta were not expressed [73]. The same authors failed to identify PDGF receptor mutations in 14 analyzed specimens [73]. These data suggest that imatinib's efficacy against desmoids results from a mechanism other than direct inhibition of these known tyrosine kinase protooncogenes.
Another tyrosine kinase inhibitor, sorafenib, has also shown efficacy against desmoids in a smaller single-institution trial [74]. Finally, a clinical trial (NCT01265030) of the mammalian target of rapamycin (mTOR) inhibitor, sirolimus, for the treatment of desmoids in children and young adults was opened in 2010. The large number of agents used for DTs clearly indicates that there is presently a lack of consensus with respect to the medical management of this condition.

Conclusion

Understanding of the epidemiology, genetics, molecular and cellular biology, pathophysiology, and treatment of FAP-related desmoid tumors has improved substantially over the past decade. Despite these improvements, DTs remain a major cause of morbidity in the FAP population. A more conservative surgical approach is presently advocated by many oncologic surgeons. Medical management is attempted first for most abdominal DTs, and a wait-and-see approach is undertaken for many extra-abdominal DTs. Surgical goals and techniques are now often less aggressive than in the past. Recent studies have implicated mesenchymal stem cells as critical components of desmoid development. Gene expression profiling has shown promise in elucidating downstream elements of the WNT/APC/beta-catenin pathway. Future progress in treatment will likely depend upon continued advances in the understanding of basic desmoid biology and the development of additional targeted therapies for the treatment of refractory cases.
2018-04-03T05:00:57.773Z
2012-06-03T00:00:00.000
{ "year": 2012, "sha1": "02958854e1285eb8ccb2a4e231b8e4fe8c73be00", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2012/726537", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3e45463da4b6c0255ae319e064cf967264e288e9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212520015
pes2o/s2orc
v3-fos-license
Sinonasal Organized Hematoma: Case Reports

Organized sinus hematoma is an infrequent disease, caused by hemorrhage, fibrosis and neovascularization. Due to its expansive growth and the local destruction that can occur, it can be difficult to make the differential diagnosis with other benign or malignant tumors of the paranasal sinuses. The treatment is surgery, and the endonasal approach with endoscopes is the most used technique to resect these pseudotumors. In this study we describe the cases of two patients who had organized hematomas in the maxillary and sphenoid sinuses. A review of the literature on this rare disease was made.

Case 1

A 67-year-old male patient consulted for intermittent epistaxis of the right nasal cavity of 3 months' duration. He had a history of anticoagulant treatment. On nasal endoscopy, hematic remnants were observed, and sinus computed tomography showed an occupation of the right maxillary sinus with bone destruction of the intersinusonasal wall (Figure 1). An endonasal approach with endoscopes was used, and an uncinectomy, right maxillary antrostomy and resection of friable polypoid tissue were performed. The deferred anatomopathological study reported "extensive areas with necrosis, hemorrhage and fibrin-leukocytic exudate", compatible with an abscessed polyp associated with an organized hematoma. The patient was discharged 24 hours after the intervention and remained asymptomatic and without evidence of tumor in the endoscopic controls performed over 3 years.

Case 2

A 49-year-old man with a history of a combined endoscopy-assisted cranionasal approach and postoperative chemoradiotherapy seven years earlier, for an esthesioneuroblastoma that involved the ethmoids and had endocranial extension. During routine checks, and because he reported intermittent headache, imaging studies were requested. Computed tomography and contrast-enhanced magnetic resonance imaging showed two lesions of different density compatible with a sphenoid mucocele. Through an endonasal approach with endoscopes and intraoperative navigation, a wide sphenoidotomy was performed and the mucocele was marsupialized, aspirating mucous secretions. A red, round, non-pulsatile tumor was observed behind it, in contact with the roof and the left wall of the sphenoid sinus. The tumor was resected with cutting forceps. The patient was discharged 24 hours after the surgery. The deferred pathological study reported "hematic material with fibrin, some lymphocytes and leukocytes", compatible with a sphenoid organized hematoma. The patient had no more headaches, and in the endoscopic controls over 2 years the sphenoid sinus remained ventilated and without recurrence of the hematoma (Figure 2).

Discussion

Organized hematoma of the paranasal sinuses is a rare pathology that causes sinus inflammation and bone destruction.
The etiology is unknown, but it is believed to be initiated by the accumulation of blood in the paranasal cavities due to various factors. Inadequate ventilation and drainage of the affected sinus, accompanied by the formation of a fibrous capsule, prevent the resorption of the hematoma and result in neovascularization and fibrosis with recurrent intracapsular bleeding and expansive growth. Depending on their histological characteristics, these lesions can be classified as edematous, glandular, fibrous, cystic and angiomatous. Organized hematomas represent between 4 and 5% of nasosinusal polyps. Their location in the maxillary sinus is the most frequent, and patients may have a history of pathologies that can produce epistaxis. A retrospective study described 84 patients with organized hematomas: 82 involved the maxillary sinus and two the nasal cavity. Thirty-nine were men and 45 women, and the average age was 50.2 years. The most frequent symptoms were epistaxis (60/84, 71.4%) and nasal obstruction (47/84, 60%), followed by pain in the cheek (n=11), headache (n=7), epiphora (n=5) and exophthalmos (n=2). Twenty-five patients (30%) had a history of rhinosinusal surgery, 12 had a history of antiplatelet treatment with aspirin, 7% had liver cirrhosis and 5% had renal failure: all antecedents that can produce epistaxis. Seventy percent had bone destruction of one of the walls of the maxillary sinus [1]. Other studies also described the maxillary sinus as the most frequent location of the hematoma [2][3][4]. All were treated with surgery, using an endonasal approach with endoscopes. Our first case coincides with the literature in that the patient had a history of a treatment that favored bleeding (anticoagulants). He was successfully operated on by an endonasal approach with endoscopes. The localization of organized hematomas in other paranasal cavities is very rare [5][6][7]. A systematic search of the literature using the key words "sphenoid sinus organized hematoma" found six cases of hematomas located in the sphenoid sinus in PubMed. Wu and collaborators [3] reported one lesion located in the sphenoid sinus among seven patients described with organized sinus hematomas; another series described one hematoma located in the sphenoid and one in the frontal sinus [2], and among seventeen patients a further study found only one in the sphenoid and another in the frontal sinus [4]. Three other studies described isolated clinical cases of organized hematomas in the sphenoid sinus. One caused destruction of the sella turcica and simulated a pituitary tumor, and was operated on by a transnasal transsphenoidal approach with endoscopes [5]; another caused decreased visual acuity with epistaxis and retro-orbital headache, and despite endonasal decompressive surgery with endoscopes the patient was left with decreased visual acuity. In the third case, the authors described a patient with a sphenoid hematoma that caused decreased visual acuity and palpebral ptosis, with mydriasis and atrophy of the optic (second cranial) nerve, whose vision did not improve after surgery [7]. This highlights the importance of knowing this pathology: although it is histologically benign, it must be treated early because of its potential to generate serious complications due to the involvement of adjacent vital structures.
In our second clinical case, the patient had few symptoms, but when performing the sphenoidotomy the hematoma was found in relation to the lateral wall of the sphenoid, the optic nerve (second cranial nerve) and the internal carotid artery. It is important to know of the existence of this infrequent pathology due to its clinical and imaging similarity with other benign and malignant tumors of the paranasal sinuses. The excision of the organized hematoma must be complete, and since the lesion is easy to resect, the endonasal video-endoscopic approach is preferred because of the lower morbidity produced by the surgery.

Conclusion

The organized sinus hematoma is a benign and infrequent pathology. It is important to know of its existence in order to make the differential diagnosis with other benign and malignant tumors of the paranasal sinuses. The involvement of the sphenoid sinus is exceptional and may be associated with serious complications due to injury of the adjacent anatomical structures. The treatment is surgical, and the endonasal technique with endoscopes is the technique of choice due to its effectiveness and low morbidity.
2020-03-07T11:42:53.330Z
2019-06-27T00:00:00.000
{ "year": 2019, "sha1": "812b97d5f15d677c62a0f52e4e9fad0c3b6210aa", "oa_license": "CCBY", "oa_url": "http://crimsonpublishers.com/ero/pdf/ERO.000554.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "812b97d5f15d677c62a0f52e4e9fad0c3b6210aa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266807971
pes2o/s2orc
v3-fos-license
Novel Intelligence ANFIS Technique for Two-Area Hybrid Power System's Load Frequency Regulation

The main objective of Load Frequency Control (LFC) is to effectively manage the power output of an electric generator at a designated site, in order to maintain system frequency and tie-line loading within desired limits in reaction to fluctuations. The adaptive neuro-fuzzy inference system (ANFIS) is a controller that integrates the beneficial features of neural networks and fuzzy networks. A comparative analysis of Artificial Neural Network (ANN), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Proportional-Integral-Derivative (PID)-based methodologies demonstrates that the suggested ANFIS controller outperforms both the PID controller and the ANN controller in mitigating power and frequency deviations across the areas of a hybrid power system. Two systems are analysed and represented using mathematical models. The first system comprises a thermal plant alongside grid-connected photovoltaic (PV) installations equipped with maximum power point trackers (MPPT). The second system comprises hydroelectric systems. The MATLAB/Simulink software is employed to conduct a comparative analysis of the outcomes produced by the controllers.

Introduction

Nowadays, electricity is crucial because more and more individuals require it. Changes in the system's operating point and disturbances influence the system's dynamic behaviour. In power plants, the quality of the electricity generated is contingent on the machine's capacity [1]. The frequency and magnitude of the delivered electricity must remain constant, as intended. Therefore, load frequency control is a crucial component of the power system if it is to deliver reliable electricity [2]. The most important goal of load frequency control (LFC) is to keep the frequency and two-area tie-line power within a reasonable range in order to manage fluctuations in demand and disturbances [2].

Grid failure has been caused by this load frequency control issue, as is common knowledge. This occurs when electricity consumed from the grid exceeds what is produced; one such event resulted in a blackout throughout virtually an entire region, affecting all traffic. Due to the ineffective control of conventional controllers, and despite repeated warnings, certain loads continued to draw an excessive amount of energy. A robust control system is required to detect load changes and stabilize frequency deviations [3].

Traditional PID controllers can provide control actions for a single operating state, but in the real world the parameters vary over time. Therefore, it is difficult to configure the appropriate gains so that there is no frequency shift. Since this is the case, an automatic solution is required. Load frequency controllers' dynamic performance has been enhanced by utilising various control strategies [3].
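To make the fixed-gain limitation concrete, the following minimal sketch implements a discrete PID loop acting on a frequency deviation. The gains and the first-order toy plant are illustrative assumptions, not values or models from this study.

```python
# Minimal fixed-gain discrete PID sketch (illustrative, not the paper's design).
def pid_step(err, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    integ, prev_err = state
    integ += err * dt                      # integral of the error
    deriv = (err - prev_err) / dt          # backward-difference derivative
    u = kp * err + ki * integ + kd * deriv
    return u, (integ, err)

df, state = 0.0, (0.0, 0.0)                # frequency deviation, controller state
for _ in range(1000):                      # 10 s at dt = 0.01
    u, state = pid_step(-df, state)        # drive the deviation back to zero
    df += 0.01 * (-2.0 * df + u - 0.1)     # toy plant with a 0.1 pu load step
print(f"steady-state deviation ~{df:.4f}")
```

Because the gains are fixed, any drift in the plant parameters would require retuning, which is precisely the motivation for the adaptive scheme proposed below.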
The most famous and most often used type of load frequency controller is the traditional PID controller. The standard controller is easy to set up, but it leaves a large frequency deviation [4]. Most state feedback controllers that are meant to improve performance are based on the idea of linear optimal control. Fixed-gain controllers are designed for nominal operating conditions, but they do not work well across a wide range of operating settings [5]. One must keep track of the operating conditions and use the most recent parameters to determine how to run the system so that it works at its best. For LFC to do the same job as a PID controller, adaptive controllers with gain settings that change on their own have been proposed [6].

Even when both renewable energy sources and the load cause disturbances, the proposed controller will make the frequency more stable. A neuro-fuzzy system can use both neural networks and fuzzy systems, so an ANFIS-based controller will respond to changes in the environment by adapting the membership functions of the fuzzy controller. This makes the fuzzy controller more flexible and reliable. The proposed technique (ANFIS) improves the performance of the system by training the settings of the fuzzy logic controller [7].

Proposed Scheme

An Adaptive Neural Fuzzy Inference System (ANFIS) is used for building LFC in multi-connected areas with energy storage systems, which may reduce both control time and frequency variation during active power system operation [8]. Artificial neural networks and fuzzy reasoning come together to make neuro-fuzzy systems [9]. This technology uses neural networks to learn and fuzzy logic to decide what to do. Neural networks can learn from data, but they cannot explain how they come to their conclusions. Fuzzy systems have linguistic rules that can be understood, can make decisions with imperfect data, and are good at explaining their choices, but they cannot automatically learn new rules. Due to these limits, hybrid intelligent systems have been created [10].

A neuro-fuzzy hybridization creates a hybrid intelligent system by mixing the reasoning style of fuzzy systems, which is similar to how humans think, with the learning ability of neural networks [11]. Most of the time, neuro-fuzzy systems are shown as three-layer feedforward neural networks. The first layer is for the input variables, the second (hidden) layer is for the fuzzy rules, and the third layer is for the output variables [12]. Each rule of the form "if x = Ai, then y = Bi", where Ai and Bi are fuzzy sets and 1 ≤ i ≤ n, can be interpreted as a training pattern for a multi-layer neural network [13].

Figure 1 depicts the analytical structure of the proposed ANFIS-based load frequency management, which consists of the defuzzification, knowledge base, neural network, and fuzzy logic elements [14][15].

Mathematical model for Proposed Scheme

Figure 2 represents the block diagram of the hybrid power system, which consists of PV, thermal and hydro systems. The power transmitted from area 1 is given by

$P_{12} = \frac{|V_1||V_2|}{X_{12}} \sin(\delta_1 - \delta_2)$ (1)

where $\delta_1$ and $\delta_2$ are the power angles of the equivalent machines of the two areas.

Modelling of PV System

The transfer function of a solar photovoltaic system that includes a PV array, MPP tracker, converter, and filter is described by a second-order model [16][17]. Figure 3 shows the power electronic system together with the corresponding second-order transfer function used to describe the PV system,

$G_{PV}(s) = \frac{K(s+L)}{(s+M)(s+N)}$

where K is the gain of the PV system, L is a zero (positive value) of the transfer function, and M and N are poles (positive values) [18]. The PV system comprises a chopper circuit with MPPT and a grid-side inverter with a filter [19].
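The second-order PV model can be exercised numerically. In the sketch below, the values of K, L, M and N are placeholders chosen only to illustrate a step response; the paper does not list them at this point.

```python
import numpy as np
from scipy import signal

# Hedged sketch of the second-order PV model G(s) = K(s+L)/((s+M)(s+N)).
# The numeric parameter values are illustrative placeholders.
K, L, M, N = 18.0, 50.0, 0.5, 99.5
num = [K, K * L]                       # K*(s + L)
den = np.polymul([1.0, M], [1.0, N])   # (s + M)(s + N)
G = signal.TransferFunction(num, den)

t, y = signal.step(G, T=np.linspace(0, 10, 1000))
print(f"DC gain = {K * L / (M * N):.2f}, settled value ~{y[-1]:.2f}")
```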
Modelling of Adaptive-Neuro Fuzzy System

Fuzzy logic and neural networks are two distinct methods for addressing uncertainty, and each has both benefits and drawbacks. Neural networks are capable of describing complex nonlinear interactions and are ideally suited for classifying phenomena into predetermined categories [20]. However, the precision of the outputs is frequently constrained and does not permit error-free results; it only permits the minimization of as few errors as possible [21]. In addition, the training period for a neural network may be quite extensive, and the training data must be meticulously selected to represent the entire range over which the projected changes in the various variables will occur [22]. Fuzzy logic systems effectively handle the imprecision of inputs and outputs by representing them using fuzzy sets, enabling the development of system descriptions with a desired amount of detail and flexibility [23].

Neural networks and fuzzy logic have the capability to define mathematical relationships among multiple variables within intricate dynamic processes, enabling the representation of varying degrees of influence. Additionally, they offer the ability to control nonlinear systems to a level that surpasses the capabilities of conventional linear control systems [24]. The final layer of the ANFIS network then produces the precise output value O.

Results and Discussion

Some standard benchmark measures are used to test the suggested scheme. Table 1 shows how well the proposed method works with different topologies; this test shows that the modified method is more likely to work. Table 1 reports the settling times of the various techniques at 1000 MW. It can be seen that the suggested method gives better results than the other methods, so the proposed method is used to balance the load frequency and power for the two-area system.

Figures 6-23 show how the system responds to a 10% step increase in demand for the two-area system. As the system overshoots are smaller, there is a clear difference between the performance of the other schemes and that of the proposed ANFIS scheme. To represent the results of the simulation, the following scenarios are considered.

Case 1: Figures 6 and 7 show how the system responds without any control arrangement in place. As observed, the frequency changes, power changes, and tie-line power cannot be regulated.

Case 2: Figures 8 and 9 show the system response with a conventional PID controller.

Case 3: Figures 10 and 11 depict the system response utilising a discrete PID controller: the frequencies (F1 and F2) settle at 6.42 and 6.41 seconds, the power deviations (P1 and P2) settle at 6.52 and 6.51 seconds, and the tie-line power settles at 6.49 seconds.

Case 4: Figures 12 and 13 depict the system response using the FO-PID controller: the frequencies (F1 and F2) settle at 6.13 and 6.15 seconds, the power deviations settle at 6.23 and 6.22 seconds, and the tie-line power settles at 6.2 seconds.

Case 5: Figures 14 and 15 depict the system response using the NN system: the frequencies settle at 5.8 and 5.9 seconds, the power deviations settle at 5.66 and 5.79 seconds, and the tie-line power settles at 5.8 seconds.

Case 6: Figure 16 shows the frequency variations of the two-area system with the FLC.
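For readers unfamiliar with the layered structure implied by the description above, the following minimal sketch implements a first-order Sugeno ANFIS forward pass on two inputs (frequency deviation and its derivative). The membership parameters and rule consequents are illustrative placeholders, not the trained values from this study.

```python
import numpy as np

# Minimal first-order Sugeno ANFIS forward pass (illustrative parameters).
def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def anfis_output(df, ddf):
    # Layer 1: two Gaussian membership functions per input
    mf1 = [gauss(df, -0.5, 0.3), gauss(df, 0.5, 0.3)]
    mf2 = [gauss(ddf, -0.5, 0.3), gauss(ddf, 0.5, 0.3)]
    # Layer 2: rule firing strengths (product T-norm), 4 rules
    w = np.array([a * b for a in mf1 for b in mf2])
    # Layer 3: normalization of firing strengths
    wn = w / w.sum()
    # Layer 4: first-order consequents f_i = p*df + q*ddf + r
    p = np.array([-1.0, -1.0, 1.0, 1.0])
    q = np.array([-0.5, 0.5, -0.5, 0.5])
    r = np.zeros(4)
    f = p * df + q * ddf + r
    # Layer 5: weighted sum gives the crisp control output O
    return float(np.dot(wn, f))

print(anfis_output(0.2, -0.1))
```

In training, the membership centers/widths and the consequent coefficients are the quantities adjusted from data, which is what lets the controller adapt where a fixed-gain PID cannot.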
Conclusion

In this paper, a novel method is presented. The problem of demand frequency balancing has been addressed by developing a linearized model for renewable (PV)-based two-area power systems. Results from simulations and system indices indicate that the developed ANFIS outperforms the alternatives. The following conclusions can be drawn from the above observations:

- Several operating conditions and performance indices reveal that ANFIS achieves satisfactory results.
- The developed ANFIS benefits from the exploration properties of both the NN and the FLC.
- When human expert knowledge is unreliable, the developed ANFIS can be used to generate mature membership functions and fuzzy rules based on training data.
- Different indices and settling times confirm the superiority of the proposed method.

The future scope of this work will consist of applying the proposed scheme to large-scale power systems.

Figure and table captions: Figure 1, ANFIS-based load frequency regulation analytical framework; Figure 2, block diagram of the hybrid power system; Figure 4, block diagram of the PV system (dc-dc converter with MPPT); Figures 8-16, frequency and power variations of the two-area system under the PID, discrete PID, FO-PID, NN and FLC controllers; Table 1, settling times of the various technologies.
2024-01-07T16:21:38.444Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "fb72d3df1b67f765eb0eeb5f7aae3533f174959a", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2024/02/e3sconf_icregcsd2023_02005.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "57de6b009a5f2b36a9d4adfdbedbcf031439bc6a", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
246561928
pes2o/s2orc
v3-fos-license
Chitin/egg shell membrane@Fe3O4 nanocomposite hydrogel for efficient removal of Pb2+ from aqueous solution

The development of adsorbents using the byproducts or waste from large-scale industrial and agricultural production is of great significance, and is considered to be an economic and efficient strategy to remove heavy metals from polluted water. In this work, a novel chitin/EM@Fe3O4 nanocomposite hydrogel was obtained from a NaOH/urea aqueous system, where the proteins of egg shell membrane and Fe3O4 nanoparticles were chemically bonded to chitin polymer chains with the help of epichlorohydrin. Due to the existence of a large number of –NH2, –OH, –CONH–, –COOH and hemiacetal groups, the adsorption efficiency for Pb2+ into the absorbent was dramatically enhanced. The experimental results revealed that the adsorption behavior strongly depends on various factors, such as initial pH, initial Pb2+ concentration, incubation temperature and contact time. The kinetic experiments indicated that the adsorption process for Pb2+ in water solution agreed with the pseudo-second-order kinetic equation. Film diffusion or chemical reaction is the rate-limiting process in the initial adsorption stage, and the adsorption of Pb2+ into the nanocomposite hydrogel can be well fitted by the Langmuir isotherm. Thermodynamic analysis demonstrated that such adsorption behaviors were dominated by an endothermic (ΔH° > 0) and spontaneous (ΔG° < 0) process.

Introduction

In response to the national policy on energy saving and carbon emission reduction, the electric car/bicycle industries have developed rapidly in recent years, especially in China. As one of the most important parts, the battery has been paid great attention. Among the various batteries, the lead acid battery is still considered to be essential and is widely used due to its high stability, reliability and low cost. However, the extensive use and potential leakage of Pb2+ ions in the related industries, such as smelting, battery recycling, painting and mining, can result in high levels of contamination in water and thus accumulation via the food chain. Even very low levels of Pb2+ intake can give rise to serious irreversible injuries to human beings' nervous system, reproductive system, immune system, and organs [1,2]. Hence, effective removal of Pb2+ from industrial wastewater will contribute to environmental sustainability [4][5][6]. Of the current strategies, adsorption has emerged as an effective way for water treatment due to its many advantages, including cost-efficiency, easy handling, availability of plentiful adsorbents and an environment-friendly manner [4][15]. Such behavior is of high significance for developing countries, since huge amounts of agricultural wastes have been effectively recycled for the sake of saving natural resources. For example, orange and cucumber peels have been employed to treat contaminated water [14,16]. The presence of carboxyl, hydroxyl, and carbonyl groups is likely responsible for the efficient adsorption of heavy metal ions. Egg shell membrane (EM) is a naturally occurring biomaterial generated by poultry and regarded as a biowaste. This semipermeable membrane is composed of highly cross-linked protein fibers, and is identified as a high-value material in biomedical engineering benefitting from its nontoxic, collagen-rich and biocompatible characteristics [16].
Moreover, the unique 3D fibrous network and a large number of functional groups (-COOH, -NH2 and -OH) render the material with a strong ability to bind and capture metal ions [17,18]. Chitin is another abundant and renewable biomass resource. It is composed of β-(1-4)-linked 2-acetamido-2-deoxy-D-glucose, and widely exists in different crustaceans, mollusks, algae, insects, fungi, and yeasts on earth [19,20]. The annual production of chitin is estimated to be about 10^10 to 10^11 tons from living organisms, and more than 10,000 tons could be available every year from shellfish wastes [21]. This polysaccharide possesses a number of amide (-NH-) and hydroxyl (-OH) groups in the polymer chains, which facilitate chelation to Ar3+, Cd2+, Co2+, Cu2+, Mn2+ and Pb2+ ions in aqueous systems [22]. Thus, chitin-derived materials have been widely applied as low-cost absorbents in water treatment. This trend has been further enhanced by the development of solvent systems for chitin [25].

With respect to the advantages of these biomass resources, in this work we aim to combine chitin and egg shell membrane to create a novel kind of nanocomposite hydrogel to efficiently remove Pb2+ in an economical way. In the design, magnetic Fe3O4 nanoparticles were also incorporated into the hydrogel matrix. On one hand, the Fe3O4 nanoparticles have a suitable framework for interaction with Pb2+, leading to enhanced adsorption efficiency [26]; on the other hand, the introduction of magnetic particles provides convenience for the ultimate separation. Thus, the impacts of the environmental pH condition, the initial Pb2+ concentration and the incubation temperature on Pb2+ adsorption into the chitin/EM@Fe3O4 nanocomposite hydrogel were investigated in detail. The corresponding adsorption kinetics and isotherm behaviors were also explored. We believe that this work provides a promising strategy for the construction of novel biomass-based absorbents for rapid and high-capacity removal of heavy metal ions.

Preparation of highly water-dispersible Fe3O4 nanoparticles

The magnetite Fe3O4 nanoparticles were prepared through a modified solvothermal reaction [27]. Briefly, 2.025 g of FeCl3·6H2O, 5.781 g of NaAc, and 0.684 g of trisodium citrate dihydrate were dissolved in 150 mL of ethylene glycol. After vigorous stirring for 1 h at room temperature, the as-formed homogeneous black mixture was transferred into a Teflon-lined stainless-steel autoclave (200 mL) and incubated at 200 °C for 10 h. During this process, Fe3+ was partly reduced to Fe2+. Finally, the obtained Fe3O4 product was washed with ethanol 6-10 times with the help of a magnetic field, and then dried in an oven at 60 °C for further use.

Pre-treatment of raw egg shell membranes

Raw egg shell membranes were obtained by manual peeling from discarded eggshells, and comprised both inner and outer membranes. Subsequently, the membranes were treated with HCl (5.0 wt%) for 4 h to remove the CaCO3 residues, and thoroughly washed in deionized water. After incubation in 0.1 mmol L−1 EDTA aqueous solution for 24 h and full rinsing with deionized water, the fresh egg shell membranes (denoted as EM) were collected. These products were finally dried at 60 °C and ground into powders for the preparation of nanocomposite hydrogels.
Fabrication of the chitin/EM@Fe3O4 nanocomposite hydrogel

The fabrication of the chitin/EM@Fe3O4 nanocomposite hydrogel mainly involved two steps: the dissolution of egg shell membranes and of chitin in the NaOH/urea solvent system. Firstly, 1.0 g of EM powder was totally dissolved in 11.0 wt% NaOH aqueous solution upon heating and sonication. After cooling down to room temperature, a certain amount of urea was added to form the 1.0 wt% EM/11.0 wt% NaOH/4.0 wt% urea/80.0 wt% H2O mixture system. Subsequently, 4.0 g of chitin powder was dispersed into the above mixture solution with stirring for 5 min, and the mixture was then stored under refrigeration (−30 °C) for 4 h [28]. The frozen solid was thawed and stirred extensively at room temperature. After the freeze/thaw manipulation was repeated for 3 cycles, 2.0 mL of epichlorohydrin as cross-linker and 1.0 mL of Fe3O4 aqueous dispersion were added into 100 g of the mixture solution and stirred at 0 °C for 0.5 h to obtain a homogeneous solution, which was then subjected to centrifugation for 10 min at 4 °C. In the following step, the obtained uniform solution was cast into a Petri dish and kept at ambient temperature for 12 h to allow gelation. Finally, the as-prepared hydrogel was immersed in distilled water for 3 days to remove any residues. The nanocomposite hydrogel sample was labeled as chitin/EM@Fe3O4. For the preparation of chitin and chitin/EM hydrogels, similar experimental procedures with a fixed concentration of epichlorohydrin (2.0 mL/100 g) were followed. For the preparation of nanocomposite hydrogel beads, the obtained pre-gel solution was allowed to drip into hot water from an injector to form the raw beads. After thorough washing with ultrapure water, the hydrogel beads were collected.

Adsorption behavior studies

The adsorption behaviors of Pb2+ into the chitin/EM@Fe3O4 hydrogel, including the effects of environmental pH condition and incubation temperature, the kinetic mechanisms, the adsorption isotherms and the thermodynamic parameters, were explored intensively. All the experiments were performed in 50 mL plastic centrifuge tubes containing 30 mL of Pb2+ stock solution by utilizing the batch equilibrium method. During the whole process, all the tubes were sealed with their caps and placed in a thermostatic water bath shaker at a speed of 150 rpm. For the adsorption isotherms, the initial concentrations of Pb2+ varied from 0.5 to 20.0 mmol L−1 at an incubation temperature of 293 K. To each container, approximately 1.0 g of hydrogel sample was added. After incubation for 24 h, the equilibrium concentration of Pb2+ in each tube was determined with an ICP-OES spectrometer. Thus, the adsorption capacity q_e (mmol g−1) of Pb2+ into the chitin/EM@Fe3O4 hydrogel can be calculated as

$q_e = \frac{(c_0 - c_e)V}{W}$

where c_0 and c_e are the initial and equilibrium concentrations of Pb2+ (mmol L−1), respectively, V is the volume of Pb2+ aqueous solution (L) in each tube and W is the mass of the chitin/EM@Fe3O4 hydrogel sample (g).
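A quick numeric check of the capacity formula above, with made-up illustration values rather than measured data from the study:

```python
# Batch adsorption capacity q_e = (c0 - ce) * V / W; values are illustrative.
def adsorption_capacity(c0, ce, volume_l, mass_g):
    """q_e in mmol/g from initial/equilibrium concentrations in mmol/L."""
    return (c0 - ce) * volume_l / mass_g

q_e = adsorption_capacity(c0=5.0, ce=1.8, volume_l=0.030, mass_g=1.0)
print(f"q_e = {q_e:.3f} mmol/g")   # (5.0 - 1.8) * 0.030 / 1.0 = 0.096
```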
For the kinetic studies, 1.0 g of chitin/EM@Fe3O4 hydrogel sample was added to 30 mL of Pb2+ aqueous solution. The concentration was kept at 5.0 mmol L−1 and the pH value was set to 5.0. At different time intervals, the final concentration of Pb2+ in each container was analyzed by ICP-OES. To investigate the effect of pH on the Pb2+ adsorption capacity, the pH values were varied from 1.0 to 5.0 by adding diluted HCl solution; at higher pH, the Pb2+ ions readily precipitate. To investigate the impact of incubation temperature on the adsorption efficiency, batch adsorption studies were performed at temperatures of 277, 293 and 310 K, respectively.

Desorption and regeneration studies

To evaluate the desorption and regeneration abilities of the chitin/EM@Fe3O4 hydrogel, 1.0 g samples were firstly placed in 30 mL of 5.0 mmol L−1 Pb2+ aqueous solution. After incubation for 24 h at 293 K, the adsorbents and the aqueous solution were collected separately. The aqueous solution was used to evaluate the adsorption capacity and desorption efficiency. The adsorbents were regenerated by using 10 mL of 0.05 mol L−1 HCl as the static eluting solution 3 times, with each static elution lasting 4 h. The adsorbent was then fully washed with deionized water for the subsequent adsorption studies. For comparison, the hydrogel samples were also regenerated by using 10 mL of 0.1 mol L−1 EDTA as the static eluting solution 3 times, again with each static elution lasting 4 h. After that, the adsorbent was fully washed with 1.0 mol L−1 NaCl solution and deionized water for the succeeding adsorption experiments. The adsorption-desorption cycle in both cases was repeated 3 times under the same conditions.

Characterization

The wide-angle X-ray diffraction (XRD) patterns for each sample were recorded in reflection mode on a Rigaku SmartLab diffractometer equipped with a Cu Kα radiation source (λ = 1.542 Å) operated at 40 kV and 30 mA. The samples were scanned at 4° min−1 with a step size of 2.5° in 2θ. The FT-IR spectra were recorded in the wavenumber range from 4000 to 700 cm−1 using attenuated total reflection Fourier transform infrared spectroscopy (ATR-FTIR, PerkinElmer, USA) at room temperature. The samples were freeze-dried in a conventional freeze dryer and then dried at 45 °C in a vacuum chamber to eliminate water from the samples. Thermogravimetric analysis (TGA) tests were performed on a TG instrument (PerkinElmer Co., USA) in a nitrogen atmosphere at a heating rate of 10 °C min−1 from 50 to 650 °C.
High resolution transmission electron microscopy (HR-TEM) and SAED images were acquired using a JEM-2011 transmission electron microscope (JEOL Ltd, Japan) operated at 200 kV. Magnetic characterization was carried out on a vibrating sample magnetometer (PPMS-9, Quantum Design, USA). Particle size and zeta potential were measured using a Nano-ZS ZEN3600 (Malvern Instruments, UK) at 25 °C. The exact concentration of Pb2+ in solution was determined with an ICP-OES instrument with an RF generator power of 1150 W (Thermo, USA). The auxiliary, carrier and plasma gas (Ar) flow rates were set to 0.5, 0.5 and 12.0 L min−1, respectively. The wavelength of the emission line for Pb is 220.353 nm. The leaching concentration of Fe2+/Fe3+ in solution was determined with an ICP-MS instrument (Agilent 7700, USA), and the total weight content of Fe in the chitin/EM@Fe3O4 hydrogel sample was measured with an ICP-OES instrument (Agilent 725ES, USA). The wavelength of the emission line for Fe is 239.56 nm.

Structure characterization of chitin/EM@Fe3O4 hydrogel

The prerequisite for the preparation of the chitin/EM@Fe3O4 nanocomposite hydrogel is the dissolution of both chitin and egg shell membrane in an appropriate solvent system. In the design, we attempted to fabricate this novel adsorbing material by dissolving them in the 11.0 wt% NaOH/4.0 wt% urea solvent system. One possible reason for this choice is that hydrogels prepared from the alkali/urea solvent system are expected to display excellent mechanical properties [20], which will guarantee the stability and integrity of the absorbents during recycling. During the preparation, the egg shell membranes were firstly dissolved in 11.0 wt% NaOH aqueous solution upon heating, and then the chitin powders were dispersed and finally dissolved in the 11.0 wt% NaOH/4.0 wt% urea solution via freeze-thaw cycle treatment. By addition of water-dispersible Fe3O4 nanoparticles and the chemical cross-linking agent epichlorohydrin into the mixture, the chitin/EM@Fe3O4 nanocomposite hydrogel was obtained after thorough washing in water. As shown in Fig. 1a, the hydrogel samples showed different morphologies. The pure chitin hydrogel is transparent and colorless, while the chitin/EM hydrogel displays a yellowish color and higher swelling behavior, indicating successful chemical crosslinking between EM protein and chitin polymer chains. By comparison, the chitin/EM@Fe3O4 nanocomposite hydrogel shows a dark-brown color, as a result of the Fe3O4 nanoparticles in the matrix. Meanwhile, after immersing the nanocomposite hydrogel in water for 1 week, Fe3O4 nanoparticles were hardly observed to separate from the hydrogel matrix, illustrating their strong affinity to the matrix and the high stability of the bio-absorbents. It should be mentioned that we did not adopt the strategy of in situ synthesis of Fe3O4 in the hydrogel matrix, because Fe3+ ions do not easily penetrate into the deep interior of the hydrogel, which finally leads to a gradient distribution of Fe3O4 in the matrix. This heterogeneous structure would have an adverse effect on the adsorption capacity.
The results demonstrate that the nanocomposite hydrogel possesses a superparamagnetic character. Both the chitin/EM@Fe₃O₄ hydrogel samples and the Fe₃O₄ nanoparticles had no remanence or coercivity at 300 K, and the magnetization saturation values (Ms) were evaluated to be 2.8 and 58.2 emu g⁻¹, respectively. The smaller Ms value can be ascribed to the lower content of Fe₃O₄ nanoclusters in the nanocomposite hydrogel. Nevertheless, the as-prepared hydrogel beads (shown in Fig. S1†) can still be easily separated from aqueous solution under an external magnetic field.

Further evidence was collected from the EDS pattern (Fig. 1c). The observation of the elements Fe and O in the spectrum confirmed the successful synthesis of the magnetite particles. The particle size analysis and TEM images (Fig. 1d and e) reveal that the Fe₃O₄ particles have a uniform size of about 290 nm and are dispersed in the hydrogel matrix. The particles are composed of nanocrystals with sizes of about 5-20 nm, where the nanocrystals appear connected to each other, similar in appearance to a pomegranate. Selected-area electron diffraction (SAED) (Fig. 1f) recorded on the edge of a magnetite particle presents a polycrystalline-like phase pattern. This structure may provide a large surface area for the adsorption of guest molecules, such as heavy metal ions or organic dyes [30].

Fig. 1g presents the XRD patterns for Fe₃O₄, chitin, chitin/EM and chitin/EM@Fe₃O₄, respectively. The typical pattern of the Fe₃O₄ particles exhibits strong diffraction peaks at 2θ = 30.1°, 35.4°, 43.1°, 57.0° and 62.7°, corresponding to the reflection planes (220), (311), (400), (511) and (440), respectively. This result indicates that the magnetite particles possess a polycrystalline structure, in good agreement with the SAED. In the case of the pure chitin hydrogel, two main semicrystalline peaks occurred at 2θ = 8.6° and 19.4°, which are assigned to the (020) and (110) reflections, respectively. The chitin/EM nanocomposite hydrogel shows diffraction behavior similar to chitin, implying that the EM was greatly degraded into small peptide sections during the fabrication process. The occurrence of the diffraction peaks at 2θ = 30.1° and 35.4° in the chitin/EM@Fe₃O₄ nanocomposite hydrogel samples proves that the Fe₃O₄ particles have been well incorporated into the hydrogel matrix. In addition, the diffraction peak at 2θ = 8.6° in the chitin/EM sample shifted to 9.3°. The strong interaction between the Fe₃O₄ particles and the hydrogel polymer network, together with a certain degree of chemical crosslinking, may jointly contribute to this phenomenon.
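As a side note for readers checking the phase assignment, the interplanar spacings implied by these 2θ positions follow from Bragg's law with the Cu Kα wavelength quoted in the Characterization section. A minimal sketch (first-order reflection assumed):

```python
import math

WAVELENGTH = 1.542  # Cu K-alpha wavelength in angstroms, quoted above

def d_spacing(two_theta_deg: float) -> float:
    """Bragg's law with n = 1: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * math.sin(theta))

# 2-theta peaks reported for the Fe3O4 pattern and their reflection planes
for plane, two_theta in [("220", 30.1), ("311", 35.4), ("400", 43.1),
                         ("511", 57.0), ("440", 62.7)]:
    print(f"({plane}): 2theta = {two_theta:4.1f} deg, d = {d_spacing(two_theta):.3f} A")
```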
More information on the mutual interactions was identified from the ATR-FTIR spectra (Fig. 1h). As shown, all the tested samples exhibit prominent absorption bands around 3287-3432 cm⁻¹, which are ascribed to the stretching modes of O-H and N-H groups. These features ensure that the EM proteins and Fe₃O₄ particles could be chemically bonded to chitin in the presence of epichlorohydrin. The peaks in the nanocomposite hydrogel located at 1652 cm⁻¹, 1538 cm⁻¹ and 1376 cm⁻¹ mainly originate from the amide I, amide II and amide III bands of chitin. The characteristic peaks at 1559 and 1382 cm⁻¹, associated with carboxylate groups in the Fe₃O₄ particles, shifted to the lower wavenumber region and appear to be overlapped by the amide (II and III) bands, suggesting strong hydrogen bonding interactions between the Fe₃O₄ particles and the polymer chains. Thermogravimetric analysis (Fig. 1i) illustrates that the thermal stability of the chitin/EM@Fe₃O₄ nanocomposite hydrogel was slightly enhanced by the incorporation of EM and Fe₃O₄ particles. According to the percentage of weight loss in the N₂ atmosphere, the contents of EM and Fe₃O₄ particles in the given nanocomposite system were estimated to be 23.1 wt% and 3.1 wt%, respectively.

Adsorption kinetics of Pb²⁺ into chitin/EM@Fe₃O₄ hydrogel

Fig. 2 presents the equilibrium adsorption capacity for Pb²⁺ for the different adsorbents. As indicated, the three hydrogel species displayed differentiated efficiency. A relatively higher q_e value was observed for the chitin/EM hydrogel samples than for the pure chitin hydrogel, implying that the abundant groups such as -NH₂ and -COOH in the EM protein provide more sites to anchor the heavy metal ions. Among these adsorbents, the chitin/EM@Fe₃O₄ nanocomposite hydrogel showed the highest q_e value. The addition of Fe₃O₄ nanoparticles to the hydrogel matrix is favorable for the chelation of Pb²⁺ ions: the citrate-stabilized Fe₃O₄ nanoparticles possess a large number of -COOH groups on the surface [30]. Besides, the magnetite nanoparticles show the characteristics of high surface area and a unique cubic inverse spinel structure. These features are conducive to interactions with Pb²⁺ ions [26]. With respect to the improvement in adsorption capacity, the chitin/EM@Fe₃O₄ nanocomposite hydrogel can be taken as a promising candidate for water treatment.

The adsorption behavior of Pb²⁺ into the chitin/EM@Fe₃O₄ hydrogel was extensively investigated. Fig. 3a depicts the dependence of the q_e value for Pb²⁺ adsorption into the nanocomposite adsorbent on the incubation time in aqueous solution. The uptake increased almost linearly in the initial period, implying a quick and efficient adsorption process on the surface of the chitin/EM@Fe₃O₄ hydrogel. Upon 24 h incubation, the biosorption equilibrium was established. The time profile for the Pb²⁺ uptake in water was smooth and continuous until equilibrium, suggesting that monolayer adsorption possibly occurred [31].
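The paper reports uptakes in mmol g⁻¹ from ICP-OES concentrations; the conversion itself is not written out, but under the usual batch mass-balance assumption it is q_t = (C₀ − C_t)·V/m. A sketch with placeholder numbers (only the 1.0 g / 30 mL / 5.0 mmol L⁻¹ conditions are taken from the text):

```python
def uptake_mmol_per_g(c0: float, ct: float, volume_l: float, mass_g: float) -> float:
    """Batch mass balance (assumed): q_t = (C0 - Ct) * V / m.

    c0, ct: initial and current Pb2+ concentrations in mmol/L.
    """
    return (c0 - ct) * volume_l / mass_g

# Conditions from the kinetic runs: 1.0 g hydrogel in 30 mL of 5.0 mmol/L Pb2+.
# The residual concentration below is illustrative, not a measured value.
print(uptake_mmol_per_g(c0=5.0, ct=3.43, volume_l=0.030, mass_g=1.0))  # ~0.047 mmol/g
```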
To analyze the adsorption kinetics of Pb²⁺ into the as-prepared nanocomposite hydrogel, several basic models, including the pseudo-first-order kinetics, pseudo-second-order kinetics, and intra-particle diffusion models, were applied. The related parameters obtained from these equations are of great significance for designing and modeling the adsorption process. The pseudo-first-order kinetic equation can be expressed as follows [32]:

ln(q_e − q_t) = ln q_1 − k_1·t

The pseudo-second-order kinetic model can be depicted as:

t/q_t = 1/(k_2·q_2²) + t/q_2

where q_t is the adsorption amount of Pb²⁺ (mmol g⁻¹) in the chitin/EM@Fe₃O₄ hydrogel at time t; q_1 (mmol g⁻¹) and k_1 (h⁻¹) are the maximum adsorption capacity and rate constant for the pseudo-first-order model, while q_2 (mmol g⁻¹) and k_2 (g mmol⁻¹ h⁻¹) represent the maximum adsorption capacity and rate constant for the pseudo-second-order model. The values of k_1, k_2, q_1, q_2 and the correlation coefficients R_1² and R_2² for Pb²⁺ adsorption can be drawn from the linear fitting plots of ln(q_e − q_t) versus t (Fig. 3b) and t/q_t versus t (Fig. 3c), respectively. As listed in Table 1, the maximum adsorption capacity q_2 for Pb²⁺ in aqueous solution is almost the same as the actual experimental value (Fig. 3a). Meanwhile, the correlation coefficient (R_2²) for the pseudo-second-order kinetic model is 0.9966, which is much closer to 1.0. Hence, the adsorption in water follows the pseudo-second-order kinetic model, implying that Pb²⁺ adsorption in the chitin/EM@Fe₃O₄ hydrogel is controlled by inner surface adsorption. In other words, the adsorption behavior for Pb²⁺ is dominated by chemical adsorption [33].

The adsorption process was further investigated by employing the intra-particle diffusion model. Generally, in many cases the intra-particle diffusion into the interior of the adsorbent through the pores is considered to be the rate-controlling step, where the uptake of solutes increases almost linearly with t^0.5 rather than with the contact time t [34]. Therefore, the adsorption equation can be written as:

q_t = k_p,i·t^0.5 + C_i

where k_p,i is the intra-particle diffusion rate constant at stage i (mmol g⁻¹ h^-0.5), and the intercept C_i represents the thickness of the boundary layer. According to the theory, q_t versus t^0.5 should be linear when intra-particle diffusion occurs in the adsorption process. Otherwise, additional mechanisms may be involved.
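The linear fits of Fig. 3b and c amount to ordinary least squares on the two linearized forms above; the sketch below uses numpy and synthetic points, not the paper's raw data:

```python
import numpy as np

# Synthetic uptake curve (mmol/g) versus time (h) -- placeholders, not the paper's raw data.
t = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
q_t = np.array([0.012, 0.020, 0.029, 0.038, 0.042, 0.044, 0.046, 0.047])
q_e = 0.047  # equilibrium uptake reached at 24 h

# Pseudo-first-order: ln(q_e - q_t) = ln(q_1) - k_1 * t
mask = q_t < q_e                      # avoid log(0) at the equilibrium point
slope1, intercept1 = np.polyfit(t[mask], np.log(q_e - q_t[mask]), 1)
k1, q1 = -slope1, float(np.exp(intercept1))

# Pseudo-second-order: t/q_t = 1/(k_2 * q_2**2) + t/q_2
slope2, intercept2 = np.polyfit(t, t / q_t, 1)
q2 = 1.0 / slope2
k2 = 1.0 / (intercept2 * q2**2)

print(f"PFO: k1 = {k1:.4f} 1/h,        q1 = {q1:.4f} mmol/g")
print(f"PSO: k2 = {k2:.4f} g/(mmol h), q2 = {q2:.4f} mmol/g")
```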
As shown in Fig. 3d, the intra-particle diffusion fitting plot for Pb²⁺ uptake into the chitin/EM@Fe₃O₄ hydrogel consists of three stages. None of the linear segments pass through the origin, demonstrating that intra-particle diffusion alone could not determine the overall rate of mass transfer at the initial stage of the adsorption process. Both the film diffusion (chemical reaction) in the first linear region and the subsequent pore diffusion in the second linear segment contribute to the adsorption rate. More evidence could be clarified from Boyd's model [35], which is given as:

F = 1 − (6/π²)·Σ (1/n²)·exp(−n²·Bt), n = 1, 2, …, ∞    (5)

where F is the fractional attainment of equilibrium at contact time t, and Bt is a function of F, with

F = q_t/q_e

where q_t and q_e are the amounts of Pb²⁺ loaded (mmol g⁻¹) into the chitin/EM@Fe₃O₄ hydrogel at time t and at equilibrium, respectively. Then eqn (5) can be rewritten, in its common approximate form, as:

Bt = −0.4977 − ln(1 − F)

The rate-controlling step in the adsorption of Pb²⁺ into the chitin/EM@Fe₃O₄ hydrogel can thus be distinguished from the Boyd plots. If the values of Bt change linearly with the incubation time t and pass through the origin, the rate of mass transfer is controlled by pore diffusion. If the plot is nonlinear, or linear without passing through the origin, film diffusion or chemical reaction mainly dominates the adsorption rate. As illustrated in Fig. 3e, the fitting plot for the adsorption of Pb²⁺ exhibits nonlinear behavior, clearly indicating that film diffusion or chemical reaction is the rate-limiting step in the initial period, followed by intra-particle diffusion.

Adsorption isotherms of Pb²⁺ into chitin/EM@Fe₃O₄ hydrogel

The adsorption efficiency strongly depends on the initial concentration of heavy metal ions in solution. Usually, the adsorption capacity is enhanced as the concentration of metal ions in the solution increases [31]. As shown in Fig. 4a, the amount of Pb²⁺ (q_e) adsorbed into the chitin/EM@Fe₃O₄ hydrogel increased from 0.013 to 0.070 mmol g⁻¹ when the initial concentration of Pb²⁺ increased from 0.7 to 18.0 mmol L⁻¹ at 293 K. The relationship between the adsorption capacity at fixed temperature and the ion concentration at equilibrium can be described by the equilibrium adsorption isotherms, including the Langmuir and Freundlich equations; this is of great importance for learning more about the mutual interactions and for optimizing the dosage of adsorbents. The Langmuir isotherm model is established in the following form:

q_e = q_max·b·C_e/(1 + b·C_e)

where q_max (mmol g⁻¹) is the maximum adsorption capacity associated with monolayer coverage on the surface, and b is the Langmuir constant (L mmol⁻¹), reflecting the uptake efficiency. Unlike the Langmuir model, the Freundlich isotherm is an empirical equation for studying multilayer adsorption on a heterogeneous surface:

q_e = K_F·C_e^(1/n)

where K_F is the Freundlich constant and the exponent 1/n is the heterogeneity factor, indicating the adsorption capacity and intensity, respectively.
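Both isotherms can likewise be fitted in their linearized forms, C_e/q_e = C_e/q_max + 1/(q_max·b) for Langmuir and ln q_e = ln K_F + (1/n)·ln C_e for Freundlich. A sketch with placeholder equilibrium data in the spirit of Fig. 4a:

```python
import numpy as np

# Placeholder equilibrium data in the spirit of Fig. 4a -- not the measured points.
c_e = np.array([0.5, 1.5, 3.0, 6.0, 10.0, 16.0])            # mmol/L at equilibrium
q_e = np.array([0.013, 0.028, 0.042, 0.055, 0.063, 0.070])  # mmol/g

# Langmuir, linearized: C_e/q_e = C_e/q_max + 1/(q_max * b)
slope_L, intercept_L = np.polyfit(c_e, c_e / q_e, 1)
q_max = 1.0 / slope_L
b = 1.0 / (intercept_L * q_max)

# Freundlich, linearized: ln(q_e) = ln(K_F) + (1/n) * ln(C_e)
slope_F, intercept_F = np.polyfit(np.log(c_e), np.log(q_e), 1)
one_over_n, K_F = slope_F, float(np.exp(intercept_F))

print(f"Langmuir:   q_max = {q_max:.3f} mmol/g, b = {b:.3f} L/mmol")
print(f"Freundlich: 1/n = {one_over_n:.3f}, K_F = {K_F:.3f}")
```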
Fig. 4b and c present the Langmuir and Freundlich profiles for Pb²⁺ adsorption in the absorbents at 293 K, respectively, and the related parameters are summarized in Table 2. In the Langmuir model, the value of q_max was estimated to be 0.079 mmol g⁻¹, implying that the as-fabricated chitin/EM@Fe₃O₄ nanocomposite hydrogel displays a high adsorption capacity for Pb²⁺ in aqueous solution. As for the Freundlich fitting, the value of 1/n was determined to be 0.542 at 293 K, revealing that Pb²⁺ can be easily anchored into the absorbents. In the Freundlich equation, the constant 1/n is closely related to the adsorption intensity of the adsorbent: when the value of 1/n lies in the range 0.5 < 1/n ≤ 1, adsorbates are easily anchored on the matrix [37]. Based on the fact that the regression coefficient R² obtained from the Langmuir model (R² = 0.9727) is higher than that of the Freundlich model (R² = 0.9301), it can be concluded that the uptake of Pb²⁺ into the chitin/EM@Fe₃O₄ hydrogel follows Langmuir adsorption. Namely, the adsorption process in water is primarily dominated by monolayer adsorption.

The values of ΔH° and ΔS° can be calculated by the van't Hoff equation:

ln K_d = −ΔH°/(R·T) + ΔS°/R

where K_d is the standard thermodynamic equilibrium constant at the absolute temperature T (K), and R is the gas constant (8.314 J mol⁻¹ K⁻¹).

As shown in Table 3, the value of ΔG° for Pb²⁺ adsorption into the chitin/EM@Fe₃O₄ hydrogel was negative, indicating that the adsorption of Pb²⁺ into the absorbents underwent a favorable and spontaneous process. The decrease of ΔG° with increasing temperature suggests that a higher temperature is beneficial to Pb²⁺ adsorption in solution. Furthermore, according to the linear fitting profile of ln K_d versus 1/T (Fig. 5b), the values of ΔH° and ΔS° in water were determined to be 48.6 kJ mol⁻¹ and 267.3 J mol⁻¹ K⁻¹. The positive value of ΔH° demonstrates the endothermic nature of the adsorption process and the existence of an energy barrier. The large positive value of ΔS° proves the higher binding ability of Pb²⁺ to the nanocomposite matrix and the increased randomness at the solid-solution interface [39].

Effects of pH on Pb²⁺ adsorption behavior

Plenty of functional groups, such as hydroxy, amino, carboxyl, acetyl amino and hemiacetal groups, exist in the chitin/EM@Fe₃O₄ nanocomposite hydrogel. The protonation/deprotonation of these groups is susceptible to the environmental pH. Therefore, pH plays an important role in regulating the adsorption behavior for metal ions. To avoid the precipitation of Pb²⁺ at higher pH, the pH values of the solutions were adjusted within the range of 1.0-5.0 for investigation. As illustrated in Fig. 6, the as-prepared adsorbent shows different Pb²⁺ adsorption behavior across this pH range. The adsorption capacity increased sharply as the pH value increased from 1.0 to 5.0.
The zeta potentials for the hydrogel samples are positive (Fig. S2†), revealing that the amino groups are protonated over the whole pH range. This results in an electrostatic repulsive force with Pb²⁺ and prevents metal complex formation. Hence, the increase in adsorption capacity is mainly relevant to the deprotonation of the carboxyl groups and the electronegative oxygen atoms in the hydroxy, acetyl amino and hemiacetal groups. At low pH, protonation of active sites such as carboxylates takes place, which is unfavorable for Pb²⁺ binding to the biosorbents. Meanwhile, abundant hydrogen ions (H⁺) restrain the activity of the electronegative oxygen, thus leading to the decrease in adsorption capacity [40]. As the pH increases, ionization of the -COOH groups from the citrate-stabilized Fe₃O₄ nanoparticles and the EM protein occurs, and in turn the increased negative charge of the -COO⁻ groups gives rise to a stronger binding ability to heavy metal ions [41]. Moreover, the lone pairs of electrons on the neutral oxygen atoms (the hemiacetal oxygen atom of the anhydro-glucose at the non-reducing end and the hydroxyl groups of an anhydro-glucose unit at the reducing end) under high pH conditions are beneficial to the formation of coordination bonds between the O atoms and the lead atoms [42].

Regeneration

For an ideal adsorbent, how to recover and maintain the original adsorption capacity is vital for practical application. In this study, the Pb²⁺-loaded nanocomposite hydrogels were regenerated using 0.05 mol L⁻¹ HCl aqueous solution. For comparison, 0.1 mol L⁻¹ EDTA aqueous solution was also introduced as another static eluting agent. Since the Fe₃O₄ nanoparticle component of the chitin/EM@Fe₃O₄ nanocomposite hydrogel may be dissolved or chelated in the eluting solution, the weight loss of Fe was firstly explored. From the original and final weight contents of Fe in the hydrogel samples, the weight loss can be calculated as:

Weight loss (%) = [(m₀(Fe) − m_t(Fe))/m₀(Fe)] × 100

where m₀(Fe) (g) is the original weight content of Fe in the hydrogel sample, and m_t(Fe) (g) is the weight content of Fe after incubating the hydrogel sample in the eluting solution for a given time t. As shown in Fig. 7, after immersing the hydrogel samples in the eluting agents of diluted HCl and EDTA aqueous solutions for 48 h, the corresponding losses of Fe from the matrix were evaluated to be 10.6% and 5.7%, respectively. The weight loss for the hydrogel samples in HCl aqueous solution is relatively higher than in EDTA aqueous solution, implying that the nanocomposite biosorbents display higher stability in EDTA solution during the regeneration process. Although some weight loss occurs in both cases, the values are acceptable and both can still be utilized as candidate eluting agents. Alternatively, in order to minimize the weight loss of Fe during regeneration, the elution time can be limited to 0.5-1 h in each cycle, which has been demonstrated to be sufficient to remove the adsorbed heavy metal ions from the matrix. Similar processing methods have been readily applied in the regeneration of various adsorbents, such as Fe₃O₄/sawdust carbon [43], Fe₃O₄-modified sugarcane bagasse [44] or CMC/alginate/graphene oxide@Fe₃O₄ [45].
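The weight-loss expression reduces to a one-line calculation; the masses below are placeholders chosen only to reproduce the reported 10.6% (HCl) and 5.7% (EDTA) losses:

```python
def fe_weight_loss_percent(m0_fe_g: float, mt_fe_g: float) -> float:
    """Weight loss (%) = (m0(Fe) - mt(Fe)) / m0(Fe) * 100."""
    return (m0_fe_g - mt_fe_g) / m0_fe_g * 100.0

# Placeholder masses chosen to reproduce the reported losses after 48 h.
print(fe_weight_loss_percent(0.1000, 0.0894))  # HCl eluent  -> 10.6 %
print(fe_weight_loss_percent(0.1000, 0.0943))  # EDTA eluent ->  5.7 %
```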
Further, the regeneration behaviors were investigated. As listed in Table 4, the desorption ratios of Pb²⁺ from the matrix using EDTA aqueous solution were slightly higher than with diluted HCl solution in each cycle, confirming the better elution ability. Irrespective of the experimental deviation, these results reveal that most of the Pb²⁺ adsorbed in the chitin/EM@Fe₃O₄ hydrogel could be removed in both cases, testifying to the potential re-usability of the adsorbents. When the regenerated materials were placed in Pb²⁺ aqueous solution under the same incubation conditions, they displayed a slightly decreasing tendency in loading efficiency after each cycle. This small, reasonable decrease in adsorption capacity may be due to the static elution method, which results in incomplete desorption of ions from the matrix. In spite of this, the developed chitin/EM@Fe₃O₄ hydrogel could still be a good candidate for future application.

Conclusion

A novel kind of chitin/EM@Fe₃O₄ nanocomposite hydrogel derived from the biowastes of egg shell membrane and chitin was successfully prepared from the NaOH/urea aqueous system. The obtained hydrogel adsorbent displays enhanced adsorption efficiency for Pb²⁺ in aqueous solution with the addition of egg shell membrane and citrate-stabilized Fe₃O₄ nanoparticles in the matrix. The good correlation coefficient (0.9966) demonstrates that the adsorption process in water obeys the pseudo-second-order kinetic model, implying that Pb²⁺ uptake into the hydrogel is mainly controlled by inner surface adsorption. Moreover, the adsorption behavior strongly depends on the initial concentration of Pb²⁺, the pH value and the incubation temperature. The adsorption of Pb²⁺ into this absorbent can be suitably described by the Langmuir isotherm, and the thermodynamic analysis indicates that the adsorption was spontaneous and endothermic. In addition, the nanocomposite bio-sorbents still display high uptake capacity after three cycles. This work paves a way for the utilization of novel biomass-based absorbents for rapid and high-capacity removal of heavy metal ions.

Fig. 1 (a) Photographs of the as-prepared hydrogel samples. (b) Magnetic hysteresis loops of Fe₃O₄ particles and the chitin/EM@Fe₃O₄ hydrogel sample. (Inset) Pictures of Fe₃O₄ nanoparticles in aqueous solution before and after suffering from a magnetic field. (c) EDS spectra for the chitin/EM@Fe₃O₄ hydrogel sample. (d) TEM image and (e) high resolution TEM image of the Fe₃O₄ nanoparticles dispersed in the chitin/EM@Fe₃O₄ hydrogel. (Inset) The corresponding size distribution of the Fe₃O₄ particles. (f) SAED pattern for the corresponding location. XRD patterns (g), TGA curves (h) and FT-IR spectra (i) for the specimens.

Fig. 3 (a) The effect of contact time on Pb²⁺ adsorption in the chitin/EM@Fe₃O₄ hydrogel at 20 °C in pH = 5 aqueous solution. The corresponding pseudo-first-order model (b), pseudo-second-order model (c), intra-particle diffusion plots (d) and Boyd plots (e) for Pb²⁺ adsorption in the chitin/EM@Fe₃O₄ hydrogel at 20 °C in pH = 5 aqueous solution.
Fig. 5a depicts the effect of incubation temperature on the uptake capacity for Pb²⁺ adsorption into the chitin/EM@Fe₃O₄ hydrogel in water. The equilibrium adsorption capacity slightly increased from 0.043 to 0.050 mmol g⁻¹ as the temperature was elevated from 4 to 37 °C, revealing the endothermic nature of this adsorption process. With the aim of exploring the endothermic nature of the adsorption process more intensively, the thermodynamic parameters, such as standard free energy (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°), were determined by utilizing the following equation [38]:

ΔG° = −RT·ln K_d    (10)

Fig. 6 Effects of pH on the uptake capacity of Pb²⁺ adsorption in the chitin/EM@Fe₃O₄ hydrogel at 20 °C in aqueous solution.

Fig. 7 The weight loss of Fe as a function of immersion time for chitin/EM@Fe₃O₄ hydrogel samples in different eluting media at 20 °C.

Table 1 Kinetic parameters for the adsorption of Pb²⁺ into the chitin/EM@Fe₃O₄ hydrogel at 20 °C in aqueous solution.

Table 2 Langmuir and Freundlich parameters for Pb²⁺ adsorption into the chitin/EM@Fe₃O₄ hydrogel at 20 °C in aqueous solution.

Table 3 Thermodynamic parameters for Pb²⁺ adsorption into the chitin/EM@Fe₃O₄ hydrogel at 20 °C in aqueous solution.

Table 4 Repeated adsorption capacity and desorption efficiency of Pb²⁺ into the chitin/EM@Fe₃O₄ hydrogel in the desorption-regeneration cycle.
Systematic Experimental Evaluation of Function Based Cellular Lattice Structure Manufactured by 3D Printing

Additive manufacturing (AM) has great potential to construct lighter parts having complex geometries at no additional cost by embedding cellular lattice structures within an object. The geometry of a lattice structure can be engineered to achieve improved strength and an extra level of performance, with the advantage of consuming less material and energy. This paper provides a systematic experimental evaluation of a series of cellular lattice structures embedded within a cylindrical specimen and constructed according to the terms and requirements of ASTM D1621-16, the standard for the compressive properties of rigid cellular plastics. The modeling of the test specimens is based on function representation (FRep), and the specimens are constructed by fused deposition modeling (FDM) technology. Two different test series, each having eleven test specimens of different parameters, are printed along with their replicates at 70% and 100% infill density. Test specimens are subjected to a uniaxial compressive load to produce 13% deformation of the height of the specimen. Comparison of the results reveals that specimens having a cellular lattice structure and printed with 70% infill density exhibit greater strength and an improvement in strength-to-mass ratio compared to the solid printed specimen without a structure. The study also shows that infill density, along with the pattern of material distribution, plays an important role in the improvement of compressive strength. The results of the study can be successfully applied, according to compressive strength requirements, to different regions of objects under compression.

Introduction

Additive manufacturing (AM), mostly referred to as 3D printing, is the family of processes used to manufacture parts by accumulating thin layers of material over the previously deposited one. The pattern of deposition follows 3D CAD data obtained through slicing of a digital model. Each layer is deposited in the x-y plane, whereas layer-upon-layer deposition in the z direction determines the height of the 3D printed object. Application areas of AM include the automotive sector [1], aerospace applications [2], the marine, oil and gas sector, heavy machinery, the consumer sector [3], the biomedical field [4], architectural miniature models, civil construction works [5,6], the food industry [7], and repair of degraded parts [8]. The variety in AM processes arises from processes designed to use a variety of materials at different physical conditions during manufacture [9]. In 1991, three commercial AM machines based on different processes were introduced: Fused Deposition Modeling (FDM) by Stratasys, Solid Ground Curing (SGC) by Cubital, and Laminated Object Manufacturing (LOM) by Helisys [10]. Among all AM machines, FDM ranks third among top-of-the-line machines for its ability to produce functional parts of complex geometry using various thermoplastic materials safely within a closed environment [11]. The FDM process involves layer-by-layer deposition of material extruded through a nozzle. During extrusion, the nozzle follows a predefined path created according to the geometry of each layer. The commonly used material is plastic or wax, supplied in the form of a filament and heated through heating coils. The heating coils are attached to the nozzle such that the material can be maintained at 0.5 °C above the melting temperature before extrusion.
Extruded material fuses and solidifies immediately over the previously deposited layer. Thus, a layer-by-layer deposition of material is carried out on a heated platform, which allows minimum distortion in the part. After completion of a single layer, space is provided according to the diameter of the nozzle for the next layer, as shown in Figure 1.

During the slicing operation for FDM printing, supports are added to eliminate the risk of collapsing material in areas that hang freely and extend outwards beyond the main geometry of the model. Break-away support structures can be easily removed after the completion of the building process [12]. Along with the support structure, the infill material is specified during the slicing operation. Infill is the amount of material used to fill the inside cavity of the model, specified as a percentage of the cross-sectional area, ranging from 100% to 0% infill material to build the model [13].

Cellular structures are ordered materials that are constructed by the repetition of a unit cell [14]. The promising properties of these materials are their lightweight construction and their higher strength and energy absorption with minimum material consumption. The principles that drive the properties of cellular solids depend on the material, the joining pattern of the cells through edges, faces, or vertices, and the relative density of the structure [15][16][17]. Cellular structures can be segregated as naturally occurring or artificially produced by humans, and include structures made up of cells with open or closed boundaries, randomly distributed or periodic, and 2D or 3D in structure [11]. Nature has provided an unlimited variety of these structures around and within us. The most common naturally occurring cellular structures are wood [17], bone structure [18], corals and cork [15], the toucan beak [19], and many more. Humans take inspiration from the variety of design and use of these structures by nature and adopt them in their constructions.

Lattice structure was initially defined by Gibson and Ashby as a three-dimensional cellular structure having cell walls that follow random orientation in space [15]. Although initially so defined, many researchers have their own understandings of lattice structure, based upon the definition by Gibson and Ashby, as lattice structure does not rely completely on the dimensions and connecting style of the struts and nodes. In general, a lattice structure is defined as being composed of interconnected cells, arranged in a repeated manner, to build a three-dimensional structure, where the cells are an assembly of struts and nodes [20][21][22][23].
Based upon the placement pattern of the unit cells in three-dimensional Euclidean space, lattice structures are divided into three categories [24]:

1. Disordered lattice structure: the distribution of unit cells within the defined space does not follow any pattern and is arbitrary and unplanned, as shown by Figure 2a.
2. Periodic lattice structure: the distribution of unit cells is repeated periodically, with the same topology, geometry, and size, as shown in Figure 2b.
3. Pseudo-periodic lattice structure: also known as conformal lattice structure, as depicted in Figure 2c; each unit cell holds the same topology with variations in shape and size.

Modeling of cellular lattice structures has depended on traditional surface-related approaches: boundary representation and voxels (discrete volume representation).
It is observed that these approaches are successful up to some extent and are limited by the design complexities of regular cellular lattice structures. The problems intensify further when modeling irregular cellular lattice structures. In such cases, each representation requires time to render the object and large processing power, which mostly exceeds the processing power of computing machines. Qualitative problems are related to precision, operability, and manufacturability. Surfaces generated through boundary and voxel representation are problematic to manufacture due to defects (cracks, self-intersections, false and residual polygons) created during modeling, which produce an approximate instead of an exact geometry. Neither representation supports parametrization: if parameters are changed, the user needs to regenerate the model using a high-level procedure [25]. Most of the associated problems of the traditional modeling approaches stated above are addressed by the function-based approach. Some recent research claims that implicit functions are markedly suitable for porous objects as well, as they can successfully be used for the geometric modeling of objects including cellular lattice structures [25][26][27][28].

To investigate the properties of cellular lattice structures, several different unit cell geometries have been used by researchers. Wang et al. utilized a unit truss with five strut members to generate an octet truss conformal cellular lattice structure [29,30]. A complete design and experimental analysis study was conducted by Beyer for cellular lattice structure [31]. Three different types of structures, square pyramidal, tetrahedral, and kagome sourced, were produced and tested for compressive and flexural strengths. The experimental results were analyzed and compared with the corresponding strengths of the solid structure, showing that the strengths are close to those of the solid structure. The research findings from Beyer triggered researchers to investigate the potential of truss lattice structures and surface network structures in compression, to extend their application areas. The study performed by Jansson et al. compares network structures based on truss lattices and periodic surfaces. The structures were designed and analyzed with the finite element software Abaqus, and the findings show the periodic surface networks are much stiffer than truss lattice structures [32]. Several lattice structures were generated through a periodic octet cell, including Octet, Cross, FrameCross, InsideCross, and OctetFramed, to measure their performance in compression [21]. An experimental study by Ravari et al. and Lyibilgin et al. [33,34] was conducted to investigate the mechanical properties of
a variety of different cellular lattice structures, including circle, square, triangle, diamond, honeycomb, and BCCZ, manufactured through the FDM process. The Kagome lattice structure was investigated by Gautum et al., whose study reveals that build orientation and printing imperfections have considerable effects on the mechanical properties of the structure [35]. The acceptance of cellular lattice structures directly relates to the advancements and capabilities of AM processes to create design complexities without increasing manufacturing cost. Hence, it improves the feasibility of manufacturing graded parts [14]. Through the design complexities of cellular lattice structures, some of their properties can be influenced directly. An adjustment in the topology and dimensions of the cell geometry of the structure can lead to a physical response of these structures that displays properties unattainable by their base material [36], including mechanical, acoustic, and dielectric properties [37][38][39]. The higher strength-to-weight ratio and structural thermal conductivity make lattice structures a suitable option for the aerospace industry [40].
According to the details mentioned above, objects made up of cellular lattice structures provide the benefits of reduction in weight and material, a larger surface-area-to-volume ratio, and, for some particular AM processes, lower time and energy requirements. It is therefore predicted that most digitally fabricated solid objects will be constructed via cellular lattice structures in the future [25]. Although a variety of unit cell types have been used to generate cellular lattice structures, a detailed study performed by Helou and Kara [41] highlights that the unit cell types used in previously published research are very limited, around 40 in number, and not all of them are of new design; some are derived from basic existing unit cell types. There is an immediate requirement for innovative designs of cellular lattice structures to be used in different situations and applications. An in-depth survey reveals that none of the researchers who generated series of lattice structures investigated their mechanical performance in compression. We have taken this requirement as the main target of our study, which involves the design, construction, and investigation of the behavior of lattice structures. This study presents an innovative approach to optimize the use of infill material in the inner region by assigning values to the percentage of infill material, the frequency, and the vertical shift, which determine the thickness of and the distance between the bars of the cellular lattice structure. A function-based approach is applied to the design and modeling of the series of cellular lattice structures, and the performance of each member of the series is determined through experimental investigation.

The organization of the manuscript follows the sequence of activities in the study. Section 2 provides the methodology and the information related to the material, machines, process parameters, the standard followed, and the modeling requirements of the cellular lattice structure. Modeling details of the series of cellular lattice structures are discussed in Section 3. Section 4 provides details of the results of the experimental evaluation, and in Section 5 the results and findings are discussed. Section 6 provides the conclusion of the study.

Methodology

This section deals with the step-by-step procedure to design a series of internally built-in cellular lattice structures within objects of closed boundaries, such that each object contains a unique internal structure with dimensions of details orders of magnitude smaller than the overall size of the object. The concepts applied in this study are simple, and a basic knowledge of solid mechanics is needed for the calculations. The compressive strength (σ_c) is calculated by dividing the compressive load (F_c) applied on the specimen by the initial cross-sectional area of the specimen (A_x-sectional). The strength-to-mass ratio is calculated through Equation (1):

Strength to mass ratio = σ_c/M_s    (1)

where M_s is the measured mass of the specimen.

Material

Acrylonitrile Butadiene Styrene (ABS) and Poly-lactic Acid (PLA) are the most commonly used materials in 3D printing for functional requirements. Most published research investigations considered ABS either for investigation or for printing test specimens [42][43][44][45][46]. For our research investigation, we are using Anet PLA 3D printing filament for the main structure, support structures, and infill structures. The material is used as received.
The properties of the material [47][48][49] are shown in Table 1.

Printer and Parameters

Prusa i3, an open-source 3D printer, is used for printing the test specimens. The selected printing parameters are given in Table 2.

Testing Procedure

A Tinius Olsen Hydraulic Universal Testing Machine (300SL), which meets the ASTM E4, BS 1610, DIN 51221, EN 10002-2, and ISO 7500-1 standards, is used for the execution of the experimental tests. The machine is equipped with two hardened steel plates, parallel to each other in a plane perpendicular to the axis of the test specimen. The test specimen is loaded between the plates, ensuring that the outer surface of the specimen is parallel to the compression plates. The compression speed for the test is set to 2.5 mm/min, kept uniform from the start of compression until the specimen reaches a strain level equal to 16% of its original length, as in the ASTM standard document. During testing, the magnitude of the uniaxial compressive load and the percent reduction in height are recorded for further use in calculations.

The test specimens are manufactured and tested according to the "Standard test method for compressive properties of rigid cellular plastics" (ASTM D1621-16). Each test specimen is a closed cylindrical shell, open at the base; the diameter and height are kept at 57.3 mm and 25.4 mm, respectively, as shown in Figure 3a. With respect to the internal geometry, there are two different types of specimen: one has a cellular lattice structure inside the cylindrical shell, as shown in Figure 3b. The details of the geometry and dimensions of the cellular lattice structure are presented in Section 3. The second type of specimen is a hollow cylindrical shell (called the "solid part"), as shown by Figure 3c. The bases of both types of specimen are not closed, and the shell has a uniform wall thickness of 4 mm, as shown by Figure 3b,c. After printing, the mass of each specimen is calculated and also measured with a digital scale with readability up to 0.0001 g (0.1 mg). The masses of the specimens are given in Section 4. The observed variations between calculated and measured masses were found to be less than 2% for all test specimens. For the study, we use the measured values of the masses.
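With the specimen geometry fixed above, Equation (1) from the Methodology reduces to a few lines of arithmetic. A sketch, where the load and mass values are placeholders rather than measured results:

```python
import math

def compressive_strength_mpa(load_n: float, diameter_mm: float) -> float:
    """sigma_c = F_c / A, with A the initial circular cross-section; N/mm^2 == MPa."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return load_n / area_mm2

def strength_to_mass(sigma_c_mpa: float, mass_g: float) -> float:
    """Equation (1): strength-to-mass ratio = sigma_c / M_s."""
    return sigma_c_mpa / mass_g

# Specimen diameter from the ASTM D1621-16 setup above; load and mass are placeholders.
sigma = compressive_strength_mpa(load_n=80_000.0, diameter_mm=57.3)
print(f"sigma_c = {sigma:.1f} MPa, ratio = {strength_to_mass(sigma, 60.0):.3f} MPa/g")
```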
Frequency controls the length of the period, and vertical shift controls the division within a period. The selected values of frequency and vertical shift are according to the size limitations of test specimen following the standard ASTMD1621-16 and printing capabilities of 3D printing machine. The internal lattice structure consists of intersecting vertical and horizontal hollow bars, generated through function representation, as shown in Figure 4a. Figure 4b provides the x-sectional view of the specimen through the bars to show the hollow regions of the bar. (a) (b) Figure 3. Dimensions of test specimen (a) Test specimen overall dimensions (b) X-sectional view of test specimen with internal cellular lattice structure (c) X-sectional view of hollow test specimen (solid part). Modeling of Cellular Lattice Structure As discussed, use of R functions are more efficient than boundary representation and voxel based modeling techniques. We propose to construct a series of objects having unique cellular lattice structure inside and within the closed boundaries, using trigonometric functions and R-functions. For working with function representation, "HyperFun, a modeling language", a simple specialized high level programming language and associated software, designed to set out function based models, is used [50]. To construct the series of cellular lattice structure within a closed cylinder, an algorithm is developed and given at the end of the section. The series is constructed by assigning values to variables, f requency ( f ) and vertical shi f t (v) used in sin function. The values of frequency are taken, which are 19 (f 1 ) and 21 (f 2 ), whereas the vertical shift is varied from "0" to "0.4" with an increment of "0.1". Frequency controls the length of the period, and vertical shift controls the division within a period. The selected values of frequency and vertical shift are according to the size limitations of test specimen following the standard ASTMD1621-16 and printing capabilities of 3D printing machine. The internal lattice structure consists of intersecting vertical and horizontal hollow bars, generated through function representation, as shown in Figure 4a. Figure 4b provides the x-sectional view of the specimen through the bars to show the hollow regions of the bar. masses of each specimen are calculated and measured through a digital scale, with readability up to 0.0001 g (0.1 mg). The masses of the specimen are mentioned in Section 4. The observed variations in calculated and measured masses were found to be less than 2% in all test specimens. For the study, we use the measured values of the masses. Modeling of Cellular Lattice Structure As discussed, use of functions are more efficient than boundary representation and voxel based modeling techniques. We propose to construct a series of objects having unique cellular lattice structure inside and within the closed boundaries, using trigonometric functions and -functions. For working with function representation, "HyperFun, a modeling language", a simple specialized high level programming language and associated software, designed to set out function based models, is used [50]. To construct the series of cellular lattice structure within a closed cylinder, an algorithm is developed and given at the end of the section. The series is constructed by assigning values to variables, consumed to act as support material and provide support to hollow bars. 
Figures 5 and 6 show the internal structure of a test specimen after rendering plus slicing and printing operations. The variation in the thickness of the hollow bars and material distribution for the same frequency and different values of vertical shift can be seen by comparing Figure 5a with Figures 6a and 5b with Figure 6b, respectively. During printing, material accumulates to form the outer surfaces of the hollow bar, and the same material (PLA) accumulates in the space between the hollow bars to form intersecting vertical and horizontal column-like structure (solid bar). Some extra material is consumed to act as support material and provide support to hollow bars. Figures 5 and 6 show the internal structure of a test specimen after rendering plus slicing and printing operations. The variation in the thickness of the hollow bars and material distribution for the same frequency and different values of vertical shift can be seen by comparing Figure 5a with Figure 6a and Figure 5b with Figure 6b, respectively. The outer shape, inside structure, and geometry of the rendered and printed test specimen is shown in Figure 7a-c, respectively. Figure 7b shows the magnified view of the internal structure containing the dimensions after rendering. Here, "p" is the thickness of the hollow bar and "q" is the distance between the hollow bars, which is empty space. The empty space between the hollow bars, shown in Figure 7b, is then fillled by the same material (PLA) during printing, as shown by Figure 7c, to form solid bars. Tables 3 and 4 show dimensions "p" and "q" in millimeters related to each specific test specimen. During printing, material accumulates to form the outer surfaces of the hollow bar, and the same material (PLA) accumulates in the space between the hollow bars to form intersecting vertical and horizontal column-like structure (solid bar). Some extra material is consumed to act as support material and provide support to hollow bars. Figures 5 and 6 show the internal structure of a test specimen after rendering plus slicing and printing operations. The variation in the thickness of the hollow bars and material distribution for the same frequency and different values of vertical shift can be seen by comparing Figure 5a The outer shape, inside structure, and geometry of the rendered and printed test specimen is shown in Figure 7a-c, respectively. Figure 7b shows the magnified view of the internal structure containing the dimensions after rendering. Here, "p" is the thickness of the hollow bar and "q" is the distance between the hollow bars, which is empty space. The empty space between the hollow bars, shown in Figure 7b, is then fillled by the same material (PLA) during printing, as shown by Figure 7c, to form solid bars. Tables 3 and 4 show dimensions "p" and "q" in millimeters related to each specific test specimen. The outer shape, inside structure, and geometry of the rendered and printed test specimen is shown in Figure 7a-c, respectively. Figure 7b shows the magnified view of the internal structure containing the dimensions after rendering. Here, "p" is the thickness of the hollow bar and "q" is the distance between the hollow bars, which is empty space. The empty space between the hollow bars, shown in Figure 7b, is then fillled by the same material (PLA) during printing, as shown by Figure 7c, to form solid bars. Tables 3 and 4 show dimensions "p" and "q" in millimeters related to each specific test specimen. 
Algorithm 1, presented below, is used to construct ten unique compressive-strength test specimens with a built-in cellular lattice structure by altering the values of frequency (f) and vertical shift (v). According to the standard, five replicates should be constructed and tested for each type of specimen, but due to time constraints, three replicates are made and tested [51]. Average values of the test results are considered for evaluation. Test specimens are constructed with 100% and 70% infill density, tested, and their results are compared with a specimen of the same infill density without internal structure (solid part). All the specimens are printed through Prusa i3, an open-source 3D printer, keeping the x-y plane of the specimens parallel to the print bed. Top and bottom surfaces of the specimens are made parallel after printing, and their dimensions are kept according to ASTM D1621-16. The x-sectional views of the specimens after rendering are presented in Table 5.

Algorithm 1. Construction of a cellular lattice structure within a closed cylindrical object.

Procedure: regular(x, y, z)
1. Construct slabs orthometric to the (x, y, z) axes: s_x, s_y, s_z.
2. Construct the bars by the intersection operation: b_x = s_y ∧ s_z; b_y = s_x ∧ s_z; b_z = s_x ∧ s_y.
3. Perform the blending union operation over the constructed bars: blend1, blend2, blend3.
4. Construct the infinite lattice structure (rls, regular lattice structure) by the union operation: rls = blend1 ∨ blend2 ∨ blend3.
5. Construct the closed cylindrical shell.

Table 5. X-sectional views of the test specimens constructed through Algorithm 1, given above.

First Test Series

The first test series of eleven test specimens is constructed with 100% infill density. Out of eleven, ten unique test specimens having an internal structure within them are constructed, and one solid test specimen is constructed, as indicated in Table 5. The specimens are constructed with 100% infill density by using the regular infill pattern provided by the software Repetier-Host. Three replicates of each test specimen are made and tested. Tables 6 and 7 show the data summary of the characteristics and results of the first test series.

Second Test Series

The second test series of eleven test specimens is constructed with 70% infill density. Out of eleven, ten unique test specimens having an internal structure within them are constructed, and one solid test specimen is constructed, as indicated in Table 5. The specimens are constructed with 70% infill density by using the regular infill pattern provided by the software Repetier-Host. Three replicates of each test specimen are made and tested. Tables 8 and 9 show the data summary of the characteristics and results of the second test series.
Results and Discussion. Testing of the series 1 and series 2 specimens was carried out as described in Section 3. We have not shown the raw data due to its high volume, around one thousand points per specimen. In this section, we discuss the method used to convert the data into useful information. Our objective in the study is to determine the compressive strength of the structures, for which the load required to produce 13% deformation in the height of the test specimen is applied through a testing machine (Tinius Olsen). Once the data is received from the testing machine, it is imported into an Excel sheet to record the amount of load and the subsequent deformation of the test specimen. The recorded data is used to determine the compressive strength, and the strength-to-mass ratios are calculated through Equation (1). The load and corresponding deflection data are used to calculate the strength exhibited by each specimen against 13% axial deformation. Figure 8a,b provides the graphical representation of the data points. An increasing trend in strength is observed for both the 70% and 100% infill density specimens. Figure 8a provides the comparison, which shows that the solid specimen is much stronger than the specimens having an internal structure with 100% infill density. The performance of the specimens with 70% infill density (see Figure 8b) is more notable, as all of them are of greater strength than the solid specimen with 70% infill density.

Figure 9a,b shows the relationship between specimen mass and vertical shift. A comparison between Figures 8a and 9a shows that, for 100% infill density, the strength of the specimens mostly follows the same trend, where an increase in mass increases the strength. It is also observed that, with an increase in the value of vertical shift, the mass and the strength of the specimen mostly increase. The response of the 70% infill density specimens shows better performance, as indicated by Figure 9b. The graphical representation of mass versus vertical shift shows that the masses of the specimens with an internal structure are equal to, or less than, the mass of the solid specimen. A comparison between Figures 8b and 9b reveals that the mass of a specimen is not the only factor that contributes toward greater strength, as specimens of equal mass do not perform uniformly and have different values of strength.

Figure 10a,b shows the strength-to-mass ratios versus vertical shift of the specimens with 70% and 100% infill density. For the 100% infill density specimens, a zigzag trend is observed for specimens with frequency f1 (19), and an increasing trend is observed for specimens with frequency f2 (21). The strength-to-mass ratio of the solid specimen is higher than the values of the 100% infill density specimens with an internal structure, except for the specimen with frequency 21 and vertical shift 0.4, which has the same strength-to-mass ratio. Similar results are observed in the research study of Beyer et al. [31], where the performance of the Kagome type of lattice structure in compression is equal, in magnitude, to the solid specimen. The performance of the specimens with 70% infill density is highly regarded, with all observations greater than the strength-to-mass ratio of the solid specimen. A continuously increasing trend is observed for both frequencies.
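The data reduction described at the start of this section can be sketched in a few lines: interpolate the load at 13% of the specimen height, divide by the cross-sectional area, and then divide the resulting strength by the specimen mass. Equation (1) itself is not reproduced here; the standard stress definition and every number below (the toy load curve, specimen height, area, and mass) are illustrative assumptions.

```python
import numpy as np

def compressive_strength(load_N, deflection_mm, height_mm, area_mm2):
    # Load at 13% axial deformation, converted to stress (N/mm^2 = MPa).
    target = 0.13 * height_mm
    load_at_target = np.interp(target, deflection_mm, load_N)
    return load_at_target / area_mm2

# Hypothetical load-deflection record (~1000 points per specimen).
defl = np.linspace(0, 8, 1000)              # mm
load = 3500.0 * (1 - np.exp(-defl / 2.0))   # N, toy hardening curve
strength = compressive_strength(load, defl, height_mm=50.0, area_mm2=1963.5)
mass_g = 60.1
print(f"strength = {strength:.1f} MPa, strength/mass = {strength / mass_g:.3f} MPa/g")
```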
Results of the second test series are more noteworthy, as the performance of each specimen of this series is much better than that of the solid specimen with 70% infill density, as shown in Tables 8 and 9.
From the graphical representations in Figures 9 and 10, it can be seen that, for minor increases in specimen mass, the improvement in strength-to-mass ratio relative to the solid specimen ranges from 23.6% (min.) to 68.4% (max.). It is also observed that the masses of the specimens range from 58.4 g to 62.72 g, and their performances are in the range of 27.8 MPa to 40.2 MPa; i.e., for a 7.4% increase in material, their performance is 44.6% better than the specimen with the lowest values. For the first series, a 7.7% increase in material yields a 17.5% increase in strength. Performance in the second series is further investigated and evaluated according to the material distribution within the sample. Figures 8 and 9 show that the amount of material utilized for the construction of the specimens is approximately the same, but their strength-to-mass ratios differ by more than the minor difference in their masses, if any. Comparing the values of strength-to-mass ratio shows that the higher values relate to frequency 21, and the material distribution in the higher-frequency specimens is more widespread in the case of the 70% specimens in contrast to the 100% specimens (refer to Tables 3 and 4). The thickness of the solid bars is smaller for frequency 21 compared to frequency 19, and for specimens of the same mass, the more widely spread material distribution of the specimens with frequency 21 explains their performance. The solid bars serve as pillars internally and provide support to the whole geometry. For these reasons, a greater level of compressive strength is achieved when the 70% infill solid part is compared with the 70% infill cellular lattice structure specimens. The above research findings are supported by previously published research works by Chunze et al. [37,52]. Using the Direct Metal Laser Sintering (DMLS) process, similar observations were made when the unit cell sizes of the cellular lattice structure were reduced to 3 mm, 5 mm, and 7 mm, respectively, within the same volume fraction. With the reduction in cell size, the sizes of the struts also decrease, and during manufacturing the thinner struts cool down much more rapidly, leading to a finer microstructure and stronger struts. Therefore, as the dimensions of the struts reduce, improvement is observed in the strength and mechanical properties of the cellular lattice structure.
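The percentage figures quoted above follow directly from the reported extremes and can be checked in a few lines:

```python
mass_min, mass_max = 58.4, 62.72           # g, reported specimen masses
strength_min, strength_max = 27.8, 40.2    # MPa, reported strengths
print(f"material increase: {(mass_max / mass_min - 1) * 100:.1f}%")          # -> 7.4%
print(f"strength increase: {(strength_max / strength_min - 1) * 100:.1f}%")  # -> 44.6%
```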
Conclusions. This research study has investigated the compressive strength of cellular lattice structures in two different test series manufactured by FDM technology. Comparisons of the results are made according to strength-to-mass ratio, compressive strength, and mass/density of the specimens. The comparison among the 100% infill density specimens (solid specimen versus specimens with an internal structure) shows that the solid specimen is much stronger, as it consumes more material. Samples of 100% infill density with an internal structure perform much better than specimens with 70% infill density, which is evident from the data of the specimens with frequency 21 and vertical shift 0.4, printed with 100% and 70% infill density: the data of the 100% infill specimen shows that, for a 4.2% increase in material, the strength is 26.6% higher than that of the 70% infill specimen. Comparison within the 70% infill strength data suggests the use of specimens having an internal structure instead of a solid structure. These specimens show a balanced compressive strength property along with the benefit of material savings.
Exercise in Adolescent Depression: Fitness, Clinical Outcomes, and BDNF Introduction: Despite the initiation of treatment for depression including medications and evidence-based psychotherapies, many adolescents continue to have depressive symptoms. A 2017 meta-analysis of exercise research for this population summarized that physical activity appears to improve depression symptoms in adolescents, but the need for larger trials was emphasized. Most importantly, the physiological and neurological mechanisms of action through which exercise exerts its antidepressant effects must be explored. Objectives: The objective of this study was to assess the feasibility of a group exercise intervention for adolescents with depressive disorders. To investigate physiologic changes, serum biomarkers were examined, including brain-derived neurotrophic factor (BDNF). Secondary analyses explored relationships among depressive symptoms, exercise self-efficacy, and fitness. Methods: Adolescents with depression (Children's Depression Rating Scale-Revised, CDRS-R ≥ 40) participated in a group intervention for 12 weeks of aerobic exercise (1X in group and 2X independently each week). Blood draws were taken pre- and post-intervention. At weeks 1 and 12, the Balke Fitness Test was administered, measuring exertion ratings and heart rates during treadmill activity. Results: Participants had a significant decrease in depressive symptoms over the 12-week intervention. The mean CDRS-R score of completers was 52.2 at baseline and 29.6 post-intervention, for a decrease of 22.5 points. Paired samples t-tests showed that the decrease in CDRS-R scores from baseline to week 12 was statistically significant [t(12) = 9.12, p < .001]. There was a significant increase in plasma BDNF between baseline and post-intervention for the completers of the exercise intervention [t(12) = -2.6, p < .03]. Reductions in mean exertion ratings on the Balke Fitness Test from the final minute of testing at weeks 1 and 12 were significantly correlated with reductions in CDRS-R scores (r = .57, p = .04). Conclusions: The significant decrease in depressive symptoms over the 12-week intervention suggests that exercise is effective in the treatment of depression in adolescents. Among these adolescents with depressive disorders, there were significant reductions in depression and significant increases in BDNF after 12 weeks of exercise.

Introduction. Data from the Monitoring the Future study demonstrate significant increases in reports of depressive symptoms among adolescents between 1991 and 2018, consistent with increases in depressive disorders and suicides [1]. There is convincing evidence to suggest that serum levels of brain-derived neurotrophic factor (BDNF) are lower for individuals suffering from major depressive disorder [2]. Abnormalities in BDNF levels and in the BDNF signaling pathway may be contributors to suicidal thoughts and behaviors [3]. At the same time, numerous studies suggest that serum levels of BDNF often normalize in response to treatment with antidepressant medications [2]. Unfortunately, many adolescents with depression never seek medical treatment. Even for adolescents taking antidepressant medications in combination with cognitive behavioral therapy, nearly two thirds fail to achieve remission of their depression [4]. Physical activity has been shown to treat depression in adults [5] and has been demonstrated as a feasible treatment for adolescents with depression [6].
A 2017 meta-analysis of exercise research for this population summarized that exercise and physical activity appear to improve depression symptoms in adolescents, especially in clinical samples, suggesting that exercise may indeed be a useful treatment strategy for adolescents with depression [7]. Exercise is also known to increase BDNF levels in the brain, confirmed by the measurement of simultaneous blood samples obtained from the radial artery and the internal jugular vein as human subjects were actively exercising [8,9]. Rodent studies have demonstrated that exercise induces BDNF activation, both peripherally in skeletal muscle and centrally in the hippocampus, via similar pathways [10]. Increases in BDNF appear to be one of the mechanisms through which exercise exerts its antidepressant effects.

Objectives. The objective was to assess the feasibility and effectiveness of a group exercise intervention for adolescents with depressive disorders. Secondary analyses explored relationships among depressive symptoms, exercise self-efficacy, and fitness. To investigate physiologic changes, serum biomarkers were examined pre- and post-intervention, including BDNF. We hypothesized that participation in this 12-week exercise intervention would be associated with significant decreases in symptoms of depression and with increases in measured levels of BDNF.

Methods. Adolescents with depression and low activity levels were consented and accepted into a group exercise intervention for 12 weeks of aerobic exercise (1X in group and 2X independently each week). Blood draws were taken pre- and post-intervention to assess changes in metabolic biomarkers. Participant information is included in Table 1.

Clinical Measures. The Children's Depression Rating Scale, Revised (CDRS-R) [11] is a semi-structured interview assessing depressive symptoms. The CDRS-R was administered at baseline and post-intervention. The Balke Fitness Test is a 6-minute treadmill task with increasing incline and a 6-20 exertion rating [12]; the Balke Protocol is designed to provide a measure of cardiovascular fitness. Stage 1 of the Balke Fitness Test is recorded after 2 minutes: 1 minute at 6% incline and 1 minute at 8% incline at a speed of 3 mph. Stage 5 is recorded after 6 minutes of increasing incline at 3 mph; participants reach the maximum incline of 12% at the fourth minute. This fitness test was conducted at weeks 1 and 12. The Exercise Self-Efficacy Scale (ESES) [13] is an 18-item questionnaire that assesses the extent to which one believes he/she can exercise under certain circumstances. The Suicidal Ideation Questionnaire Junior (SIQ-JR) [14] is a 15-item assessment (6 points maximum per question) of how often an adolescent is contemplating suicide. A score of 23 out of 90 is a cutoff used in establishing the risk for suicide.

Results. The mean CDRS-R score of completers was 52.2 at baseline and 29.6 post-intervention, for a decrease of 22.5 points. Paired samples t-tests showed that the decrease in CDRS-R scores from baseline to week 12 was statistically significant [t(12) = 9.12, p < .001]. When comparing baseline and week 12 values on the Balke Fitness Test, mean exertion scores were significantly reduced for both Stage 1 of testing [t(12) = 2.3, p < .04] and Stage 5 of testing [t(12) = 2.3, p < .04], as seen in Figure 1.
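The analyses reported here are standard paired comparisons and Pearson correlations. For readers who want to reproduce the style of analysis, a minimal sketch follows; the pre/post vectors are synthetic stand-ins sized to match the 13 completers (hence df = 12), not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic CDRS-R scores for 13 completers (df = 12 in the paired test).
pre = rng.normal(52.2, 8.0, size=13)
post = pre - rng.normal(22.5, 6.0, size=13)   # built-in mean improvement

t, p = stats.ttest_rel(pre, post)
print(f"paired t(12) = {t:.2f}, p = {p:.4f}")

# Correlation of symptom reduction with another change score,
# e.g., the reduction in final-minute exertion ratings.
exertion_drop = rng.normal(2.0, 1.0, size=13)
r, pr = stats.pearsonr(pre - post, exertion_drop)
print(f"r = {r:.2f}, p = {pr:.3f}")
```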
Exertion ratings from the final minute of the Balke Fitness Test were negatively correlated with Exercise Self-Efficacy scores at baseline (r = -.53, p < .02) and at week 12 (r = -.82, p = .001). Reductions in mean exertion ratings from the final minute of testing were significantly correlated with reductions in CDRS-R scores (r = .57, p = .04), as seen in Figure 2, and SIQ scores (r = .59, p < .04), as seen in Figure 3. There was a significant increase in plasma BDNF between baseline and post-intervention for the completers of the exercise intervention [t(12) = -2.6, p < .03], as seen in Figure 4.

Limitations. This was a feasibility study to see if group exercise would have the same impact on depression levels for adolescents as has been established for individual exercise [6]. There was no control condition in this study, which required reliance on within-subjects comparisons in our analyses. The small sample size was a limiting variable. While the dropout rate was indicative of the challenges of engaging adolescents with depression in an exercise regimen, it further reduced the sample size. Concurrent treatment may have been a confounding variable. Research participants were allowed to continue their current treatment while they were in this study. If they were on medications, they were expected to be on a stable dose for 4-6 weeks prior to initiation of the exercise intervention. Of the 13 completers, all were either on medications, in psychotherapy, or both. The impact of these treatments occurring simultaneously with the exercise regimen is thus unknown.

Discussion. Exercise should be considered as part of clinical recommendations for all youth and adults with depressive disorders. Research in rodents has demonstrated the important role of BDNF in the molecular mechanisms fostering neuronal plasticity [15]. More recent research has shown that myokines, muscle-derived molecules affecting aspects of metabolism, may serve as "exercise factors" that may contribute to improved brain health [16]. Wrann and colleagues identified similar mechanisms of BDNF induction in both skeletal muscle and the hippocampus via a PGC-1α/FNDC5 pathway [17]. Peripheral FNDC5 is cleaved, resulting in a product called irisin, which enhances BDNF gene expression in the hippocampus, a major site of activation and brain changes following exercise [10,17]. In this intervention, improved fitness, as measured by significant decreases in ratings of physical exertion, predicted reductions in depression and suicidal ideation. Among these adolescents with depression who completed the 12-week exercise intervention, a significant increase in BDNF was observed between baseline and post-intervention.

Conclusions. Exercise is an effective treatment for depression among adolescents. BDNF plays a key role in the neuroplasticity of the brain and its ability to adapt and respond to new challenges. Increases in BDNF are associated with improved brain plasticity and cognition. Similar to the increases in BDNF reported with other antidepressant treatments [18], reductions in depression in these adolescents following exercise may be a result of increases in BDNF. The results of this research project suggest that adolescents with depressive disorders can use exercise as part of their treatment to effectively reduce their symptoms of depression, increasing BDNF levels that improve brain health and promote synaptic plasticity.
Exercise should be considered as part of treatment for adolescents with clinically significant depressive disorders.
Cloud-Based English Multimedia for Universities Test Questions Modeling and Applications This study constructs a cloud computing-based college English multimedia test question model and application through an in-depth study of cloud computing and college English multimedia test questions. The emergence of cloud computing technology undoubtedly provides a new and ideal method to solve test data and paper management problems. This study analyzes the advantages of the Hadoop computing platform and the MapReduce computing model and builds a distributed computing platform based on Hadoop using universities' existing hardware and software resources. The UML model of the system is given, the system is implemented and functionally tested, and the results of the analysis are presented. Multimedia is the critical link to realizing the optimization of English test questions. The proper use of multimedia test questions will undoubtedly become an inevitable trend in the future development of English test questions, which requires every worker on the education front to continuously analyze and study the problems arising from multimedia teaching, summarize the experience of multimedia teaching, and explore new methods of multimedia teaching, so that multimedia teaching can better promote the optimization of English test questions in colleges and universities and better serve education and teaching.

Introduction. Cloud computing is a computing model developed on the basis of the Internet. Cloud computing integrates several computer technologies: distributed computing, parallel computing, utility computing, network storage, virtualization, load balancing, and other traditional computer technologies [1].
The integration of these technologies with network technologies gradually forms cloud computing. Cloud computing provides many services, and end users only need to consume these services without caring about the underlying computer technologies; for example, it shields more complex issues such as data cluster management, massive data processing, and application deployment [2]. Cloud computing is a product of the convergence and development of distributed computing, utility computing, virtualization technology, web services, grid computing, and other technologies [3]. Its goal is that users can make maximal use of virtual resource pools at any time and any place through the network to deal with large-scale computing problems. Cloud computing relies on its powerful computing capacity, so many end users do not have to worry about the computing technology used or the way of access; they can carry out various practical applications on the network through the services provided by the "cloud." With the further reform of university curricula and management, examination management, which is one of the objectives of information construction, has also encountered some challenges, such as the distributed management of test databases, the diversity of examination users, and the wide range of services and security requirements [4]. Using existing equipment and management systems to solve the related problems is one of the topics to be studied. The emergence of cloud computing technology undoubtedly provides a new, more ideal method for solving the above issues. A cloud computing-based test bank management system can continuously enrich the question bank and accurately define the difficulty and differentiation of its items so that examinations achieve comprehensive coverage with reasonable differentiation and difficulty, accurately reflect the actual ability of candidates, and have better reliability and validity, which plays a vital role in realizing the separation of teaching and assessment and promoting the teaching reform of colleges and universities. The cloud-based test bank management system can also reduce costs, improve computing speed, and improve the system's reliability, availability, and scalability by using distributed computing to better realize real-time examinations.

Multimedia technology is widely used in modern English teaching because of its interactive, informative, practical, and easy-to-use features. As an integral part of English testing, test questions are developing rapidly [5]. The test bank has taken shape through sustained effort (the programming language Basic, the database language FoxPro, and multimedia authoring software such as Authorware have been used). Regardless of the size of the test bank and the application software used, as a product of the information age, the multimedia test bank in essence extends traditional paper-based test questions: it can give learners multisensory stimulation through graphics, text, sound, and images, with the advantages of being intuitive, concrete, vivid, and lively. This is conducive to attracting students' attention, mobilizing their interest in learning, helping them acquire perceptual knowledge, and reducing cognitive difficulty, thus enriching the content of the test questions and improving the quality of the test.
The superiority of multimedia test questions in presenting information has led to their increasing use in various examination systems and educational software [6]. This study proposes a multimedia test model based on cloud computing for English language teaching in colleges and universities and designs and develops a multimedia test application. The tool is simple and easy to use and gives users the convenience of creating attractive multimedia test questions quickly and efficiently [7]. Education informatization refers to the adoption of multimedia technology and network technology in traditional education to promote the reform of the education model and make it meet the requirements of the modern information society [8]. The focus of education informatization is teaching informatization. The traditional teaching model is centered on school education, and the teaching method is teacher centered, which leads to limited learning time and space and a single teaching mode [9]. The salient features of teaching informatization are open sharing and interactive collaboration, so that learners can enjoy rich educational resources anytime and anywhere and cooperation is realized between teachers and students and among students. With the promotion of teaching informatization, online teaching and virtual learning platforms have emerged [10]. The maturity of dynamic web technology makes the interactivity and personalization of virtual learning platforms possible. However, the contradiction between massive educational resources and people's limited learning time has led to new demands for intelligent virtual learning platforms [11]. In recent years, the young and vibrant field of text mining has attracted significant attention from society and the information industry. The most important reason is that text mining has become a key technology for coping with the explosive growth of information and for the effective use of data. Text mining is an intelligent information processing technology that transforms massive information into practical knowledge. For long texts with a standardized structure, the mining process extracts the text's keywords based on its structural characteristics and the features of the natural language itself [12]. Then, text retrieval, dynamic digest generation, and classification comparison are carried out, and good research results have been achieved.

Related Works. With the recognition of cloud computing and its role in academia and industry and the joint promotion of its technology, cloud computing and its applications are developing and growing rapidly. Major computer giants such as Amazon, IBM, Google, Microsoft, and Sun have launched their own cloud computing service platforms [13]. At the same time, cloud computing platforms follow the trend of the times and provide various services to end users.
Through the research and analysis of cloud computing, it can be seen that it includes the following levels of service: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [14]. Famous large high-tech companies such as Sina, Baidu, Huawei, Alibaba Cloud, and Shanda have also developed their own cloud platforms [15]. After research and development, they have provided excellent services to many users. Various research institutions and universities have also seized this opportunity to conduct extensive research on cloud computing and preliminary application explorations [16]. The structure of the software industry is also changing quietly with the popularity of cloud computing, and computer technology providers are moving toward customized, agile, and efficient computing services. On-demand, customized applications are being created, and the services available to end users are becoming diverse and personalized [17]. Cloud computing is building a new situation of close integration with applications. The large-scale underlying infrastructure is evolving to provide more robust support for cloud computing; the underlying layer of cloud computing, exemplified by ever-stronger computer clusters and more powerful virtual storage, is one research direction. In addition, cloud computing provides more efficient and more diverse services to end users by building new service architectures [18]. At the same time, various services in universities are continuously being integrated with cloud computing platforms; application in universities is also a research area of cloud computing technology.

The characteristics of a computer network test bank in management and measurement are outstanding compared with other factors in the teaching practice process [19]. A test bank can manage data with confidentiality, economy, universality, and flexibility, and it can improve measurement and testing quality, efficiency, and performance [20]. Therefore, using a computer network test bank system can achieve fairness, comprehensiveness, randomness, and polymorphism of test questions, which can improve the credibility of testing and the quality of teaching. The rapid development of computer technology, remote network technology, multimedia technology, and communication technology has propelled the progress of the whole of society and brought new opportunities and challenges to the modern education field. In terms of opportunities, the maturation of these technologies and the expansion of their application areas and scope have enabled modern education methods such as multimedia teaching and distance learning to be realized to a certain extent, expanding the development space of contemporary educational technology [21]. The challenge is that applying new science and technology requires a large amount of investment and carries a certain amount of risk, so firmly and continuously carrying out modern education reform under both economic and technical constraints is a difficult task. Today, the development of contemporary education is receiving attention from more and more countries, and many countries place it in a crucial position. At the end of the last century, the means of educational technology in many developed countries, especially the United States, became increasingly modernized and diversified
under the increasing investment in education for reform and experimentation. As an essential component of modern educational technology, computer test banks will undoubtedly be necessary for educational reform.

Test bank systems emerged and developed to meet the needs of scale, science, and standardization in examinations. The main theoretical basis for the development of test bank systems is educational measurement theory. The system's primary function is to manage and maintain many test questions [22]. Specific functions include test question entry, test question editing, composition of test papers, teaching tests and exercises, and statistics and analysis of teaching results. Teachers can use the questions in the question bank system in their teaching activities; set the test subjects, parameters, number of questions, test scores, and difficulty coefficients for the test questions; and generate test papers on the computer network in two ways. Students can test and practice through the web to assess their learning results. After students take a test or practice online, the automatic marking function of the test bank system reviews and grades the results uploaded by the students [23]. The test bank system can also provide a management platform for school academic departments to manage and monitor online exams. By using the test bank system, the academic affairs department can improve the utilization rate of all kinds of test questions when organizing examinations, reduce the workload of teachers in preparing questions, and accumulate more excellent test questions with each review, improving teaching quality and efficiency.

Modeling and Application Design of English Multimedia Test Questions in Universities Based on Cloud Computing

Construction of a Multimedia Test Model for English in Universities with Cloud Computing. Cloud computing uses the Internet as its basis and platform to virtualize various essential hardware resources, computing resources, software resources, etc., and then provides services to users in need. Cloud computing technology gives full play to the performance characteristics of distributed computer resources, uses the Internet to achieve the integration and unified processing of resources, has powerful computing capabilities, and has features such as load balancing that improve service quality. Cloud computing services can make full use of all existing resources to form powerful computing and storage capacity [24]. Users need only low-priced personal computer terminals, intelligent mobile devices, etc., to obtain all kinds of services built on the cloud computing platform through the network. For example, many companies have launched cloud storage applications: users can enjoy storage services with larger capacity and more security than personal computers through computer software, mobile apps, and web browsers. The cloud computing platform has changed the traditional system architecture centered on personal computers and servers. The platform can provide users with the various resources they need for different application demands. The cloud computing architecture is shown in Figure 1.
The emergence of cloud computing offers services that break through the constraints of space and time, and cloud services usually come in three kinds: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Services are provided in traditional forms, such as domain name registration, website publishing, VPN virtual private networks, and data generation and storage. The development of multimedia applications and services has benefited from the emergence of the Internet and mobile wireless network technologies. Considering that the user groups they serve simultaneously are massive and consume large amounts of computing resources, combining multimedia services with cloud computing has become an inevitable development trend. Users can store and process multimedia data in this new multimedia computing model and place the multimedia data in the cloud for distributed execution, thus extending user terminals' endurance and service life. Today, low-cost digital devices such as smartphones and video cameras have brought about an explosion of user media content. New systems in fields such as technology, the military, and medicine rely on the cloud for different types of multimedia data storage, processing, and analysis. With the development of cloud computing, users can store and access audio and video files, presentations, multimedia applications, and other rich multimedia documents in cloud data storage servers. Multimedia technology is widely used in modern English teaching because of its interactivity, large amount of information, noticeable effect, and easy operation. As an integral part of English testing, test questions are developing rapidly, and the test bank has taken shape through sustained effort.

The media cloud consists of three main components: a central processing unit (CPU) for general processing, a storage unit, and a graphics processing unit (GPU) for multimedia content processing. Customers can rent these resources from multimedia service providers (MSPs) for storing, transmitting, editing, and accessing multimedia content. Users in the cloud connect to the cloud media platform through various terminal devices such as smartphones, cameras, and tablets and send task requests to the cloud media application provider. The data center of the cloud media platform authenticates the user terminal's identity information, geographic location, and other relevant information, allocates corresponding resources from the data center according to the characteristics of the requested task, and then realizes optimal scheduling and allocation of resources according to the scheduling policy. The simple service framework of multimedia cloud computing is shown in Figure 2.

In static tasks based on cloud computing, the computational power, network transmission bandwidth, and spending per unit time of different VM nodes are different, so optimization methods developed for different types of resources yield different execution results; that is, the optimization objectives of the model determine the final results that the algorithm can produce. This section describes the multiple optimization objectives of the static task model [25]. The model does not consider the traditional single-objective problem but rather a multiobjective English multimedia test model, which considers not only the total execution time of the tasks and the monetary overhead incurred during system operation but also the execution cost of all tasks in the final optimization objective, to ensure that the model produces the best scheduling strategy after taking all factors into account. The real execution time is the sum of the times taken by the tasks assigned to virtual nodes for execution under the scheduling scheme pos, where the execution time of the ith task is determined by its assignment.
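A minimal sketch of this static-task model follows. Since the original per-task execution-time expression did not survive extraction, the sketch assumes the common form execution time = task length / VM speed, and an equal-weight sum stands in for the multiobjective trade-off; all task lengths, VM speeds, and prices are illustrative values, not the paper's.

```python
import itertools
import numpy as np

lengths = np.array([800.0, 1200.0, 600.0, 1500.0])  # task sizes (e.g., MI)
speeds = np.array([100.0, 250.0])                   # VM capacities (e.g., MIPS)
prices = np.array([0.01, 0.03])                     # monetary cost per second

def evaluate(pos):
    """pos[i] = index of the VM that task i is assigned to."""
    pos = np.asarray(pos)
    exec_time = lengths / speeds[pos]               # assumed time model
    makespan = max(exec_time[pos == m].sum() for m in set(pos.tolist()))
    cost = float((exec_time * prices[pos]).sum())   # total monetary overhead
    return makespan, cost

# Exhaustive search (feasible only for tiny instances); an equal-weight
# sum of both objectives stands in for the multiobjective trade-off.
assignments = itertools.product(range(len(speeds)), repeat=len(lengths))
best = min(assignments, key=lambda p: sum(evaluate(p)))
print("best assignment:", best, "-> (makespan, cost):", evaluate(best))
```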
As the content and structural characteristics of different test question types differ, the formats for saving the content of different kinds of test questions also differ from each other, and the differences in the data-saving process are mainly reflected in the fact that different types of test questions have different data structures. The system therefore needs a unified data exchange format so that different types of test questions can be processed through a suitable data structure. To meet the demand for test question content identification and matching, corresponding test question parsing rules can be added and modified according to the structures of the different test question types. The intelligent test document import system has a suitable combination of test data and provides a convenient data processing interface: after parsing different test questions, their content can be output as data with a consistent structure. As the system follows a modular development principle, it has a degree of portability, and the user's workflow remains unchanged when interfacing with new test bank systems. Based on the above discussion, the test document import system defines the data in the test document that the system needs to analyze and import in HTML format. The system extracts information from the HTML data and parses it through the test question parsing rules to form test question content that meets the requirements of the test bank, and it then saves the data in an XML format that conforms to the QTI standard so that the test bank system can use it flexibly. HTML is used to display data, while XML is used to describe and store data, so XML can serve as a persistence medium. HTML combines data with its presentation on the page; XML separates data from presentation. XML is designed to describe data and focuses on the data's content; HTML is designed to display data and focuses on the data's appearance.
College English Multimedia Test Application Program Design. Considering the problems in applying multimedia test questions and the actual needs of English teaching in colleges and universities, this test question generation system is built around easy-to-use software, standardized test question storage, and rich presentation. The system provides teachers with high-quality multimedia test question templates and samples [26]. It provides them with simple, easy-to-use test question editing tools so that teachers can quickly and efficiently create excellent multimedia test questions that meet the needs of college English teaching. The test question generation system has these requirements: (1) the types of test questions supported should include common types of questions in daily teaching, such as judgment questions, multiple-choice questions, sorting questions, fill-in-the-blank questions, matching questions, etc.; (2) the multimedia test questions supported should be fully interactive and meet the cognitive characteristics of college students, helping to attract their attention and enhance their interest in answering questions; (3) the generated multimedia test questions should have timely evaluation feedback to support students' self-testing and reduce teachers' workload in evaluating answers; (4) the system should provide teachers with template-based test question editing, the test question editing interface should be simple, and the test question generation tool should be easy for teachers to master quickly; (5) the test question generation tool should provide a preview and sample experience for each test question template so that teachers can understand the characteristics of each template in detail and choose a suitable template. The structure of the multimedia test question generation system is shown in Figure 3.

Feature item weight measures the importance of a feature item in identifying a text, or the strength of its ability to distinguish texts. In applications, a feature lexicon is usually constructed using a specific feature item weight evaluation model from a class of text consisting of nouns and several verbs or adjectives. In text preprocessing, the text is first segmented by a word segmentation algorithm to form small fragments called word token streams; second, stop words are filtered and word roots are obtained to eliminate the trouble caused by word polymorphism; then, the feature lexicon is scanned to record the word tokens present in the dictionary. After an in-depth analysis of traditional feature item weight calculation methods, a feature item weight evaluation model including a word frequency factor, a position factor, a word length factor, and a word co-occurrence factor is proposed, considering the characteristics of the test text and the semantic features of English. The word frequency factor inherits the advantages of highlighting high-frequency words and excluding noisy words. The other three feature term factors are introduced to make up for the shortcomings of relying solely on word frequency to measure the importance of words.
The definitions of the factors are as follows. Given the incoherent nature of test text, it is not feasible to rely on context for semantic analysis. According to the semantic features of English, long words are content oriented and occur less frequently, while short words are function oriented, have rich meanings, and appear more regularly. Strengthening the weights of long words helps improve the accuracy of feature word extraction, so the word length factor is introduced. In English text, sentence meaning is mainly reflected in two aspects: the meanings of words and the relationships between words. The most direct manifestation of the relationship between words is co-occurrence: the intention expressed by a text is reflected in the co-occurrence relationships among all the words appearing in it, and the meaning of a sentence is reflected in the co-occurrence relationships among the words appearing in that sentence. Therefore, if two words occur in the same sentence, they have a strong correlation, and the word co-occurrence factor is introduced. Let s_i be the total number of occurrences of the word t_i in the text, let s_j be the total number of occurrences of the word t_j in the text, and let the number of occurrences of t_i together with t_j be recorded as s_ij; the co-occurrence factor is computed from these counts. Since the weights calculated from the different feature term weight factors differ in magnitude, the results are normalized by a data transformation. The minimum-maximum normalization method is used, the normalized weights are kept to four significant digits, and the calculation is weight' = (weight - weight_min) / (weight_max - weight_min). The word frequency factor, position factor, word length factor, and word co-occurrence factor together consider the characteristics of the test text and the semantic features of English, solve the problem of feature item selection for test text, and satisfy completeness and validity. Further, the overall feature term weight weight_i of the feature word word_i is obtained by combining Tf_i, its word frequency factor weight; Loc_i, its position factor weight; Len_i, its word length factor weight; and Coo_i, its word co-occurrence factor weight.
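A runnable sketch of this four-factor weighting follows. The exact factor formulas did not survive extraction, so the forms below (raw frequency, reciprocal of first position, log word length, and a sentence-level co-occurrence rate, combined multiplicatively) are illustrative stand-ins; only the final min-max normalization to four digits follows the description above.

```python
import math
import re
from collections import Counter

def feature_weights(text):
    sentences = [s for s in re.split(r"[.!?]", text.lower()) if s.strip()]
    sent_tokens = [re.findall(r"[a-z]+", s) for s in sentences]
    tokens = [w for st in sent_tokens for w in st]

    tf = Counter(tokens)                      # word frequency factor input
    first_pos = {}
    for i, w in enumerate(tokens):            # position factor input
        first_pos.setdefault(w, i)

    raw = {}
    for w in tf:
        loc = 1.0 / (1 + first_pos[w])        # position factor (assumed form)
        length = math.log(1 + len(w))         # word length factor (assumed form)
        coo = sum(                            # co-occurrence factor (assumed form)
            1 for st in sent_tokens if w in st and len(set(st)) > 1
        ) / len(sent_tokens)
        raw[w] = tf[w] * loc * length * (1 + coo)

    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1.0                   # min-max normalization as described
    return {w: round((v - lo) / span, 4) for w, v in raw.items()}

print(feature_weights("Cloud computing stores data. Cloud services process data."))
```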
The system is divided into six sub-modules: the test question management module, the test paper management module, the test paper template management module, the basic data management module, the personal information management module, and the user management module. The design structure of the English test questions is shown in Figure 4.

(1) Functional Design of Test Question Management Module. The test question management module is the basic module of this system. It mainly includes entering, modifying, querying, and deleting test questions of all the relevant question types in college English. Teachers can enter test questions, which are saved under their school, and the status of a question after entry is pending review. A teacher can also view and collect all the public questions of each school, their own private questions, and questions not yet approved, and can delete and edit their own questions. There are two situations when editing a question: for questions that are pending or not yet approved, teachers can edit them directly, and their status remains pending; for questions that have already been approved, the status changes to unapproved after editing. Student users can only view each school's reviewed public test questions, collect test questions, and cancel a collection. The administrator of a school has the question-review function in addition to the functions of a regular teacher.

(2) Functional Design of Test Paper Management Module. The test paper management module provides teachers with functions for assembling test papers and for querying, modifying, deleting, collecting, and exporting test papers for printing. Teachers can assemble test papers; query, collect, and uncollect all public test papers; and edit and delete test papers created by themselves. A test paper not created by the teacher can only be modified by saving the revised version as a new test paper; the original test paper remains unchanged and cannot be deleted.

(4) Basic Data Management Module Function Design. The basic data management module provides basic data for the above modules, including querying, modifying, and deleting basic data information. Only the university administrator and the super administrator have data modification and deletion privileges.

(5) Functional Design of Personal Information Management Module. The system mainly includes three kinds of users: teachers, students, and school administrators. Therefore, the personal information management module is divided into three submodules according to the user roles: the teacher personal information management module, the student personal information management module, and the school administrator information management module.

(6) User Management Module Function Design. The user management module includes information query and correction and grants corresponding permissions according to the different user roles.

Testing and Analysis of the English Multimedia Test System in Universities Based on Cloud Computing. System testing and analysis are a necessary part of system design, effectively detecting whether the system's functions and performance meet users' needs. By testing each functional module, we can find loopholes in the system design in time and adjust the system functions in real time; after the functional modules are tested successfully, we test the overall performance of the system and record and analyze its overall operation [27]. We then examine the system's overall process and observe, from its operating status, whether the system can meet the various requirements of a test bank management system. In functional testing, the system is divided into different functional modules according to the design requirements, and test cases exercise the modules by classification; different testing methods are used to check whether each module's design meets the requirements. Different test programs are written for the different modules, and the soundness of each module is checked against the expected results of the test programs. The test runs are performed manually, compared against the critical information resources of the software and the test bank contents, and validated using the number of purchase records and synchronized information from the same period. A comparison chart of the functional test results is shown in Figure 5.
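The per-module test programs described above can be expressed as ordinary unit tests whose expected results define pass or fail. The sketch below uses a stand-in question-bank class, not the system's real interface, to illustrate the pattern for the test question management module.

```python
import time
import unittest

class FakeQuestionBank:
    """Stand-in for the test question management module's interface."""
    def __init__(self):
        self._questions = {}

    def add(self, qid, text):
        # New entries start in the "pending review" state, as specified.
        self._questions[qid] = {"text": text, "status": "pending review"}

    def approve(self, qid):
        self._questions[qid]["status"] = "approved"

    def get(self, qid):
        return self._questions[qid]

class TestQuestionModule(unittest.TestCase):
    def test_entry_sets_pending_status(self):
        bank = FakeQuestionBank()
        bank.add(1, "Choose the correct tense ...")
        self.assertEqual(bank.get(1)["status"], "pending review")

    def test_bulk_entry_time_budget(self):
        bank = FakeQuestionBank()
        start = time.perf_counter()
        for i in range(1000):
            bank.add(i, "q")
        self.assertLess(time.perf_counter() - start, 1.0)  # seconds

if __name__ == "__main__":
    unittest.main()
```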
For the performance testing of the test bank management system, the primary tool used is Apache JMeter, a stress testing tool developed by the Apache organization, mainly for Java business code testing; it supports not only web application performance testing but also databases and dynamic or static resource files, in line with the requirements of the test bank management system [28]. In the performance testing process, the JMeter tool simulates multiuser login operations against the server's business functions. This automated testing tool records performance indexes for time characteristics and concurrency to complete the performance testing of the test bank management system. The test bank management system is first deployed according to the application scenario; then, multiple users are simulated logging in to the system for actual operation and testing.

Response Time Test Data. The performance test tool exercises the business function modules with 100, 200, 300, 400, 500, 600, 700, 800, 900, and 1000 users operating the system and records the system's maximum response time and average response data.

Concurrency Test Data. System stability is recorded while 100, 200, 300, 400, 500, 600, 700, 800, 900, and 1000 users log in through JMeter to operate the test bank management system. As a business function module is exercised, JMeter gradually increases the number of test clients while recording the response time of the module and other data; the specific test time results are listed in Table 1.

The most significant difference between the Rasch model and IRT is that the Rasch model is model driven, while IRT is data driven. Based on the subjects' responses to the items in a test, the Rasch model can be applied to analyze item attributes, such as item difficulty, and model fit can also be examined. If two or more tests contain items in common, test equating can be performed to convert item difficulties from the different tests to a common scale. This feature allows for the longevity of a calibrated item pool, and such a pool is handy for monitoring the progress of student performance. The proficiency values of the tests are shown in Figure 6.
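The Rasch model referenced above has a simple closed form: the probability that a person of ability theta answers an item of difficulty b correctly is exp(theta - b) / (1 + exp(theta - b)). The sketch below estimates an item's difficulty from toy response data by a grid search over the log-likelihood; the abilities and responses are illustrative, not data from this system.

```python
import numpy as np

def rasch_p(theta, b):
    # Probability of a correct response under the Rasch model.
    return 1.0 / (1.0 + np.exp(-(theta - b)))

thetas = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])  # known abilities (toy)
responses = np.array([0, 0, 1, 0, 1, 1])             # 1 = correct (toy)

bs = np.linspace(-3, 3, 601)
loglik = [np.sum(responses * np.log(rasch_p(thetas, b))
                 + (1 - responses) * np.log(1 - rasch_p(thetas, b)))
          for b in bs]
print("estimated item difficulty:", bs[int(np.argmax(loglik))])
```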
College English Multimedia Test Application Implementation. The primary method of creating test questions is to enter the questions required by the user into the system and save each entered question in the test question table corresponding to its question type and difficulty level, according to the user's requirements. Users can use the copy, cut, and paste buttons on the entry interface to edit the content of the test questions. When the user finishes entering a question and presses the OK button, the system automatically checks whether there are unfilled items; if there are, the system prompts the user to complete them, and otherwise it saves the current question content. The system then asks whether the user wants to continue adding questions of the same type and difficulty level or to add questions of other types or difficulty levels. If so, the system runs the question creation process again; otherwise, the process ends. The linear trend of the test questions is shown in Figure 7.

In the programming process, we use the concept of classes to describe the objects involved. The data objects used in the intelligent test document import system consist of several data classes: test type, test feature, test structure, test template, rule, and rule property. Test question rules comprise rule and rule-property objects, and test question knowledge consists of the other data objects. According to the business logic, the system generates the test question parsing rules based on the test question knowledge, and the generated rules and knowledge are used to identify and parse the content of the test question document. A test type includes multiple pieces of test feature information, so test types and feature information form a one-to-many relationship. The test structure information stores the structure ID and structure name; because the order of the test features determines the test structure, the structure information also contains the front-to-back order of the test features.

RULE class: it defines the ID of the test rule, the rule type, the rule name, the rule description, and the rule code, which together form the descriptive information and the core information of the rule. Since a test rule has multiple properties, test rules and rule properties form a one-to-many relationship. RULE_PROPERTY class: it defines the content of the relevant properties contained in a test rule. Here, the rule property is the programmatic abstraction of the test question template, which is the basis for parsing the test question content and is composed of the test question feature information, the test question structure, and the aggregation of the two. Hence, a test question rule property has one test question structure and multiple pieces of test question feature information. The data comparison of the English test application is shown in Figure 8.

The core function of the intelligent test document import system is the test question analysis module, which receives the test question information converted into HTML format by the system's data processing module, analyzes the information in the test document using the test question rules generated by the knowledge management module, extracts the test question information that conforms to the test structure, generates error hints for information that cannot be correctly identified, and feeds the analysis results and error messages back to users, thereby realizing the intelligent import of test question documents. The test question template is implemented as functional code, which encodes the default mode of recognizing test question information; the system adopts the technique of prewritten test question parsing rules to implement the test question template.
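To make the rule and rule-property design concrete, here is a minimal sketch: a rule carries a group and its executable matching code (a regular expression here), rules within one group are tried mutually exclusively, and a match is emitted as QTI-style XML. The pattern, the sample input, and the element names are assumptions for illustration, not the system's actual rules or the QTI schema verbatim.

```python
import re
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: int
    group: str          # rules in one group are matched mutually exclusively
    code: re.Pattern    # the rule's executable matching expression

RULES = [
    Rule(1, "multiple-choice", re.compile(
        r"^(?P<num>\d+)\.\s*(?P<stem>.+?)\s*"
        r"A\.(?P<a>.+?)\s*B\.(?P<b>.+?)\s*C\.(?P<c>.+?)\s*D\.(?P<d>.+)$",
        re.S)),
]

def parse_question(text):
    for rule in RULES:                 # first matching rule in a group wins
        m = rule.code.match(text.strip())
        if m:
            item = ET.Element("assessmentItem", {"type": rule.group})
            ET.SubElement(item, "itemBody").text = m.group("stem").strip()
            for key in "abcd":
                ET.SubElement(item, "simpleChoice",
                              {"identifier": key.upper()}).text = m.group(key).strip()
            return ET.tostring(item, encoding="unicode")
    return None  # caller emits an error hint for unrecognized content

html_text = "1. Cloud computing is ___ A. hardware B. a service model C. a cable D. a fan"
print(parse_question(html_text))
```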
The grouping information of the rules is defined in the test question parsing rules. Rules in the same group are matched against the test question information in a mutually exclusive manner: only one rule in a group can be matched against the test question information at a time. Unlike the data model of the prototype system, based on the analysis of the content and structure of the various question types, the test question feature information and test question structure information are first defined as the primary test question knowledge, and the test question template is abstracted on this basis. Through the identification information saved by the template corresponding to each question type, and the relationships between them, the system can extract the information contained in each question type. The test question structure contains the sequential relationships of the test question feature information. Such a design strictly constrains the specification of the parsing rules, ensures that the format information in the test document is consistent with the rules, and improves the accuracy of the document import function.

Conclusion

With the continuous development of computer technology, the comprehensive application of large-scale data is constantly changing society. The education and teaching reform of higher education institutions is deepening, and its combination with computer network technology is growing more thorough. The emergence of cloud computing technology has opened a brand-new research field for education and teaching reform, and it effectively improves English teaching quality through the development of an adaptive test bank system for professional English. This study applies the Rasch model from item response theory to the test bank, together with statistical tests of fit. The test bank system should establish a student model, automatically select relevant test questions according to students' characteristics, and provide analysis reports on students' learning ability and progress, which is conducive to teachers' individualized teaching. This study uses Eclipse as the development platform, the Java language for program development, Hadoop for the system architecture, JavaScript for interactive page development, and the MapReduce programming model to realize a test bank management system under a multilayer architecture. The test question and paper management functions, the audit function, and the essential system functions are implemented and verified using software testing methods. There are limitations in the breadth and depth of this research; some issues need further study to optimize English test questions in multimedia technology.

Figure 2: A simple service framework for multimedia cloud computing.
Figure 4: The design structure of English test questions.
Figure 5: Comparison of functional test results.
Figure 7: Linear trend of the test questions.
Figure 8: English test application data comparison.
The test question generation tool should provide a preview and sample experience for each test question template so that teachers can understand the characteristics of each template in detail.

Functional Design of the Test Paper Management Module. The test paper management module provides teachers with functions for grouping (assembling) test papers and for querying, modifying, deleting, collecting, and exporting test papers for printing. Teachers can group papers; query, manage, and collect all public papers; and edit or delete papers created by themselves. Papers created by other users cannot be deleted; to modify one, a teacher can only create a new paper based on the revised version, while the original paper remains unchanged.

(4) Basic Data Management Module Function Design.

Table 1: Corresponding test time results.
2022-09-12T15:36:46.464Z
2022-09-10T00:00:00.000
{ "year": 2022, "sha1": "d8d60cc8052b01bae57e19c485ce0637002fb4f7", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cin/2022/4563491.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e3bd3162f3b61ce243021a939e9baedb32ed0178", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
219797843
pes2o/s2orc
v3-fos-license
Development and approbation of the improved CART algorithm version

This article considers classification problems solved using decision trees. It suggests an improved version of the CART algorithm called clean CART (CCART). The key feature of this algorithm is a reduction in the memory required to store the tree. The main ideas of the CCART algorithm's software implementation are presented. An experimental comparison of the original and the proposed algorithms for constructing decision trees confirms that the CCART algorithm has the stated advantage. The proposed CCART algorithm is therefore recommended for use in the construction of ensembles of classifiers.

Introduction

Currently, various machine learning algorithms are actively used in solving many applied problems, in particular the SVM algorithm, algorithms based on artificial neural networks, and algorithms based on decision trees [1]. This strong demand for machine learning algorithms is due to the need to work with large volumes of complex structured data in order to quickly extract the information hidden in them and use it in decision making, for example, when classifying new data [2]. Each of these algorithms has its own advantages and disadvantages. The SVM algorithm builds explicit decision rules using the support vectors identified in the training data set, but it requires some effort to select the optimal values of its parameters and the type of kernel function [3]. Algorithms based on artificial neural networks do not allow the decision-making rules to be written down explicitly; they provide very flexible tools for solving various types of problems, but they require considerable effort to select the structure and parameters of the network [4]. The advantage of tree-based algorithms is the simplicity of interpreting the decisions made, although fine-tuning the parameters of decision trees is also necessary [5]. Moreover, all machine learning algorithms face the problem of overfitting [6]. In the proposed study, decision trees were selected as the subject of research, on the one hand, in view of the good interpretability of the decision-making process and the possibility of using trees in a variety of ensembles, and on the other hand, due to the possibility, in the authors' view, of significantly improving the organization of the tree-building process and the storage of information in the tree nodes [7]. Decision trees are widely used models for solving problems of classification, regression, and clustering [8]. At their core, they represent a set of if-else conditions by which a decision can be made. Such conditions can be expressed as ordinary questions, so a decision tree can be built "by hand" (which shows one of the positive qualities of trees: they are clear and understandable for humans), but using algorithms to build trees from statistics of the data is a much more attractive solution [8].
Nowadays, there are several popular algorithms and their modifications that implement the basic principles of tree-based decision making, namely:

- the CART algorithm (Classification and Regression Tree) and its modification with the closed implementation IndCART, based on defining the node-splitting rule using the Gini index [9];
- the C4.5 algorithm, which is a modification of the ID3 algorithm, based on determining the node-splitting rule by means of the information gain (gain ratio).

Since the above-mentioned algorithms are very sensitive to data and, as a consequence, are prone to overfitting, balancing and/or pruning algorithms are used with them [10]. The application of various sampling strategies to the data on which a tree is constructed can also positively affect the classification quality. In this case, it is possible to use decision trees as part of certain ensembles with the aim of increasing the classification accuracy and mitigating the overfitting problem [2]. In the course of this research, the CART algorithm, whose advantage is the minimal cost of pruning, will be considered [3].

Tree-based decision-making algorithms have the following advantages:

- the decision-making results are easily interpreted by a human;
- the algorithm can work with various types of data (for example, categorical and numeric data);
- the algorithm performs well even in situations when the initial assumptions are violated;
- the algorithm works with huge amounts of information.

The disadvantages of tree-based decision-making algorithms are the following:

- the algorithm is usually greedy (it tries to maximize the split criterion locally) and, as a result, cannot ensure the optimality of the whole tree;
- the algorithm is sensitive to data;
- the algorithm tends to overfit;
- for categorical parameters, attributes with a larger number of distinct values receive larger weights [4].

These disadvantages are explained by the fact that the tree is constructed using a so-called greedy procedure [11]. However, they can be mitigated by various truncation methods, in particular the pruning method or the method with a penalty term, which are actively used, for example, when implementing the CART algorithm [12].

Development of the improved CART algorithm version

The CART algorithm is relatively simple. It uses binary trees, that is, each node has two child nodes. At each stage of construction, the node is split according to a rule (in fact, for each sample label), which divides the node into two parts: the part that meets the split condition and the part in which the condition is not fulfilled [13]. After that, the resulting set of rules is evaluated using a split quality assessment function (Gini index, gain ratio, etc.) and the best split rule is selected. The next step is the recursive construction of the tree until a specified stopping condition is reached [14]. As the stopping condition, limiting the tree depth can be used, but since this method risks losing classification quality, the tree is instead built until the nodes are completely "clean" (only sample elements belonging to the same class remain) and, if necessary, a branch pruning mechanism is used [15].
Let $T$ be the data set that keeps the information about the objects. Each record in the data set describes the pre-selected characteristics of a certain object and also contains the class label for this object. To assess the quality of splitting in the CART algorithm, the Gini index can be used, which measures the "impurity" in a node:

$$Gini(T) = 1 - \sum_{i} p_i^2,$$

where $p_i$ is the probability (the relative frequency) of the $i$-th class in the data set $T$, which is used for the classifier development. If the set $T$ of $N$ examples is divided into two parts $T_1$ and $T_2$ containing $N_1$ and $N_2$ examples, respectively, then the partition quality indicator is the following:

$$Gini_{split}(T) = \frac{N_1}{N}\,Gini(T_1) + \frac{N_2}{N}\,Gini(T_2).$$

The best split is the one for which the $Gini_{split}(T)$ value is minimal [16].

The clean CART algorithm

The CART algorithm is redundant, since each node must store in itself a subset (of the data set $T$) obtained after splitting (figure 1) and must be mutable to make tree construction possible. It would seem that this is not a big problem given modern computing capacities, but with the growth of these capacities, new opportunities and requirements for classifiers have appeared. In addition, the presence in the nodes of data that is necessary only during the construction of the tree and is not needed when using the classifier can confuse and clutter the program code with unnecessary information [17]. Also, the ability to modify the tree after it has already been built can lead to errors and nondeterministic program behavior, especially when asynchrony is involved [18]. Therefore, storing such data in the nodes is redundant, and this problem must be addressed [7]. To eliminate this redundancy, it is proposed to use an improved CART algorithm version, which we call "clean" (CCART, Clean CART). In this version, the process of tree construction is divided into two stages (figure 2). The first stage is to build the frame. This frame is a classic binary decision tree with additional features: instead of creating a full-fledged training set entity, it stores the indices of examples from the training set. These indices are easier to store in a list, because before a node is split it is impossible to say how the examples will be distributed among its child nodes. The second stage takes into account the fact that, once the tree is built, there is no need to store the subsets. Moreover, when building ensembles, the expressiveness of each individual tree is lost in the number of trees used in the ensemble. Since it is assumed that this algorithm will be used in the development of ensembles, this data can and should be cleared, and the nodes of the tree should be made immutable, with the immutability made explicit (in the sense that the user must not be allowed to change the nodes at the program code level). The pruning, if necessary, must be done while building the frame tree. Hence, a new "cleaned" tree is built along the frame, in which there is no possibility of changing the nodes. This increases the tree-building time, but the resulting tree is more lightweight because the nodes do not store the subsets, and the immutable nodes allow the safe use of the trees in a multi-threaded environment [19]. A compact sketch of these ideas is given below.
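The paper does not show the frame-tree data structures themselves; the following is a minimal Java sketch of the two ideas just described: nodes that hold only index lists into the shared training set, and the Gini criterion computed over those indices. All names are illustrative, not the library's actual API:

import java.util.List;

// Frame-tree node: stores indices into the shared training set instead of
// copying the examples themselves (the core CCART space saving).
class FrameNode {
    final List<Integer> exampleIndices; // indices into the shared training set
    FrameNode left, right;              // children created on split

    FrameNode(List<Integer> exampleIndices) {
        this.exampleIndices = exampleIndices;
    }
}

class GiniCriterion {
    // Gini(T) = 1 - sum_i p_i^2, computed over the examples a node references.
    static double gini(int[] labels, List<Integer> indices, int numClasses) {
        int[] counts = new int[numClasses];
        for (int idx : indices) counts[labels[idx]]++;
        double impurity = 1.0;
        for (int c : counts) {
            double p = (double) c / indices.size();
            impurity -= p * p;
        }
        return impurity;
    }

    // Gini_split(T) = (N1/N) Gini(T1) + (N2/N) Gini(T2); smaller is better.
    static double giniSplit(int[] labels, List<Integer> left,
                            List<Integer> right, int numClasses) {
        int n = left.size() + right.size();
        return (left.size() / (double) n) * gini(labels, left, numClasses)
             + (right.size() / (double) n) * gini(labels, right, numClasses);
    }
}

After the frame is built (and prepruned, if requested), the second stage would copy only the split predicates and leaf labels into immutable nodes and discard the index lists.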
Nowadays, various methods are used to prevent classifier overfitting, for example, limiting the tree depth, as well as the prepruning and postpruning (otherwise simply pruning) methods [19]. The purpose of the pruning methods is to reduce the size of the tree: after the tree has been built, it can be analyzed and rebuilt accordingly. For example, the CART algorithm will repeatedly split the data set into smaller and smaller subsets until these final subsets become homogeneous with respect to the target variable. In practice, this often means that the final subsets (known as tree leaves) consist of only one or a few examples. In such situations, the tree may perform poorly when forecasting on new data. The alternative to the pruning method is the prepruning method, whose essence is to assess the quality of a split and stop the construction of the tree when the quality falls below a given threshold. In this study, the prepruning method is preferred as more appropriate. When finalizing the CCART algorithm to prevent overfitting, the same methods apply as for the original CART algorithm, with the understanding that all of them are applied to the tree frame. The prepruning method stops tree building once a new partition fails to provide sufficient improvement in classification quality. The main attention in implementing this method should be given to the choice of the threshold used to stop building the tree and to the choice of the criterion by which the quality of the classifier is evaluated. Classification quality is represented by various criteria such as accuracy, specificity, sensitivity, etc., each of which carries a certain meaning. For example, the accuracy is simply the proportion of correct answers on the data set, and the specificity measures the proportion of actual positives that are correctly identified as such. The most popular method for evaluating classifier quality is cross-validation. The choice of threshold depends on the quality criterion used, and to select it correctly, it is necessary to use the quality rating of the final classifier [20]. Moreover, the quality of the classifier must be assessed at each step of tree construction: if the increase in quality for the resulting partition is less than the specified threshold, the tree is completed. Using the prepruning method can cause the tree to stop building too early, so the quality of the resulting tree may suffer [21]. In this case, for example, it may turn out that continuing construction past the stopping point could have significantly reduced the number of classifier errors. The prepruning and pruning methods can be used together, individually, or not at all. At its core, the prepruning method is a quick heuristic fix; when used together with the pruning method, it can save time.

Development of the prepruning method implementation for the CCART algorithm

When implementing the prepruning method for the CCART algorithm, the key point is that, to evaluate the quality of the classifier after each partition, the frame tree must be able to classify objects from the test data set.
This raises a problem: the frame tree does not itself imply a classifier (if the pruning method is not used, this functionality is simply not needed), and adding this logic directly would violate the single responsibility principle. The solution is an approach based on the "decorator" design pattern: a decorator over the frame tree that knows how to classify the data. The essence of the "decorator" pattern is the ability to dynamically add new functionality to objects. The proposed approach avoids cluttering the plain frame tree with functionality it does not need in cases where the pruning method is unnecessary. Other tree truncation approaches can be implemented similarly, and the described design pattern allows the implementations to be combined, for example, applying several prepruning and postpruning methods at once. This flexibility makes it possible to achieve the required tree depth with the simultaneous application of several techniques.

The API development for the CCART algorithm implementation

The API is an important part of any software library. Trees can be represented in various ways depending on the chosen programming paradigm; the most popular approach combines the object-oriented and functional paradigms. From the point of view of the object-oriented approach, it is necessary to create a class hierarchy for the tree, which will later make it easy to extend the API provided by the library [30]. In turn, from the point of view of the functional approach, as mentioned above, the tree is essentially nothing more than a set of rules or predicates, which can be represented as functions that receive the object to be classified at the input and predict the class at the output. The implementation of the discussed algorithm follows this design. Building the hierarchy made it easy to implement the "decorator" pattern, which in turn made it possible to flexibly extend the functionality provided by the tree (in particular, the prepruning method). Moreover, the tree itself is a function, which allows the resulting classifier to be used in a functional style. This opens up the possibility of constructing compositions in the form of currying or partial application in functional languages. Since the CART algorithm has a large number of settings, and the process of creating a tree is not trivial, the best solution is to equip the library with the "builder" design pattern for this algorithm. The essence of the "builder" pattern is to create a special entity responsible for configuring and creating the class objects, so as not to clutter the constructor with this logic and not to mislead users by providing large, complex constructors (or a large number of them), because even in such concise programming languages as Python, the use of such constructors usually forces users to turn to the documentation. By applying the builder pattern, the library complies with the single responsibility principle and achieves a clearly determined tree construction process. If the prepruning method is not used, the tree knows nothing about it, there is no unnecessary logic in it, and the builder itself does not provide methods for setting up the prepruning method. Below is an example of a configured builder using the proposed API; the common version of the builder is the following (the terminating build() call, implied by the pattern, is added here for completeness):

BinaryDecisionTree.builder()
    .trainDataSet(categoricalDots)
    .nodeQualityEvaluator(QualityIndicators.GINI_INDEX)
    .build();

The common version of the tree builder does not provide methods for configuring the prepruning parameters, so as not to distract the user's attention. The methods for setting the prepruning parameters are provided by a special tree builder, which is returned as the result of calling the builder method withPrePruning().
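Neither the decorator nor the prepruning check is shown in the paper; the following is a minimal illustrative sketch of how they could fit together, using assumed names rather than the library's actual API:

// Classifier abstraction; in the library the tree itself is also usable
// as a function in this sense.
interface Classifier {
    int classify(double[] x);
}

// Minimal node with split information; leaves carry a majority label.
class TreeNode {
    int splitFeature = -1;   // -1 marks a leaf
    double splitThreshold;
    int label;               // majority class, meaningful at leaves
    TreeNode left, right;
}

// Decorator: adds classification ability over an otherwise build-only
// frame tree, needed only while prepruning quality checks run.
class ClassifyingTreeDecorator implements Classifier {
    private final TreeNode root;

    ClassifyingTreeDecorator(TreeNode root) { this.root = root; }

    @Override
    public int classify(double[] x) {
        TreeNode node = root;
        while (node.splitFeature != -1) {
            node = (x[node.splitFeature] <= node.splitThreshold)
                 ? node.left : node.right;
        }
        return node.label;
    }
}

// Prepruning stop test: reject a split whose quality gain on the test
// set falls below the user-chosen threshold.
class PrePruner {
    private final double minImprovement;

    PrePruner(double minImprovement) { this.minImprovement = minImprovement; }

    boolean shouldStop(double qualityBefore, double qualityAfter) {
        return (qualityAfter - qualityBefore) < minImprovement;
    }
}

In this arrangement the frame tree itself never learns to classify; the decorator supplies that behavior only for the duration of the quality checks, which is exactly the single-responsibility separation described above.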
In the opposite case, the user explicitly indicates the need to use the prepruning method, and the builder provides additional methods for configuring this functionality. In addition, the frame tree "decorator", which is specifically designed for prepruning, will be built, although from the user's point of view the necessary classifier settings are simply indicated. The use of the "builder" design pattern is declarative, which improves usability. This approach looks especially effective in languages with static typing, because modern IDEs provide support by inferring the return type of each method, so working with such a design is very convenient. Also, a classifier quality evaluation API was developed. This API is described as a functional interface, meaning that any method with a matching signature can be used for quality evaluation. Moreover, the API can be used not only for prepruning; it brings a lot of flexibility to the development and testing of classifiers. The following quality evaluation indexes have already been implemented (a sketch of the functional interface is given after this section):

- the accuracy;
- the precision;
- the recall;
- the F-measure;
- the specificity;
- the sensitivity.

In addition, due to the use of the "decorator" design pattern, several methods can be set at once to assess the quality of the resulting partition when implementing the prepruning method. This implies the ability to evaluate each partition according to several criteria at once and to stop the construction of the tree if the partition does not give good results on one of them. For example, we can evaluate the quality of a partition using both the accuracy criterion and the F-measure criterion, indicating for each the quality threshold that the partition must pass in order to continue building the tree; as soon as the quality of the partition by one criterion falls below the corresponding threshold, tree building is stopped.

Experimental results

The suggestions formulated in the theoretical part for making constructive improvements in the CCART algorithm are implemented in an open-source machine learning library in the Java programming language. During the experiments, using various data sets, the sizes of the constructed trees were measured with standard language tools. The difference between the size of the classical binary decision tree constructed in accordance with the CART algorithm and the size of the binary decision tree constructed using the proposed improved version (i.e., using the CCART algorithm) turned out to be six-fold even for small data sets (in particular, for the classical iris classification dataset). We can conclude that this difference will increase as the size of the data set grows.
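The quality-evaluation functional interface just listed, together with the storage-agnostic, iterator-driven ensemble described in the next paragraph, can be sketched as follows; this is a minimal illustration under assumed names, not the library's actual code:

// Classifier abstraction (repeated from the previous sketch for
// self-containment).
interface Classifier {
    int classify(double[] x);
}

// Functional interface for quality evaluation: any method with a matching
// signature can serve as a criterion (accuracy, F-measure, ...).
@FunctionalInterface
interface QualityEvaluator {
    double evaluate(int[] predicted, int[] actual);
}

class QualityIndicatorsSketch {
    // Accuracy: the share of correct predictions on the data set.
    static final QualityEvaluator ACCURACY = (predicted, actual) -> {
        int correct = 0;
        for (int i = 0; i < actual.length; i++) {
            if (predicted[i] == actual[i]) correct++;
        }
        return (double) correct / actual.length;
    };
}

// Ensemble that depends only on an Iterable of classifiers, so the actual
// storage (a Java collection, a distributed cache, a database) is opaque
// to the voting logic.
class MajorityVoteEnsemble implements Classifier {
    private final Iterable<Classifier> classifiers;
    private final int numClasses;

    MajorityVoteEnsemble(Iterable<Classifier> classifiers, int numClasses) {
        this.classifiers = classifiers;
        this.numClasses = numClasses;
    }

    @Override
    public int classify(double[] x) {
        int[] votes = new int[numClasses];
        for (Classifier c : classifiers) votes[c.classify(x)]++;
        int best = 0;
        for (int k = 1; k < numClasses; k++) {
            if (votes[k] > votes[best]) best = k;
        }
        return best;
    }
}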
Implementation of the CCART algorithm is used in a specially developed implementation of classifier ensembles [22]. The main feature of this implementation is that the classifier itself is represented by an interface, which allows the use of classifiers constructed by different algorithms [23]. It should be noted that the proposed implementation does not focus on a specific data storage structure but uses an iterator over classifiers. The essence of the iterator pattern (regardless of the programming language in which it is implemented) is to allow the developer to traverse a data structure and perform the corresponding calculations without depending on how the data is stored. This gives a huge advantage in the sense that the library in question allows ensembles to be stored anywhere [24]. Thus, to store an ensemble, the collections API provided by the Java language and the distributed Apache Ignite cache were used, but nothing prevents storing ensembles, for example, in a database. Moreover, the proposed CCART algorithm is particularly effective in distributed systems, since the size of the data sets used plays a significant role in such systems [25].

Tree sizes were measured using standard Java language tools (namely, the ObjectSizeCalculator object). It should be noted that Java does not have an analogue of sizeof() from C, and the size of an object can only be estimated approximately; but even despite measurement errors, one can judge the difference in the sizes of the resulting trees. Table 1 shows the measurements of trees built on an artificial data set (2 numerical features representing the coordinates along the x and y axes, 2 classes, 50 examples) and on the classic iris classification data set (4 numerical features, 3 classes, 150 examples) [26]. The measurements in the cells of Table 1 are given in bytes. Moreover, the classification accuracy of both implementations is the same. During the research, the trees provided by the sklearn package for the Python language were also measured; these measurements were made using the pympler package. According to the results, the trees built using the sklearn package are heavier (the difference was about 200 bytes for the iris data set, and it may become even larger as a result of further development and improvement of the CCART algorithm).

Conclusion

The research results show that the effectiveness of the proposed CCART algorithm increases with the size of the data set. This allows us to conclude that the CCART algorithm is a good way to reduce the volume a tree occupies in computer memory. Moreover, the developed API gives library users the flexibility to customize the process of constructing the classifier. In addition, methods to combat the problem of overfitting of decision-tree classifiers were implemented. The implementation in the Java programming language allowed the development of a flexible software architecture that can easily be expanded if necessary and provides the ability to use the library, for example, from applications written in Scala. It is planned to conduct further research in order to improve both the presented algorithm and its implementation [27].
This can be done through the introduction of various tree pruning technologies, new metrics for assessing node quality, and approaches aimed at accelerating the construction of the tree.
2020-05-28T09:08:48.455Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "53367fead3b39240238e7f4daaf5c83569fd9a98", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1479/1/012085", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a6b022d890dcd1d7ca8a4477725414f4e4deee45", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
268732628
pes2o/s2orc
v3-fos-license
Reproducibility Made Easy: A Tool for Methodological Transparency and Efficient Standardized Reporting based on the proposed MRSinMRS Consensus

Recent expert consensus publications have highlighted the issue of poor reproducibility in magnetic resonance spectroscopy (MRS) studies, mainly due to the lack of standardized reporting criteria, which affects their clinical applicability. To combat this, guidelines for minimum reporting standards (MRSinMRS) were introduced to aid journal editors and reviewers in ensuring the comprehensive documentation of essential MRS study parameters. Despite these efforts, the implementation of MRSinMRS standards has been slow, attributed to the diverse nomenclature used by different vendors, the variety of raw MRS data formats, and the absence of appropriate software tools for identifying and reporting the necessary parameters. To overcome this obstacle, we have developed the REproducibility Made Easy (REMY) standalone toolbox. REMY supports a range of MRS data formats from major vendors such as GE (p-file), Siemens (twix, .rda, .dcm), Philips (.spar/.sdat), and Bruker (.method), facilitating easy data import and export through a user-friendly interface. REMY employs external libraries such as spec2nii and pymapVBVD to read and process these diverse data formats, ensuring compatibility and ease of use for researchers in generating reproducible MRS research outputs. Users can select and import datasets, choose the appropriate vendor and data format, and then generate an MRSinMRS table, a log file, and methodological documents in both LaTeX and PDF formats. REMY effectively populated key sections of the MRSinMRS table with data from all supported file types. Accurate generation of hardware parameters, including field strength, manufacturer, and scanner software version, was demonstrated. However, it could not input data for the RF coil and additional hardware information due to their absence in the files. For the acquisition section, REMY accurately read and populated the fields for pulse sequence name, nominal voxel size, repetition time (TR), echo time (TE), number of acquisitions/excitations/shots, spectral width [Hz], and number of spectral points, significantly contributing to the completion of the Acquisition fields of the table. Furthermore, REMY generates a boilerplate methods text section for manuscripts. The use of REMY will facilitate more widespread adoption of the MRSinMRS checklist within the MRS community, making it easier to write and report acquisition parameters effectively.

Introduction

Magnetic resonance spectroscopy (MRS) is a non-invasive magnetic resonance imaging (MRI) method used to evaluate the metabolic profiles of tissues.
[1-6] These efforts have led to a plethora of MRS consensus papers that provide a platform for education and training, standardization, and addressing conflicting viewpoints (Table 1) [9-18]. One particularly impactful consensus publication addressed the urgent need for standardized reporting criteria for MRS studies. The authors recognized that the enormous methodological heterogeneity in the field has contributed to a lack of reproducibility across different groups, sites, and techniques. Furthermore, they identified a lack of structured methods reporting, making it difficult to compare and integrate quantitative results. In fact, the majority of journals within the MRI field require study overview checklists for experimental setup details and participant characteristics, such as STARD [19], CONSORT [20], PRISMA [21], and STROBE [22]; however, these do not cover reporting standards for the acquisition parameters and technical details required for an MRI acquisition, including the acquisition parameters necessary for MRS measurements. Consequently, comparing studies is difficult and may lead to inaccurate and inconsistent conclusions, contributing to the slow translation of MRS from the research setting to clinical applications.

To overcome the lack of information in publications, a checklist of minimum requirements for reporting MRS studies was developed: the Minimum Reporting Standards for in vivo Magnetic Resonance Spectroscopy (MRSinMRS) [15]. The objective was that, for clinical and research studies using MRS, authors should complete the provided table and attach it as a supplementary figure or appendix to their manuscript. This was intended to foster standardization, simplify the review process, and ensure that all critical parameters of data acquisition and analysis are appropriately reported. As a further step, the editors of 121 clinical journals were asked to consider implementing the MRSinMRS table as a requirement for publication, of which seven agreed and 12 are currently evaluating the proposal [23]. While the expert consensus papers have been widely cited in MRS publications (Table 1), the actual recommendations are not nearly as well implemented in practice. Research groups with long-established workflows experience the adoption of consensus practices as an inconvenience, often due to a lack of strong incentive, while researchers new to the MRS field may not yet have the necessary expertise or guidance to implement them. For example, the MRSinMRS consensus paper, published in 2021, has been cited 102 times to date, but only 43 references actually incorporated the MRSinMRS table, while the remaining 59 citations only acknowledged the paper (Figure 2). While the MRSinMRS table offers a practical solution for reporting methods, locating the required parameters within DICOM headers or MRS raw files can be challenging, especially for beginners. This is primarily due to variations in MRS data formats and nomenclature across different vendors and the complex nature of the parameters themselves [9]. Although MRS software developers have begun implementing the MRSinMRS checklist into their standard output tables, community members expressed a clear need for a user-friendly and easy-to-use standalone solution at the 2022 Magnetic Resonance Spectroscopy workshop in Lausanne, Switzerland [24].
Incorporating the MRSinMRS table provides editors, reviewers, and readers with a clear overview of the specific MRS methodology used in each study, while also guaranteeing the availability of comprehensive details for those seeking to replicate an experiment or conduct meta-analyses based on the results. Moreover, the checklist (Figure 1) will establish a consistent format for presenting MRS information and offer journals less acquainted with MRS a coherent means of validating methods.

To address this need, we have developed the expandable base for an open-source standalone software application to enhance accessibility and streamline the reporting process for both novice and expert MRS researchers. This application automates the population of the hardware and acquisition portions of the MRSinMRS table using a single data file from an MRS study and generates a corresponding methods section to be used in publications. The advantages of the application are evident in easy data input, using an intuitive graphical user interface (GUI) as shown in Figure 4, and freedom from any proprietary dependencies. Furthermore, it was implemented to facilitate future extensions and the incorporation of existing features, making it a community-driven software.

Figure 1. Minimum Reporting Standards for in vivo Magnetic Resonance Spectroscopy (MRSinMRS) checklist of required parameters for the publication of MRS studies, adapted from Lin et al. (2021). A template Excel spreadsheet of this table can be found at: https://github.com/agudmundson/mrs_in_mrs. Highlighted in green are the fields that REMY always populates, in yellow the fields that REMY populates if the file type supports it, and in orange the fields that REMY does not populate at this point.

Figure 2. Adoption and Utilization of the MRSinMRS Consensus in MRS Research. This figure illustrates the citation dynamics of the MRSinMRS consensus since its publication in 2021, highlighting a total of 102 citations. It differentiates between papers that merely cited the consensus (n=59) and those that actively integrated its reporting table into their methodology (n=43). The trend indicates an initial phase of citations without substantial adoption of the reporting table in 2021 and 2022. In contrast, 2023 marks a significant shift towards implementing the consensus framework, with 30 papers including the reporting table versus 20 that only cited the consensus. This trend signifies a growing commitment within the MRS research community towards enhancing research reproducibility through standardized reporting. REMY is anticipated to further streamline the reporting process, enabling researchers to automatically populate the table, thereby facilitating more efficient and accurate adherence to the consensus guidelines.
Methods

Here, we describe the development of a software suite termed "REproducibility Made EasY" (REMY). REMY is freely available for download from the GitHub repository https://github.com/agudmundson/mrs_in_mrs; the source code is open under a liberal BSD-3 license. REMY is designed as a standalone application to create the MRSinMRS table and a matching MRS methods section that can be used in publications. The Python-based (v3.11) application requires no programming experience, operating through an intuitive graphical user interface (GUI) built using Tkinter [25] (Figure 4). While REMY can be run from the command line, executables were created using PyInstaller for Windows, macOS, and Linux. The application is operating-system (OS) agnostic, meaning it operates uniformly across platforms. As an open-source application, REMY is transparent, granting visibility into its underlying codebase.

Workflow Overview

REMY currently supports various commonly used MRS data formats, including GE p-files (.7); Siemens DICOM (.ima), .rda, and Twix (.dat); Philips spar/sdat (.spar/.sdat); and Bruker Method (.method). To begin, users select a dataset with a file explorer using the 'Import' button. Once a dataset is selected, REMY automatically sets the output directory to the same location; users can also select a different folder using the 'Export' button. Users must then select the vendor and data format of the input using the dropdown menus. Finally, the 'Run' button exports an MRSinMRS table, a log file, and text documents (LaTeX and PDF format) with a completed and referenced Methods section, as shown in Figure 3. Once REMY has completed, the 'Run' button updates to read 'Completed.'

Data Reading

Beyond the standard codebase necessary for application execution, REMY also leverages spec2nii [29] and pymapVBVD (https://github.com/wtclarke/pymapvbvd) to read the various file formats. When reading Siemens data, REMY uses the 'pymapVBVD' function from spec2nii to read Twix and the 'multi_file_dicom' function from spec2nii to read DICOM. For GE, the 'pfile' class and '_dump_struct' function are used to read p-files. Philips spar/sdat is read using the read_spar function from spec2nii. REMY reads Bruker method files directly by parsing the text and uses the 'Dataset' class from brukerapi (https://github.com/isi-nmr/brukerapi-python, included with spec2nii).

Header Fields

When reading each file, REMY attempts to identify as many relevant fields as possible from the data header. Field strength, vendor, vendor software, pulse sequence, number of data points, TE, TR, spectral width, and voxel size are among the parameters commonly available for each vendor. While each header file uses unique nomenclature, REMY identifies, translates, and inputs these parameters into the generic MRSinMRS format and table (see the illustrative sketch below).

Base Files

Included with REMY is a set of base files, each named MRSinMRS, that are used as a generic blueprint when generating the final outputs. The first of these files is the empty MRSinMRS table (.csv) that is read and populated throughout the execution process. Next, while LaTeX is not required to use REMY, all the necessary files to generate a LaTeX PDF are included: auxiliary document information (.aux), bibliography (.bbl), citations (.bib), control file (.bcf), and bibliography log (.blg). These files are each copied over to the export directory and renamed to reflect the name of the input dataset.
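To make the header-translation step concrete, here is a toy sketch of how a colon-separated, .spar-style text header could be mapped onto generic MRSinMRS field names. REMY itself does this in Python via spec2nii; the sketch below is written in Java purely for illustration, and the specific header key names are assumptions that are not guaranteed to match any vendor's actual nomenclature:

import java.nio.file.*;
import java.util.*;

// Illustrative translation of a colon-separated .spar-style header into
// generic MRSinMRS field names. The key names below are assumed for the
// sketch; REMY performs this mapping in Python using spec2nii.
class SparToMrsInMrs {
    // Hypothetical vendor-key to MRSinMRS-field mapping.
    static final Map<String, String> FIELD_MAP = Map.of(
        "repetition_time", "TR [ms]",
        "echo_time", "TE [ms]",
        "averages", "Number of acquisitions",
        "samples", "Number of spectral points",
        "sample_frequency", "Spectral width [Hz]",
        "nucleus", "Nucleus"
    );

    static Map<String, String> parse(Path sparFile) throws Exception {
        Map<String, String> out = new LinkedHashMap<>();
        for (String line : Files.readAllLines(sparFile)) {
            int sep = line.indexOf(':');
            if (sep < 0) continue;                    // skip comments/blanks
            String key = line.substring(0, sep).trim();
            String value = line.substring(sep + 1).trim();
            String field = FIELD_MAP.get(key);
            if (field != null) out.put(field, value); // translate nomenclature
        }
        return out;
    }
}

The same pattern, a per-vendor key map feeding one generic table, is what lets a single MRSinMRS output be produced from very different input formats.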
Outputs

After completion, REMY will output a series of files to the export directory, each named after the input dataset. The first is a log file (.log) that includes detailed information about the runtime and notes whether any errors occurred. Next is the completed MRSinMRS table (.csv). If the user's system includes LaTeX, the necessary support files, the LaTeX source (.tex), and a text document (.pdf) will automatically be generated that include the information from the dataset. The MRSinMRS table also contains sections requiring details of data analysis procedures and data quality [27,28] that, for now, are not automatically populated by REMY because it is not linked to a processing pipeline/software. However, when integrated into existing software packages, this functionality could easily be implemented in the future.

Test Datasets

For the development of REMY, an initial single voxel spectroscopy (SVS) test dataset was used, consisting of 15 Siemens datasets, including 8 Twix (.dat), 3 DICOM (.ima), and 2 RDA (.rda) datasets, 8 Philips .spar/.sdat datasets, 8 GE p-files (.7), and 2 Bruker method files (PV 360 V1.1 and PV 360 V3.3). The initial test data came from 4 sites and were acquired at 3 field strengths (3T, 7T, and 14T) with 6 different pulse sequences (STEAM, LASER, sLASER, PRESS, MEGA-PRESS, and HERMES), ensuring a diverse test dataset covering the majority of SVS sequences (Supplementary Table 1). After the application was developed, we performed a crash test using publicly available data from the Big GABA dataset (https://www.nitrc.org/projects/biggaba/). We randomly selected 15 PRESS datasets from 3 sites per vendor (i.e., 45 datasets per vendor), which contained GE (.7), Siemens Twix (.dat), and Philips (.spar/.sdat) files. We further expanded our crash test with spec2nii test data, employing an additional 4 GE (.7), 6 Philips (.spar/.sdat), 14 Siemens Twix, 2 Siemens DICOM (.ima), 3 RDA (.rda), and 2 Bruker (method) datasets to increase the diversity of sites and sequences included.

Figure 3. REMY Workflow. REMY supports multiple MRS data formats as input, including GE (.7), Siemens (.ima, .rda, .dat), Philips (.spar/.sdat), and Bruker (.method, .ser). Users initiate the process by importing a dataset, after which REMY looks for the necessary parameters in the header file of the data file, using spec2nii functionality. It then populates the hardware and acquisition parts of the MRSinMRS table and creates documentation (in LaTeX and PDF formats) for the Methods section.
Initial Test Data

The application was tested using the initial SVS test dataset (Siemens Twix (.dat) and DICOM (.ima), Philips (.spar/.sdat), GE p-files (.7), and Bruker (.method) files) with the aim of reading and translating all the available hardware and acquisition parameters required in the MRSinMRS table. For the hardware section of the table, REMY successfully read field strength [T], manufacturer name, software version, and nucleus from all file types, thereby populating 4 of the 5 required hardware fields of the MRSinMRS table. RF coil and additional hardware information were not present in any of the data files and therefore could not be inputted. For the acquisition part of the table, the pulse sequence name, nominal VOI size, TR, TE, total number of acquisitions or excitations per spectrum (NA), spectral width [Hz], and number of spectral points were successfully read from all file types, populating an important part of the Acquisition fields of the MRSinMRS table. Necessary information on VOI anatomical location, water suppression (WS) method, transmit frequency offsets, and shimming method could not be found in the majority of file types and as such was not automatically read. Therefore, the automatically generated methods section highlights in bold all the necessary information that was not automatically populated, indicating its importance and the need for manual user input.

Philips .spar/.sdat: In total, 21 Philips .spar/.sdat datasets were tested, ranging from software versions 3.2.1/.2.1 to 5.1.7/1.7. The application achieved a 100% success rate in importing the data and extracting the targeted parameters, except for the scanner model information, which is inaccessible from this file type. Consequently, we report a success rate of 91.6% in extracting all intended parameters.

Siemens Twix (.dat): 29 Siemens .dat datasets were tested, ranging from software versions syngo MR B17 to E11. The application was 100% successful in reading the datasets and extracting the targeted parameters.

Siemens DICOM (.ima): 2 Siemens .ima datasets were tested, ranging from syngo MR B17 to E11. The application was 100% successful in reading all datasets and 91.6% successful in extracting the targeted parameters (scanner model information missing).

Siemens RDA (.rda): 3 Siemens .rda datasets were tested, from syngo MR B17 to XA31. The application was 100% successful in reading all datasets and extracting the targeted parameters.

GE p-files (.7): 19 GE p-files (.7) were tested, ranging from software versions HD16 to MR24. The application was 100% successful in reading all 19 datasets. It was 91.6% successful in extracting the targeted parameters for 14 datasets, extracting everything except the scanner model (which is not available for this file type), and 83% successful for 5 datasets, additionally failing to extract the number of spectral points. It is important to emphasize that the application reads GE p-files from version 7 onwards, as supported by spec2nii, since the parameter notation is drastically different in files before that version.
Bruker method files: 2 Bruker datasets were tested. The application was 100% successful in extracting the targeted parameters for these datasets, except for the scanner model information, which is inaccessible from this file type, leading to a 91.6% overall success rate. The data types that Bruker exports include ser and fid files; however, they are always accompanied by a method text file that contains all the necessary information for the MRSinMRS table. The nomenclature in the method files remains unchanged across versions (tested here: PV6.0.1, PV 360 V1.1, PV 360 V3.3); therefore, we chose to read this file type for our application.

Figure 5. Exported output from the REMY standalone application using a Siemens Twix MRS dataset. The first two sections of the table, regarding data acquisition and hardware, are populated, while data processing and quality need to be manually inputted by the researchers.

Discussion

REMY is a robust and convenient tool that enables researchers and clinicians to report essential MR hardware and acquisition parameters for MRS experiments. By automatically populating the standard table suggested by the consensus paper from a single MRS data file, REMY facilitates straightforward study replication and streamlined method evaluation. Furthermore, the tool generates a methods section, simplifying the reporting process for researchers, which can help ensure the validity of the study setup and the interpretability of the results. The alternative is a completely manual search for those parameters and manual export and population of the table introduced in the consensus paper [15]. While our REMY tool populates the hardware and acquisition sections of the MRSinMRS table from most data formats currently in use (Figure 1), it cannot populate the other sections, and it is imperative for researchers to carefully fill those in manually after completing the analysis. This process can and should be automated as well; for example, the end-to-end analysis pipeline Osprey populates the data analysis methods and data quality sections of the MRSinMRS table with information generated by its built-in linear-combination modeling and quantification modules [24]. Leveraging the open-source and adaptable nature of REMY, we anticipate that other MRS analysis software developers may seamlessly integrate it into their pipelines. With straightforward modifications to the source code, the complete table generation process can be automated, further enhancing the ease and efficiency of MRS reporting.
While many have cited the MRSinMRS consensus paper, indicating the willingness of the MRS community to provide detailed methodological descriptions, the table itself is often included in neither the manuscript nor the supplementary materials. A notable challenge in completing the table is the variation in parameter nomenclature among different scanner vendors, which can complicate locating and reporting parameters. Given the inherent variations across vendors and software versions, we have established a detailed, frequently updated table on GitHub listing the software versions and datasets tested and implemented throughout REMY development. Parameters like dynamics, transients, averages, and blocks can also be misconstrued as identical. In response, our application is equipped with backend code that identifies the vendor and automates the population of parameters, mitigating the potential for human error. For instance, the total number of excitations or acquisitions per spectrum (2e) indicates how many single scans were recorded for the averaged spectrum. The subsequent fields 2e(i), the number of averaged spectra per time point (number of excitations per time point), 2e(ii), the averaging method, and 2e(iii), the total number of spectra, are required only for kinetic studies. By populating the table automatically, REMY resolves the confusion between these terms. The substantial technical hurdle of advanced coding, which might otherwise discourage researchers, has been effectively addressed through the development of a user-friendly, extendable, standalone application. Note that for all vendors the application successfully reads pulse sequence names; however, sequences are often user-modified and renamed for different studies. As such, the name given to the sequence and subsequently read by the application might not be its original/true name and needs to be corrected by the user. The application successfully translated the test data acquired with various MRS sequences to the output table and methods section, as shown in Figure 5. The parameters used as the metric of success are field strength [T], manufacturer, model, software version, nucleus, pulse sequence, VOI size, TR, TE, NA, spectral width, and number of points. In cases where certain parameters are present in one data file format but absent in another, and consequently not filled in by REMY for the formats lacking this information, we interpret this as a failure or limitation in accurately reading and reporting parameters, which is reflected in the reported success rates (extracting 11 of these 12 parameters corresponds to the 91.6% figure above).
One major constraint of this project is its current limitation to single voxel MRS, which inherently restricts applicability across the broader spectrum of use cases, including multi-voxel MRS studies. However, all code is openly available to encourage the community to contribute to REMY and extend its usability to all possible MRS cases. The rapid increase in available MRS processing software presents an opportunity for considerable advancement within the MRS field. As mentioned before, a remaining drawback is the need for complete reporting of analysis methods. We are actively strategizing to extend our outreach to developers of MRS processing tools. The essence of our plan is to integrate our open-source application into existing tools, creating a partnership that simplifies the reporting of methods for MRS scientists and eliminates an aspect of what can be perceived as "black box" processing. Although many MRS processing tools have taken steps to ensure transparency and logical checkpoints throughout their processing pipelines, the inclusion of MRSinMRS outputs and the completion of the third section of the table, pertaining to data analysis methods and outcomes, will further enhance the credibility of published research. Furthermore, we provide a standardized and automatically filled methods section with adequate references. This not only establishes a standardized template, but also provides a concrete methodological framework that can be directly incorporated into publications. Given the increasing number of published MRS studies, we recognized a challenge within the field and provided a simple solution through our work. REMY is a start: a valuable, easy-to-use, expandable resource for novice and expert MRS researchers.

Future Directions

Despite its current limitation to single voxel multinuclear MRS studies, the open-source nature of REMY invites contributions that could extend its applicability to other MRS applications. Moreover, the ability to integrate REMY into existing MRS processing tools promises further simplification of methodological reporting, increasing the transparency and credibility of MRS research. Additionally, to advance reporting in preclinical studies, we will implement automatic population of the water suppression mode and pulse bandwidth from the Bruker method file.

Lastly, the implementation of REMY could be refined and the source code improved in accordance with community recommendations. For example, the future roadmap for REMY includes a demographics section that will be generated when researchers import all datasets of one study. The demographics section would provide the number of participants and, where available, gender, mean age, etc. This would be especially helpful for meta- and mega-analyses. With the input of all datasets, REMY would highlight fields where parameters, accidentally or deliberately, diverge across scans. In addition, after extraction is complete, the user would be presented with an editable list of the extracted parameters so that they can insert parameters that are typically not in headers (e.g., the water suppression method) before the methods section is generated.
Conclusion

We developed the REMY application to boost the adoption of the MRSinMRS consensus recommendations as a solution for insufficient MRS research reporting. By automating the population of the MRSinMRS table with essential study parameters from a single dataset, REMY facilitates the replication of studies and the evaluation of methodologies. Challenges such as the variation in parameter nomenclature across different scanner vendors and the technical hurdles associated with manual table completion have been effectively resolved. Additionally, by providing an automatically filled, standardized methods section with appropriate references, REMY sets a template that can be directly incorporated into publications, addressing the pressing need for standardized reporting in the growing body of MRS literature.

Table 1. Compilation of MRS consensus papers, along with their corresponding citation numbers. This table is a resource for any questions within the MRS field and the most up-to-date list of published consensus papers in MRS. Entries recovered in part:
- ... consensus on clinical proton MRS of the brain: Review and recommendations (274 citations)
- Near et al. (2021), Preprocessing, analysis and quantification in single-voxel magnetic resonance spectroscopy: experts' consensus recommendations
- ... magnetic resonance spectroscopy in skeletal muscle: Experts' consensus recommendations (67 citations)
- Kreis et al. (2020), Terminology and concepts for the characterization of in vivo MR spectroscopy methods and MR spectra: Background and experts' consensus recommendations
- ... water and lipid suppression techniques for advanced 1H MRS and MRSI of the human brain: Experts' consensus recommendations
- Lanz et al. (2021), Magnetic resonance spectroscopy in the rodent brain: experts' consensus recommendations (12 citations)

Figure 4. Reproducibility Made Easy User Interface. Single file import facilitates the table and methods section output at the location specified by the user.
2024-03-29T06:44:56.882Z
2024-03-28T00:00:00.000
{ "year": 2024, "sha1": "e702cc99fea2047fbdd7103731202ad26c26098b", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "ArXiv", "pdf_hash": "e7c59ef39ff0357dfdafef5057b1ea8881d76067", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
229426004
pes2o/s2orc
v3-fos-license
Young Spanish Adults and Disinformation: Do They Identify and Spread Fake News and Are They Literate in It? The infodiet of young Spanish adults aged 18 to 25 was analysed to determine their attitude towards fake news. The objectives were: to establish whether they have received any training in fake news; to determine whether they know how to identify fake information; and to investigate whether they spread it. The study employed a descriptive quantitative method consisting of a survey of 500 interviews representative of the Spanish population aged between 18 and 25, conducted through a structured questionnaire. The results indicate that they are aware of the importance of training, although they generally do not know of any course and, when they do, they tend not to enroll in one, either through lack of interest or lack of time. These young adults feel that they know how to identify fake content and, moreover, that they know how to do so very well. However, they do not use the best tools. While they do not always verify information, they mainly suspect the credibility of information when it is meaningless. However, they do not tend to spread fake information. We conclude that media information literacy training (MILT), focused on the main issues identified, is necessary in educational centres.

Introduction In the era of fake news, information consumption patterns require media literacy to empower citizens and help them acquire the media skills necessary to access, understand, analyse, evaluate and produce content and to distinguish between real and fake news [1]. In addition to the problem of the immediacy with which it is generated and spread, various studies warn that fake news is also widely believed in society. The report "Fake news, filter bubbles, post-truth and trust" [2] revealed that Spanish people were the most likely in Europe to believe fake news, and forecasts do not indicate any improvement: according to the Gartner report [3], by 2022 more fake information will be consumed than true information. Thus, it is extremely important to determine whether young adults are equipped to deal with misinformation. This study analyses young adults because they are the age group who most consume information in the digital environment [4,5] and are "those who feel most vulnerable to fake news [...]. Indeed, almost half of the people who believe they receive fake news are very often aged between 18 and 34 years old" [6]. In this study we analyse the infodiet of young Spanish adults between the ages of 18 and 25 to determine the filters they apply to the information they consume in order to avoid fake content. We analysed whether they spread fake content because the circulation of fake information is one of the complex problems that must be addressed. In this regard, the World Economic Forum warns that "the spread of disinformation online is one of the 10 global risks of the future" [7] p. 407. We also examine whether they have received any kind of training to deal with fake news, as it has damaging consequences for the political, social and economic future and for daily decision-making, among many other things. To mitigate it, mechanisms have been created in various spheres, including social networks, the European Union, and the United Nations Educational, Scientific and Cultural Organization (UNESCO). Media organizations have introduced fact-checking.
These measures are of interest to the scientific community, with studies documenting the verification initiatives implemented at both the international level [8] and the national level, such as B de Bulo [9] or Maldita.es [10]. Other work has examined the variety of authentication methods, practices and tools aimed at helping users and media professionals protect themselves from fake content and ensure the quality of the information presented, taking into consideration recent advances in multi-channel media storytelling and their potential in cross-modal veracity strategies [11]. The similarities and discrepancies between academic and professional discourse around fact-checking have also been analysed [12], as has the role journalistic deontology plays as a tool in the fight against fake information [13]. Such tools help define what some researchers are beginning to refer to as the future of journalism in post-truth times [14] or the new global media ecosystem suffused with fake information [15]. However, along with these initiatives it is also necessary to provide a solid education in fake news, given the amount of non-journalistic content disseminated on the Internet and consumed daily. Alonso [16] points to the need for media literacy across society to deal with information disorder. To this end, several training courses have been organised in Spain. The modalities offered comprise seminars or workshops organised by educational or business institutions and taught by experts in the field or by those who work with verification platforms in Spain such as Maldita.es and Newtral, as well as courses run in collaboration with Google after it launched its verification workshop. There are also initiatives run by the European observatory for the analysis and prevention of misinformation (ObEDes). These courses mainly analyse such elements as: the role played in society by fake news and post-truth; identifying the objective of fake news; investigating who is responsible for fake news; studying the models of propagation and distribution of fake news online; classifying the types of fake news; studying the formats and genres of fake news; learning how to detect and combat fake news; and understanding the concept of fake news, among other contents. In this context, this research aims to provide data on young Spanish adults and their relationship with fake news. The goal is to provide significant data for creating effective curricular programs that support the shift from fast consumption to consumption that applies criteria to verify credibility and examines issues relating to information, thereby contributing to an ecosystem of reliable, responsible and transparent information. The Ethical Journalism Network (EJN) defines fake news as "information deliberately fabricated and published with the intention to deceive and mislead others into believing falsehoods or doubting verifiable facts" [17]. Such information, which according to the Cambridge Dictionary [18] is characterized by presenting itself as news, is "generally created to influence political opinions or as a joke". Amoros also considers that it poses as news "with the aim of spreading a hoax or deliberate misinformation to obtain a political or financial end" [19] p. 171. Fake news is a concept that young Spanish people are well aware of.
Mendiguren, Dasilva and Meso [20] reveal that young university adults understand fake news as: fake information that is intended to influence people's opinions; fake information usually spread through social networks in order to manipulate public opinion in the interests of those who spread it; news with fake information; or news with fake information that becomes so well known that many end up accepting it as true without even corroborating it. After conducting a review of how academic studies defined and put into practice the term fake news, Tandoc, Wei Lim, and Ling drew up a classification consisting of six types of fake news: "news satire, news parody, fabricated, manipulated, publicity and propaganda" [21] p. 141. However, Martens, Aguar, Gómez and Mueller-Langer [22] highlight that there is no consensus regarding this term. Indeed, some argue against using the term fake news at all, as it affects the credibility of journalism: associating fake information with the news is a breach of the essence of journalism, which is to tell the truth about what happened. Therefore, it should be noted that "even if fake news has the appearance of journalistic news (headline, journalistic structure and appearing to have a reliable interface), fake news can never be considered journalistic content because it contravenes the journalistic essence" [23] p. 245, which is why an open debate on how to designate this type of information is considered necessary. Indeed, Rodríguez-Pérez proposed that it is better to use the term disinformation than fake news to address hoaxes, or misleading or malicious content, for four reasons: "Firstly, we highlight the simplification of the concept with regard to the complexity of disinformation; secondly, the oxymoron of the term fake news; thirdly, the discursive appropriation of the term by political leaders to discredit the media and journalists; and, fourthly, the intrinsic economic and ideological motivations associated with the generation of fake news" [24] p. 72. The European Commission's Communication on tackling online disinformation [25] defines disinformation as "verifiably false or misleading information created, presented and disseminated for economic gain or to intentionally deceive the public" (para. 1), noting how "misinformation and fake news intervene in democratic processes such as elections and create a public opinion based on lies and false information that many people believe to be true" (para. 3). Regardless of the term used, it is considered a danger to democratic life and a geopolitical threat [26]. The rise of fake news and disinformation is, therefore, one of the main issues to be addressed internationally.

Young Adults and Fake Information Studies focused on young adults and fake news have mainly addressed one sector: university students. At the international level, the habits of Portuguese university students with regard to fake news have been investigated, including the criteria they adopt before sharing information and the perception they have of fake information [27,28]. Studies have also examined how Salvadoran students from the Monica Herrera School of Communication and the José Simeón Cañas Central American University inform themselves, process news and verify facts [29]. Similarly, the effectiveness of the courses on verification taught to students at the University of Florence has also been analysed [30].
However, academic interest in the university environment has not focused exclusively on young students but also on other sectors of the university community. For example, the study by Pineda et al. [31] examined the news consulting, comparing and verifying habits of students, teachers and administrative staff of the Tecnológico de Antioquia in Colombia, while Malaquías, Lizbeth, Pérez Rivera, Ramos and Villegas [32] compared young Mexicans aged between 18 and 30 with a university education and those with only a basic education in order to establish whether people who do not study at university consume and share more fake news. In Spain, the subject of our study, researchers have investigated the level of credibility that young university students taking a degree in Communication and Education at the Loyola Andalusia University give to information, revealing differences both in terms of gender and level of studies [7]. This field of study was expanded by Mendiguren, Dasilva, and Meso [20], who studied whether university students studying journalism at the University of the Basque Country knew how to identify fake news, whether they believed they had the criteria to distinguish it, and how they verify information when they suspect that it lacks rigour, as well as the credibility they give mainstream media and the dissemination of the news they trusted least. The study by Catalina, Sousa and Cristina Silva [4] is also significant. They compared Spain, Brazil and Portugal in order to determine how future journalists inform themselves in the digital environment, the uses they make of it both for consulting and disseminating news, the degree to which they consider themselves capable of identifying fake information, where they believe most fake news is located, the reasons for its spread, and the degree of credibility they give to various media organizations. In addition to these studies are various prominent research projects, such as the one carried out by the Universities of Huelva, Granada and Vigo titled "Conspiracy Theories and Disinformation in Andalusia" [33], which analyses whether the current panorama, characterized by the proliferation of disinformation, paves the way for the creation and rapid dissemination of conspiracy theories among young Andalusian residents aged 18 and over. The study presented here aims to provide data on the identification and dissemination of fake information by young Spanish adults and on whether they have received any training in it. The results will be useful in helping to create effective curricular designs that provide media information literacy training (MILT), allowing young adults to gain the skills and attitudes needed to address fake news and disinformation.

Study Design In order to determine the habits of young Spanish adults when faced with the reception of fake news, its dissemination, their level of literacy and the importance they give to being trained to detect fake news, we used primary data, namely data collected for the first time and specifically to cover particular information objectives [34]. The data were gathered through a descriptive quantitative research design [35]. Specifically, a survey of the Spanish population aged between 18 and 25 was carried out using a structured questionnaire, with a sample of 501 panel interviews conducted online between 23 July and 14 August 2020. The study followed a quality control procedure in each of the processes.
To guarantee the quality of the questionnaire design and its correct understanding, prior supervision was requested from three social science research professionals. To guarantee the quality of the fieldwork, we collaborated with the company Netquest, which has at its disposal a community of individuals who participate by single invitation only, thereby reducing the risk of self-selection and duplication and providing exclusive information. Moreover, this company holds an ISO 26362 certificate. Prior to carrying out the fieldwork, the questionnaire was piloted to check its suitability.

Sample Design For the design of the sample [36], the weight of each sociodemographic segment in the Spanish population was sought according to the National Institute of Statistics, applying the same proportions to the scheduled 500 interviews. As the fieldwork was carried out, compliance with the study quotas was verified. Therefore, the large sample size and the chosen sampling system allowed us to extrapolate the results to the entire Spanish population aged 18 to 25, with a sampling error of ±4.47% at a 95% confidence level (Table 1).

Questionnaire Design The first part of the questionnaire collected information on sociodemographic data such as sex, age, province, habitat, area, social class and educational level. Next, the central questions of the questionnaire were broken down into why fake news is generated, the ability to detect fake news, why a news story is considered fake, to what extent the news is checked and how this information is verified, and how often fake news is disseminated and why, finishing with the importance and level of training in the verification of fake news.

Statistical Methods The collected data were cross-referenced with sociodemographic variables to observe whether there were statistically significant differences between the various segments analysed. These segments were: sex, age, level of education (first grade, second grade, third grade), size of habitat (fewer than 50,000 or more than 50,000 inhabitants), social class (high-high, high, medium-high, medium-medium, medium-low, low and low-low) and geographical area (Northeast/Catalonia and Balearic Islands, Levante, South/Andalusia, Central, Northwest, North central, Canary Islands, Metropolitan area of Barcelona, Metropolitan area of Madrid) of the respondents. To determine the existence of statistically significant differences in the information obtained, a t-test of proportions was carried out, which allows cell-by-cell comparison of a table with category variables from independent samples [37]. This test compares the values between two cells of the same row across the columns of the table. For each column, the t-test was used on the hypothesis that the population proportions of case A and case B can be considered equal versus the hypothesis that they are significantly different (either much higher or much lower) at a 95% confidence level. In the tables, significant statistical differences are represented with capital letters, which coincide with the column whose proportion is considered higher. An illustrative sketch of these computations follows.
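As an illustrative aside (not part of the original study), the hedged sketch below reproduces the two quantities just described: the sampling margin of error for n = 501 under simple random sampling with worst-case variance, and a two-proportion significance test of the kind used to compare column percentages. The specific counts fed to the test are hypothetical.

```python
# Hedged sketch: margin of error and a two-proportion z-test, assuming simple
# random sampling and worst-case variance p = 0.5.
from math import sqrt
from scipy.stats import norm

n = 501
z95 = norm.ppf(0.975)            # ~1.96 for a 95% confidence level
moe = z95 * sqrt(0.5 * 0.5 / n)  # worst-case margin of error
print(f"margin of error: +/-{moe:.2%}")  # ~ +/-4.38%; the paper reports +/-4.47%,
# consistent with the k = 2 convention often used in Spanish survey research.

# Two-proportion z-test comparing a percentage between two segments
# (hypothetical counts; the study's actual tables are not reproduced here).
def two_prop_ztest(x1, n1, x2, n2, alpha=0.05):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))               # two-sided p-value
    return z, p_value, p_value < alpha

print(two_prop_ztest(160, 250, 120, 251))  # e.g., men vs. women on one item
```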
Literacy of Young Spanish Adults Regarding Fake News We found that 76.8% of young Spanish adults aged between 18 and 25 attach great importance to media literacy to prevent disinformation (very important 33.1%, quite important 43.7%). In particular, those who attach greatest importance to training in the verification of information and the detection of hoaxes are young people over the age of 20 and those with a higher education. No statistically significant differences were observed in the rest of the segments analysed (Table 2). However, 76.2% of those interviewed were unaware of any literacy program, while 23.8% stated that they knew of one, either as a result of their own initiative (11.4%) or because they had been offered one (12.4%). Young people with third grade studies were most familiar with this type of course. No significant differences were observed in the rest of the segments studied (Table 3). Regarding participation in a course or receiving training on how to detect fake news, among those young adults who were aware of one, 76.5% did not take part, compared to 23.5% who received such training (Table 4). The courses undertaken were carried out mainly at university (46.4%) (Table 5) and were mainly free (64.3%) (Table 6). The main reasons why young people who were aware of a course on how to detect fake news did not take part were lack of interest (35.2%), lack of time (14.3%), especially among those aged 20 to 22 (20.6%), and the belief that they already knew how to detect fake news (14.3%) (Table 7). Finally, young people believe that the main reasons fake news is generated include the following: to gain audiences or more visits, followers or clicks (17%); readers' lack of training, meaning they do not know how to inform themselves, corroborate the information or be critical of the information received (13.8%); to attract attention, or out of interest and convenience (11.8% each); and to earn money or to manipulate and influence society (both reasons, 10.8%). None of the other reasons cited exceeded 10% of mentions (Table 8).

Identification of Fake News To achieve the second aim of this study, namely to determine whether young Spanish adults know how to verify the content they consume, we first analysed the extent to which young people believe they know how to identify fake news. The results indicate that 59.5% of young people think they know how to identify fake news very well or quite well, a perception that increases among men (63.9%), with age (63.1% among those aged 23 to 25) and with the level of studies (third grade, 69.3%) (Table 9). Foremost among the range of reasons presented to the interviewees as to why they think a news item is fake is the incongruity or meaninglessness of the news item, an aspect most mentioned among women (87%), the population aged 18-19 (89.5%) and the upper and upper-middle social class (86.7%). Another notable reason is whether the news comes from social networks such as WhatsApp (58.5%) and, to a lesser extent, whether it generates social alarm (43.7%), has a very attractive headline (33.1%) or contains shocking information (28.9%) (Table 10). We found that 4 out of 10 young people (39.5%) are in the habit of always checking whether the news they read is true or fake, compared to 55.7% who check it occasionally, while 4.8% never verify it (Table 11).
Regarding the mechanisms that young Spanish adults use to verify information, 49.9% do so through friends and family (primarily women, 54.5%; young people aged 18-19, 60%; and those with a lower level of studies, second grade, 55.3%), while 40.7% check it through specialized websites (StopBulos, Maldita.es), especially young adults between 23 and 25 years old (44.8%). Other ways of verifying information, cited to a lesser extent and grouped under "Other answers", include consulting other media outlets such as the press, radio or television (13.8%) and investigating the information and its sources (7.8%), with other methods reaching much lower percentages (Table 12). When asked about the degree of importance they attach to the actions organizations recommend for verifying information, the results indicate that the reputation of the media organization is the most important factor in determining whether the news is true or fake (Top Two Box 75.2%), a view held primarily by young people between 23 and 25 years of age (81.8%) and those with third grade studies (81%). In contrast, the least relevant factor is the author of the news item (Bottom Two Box 36.5%) (Table 13). Table 13. Question 8: Think of the moment when you are reading a news item that you have searched for or have been sent. How much importance do you attach to each of the following in order to know whether the news item is true or fake? (Rotate items and show scale).

Dissemination of Fake News We found that 87.6% of young people have at some time received fake news, especially women (91.5%), those with the highest level of education (93.7%) and social class (90.7%), while 6.6% claim to have spread fake news at some point, compared to 93.5% who do not tend to spread such news (Table 14). Regarding whether fun, boredom or the prospect of generating more social relations influence the dissemination of fake news, the data indicate that 51.5% never spread it because they enjoy it, 72.7% never do so as an excuse to relate to people, and 60.6% never do so out of boredom. On the other hand, 48.5% of those who knowingly spread fake news always did so to warn others that the item was fake (Table 15). Finally, approximately 4 out of 10 young people always encourage their contacts, friends or family members to disseminate information only if they have first verified it (45.1%); women stand out here, as well as young people with third grade studies and those from high and medium-high social classes. When they receive a news item and realize that it is or may be fake, 5 out of 10 young people always warn the person who sent it to them (55.1%), with the strongest showing from the same segments: women, young people with third grade studies and those from high and medium-high social classes. Seven out of 10 respondents eliminate news from their social networks when they know it to be fake (75%), especially young people from high and medium-high social classes (Table 16).
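Since several results above summarize five-point scale items with "Top Two Box" and "Bottom Two Box" scores, a brief illustrative computation may help; the response counts below are hypothetical and are not taken from the study's tables.

```python
# Hedged sketch of Top Two Box / Bottom Two Box scoring on a 5-point item,
# where 5 = "very important" ... 1 = "not at all important" (assumed coding).
from collections import Counter

responses = [5, 4, 4, 3, 5, 2, 4, 1, 5, 3, 4, 5]  # hypothetical answers
counts = Counter(responses)
n = len(responses)

top_two = (counts[5] + counts[4]) / n      # share answering 5 or 4
bottom_two = (counts[1] + counts[2]) / n   # share answering 1 or 2
print(f"Top Two Box: {top_two:.1%}, Bottom Two Box: {bottom_two:.1%}")
```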
Discussion and Conclusions Young Spanish adults are aware of the importance of training in order to know how to determine the veracity of information. This degree of awareness is probably, as the Digital News Report Spain [38] indicated, a result of the fact that young people between 18 and 24 believe that most news cannot be trusted, a finding we corroborated when we asked them about the causes of disinformation. Young Spanish adults indicated that disinformation results from a lack of critical knowledge when consuming information, this reason being ranked second among the reasons provided: 13.8% believe that not knowing how to get informed, not knowing how to contrast content and not being critical of the information received is one of the main reasons why fake news is generated. This can be interpreted as apportioning blame to the illiterate reader, given that their lack of training contributes to achieving the objectives of those who create fake news items in order to gain an audience, generate more visits and gain more followers (17%). Training is necessary: to acquire the media competencies needed to tell truth from falsehood; to stop the profits made by the creators of fake news; and to combat one of the reasons why they consider such information is generated: readers who lack the ability to discern disinformation. However, it is highly significant that, although they attach great importance to media literacy, 8 out of 10 young people do not know of any training program, which implies that they have not attended one either. These results allow us to conclude that there are problems surrounding the publicity of the programs offered because, despite being abundant, young adults between 18 and 25 remain unaware of them. This calls for measures to be taken to improve their impact on this age group. However, being aware of courses does not mean that they are going to undertake one either, since only 2 out of 10 young people who are aware of a fake news learning program end up taking one. Those who have done so mainly took free courses at universities and institutes, allowing us to conclude that only those who have studied in educational centres providing such teaching programs have taken one. This theory is strengthened by the observation that young adults do not take the initiative to find these courses and that the main reasons they fail to enrol include not being interested in the course, a lack of time, or believing that they already know how to discern real news from fake. Thus, we believe that educational centres at all levels should be the main places to carry out such training, since they eliminate the problem of time and students' refusal to undertake a course in favour of acquiring critical knowledge of information. In this regard, UNESCO stresses that this training must be undertaken in the academic sector. Similarly, due to the lack of training in this age group, we can confirm, regardless of gender or level of studies, the presence of a "media literacy crisis" and the urgent need for "transmedia literacy" (Scolari [39]) or for media and informational educommunication. Such training is necessary because young people between 18 and 25 years of age believe that, despite not being aware of or having taken a course, they know how to identify fake news, with 6 out of 10 believing they know how to do so very well or quite well. However, when asked how they identify fake news, for 5 out of 10 young people the most representative answer is asking family and friends.
The study "The conditioning factors of disinformation and proposed solutions against its impact based on the degrees of vulnerability of the groups analysed" [40], carried out by the Centro de Estudios de San Pablo CEU, revealed the trust young people usually have in their relatives, friends and closest personal references, believing the information that comes through them to be reliable and credible. Thus, young Spanish adults believe they know how to identify fake news but do not use the optimal tools for its verification. These results are corroborated by those of the "Study on the impact of fake news in Spain" [41], which revealed that more than fifty percent of young people believed they knew how to identify fake news but that only 4% actually knew how to, and by those of Herrero, Conde, Tapia and Varona [7], who concluded that young adults have difficulties in differentiating the veracity of sources. Therefore, these data lead us to believe that it is necessary to create more activities and to support socio-educational projects in order to allow young Spanish adults to attend courses, to take the initiative to look for such courses autonomously and to raise their interest in them. According to the data obtained, young adults also play a role in the success of fake news, as they are a vulnerable sector. Young Spanish adults represent an age group that does not always verify information. They primarily suspect the credibility of news stories that are incongruous or nonsensical, or that reach them through WhatsApp. In second place are news stories that have an eye-catching headline, that generate social alarm or that are shocking; the respondents did not, however, spontaneously mention the actions various organizations stress as necessary, such as investigating the reputation of the media outlet, the sources or the date of publication. However, when asked directly about these actions, they indicated the reputation of the media organization and the sources as being very important. Therefore, while these verification actions are not applied spontaneously, young Spanish adults do understand their degree of importance. The regular application of these actions in critical information consumption must therefore be encouraged in training programs. However, it is significant that although they receive a lot of fake news, as the study by Panda Security [6] also revealed, young Spanish people do not tend to spread it. These findings are corroborated by those of Carballo and Marroquín [29], who observed that three quarters of the young adults analysed reported that they do not spread fake information, an observation also confirmed internationally by Guess, Nagler and Tucker [42], who found that during the Trump elections "users over 65 years old shared seven times more articles from fake news domains than the youngest age group" (p. 1). Thus, although there is a certain tendency to criticize the younger generations, this has more to do with fear than with a real analysis of them. They are attacked for being connected to the Internet all day and sharing any type of information. Not only do they tend not to spread fake news, they also delete it from their social networks, an observation also made in the study by Carballo and Marroquín (2020) [29]. Therefore, in agreement with Buckingham [43], we conclude that the implementation of news literacy and of coherent and rigorous "educational" programs is needed.
Reports indicate that in 2022 fake information will be habitually consumed and that, although young adults are aware of the dangers of fake news, they are not trained in verifying information or in critical consumption. It is important that such training be undertaken in educational centres and that it focus mainly on teaching students how to identify fake news. Moreover, young adults need to be taught the importance of not spreading it. In addition, it should be stressed to them that although spreading fake news is not a deficiency in this age group, believing they can identify it without training or effective techniques is. Nonetheless, these curricular programs should also teach young people not to get carried away with spreading fake news simply for the fun of it, as this is one of the main reasons that leads them to share fake information on the few occasions they do. Similarly, they must be trained to be critical of information, checking the veracity of each news item by, for example, checking the source and the date (among other actions recommended by various organizations), and not just when they believe it to be of doubtful origin. Ranieri, Si Stasio and Bruni (2018) [30] confirm that young adults who take training courses increase their skills. They analysed the results obtained in workshops on fake news provided to students at the University of Florence (2017-2018) and concluded that they are useful because they enable optimal information literacy. Future studies should examine the reasons preventing young Spanish adults between 18 and 25 from knowing about training courses on fake news, aggregate the programs being undertaken in educational centres in Spain, and carry out comparative studies across Europe.
2020-12-03T09:06:12.418Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "d698a0b9a7e8499b0d21d1f796fce54a80a7bfeb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-6775/9/1/2/pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "57a58f28bae5bdf34bbfa08885e42d6ba483164c", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "History" ] }
187591740
pes2o/s2orc
v3-fos-license
Differential Effects of Selected Psychotherapeutic Interventions on Procrastination Reduction and Improved Mathematics Achievement among Low Achieving Students This study investigated the differential effects of selected psychotherapeutic interventions on procrastination reduction and improved mathematics achievement among low achieving students in Enugu State. A quasi-experimental, pre-test, post-test, control-group research design was adopted. Purposive sampling was used to select a sample of n = 36 Senior Secondary School One (SS1) students, both male and female. Participants were randomly assigned to three groups, "A", "B" and "C". Cognitive Behaviour Therapy (CBT) and Solution-Focused Brief Therapy (SFBT) interventions were used for the study. The treatment with CBT was applied to group "A" for 15 weeks, while SFBT was applied to group "B" for 8 weeks. The instruments used include the Procrastination Assessment Scale-Students (PASS), the Teacher Made Mathematics Diagnostic Test (TMMDT) and the Teacher Made Mathematics Achievement Test (TMMAT). They were validated and their reliability tested; their correlation coefficients were .900, .816 and .868 for the three instruments, respectively. The ANCOVA result shows that F(2,32) = 33.44, p = .001, with differential effects of CBT and SFBT between groups. The results showed that SFBT had a greater effect on procrastination reduction, and the null hypothesis of no significant difference in the mathematics improvement of students in the CBT and SFBT experimental groups between pre-test and post-test scores is rejected at both the .05 and .01 levels of significance. From the study we conclude that CBT and SFBT interventions are effective for procrastination reduction and improved mathematics achievement among low achieving students, but there is no gender effect. We recommend that psychotherapeutic interventions be used to improve academic achievement in general.

Introduction The low academic achievement of students in secondary schools today makes one look at the dysfunctional behaviours they exhibit, so as to proffer solutions. Many have tried to identify these sabotaging behaviours. The Commissioner for Education in Enugu State, Nigeria, attributed the mass failure to students' poor reading culture, as many of them engage in examination malpractice [1], while some stakeholders in education point an accusing finger at students' lack of interest [2]. Others attribute the low performance of students to lack of motivation and readiness to learn. Similarly, some are of the opinion that students find it difficult to sit down and study their books owing to an inclination to distraction and poor concentration [3]. On the same note, the researchers observed complex habits among students in the form of academic procrastination that hinder their academic achievement, especially in core subjects like mathematics. One thing is to identify a problem; another is to proffer a solution to it. Therefore, with this habit of procrastination among students, as identified by the researchers, educators are challenged within their areas of practice as to which sound intervention approaches to apply to this sabotaging habit that hinders students from achieving their full potential. This sabotaging behaviour of procrastination is one that most students have indulged in, in one way or another (Klein, cited in [4]).
Unfortunately, it is noted that no matter how efficient or committed an individual is, there is a likelihood that he or she has at some point thrown away time on insignificant things instead of spending that time on school-related tasks. Although it is an observed fact that most students have procrastinated at some point in the course of their studies, this study is mostly concerned with those students who have become habitual procrastinators, because procrastination is regarded as negative only if the tendency becomes habitual and addictive, thus resulting in the low academic achievement that many of those who indulge in it are experiencing today. From all indications, academic procrastination, from the researchers' observations to the existing literature and even its definition, is portrayed as negative in nature. Most often, academic procrastination consists of the intentional delay of an intended course of action, in spite of an awareness of negative outcomes [4,5]. Put simply, procrastination is the act of putting off until tomorrow what should be done today. Moreover, procrastination should not be used interchangeably with the word delay, since delay has rational reasons behind putting something off, as opposed to procrastination [6]. This practice can have unfortunate effects on an individual if he or she makes a habit of it or indulges in it repeatedly. Procrastination is the delaying of a task that was originally planned despite expecting to be worse off for the delay [7]. All the definitions of procrastination have a common theme revealing a core or essential element. It is evident that all conceptualizations of procrastination recognize that there must be a postponing, delaying, or putting off of a task or decision, which is in line with the Latin origin of procrastination: pro, meaning forward, forth, or in favour of, and crastinus, meaning of tomorrow (Klein, cited in [4]). Thus, it is very obvious that academic procrastination is irrational, yet students end up voluntarily choosing a course of action that they know will not maximize their physical, psychological and mental wellbeing or their goals in life. The repercussions of procrastination are miserable indeed, as it has been found that the more an individual finds a particular task difficult, the more that individual procrastinates over that task. Mathematics is one of the academic tasks that most students complain about, and it is tagged as difficult. This is a big problem both now and in the future, since mathematics, which stands as the life wire of scientific and technological development, is mostly avoided by many low achieving students. It has been observed that some students avoid mathematics simply because they do not like figures. This avoidance may not be a result of their ability, but many students refer to mathematics as difficult. As noted by some researchers, when a task seems difficult, unpleasant, or overpowering, it may bring about procrastination [8,9], which remains one of the least understood human miseries [10]. Procrastination is considered a bad, irrational way of thinking and, as a consequence, produces inefficient and maladaptive behaviour.
Therefore, in order to avert this disrupting, self-defeating chain of thought, feeling and academic procrastination, which translates to low academic achievement especially in mathematics, selected psychotherapeutic interventions are required; the most effective in this case should be Cognitive Behaviour Therapy (CBT) and Solution-Focused Brief Therapy (SFBT). Cognitive Behaviour Therapy (CBT) is an insight-focused therapy that emphasizes recognizing and changing negative thoughts and maladaptive beliefs in the individual [11]. CBT was originally developed by Aaron T. Beck for depression but has been used to improve other behavioural cases [12]. The premise of cognitive behaviour therapy is that our thoughts influence our feelings and behaviour. Beck theorized that by changing our thoughts, or our relationship to our thoughts, we can change behaviour and emotion [11]. Furthermore, the CBT approach is based on the theoretical rationale that the way people feel and behave is determined by how they perceive and structure their experience [11]. Cognitive Behaviour Therapy has two components, the cognitive and the behavioural. The cognitive component helps people change, through cognitive restructuring, the thinking patterns that keep them from overcoming their fears. For example, a person with a habit of procrastination will be helped to see that his or her task-avoidance attitude, rooted in fear of tackling the problem at hand, can be overcome. The behavioural component of CBT seeks to change people's reactions to anxiety-provoking situations. A key element of this component is exposure, in which people confront the things they fear and start to take action instead of procrastinating. CBT proposes that change comes about by changing the client's thinking about the situation. Once the client has converted his or her point of view, the problem perception shifts to a clearer context [12]. Solution-Focused Brief Therapy (SFBT) can also be used to challenge this disrupting, self-defeating chain of thought, feeling and academic procrastination, which translates to low academic achievement especially in mathematics. It is a new and increasingly used therapeutic approach that focuses on helping clients construct solutions rather than solve problems [13]. Solution-Focused Brief Therapy was developed over 20 years by Steve de Shazer and Insoo Kim Berg and has been used in a variety of contexts, including schools, agencies and private practice, with a wide range of clients including children, adolescents, couples and families (Reiter, cited in [14]). In SFBT, life is imagined as a pie chart, with each slice of the pie a different size [15]. At any given time, some sections of the pie are going to be off or not functioning at their best, and that is normal. A client who is problem-focused is looking only at the one slice of the pie that is not functioning. A solution-focused therapist is going to help the client fix that slice by drawing on the strengths that are part of the rest of the pie, and that is how the therapy works. Academic achievement becomes that part of the pie that is not functioning well in the life of the student due to habitual procrastination, and it needs to be fixed. This remains a big challenge to educationists and requires urgent intervention to improve the present situation.
The consequence is that if students do not get help, they progressively deepen their disturbance and end up in a state of hopelessness in which they relinquish efforts to change or improve their circumstances. Mathematics, as a foundation for scientific and technological advancement, is the subject most affected when it comes to task avoidance. That is why students' low achievement in mathematics continues in both internal and external examinations in Enugu State [2]. Mathematics holds the key to national development, yet it is observed that the same mathematics has one of the highest failure rates in all public examinations, from the common entrance examination into Junior Secondary School to the Senior School Certificate Examination (SSCE) and the National Examination Council (NECO) examination [16]. Furthermore, external examination results, as shown in the May/June 2014 WASSCE, indicate that 70 percent of candidates who took the examination failed to obtain credits in Mathematics [1]. This shows an obvious need for effective interventions to restructure students' minds, since the belief that mathematics is difficult to understand leads them to procrastinate. In conclusion, the researchers believe that if students are given the opportunity to experience mathematics aesthetically during mathematics lessons, in addition to the selected psychotherapeutic interventions, their interest in studying mathematics and in solving mathematics problems will be aroused, stimulated and kindled. That was exactly the case here, as the interest this approach generated led to students working harder and spending more time and energy solving mathematics problems and their academic tasks in general, since people spend more time on things that interest them. As students procrastinate less, their interest in academic tasks is sustained; mathematics phobia and the sense of difficulty in mathematics vanish, leading to greater improvement in mathematics achievement in Enugu State. It was in an effort to solve the problem of students' low academic achievement in mathematics that this study was carried out, and in so doing the researchers have contributed to the frontier of knowledge. This study was guided by two research questions and two corresponding null hypotheses, thus: 1. What are the differential effects of the two independent variables, CBT and SFBT, on procrastination reduction among low achieving students in the experimental groups, based on their post-test scores on the Procrastination Assessment Scale-Students (PASS)? 2. What is the difference in the mathematics improvement of the low achieving students in the CBT and SFBT experimental groups and the control group, based on their Teacher Made Mathematics Achievement Test (TMMAT) post-test scores? 3. There are no significant differential effects of the two independent variables, CBT and SFBT, on procrastination reduction among low achieving students in the experimental groups, based on their pre-test and post-test scores on the PASS. 4. There is no significant difference in the mathematics improvement of the low achieving students of the CBT and SFBT experimental groups on their mathematics achievement test pre-test and post-test scores.

Methodology This study adopted a quasi-experimental, pre-test, post-test, control-group research design.
The three-group quasi-experimental design was most suitable for the study, which involves two independent variables (Cognitive Behaviour Therapy and Solution-Focused Brief Therapy) and two dependent variables (low achieving students' procrastination reduction and mathematics improvement). In notational form, the three-group quasi-experimental design is as illustrated in Figure 1, with two experimental groups and one control group. The population for this study consisted of all 160 procrastinating and low achieving students (male and female) in the SS1 classes of three secondary schools in an urban area of Enugu North, Enugu State. The three secondary schools are Government Secondary School Enugu, New Layout Secondary School and Coal Camp Secondary School. A purposive sampling technique was applied to draw a sample of 36 participants from these three schools. The treatment of the two experimental groups (CBT and SFBT) lasted for 15 weeks, with one 55-minute session per week for each group. Group 1 was treated with CBT while Group 2 was treated with SFBT. Neither of these treatments was given to the control group. The instruments used for data collection were the Procrastination Assessment Scale-Students (PASS), the Teacher Made Mathematics Diagnostic Test (TMMDT) and the Teacher Made Mathematics Achievement Test (TMMAT). The PASS was adapted from the original procrastination scale developed and validated by Solomon and Rothblum (1984). The researchers modified the instrument to adapt it to the characteristics of the target group: of the 44 items in the original PASS, 30 items were adapted and modified for use as the data collection instrument in this study. The PASS is a five-point Likert scale with the following response options and corresponding weights: Always = 5, Nearly Always = 4, Sometimes = 3, Almost Never = 2, Never = 1. A score of 30 indicates an occasional procrastinator, 31-50 a chronic procrastinator, and above 50 a severe procrastinator; the higher the score, the higher the level of procrastination (adapted from [17]). The Teacher Made Mathematics Diagnostic Test (TMMDT) was used as a take-off assessment based on the students' first term scheme of work, comprising 20 objective questions to be answered in 30 minutes. The TMMDT was used as confirmation of low achievement in mathematics and was administered to the three groups before the treatment, while the Teacher Made Mathematics Achievement Test (TMMAT) was administered to the three groups as pre-test and post-test. The TMMAT was based on the students' second term scheme of work and comprised 20 objective questions and 5 theory questions, answered in 1 hour 30 minutes. In order to establish the reliability of the instruments (PASS, TMMDT and TMMAT), a pilot study was carried out on a sample of six (6) low achieving students of Community Secondary School Eva Valley, also in Enugu North L.G.A. of Enugu State. The test-retest method, whereby the same test was given to the same group of subjects on two separate occasions two weeks apart to avoid memory effects, was adopted to ascertain the reliability coefficients of the instruments; a minimal sketch of this scoring and reliability computation is shown below.
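As an illustrative aside (not drawn from the study's materials), the sketch below totals a hypothetical 30-item PASS response vector, classifies the respondent using the cut-offs just described, and computes a test-retest Pearson correlation for a small pilot sample. All data are invented placeholders.

```python
# Hedged sketch: PASS total score, cut-off classification, and test-retest
# reliability via Pearson's r. All data below are hypothetical.
import numpy as np
from scipy.stats import pearsonr

def pass_total(responses):
    """Sum 30 Likert items coded Always=5 ... Never=1 (assumed item coding)."""
    assert len(responses) == 30 and all(1 <= r <= 5 for r in responses)
    return sum(responses)

def classify(total):
    if total <= 30:
        return "occasional procrastinator"
    return "chronic procrastinator" if total <= 50 else "severe procrastinator"

responses = [3, 2, 4] * 10              # hypothetical 30-item answer vector
total = pass_total(responses)
print(total, "->", classify(total))     # 90 -> severe procrastinator

# Test-retest reliability: scores of the same 6 pilot students, two weeks apart.
test = np.array([62, 70, 55, 66, 71, 60])
retest = np.array([60, 72, 56, 65, 69, 61])
r, p = pearsonr(test, retest)
print(f"test-retest r = {r:.3f}")       # compare with the .82-.90 range reported
```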
The reliability coefficients, obtained using Pearson's product-moment correlation method, were PASS = .900, TMMDT = .816 and TMMAT = .868 (N = 6). With these coefficients, the researchers deemed the three instruments suitable for the study.

Results The results of the statistical analysis of the research questions and hypotheses are presented in the following tables. Table 1 shows that the CBT group had a post-test mean score (X̄ = 37.0000) lower than their pre-test mean score (X̄ = 66.4167) on the PASS. This result revealed a reduction in the post-test scores after the students' treatment with CBT. Likewise, the result for the SFBT group on the PASS shows that the post-test mean score (X̄ = 34.6667) is lower than the pre-test mean score (X̄ = 66.2500), an indication that SFBT caused a reduction in procrastination. Furthermore, the result for the control group on the PASS shows that the post-test mean score (X̄ = 66.1667) is virtually the same as the pre-test mean score (X̄ = 65.9167). Comparing the mean scores across the three groups, the SFBT group showed the greatest procrastination reduction, followed by the CBT group, while the control group, which received no psychotherapeutic intervention, experienced no procrastination reduction. For the CBT group, the post-test SD value for procrastination reduction (3.46410) suggests that the respondents' scores between the pre-test and post-test are widely spread. The post-test SD value of the SFBT group (4.20678) suggests that their scores are spread even more widely from the mean than those of the CBT group. On the other hand, the SDs of the control group on the post-test (3.95045) and pre-test (3.91868) are very close. The result in Table 2 shows that students in the CBT group had a post-test mean score of 51.25 with a standard deviation of 6.44; students in the SFBT group had a post-test mean score of 52.91 with a standard deviation of 6.20; and students in the control group had a post-test mean score of 39.27 with a standard deviation of 3.59. The individual mathematics improvement scores of students in the CBT and SFBT groups deviated more widely from the mean than those in the control group. Table 3 reveals that, between groups, the sum of squares is 7393.556 with 2 degrees of freedom and a mean square of 3696.778; within groups, the sum of squares is 498.33 with 33 degrees of freedom and a mean square of 15.101; the total sum of squares is 7891.89 with 35 degrees of freedom. The computed F is 244.80, which is statistically significant even at an alpha as low as .001. Therefore, the hypothesis that "there are no differential effects of the two independent variables (CBT and SFBT) on procrastination reduction among low achieving students in the experimental groups based on their post-test scores" is rejected, F(2,33) = 244.80, p < .001. A minimal sketch of this analysis, together with the ANCOVA reported next, appears below.
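To make the reported analyses concrete, the hedged sketch below runs a one-way ANOVA on post-test PASS scores and an ANCOVA on mathematics post-test scores with the pre-test as covariate, mirroring the design of three groups of 12. The data generated here are synthetic placeholders, not the study's data, so the resulting F values will differ from those reported.

```python
# Hedged sketch of the reported analyses on synthetic placeholder data:
# one-way ANOVA (post-test PASS by group) and ANCOVA (maths post-test by
# group, controlling for the pre-test covariate).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
groups = np.repeat(["CBT", "SFBT", "Control"], 12)       # 3 groups x 12 students
df = pd.DataFrame({
    "group": groups,
    "pass_post": np.concatenate([rng.normal(37.0, 3.5, 12),
                                 rng.normal(34.7, 4.2, 12),
                                 rng.normal(66.2, 4.0, 12)]),
    "math_pre": rng.normal(35, 5, 36),
})
# Post-test maths improves more in the treated groups (synthetic effect).
df["math_post"] = df["math_pre"] + np.where(df["group"] == "Control", 4, 16) \
                  + rng.normal(0, 4, 36)

# One-way ANOVA: F(2, 33) for the PASS post-test scores.
anova = sm.stats.anova_lm(ols("pass_post ~ C(group)", data=df).fit(), typ=1)
print(anova)

# ANCOVA: group effect on maths post-test with the pre-test covaried out, F(2, 32).
ancova = sm.stats.anova_lm(ols("math_post ~ math_pre + C(group)", data=df).fit(), typ=3)
print(ancova)
```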
The result in Table 4 shows that F(2,32) = 33.44, p = .001. Between-group effects are presented in the main effect (VAR00003) rows, while within-group effects are presented in the Error rows. In the main effect rows, the Type III sum of squares is 1399.20 with 2 degrees of freedom, a mean square of 699.60, an F ratio of 33.44 and a significance of p = .001. The Error rows have a sum of squares of 669.51, 32 degrees of freedom and a mean square of 20.92. The computed ANCOVA coefficient (F) of 16.79, p < .001, is statistically significant at the chosen alpha level of .05, with its actual probability as low as .001. Therefore, with the effect of the pre-test covaried out (adjusted for, removed or partialled out), the hypothesis which states that "there is no significant difference in the mathematics improvement of the low achieving students of the CBT and SFBT experimental groups on their mathematics achievement test pre-test and post-test scores" is rejected at both the .05 and .01 levels of significance. There is a statistically significant mean difference among the experimental groups and the control group because F(2,32) = 33.44, p < .05.

Discussion of Results The results reveal that null hypothesis 1 is rejected. Table 3 shows that, between groups, the sum of squares is 7393.556 with 2 degrees of freedom and a mean square of 3696.778; within groups, the sum of squares is 498.33 with 33 degrees of freedom and a mean square of 15.101; the total sum of squares is 7891.89 with 35 degrees of freedom. The computed F is 244.80, which is statistically significant even at an alpha as low as .001. Therefore, the hypothesis is rejected, F(2,33) = 244.80, p < .001. From this finding, one can say that although both CBT and SFBT are effective in reducing academic procrastination, students seem to be more comfortable with SFBT, which yielded better results. In line with this observation, Bloom and Tam [18] illustrated that the models of Cognitive Behaviour Therapy (CBT) and Solution-Focused Brief Therapy (SFBT) differ in their primary focus: problem solving versus solution building. These theoretical differences, they noted, lead to dissimilar practices, including the content of the therapeutic dialogue. Specifically, CBT sessions include more talk about negative topics in clients' lives, such as problems and situational difficulties, whereas SFBT sessions focus on positive topics in clients' lives, such as strengths and resources. This differential result may be attributed to the fact that people perform better when one builds on their strengths than when one emphasizes their deficiencies. Furthermore, in the same vein, Jordan, Froerer, and Bavelas [19] added that CBT and SFBT therapists make different assumptions about their clients. CBT therapists view clients as having unhealthy or faulty cognitions that lead to problematic behaviours. SFBT therapists assume that clients possess all the resources they need, so there is no need to identify deficiencies or pathologies. Such contrasting assumptions affect the delivery of therapy in important ways. Another reason for the effects of SFBT in this study might be the role of language, in line with De Jong, Bavelas, and Korman [20], who observed that there are differences between CBT and SFBT in their assumptions about the role of language in psychotherapy. CBT therapists ask questions in order to gather more information about problematic thoughts and behaviours so as to change faulty thinking. SFBT therapists ask questions to introduce new possibilities and co-construct new meanings. In this view, therapists and clients are not simply sending information about the client back and forth to each other; their dialogue is actively shaping a new version of the client's life. Students are not comfortable when you talk about their inability but are more motivated when you talk about their ability and strength.
From the findings of this study, SFBT has been shown to be effective, supporting the claim of Franklin, Moore and Hopson [21] in their study evaluating the effectiveness of Solution-Focused Brief Therapy on children with classroom-related behaviour problems within a school setting. In their study, five to seven sessions of Solution-Focused Brief Therapy were provided to 67 children identified by school faculty and staff as needing assistance in solving behaviour problems. Outcomes were evaluated using a pre-test/post-test follow-up design with a comparison group. Effect sizes and improved percentage scores were calculated. The findings provide support that Solution-Focused Brief Therapy was effective in improving internalizing and externalizing behaviour problems. The ANCOVA result in Table 4, which was used to test hypothesis 2, shows that F(2,32) = 33.44, p = .001. In the main effect rows, the Type III sum of squares is 1399.20 with 2 degrees of freedom, a mean square of 699.60 and a significance of p < .001; the Error rows have a sum of squares of 669.51, 32 degrees of freedom and a mean square of 20.92. The computed ANCOVA coefficient (F) of 16.79, p < .001, is statistically significant at the chosen alpha level of .05, with its actual probability as low as .001. In this case there is a statistically significant mean difference among the experimental groups and the control group because F(2,32) = 33.44, p < .05. There was real improvement among the students in the two experimental groups after the therapeutic interventions. This shows that some psychotherapeutic interventions improve low academic achievement, especially in mathematics. This finding is in line with Oundo, Nyaga, Barchok, and Mureithi [22], who carried out a study to assess counselling needs related to mathematics performance among secondary school students in Maara District in Kenya. The study examined counselling needs regarding attitudes, study methods and test-taking skills related to mathematics performance and determined whether statistically significant differences existed between psychological intervention requirements and mathematics counselling needs among the secondary school students. The study findings indicated that secondary school students had mathematics counselling needs in relation to attitude, study methods and test-taking skills, for which psychological intervention was necessary. The data analysis results indicated that 53.3% of the student participants were female while 46.7% were male. Their ages ranged between 16 and 20 years, with the majority (48.9%) being 17 years old. The students' mathematics performance at the primary level examinations was fair, with the majority (53.9%) scoring above the average grade, compared to only 10.4% at the secondary level examinations. This implies that there exist factors limiting students' performance in mathematics at secondary school examinations, and therefore interventions may mean progress in mathematics achievement for the students. From the findings of this study one can deduce a link between the significant reduction in academic procrastination seen in Table 4 and improved mathematics achievement. This is in agreement with the study carried out by Akinsola, Tella, and Tella [23], who examined the correlates of academic procrastination and mathematics achievement among university mathematics undergraduate students.
The study used a total sample of 150 part 3 and part 4 students in the Department of Mathematics and Mathematics Education of the University of Ibadan and the University of Lagos, Nigeria. The 35-item academic procrastination scale developed and validated by Tuckman [24] was used for data collection, in conjunction with the subjects' GPA scores to date in mathematics. Findings indicate that a significant correlation was found between the academic procrastination and academic achievement of the subjects in mathematics; a significant difference also exists between the levels of procrastination and the mathematics achievement of the subjects, with low procrastinators performing better than the moderate and high procrastinators. That is to say, the control group could not improve in their mathematics achievement because they remained high procrastinators. Still, in another study by Akinsola and Tella [23] in the Department of Mathematics and Mathematics Education of the University of Ibadan and the University of Lagos, they found that the more the subjects procrastinated, the more their achievement in mathematics decreased. This goes to explain why those in the control group performed poorly.

Conclusion

From the findings of this study, titled "Differential effect of selected psychotherapeutic interventions on academic procrastination reduction and improved mathematics achievement among low achieving students," the researchers conclude that:
• There is a differential significant effect of the two independent variables, CBT and SFBT, on procrastination reduction among low achieving students, although the SFBT group achieved better results.
• Both psychotherapeutic interventions, CBT and SFBT, produced a significant difference in mathematics improvement among low achieving students.

From the results of the findings of the study, both CBT and SFBT are effective psychotherapeutic interventions that can reduce academic procrastination and improve mathematics achievement among low achieving students in secondary schools, although SFBT proved to be more effective in this case.

Recommendations

Eight recommendations come to mind in the light of the results of this study. They are as follows:
1. Counselling and psychotherapy should be taken very seriously at the various levels of education in Enugu State and Nigeria in general, as a tool to bring back the glory of academic achievement.
2. Counsellors in secondary schools should not be given any teaching subject; rather, they should concentrate on providing their professional counselling services.
3. Teachers in secondary schools should collaborate with their guidance counsellors in order to help students achieve their full potential.
4. Mathematics teachers in secondary schools should learn to build on students' strengths instead of labelling some students as never doing well in mathematics. With a little encouragement, even those who avoid mathematics can become average students in the subject.
5. Counsellors should be involved in constant research and seminars or workshops in order to stay current on the best ways to help students in their learning processes.
6. The government has a great role to play in motivating counsellors and teachers to carry out their work diligently by paying them a good salary, and when due. In-service training for teachers and counsellors can also serve as a motivational tool for them to grow in their profession.
7. Very importantly, the government should always provide an enabling environment for teaching and learning.
Students learn better in a good environment and when motivated. Mathematics textbooks should carry some illustrations and pictures to attract the attention of low achieving students to the subject.
8. Students should be helped to achieve their educational goals.
M48U1 CD4 mimetic has a sustained inhibitory effect on cell-associated HIV-1 by attenuating virion infectivity through gp120 shedding

Background: HIV-1 infected cells can establish new infections by crossing the vaginal epithelia and subsequently producing virus in a milieu that avoids the high microbicide concentrations of the vaginal lumen. Findings: To address this problem, we report here that pretreatment of HIV-infected peripheral blood mononuclear cells (PBMCs) with a 27-amino-acid CD4 mimetic, M48U1, causes a dramatic and prolonged reduction of infectious virus output, due to its induction of gp120 shedding. Conclusions: M48U1 may, therefore, be valuable for prophylaxis of mucosal HIV-1 transmission.

Findings

The majority of new HIV infections worldwide are acquired through heterosexual transmission. Although receptive transmission at the vaginal mucosa is thought to be primarily caused by cell-free virus (CFV) [1], it may also involve transfer of HIV-infected leucocytes (i.e., cell-associated virus; CAV) present in semen [2,3]. The observation that CFV infection via the vaginal mucosa requires a ~10³–10⁶-fold higher virus dose [4] than is needed to establish infection by the intravenous route suggests that the healthy epithelium of the female genital tract is a robust barrier to HIV transmission. However, in contrast to CFV, infected seminal lymphocytes or macrophages are capable of migrating through intact epithelia and delivering virus directly to the submucosa or even the draining lymph nodes. This putative 'Trojan Horse' infection route (Figure 1A) is supported by various studies in mouse and macaque models [4-9]. Vaginal microbicides currently under development to prevent heterosexual HIV transmission should, therefore, ideally be able to inactivate virus in these migrating leucocytes, as well as CFV. Although most candidate microbicides, including entry inhibitors, can inhibit CAV in vitro [10-12], their activity in vivo depends on the inhibitor concentrations that can be achieved in the cervical and vaginal (sub)mucosa where most of the target cells reside. A study in rabbits and macaques measured the levels of the candidate microbicide dapivirine (TMC120) in cervicovaginal tissue and found that drug-related material was primarily detected in the superficial cellular layers of the mucosal epithelia and not in the submucosa or draining lymph nodes [13]. Hence, following the proposed CAV or Trojan Horse concept (Figure 1A), infected seminal leucocytes could subvert drug pressure in the vaginal lumen by migrating to the submucosa or regional lymph nodes. Subsequently, infection may be established by virions budding from these migrating leucocytes. However, it remains possible that initial vaginal drug exposure exerts a sustained inhibitory effect on virus production or virion infectivity, even after their migration to deeper tissues. Previous in vitro evidence provides support for such a 'memory effect', in that pretreatment of chronically infected cells with the non-nucleoside reverse transcriptase inhibitor (NNRTI) UC781 results in the release of attenuated virus [14]. Therefore, here we investigated whether other microbicide candidates exert a similar effect on CAV. To this end, HIV-infected peripheral blood mononuclear cells (PBMCs) were used as a surrogate for migrating seminal leucocytes and treated with antiretrovirals (ARVs) from different classes. Subsequently, extracellular compound and CFV were removed to mimic escape from microbicide exposure.
Next, the amount of virus produced from these cells and its relative infectivity were assessed. Among the test compounds was the CD4-binding-site inhibitor M48U1, which inhibits the gp120-CD4 interaction in the nanomolar range by targeting the highly conserved and vulnerable Phe43 cavity in the HIV envelope [15,16], and which showed nearly complete protection in Cynomolgus macaques when applied as a vaginal gel [17].

Most ARVs do not inhibit virus production by infected cells

PHA/IL-2-stimulated PBMCs were infected with a 2 × 10⁻³ multiplicity of infection (MOI) of the CCR5-tropic subtype B strain Bal for three days and subsequently washed extensively to remove the inoculum (Figure 1A). Next, cells were incubated for 24 hours with ARVs from different classes at 100× EC50 concentrations for each compound (Table 1). The virions produced from these cultures were then quantified in quadruplicate by a Gag p24 capture ELISA [18]. Interestingly, pretreatment with most ARVs did not inhibit virus production by infected cells as compared to the untreated control cultures (Figure 1B). However, in the supernatant of cultures treated with the protease inhibitors (PIs) lopinavir and saquinavir, a significantly lower (i.e., 0.5-1 log) Gag p24 concentration was observed, likely as a result of unprocessed Gag precursor.

Pretreatment with M48U1 affects the infectivity of de novo produced virions

Next, we evaluated whether these ARVs affected the infectivity of the virus produced. PBMCs were infected as described above, washed extensively to remove inhibitor, and incubated in fresh medium for a new round of viral replication. After 24 h, supernatant was harvested and then assessed for its ability to infect TZM-bl cells using identical viral inocula (based on equal amounts of Gag p24). The obtained viral titers were expressed as a percentage relative to the untreated control cultures (Figure 1C). Surprisingly, virions produced by PBMCs pre-treated with M48U1 were almost completely defective (relative titer of <1%). A smaller loss in titer was observed after treatment with the PIs saquinavir (relative titer of 47%) and lopinavir (relative titer of 14%), and the carbohydrate-binding protein griffithsin (relative titer of 31%). The same samples were assayed in a multi-cycle infectivity assay with PBMCs, with similar results (Figure 1C). Overall, these observations suggest a continued inhibition by some ARVs on de novo produced virions, even after their removal. To investigate whether this attenuation was further […]

[Figure 1 caption (see figure on previous page): M48U1 has a memory effect on cell-associated virus. A) The Trojan Horse transmission concept was modeled in vitro using HIV-infected PBMCs, which were treated for 24 h with antiretrovirals (ARVs) from different classes at 100× EC50 concentrations. Subsequently, virus production was quantified using a Gag p24 ELISA. To mimic escape from vaginal drug pressure, the infected cells were then incubated in ARV-free medium for three consecutive periods of 24 h. Infectivity of newly produced virions was determined after each period by titration in TZM-bl cells and PBMCs using equal amounts of p24. B) Production of Gag p24 by infected PBMCs during 24 h treatment with eleven ARVs: the non-nucleoside reverse transcriptase inhibitors (NNRTIs; blue) UC781 and TMC120, the nucleotide reverse transcriptase inhibitor tenofovir (…)]

For all viruses that showed a reduced titer at 24 h, a clear increase in infectivity was observed over time.
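The relative-titer readout described above is a simple normalization of infectious titers measured from p24-equalized inocula; a minimal sketch of that bookkeeping (the function name and example values are ours, for illustration only):

```python
def relative_titer(titer_treated: float, titer_control: float) -> float:
    """Infectious titer of virions from a treated culture, expressed as a
    percentage of the untreated control (inocula pre-equalized by Gag p24)."""
    return 100.0 * titer_treated / titer_control

# e.g., a treated culture titering at 47% of control (cf. saquinavir above):
print(relative_titer(4.7e3, 1.0e4))  # -> 47.0
```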
In PI-treated cultures, virion infectivity was rapidly restored to normal levels, whereas the virions produced by griffithsin- and M48U1-treated cultures remained, respectively, two and four times less infectious than virions from the control cultures as late as 72 hours post-exposure (Figure 1C). To exclude a strain-specific effect of the lab-adapted Bal virus, the experiment was repeated for M48U1 using the more relevant transmitted/founder (T/F) viruses REJO and THRO (subtype B). Similar results were obtained with both viruses, showing a severely reduced relative titer at 24 h (3.7% and 3.1%, respectively) that was restored to normal infectivity levels over time (Figure 1D).

The memory effect of M48U1 depends on direct interaction between M48U1 and gp120

To assess the role of the M48U1:gp120 interaction in relation to the sustained reduction in infectivity, we used a miniCD4-resistant but CD4-receptor-binding-competent mutant virus bearing the S375R mutation (BalS375R). This mutation is located next to the CD4 binding loop and disrupts the molecular interaction of M48U1 with gp120 [18]. Similar to the previous experiments, PBMCs were infected with wild-type (WT) and mutant HIV-1 Bal, subsequently treated with a variety of ARVs, and washed 24 h later. As before, cultures were then left to produce new virions in the absence of additional ARV pressure. For most ARVs, no significant differences in WT or mutant virus production were found after ARV treatment. However, contrasting the low infectious titer (<1%) of WT virus after M48U1 treatment, a normal titer (i.e., 100%) was found for the S375R mutant virus (Figure 1E). This result clearly indicates that the gp120:M48U1 interaction is a prerequisite for the memory effect observed with M48U1. As CFV is removed after treatment, the most logical target for M48U1 is the functional envelope protein (Env) on infected cell surfaces.

M48U1 induces gp120 shedding

CD4 engagement can result in the spontaneous loss of gp120, without infection, resulting in defective gp41 stumps. To investigate whether gp120 shedding could explain the memory effect observed for M48U1, pseudovirion virus-like particles (VLPs) expressing trimeric Env of the subtype B virus JR-FL were treated with graded doses of sCD4 or M48U1. Two assays were then used to measure gp120 shedding. In the first assay, VLPs were treated with sCD4 or M48U1 and washed; Env was then extracted from the particles and resolved by BN-PAGE/Western blot, as described elsewhere [19]. In this assay, gp120 shedding was indicated by a loss of intact native Env trimer coupled with an increase in the gp41 stumps that are left behind after gp120 shedding. In a second assay, VLPs were treated with sCD4 or M48U1, washed, then coated on ELISA wells and assayed for binding of mAb 7B2, which reacts with the immunodominant cluster I epitope of gp41 that is exposed on the gp41 stumps remaining after gp120 shedding. In both of these assays, we used JR-FL E168K+N189A trimer VLPs generated by protease digestion to eliminate non-functional Env from particle surfaces, as described previously [19]. By eliminating any gp41 stumps present before drug treatment, this enhances the ability to detect new gp41 stumps that appear as a result of drug-induced shedding. In both assays, M48U1 caused shedding: in proportion to the concentration of M48U1 used, there was a loss in native trimer staining coupled with an increase in gp41 stumps by BN-PAGE (Figure 2A) and a quantitative increase in 7B2 binding by ELISA (Figure 2B).
Moreover, by the BN-PAGE assay, 50% shedding was induced at approximately 10-fold lower concentrations of M48U1 than of sCD4. This is reflected by the lower EC50 of M48U1 (450 nM) compared to sCD4 (900 nM) in neutralization assays against JR-FL on PBMCs (data not shown). Consistent with the data on JR-FL, treatment of two T/F viruses (REJO and WITO) with sCD4 and M48U1 also resulted in an increased binding of the 7B2 mAb (Figure 2C). Together, these results show that M48U1 potently induces gp120 shedding.

[Figure 2 caption (see figure on previous page): M48U1 induces gp120 shedding. A) "Trimer VLPs" (E168K+N189A mutant) expressing trimeric Env of the subtype B virus JR-FL were treated with graded doses of M48U1 or sCD4. Shedding was then assessed using BN-PAGE-Western blot analysis of VLP Env. Below the gel, the density of trimeric gp41 stumps (indicated by an arrow on the gel) is shown, as determined using ImageJ densitometry software (NIH, Bethesda, USA, http://imagej.nih.gov/ij/). B, C) The same trimer VLPs of JR-FL (B) or the subtype B transmitted/founder viruses REJO and WITO (C) were coated on an ELISA plate at 20× the concentration present in transfection supernatants, treated with graded doses of M48U1 or sCD4 as indicated, and then assayed for ELISA binding by mAb 7B2, which reacts with a cryptic epitope of gp41 that only becomes exposed upon the loss of gp120 from native trimers (i.e., gp120 shedding). D) Virus particles were isolated with magnetic CD44 microbeads from PBMC culture supernatant at 0 h, 24 h and 72 h after M48U1 exposure. Gp120 was then quantified in the virion and virion-free fractions using a D7324-based gp120 ELISA. Shedding was expressed as the ratio of free gp120 in the supernatant to intact Env in the virion fraction and compared to the untreated control cultures (i.e., medium). Values are the means ± SEM of two independent measurements. E) Env incorporation in the virion fraction was determined by quantifying both the gp120 and p24 content by ELISA and plotting the gp120/p24 ratio.]

Memory effect of M48U1 is linked to gp120 shedding

We next evaluated whether gp120 shedding underlies the observed memory effect of M48U1. If this were the case, treatment of HIV-infected cells with sCD4 should also decrease the infectivity of de novo produced virions, as seen with M48U1 in Figure 1C. Indeed, like M48U1, we found that virus produced from infected PBMCs pretreated with sCD4 was fivefold less infectious (~22%) than the control virus (data not shown). Furthermore, if M48U1 induces gp120 shedding in the treated cell cultures, this should be reflected by an increase in gp120 in the virion-depleted supernatant and a reciprocal decrease in virion-associated gp120. Therefore, virus particles were separated from culture supernatants using magnetic anti-CD44 microbeads (μMACS™ VitalVirus isolation kit, Miltenyi Biotec). We then used an ELISA to quantify gp120 in both fractions. Interestingly, at the earliest time point, gp120 was far more abundant in the virion-depleted supernatant of M48U1-treated cultures than of the control cultures (Figure 2D). The ratio of virion-free to virion-bound gp120 then gradually decreased to reach equilibrium levels 72 h post-treatment, coincident with the time-dependent recovery of virus infectivity (Figure 2D).
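The two quantitative readouts used here and in the next paragraph — the shedding ratio (free vs. virion-bound gp120) and the Env-incorporation ratio (gp120 vs. p24) — are simple ELISA-signal ratios. A minimal bookkeeping sketch (function and variable names are ours, not from the paper):

```python
def shedding_ratio(free_gp120: float, virion_gp120: float) -> float:
    """Ratio of virion-free gp120 to virion-associated gp120 (ELISA units);
    compared against the same ratio from untreated control cultures."""
    return free_gp120 / virion_gp120

def env_incorporation(virion_gp120: float, virion_p24: float) -> float:
    """Functional Env incorporation, expressed as the gp120/p24 ratio
    measured on the isolated virion fraction."""
    return virion_gp120 / virion_p24
```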
Finally, if M48U1 induces gp120 shedding from the Env spikes embedded in the cell membrane of infected PBMCs, this would predict a higher incorporation of functional Env molecules in virions produced from untreated PBMCs compared to M48U1-treated PBMCs. Using the same method described above, we determined the gp120 and p24 concentrations in the virion fraction 24 h, 48 h, and 72 h after M48U1 treatment. Functional Env incorporation was calculated as the ratio of gp120 to p24. As shown in Figure 2E, the results suggest that virions produced from infected PBMCs after removal of M48U1 have a significantly lower degree of functional Env incorporation than control virions produced by untreated PBMCs. Once again, and coinciding with the observed time-dependent recovery of virus infectivity, Env incorporation was gradually restored to control levels at 48 h and 72 h.

Implications of M48U1 in preventing mucosal HIV transmission

Several studies have shown that infected seminal leucocytes can traverse intact vaginal epithelia to reach the submucosa and even the draining lymph nodes [2,4,7]. Although this might not be the dominant route of infection in natural HIV transmission, in the context of microbicide use, CAV might escape high drug concentrations in the vaginal lumen by migrating across the epithelium that lines the female genital tract. However, whether the initial microbicide exposure can affect virus budding from these migrating seminal leucocytes remains unclear. Therefore, we evaluated here the effect of various ARVs on the virus produced after treatment and subsequent washing of infected PBMCs. Most ARVs had no impact on virus production or virion infectivity. However, surprisingly, infected cells exposed to the CD4 mimetic M48U1 produced largely defective viral particles during the first 24 hours after treatment. Virus infectivity was gradually restored at later time points. These results were unexpected because the candidate microbicide M48U1 acts early in the viral lifecycle by preventing HIV entry into host cells through competition with the CD4 receptor. As the infected PBMCs were washed extensively to remove M48U1 and CFV after initial drug exposure, this 'memory' effect was unlikely to be due to CFV already neutralized by M48U1 or to unbound M48U1 in the supernatant. We hypothesized that M48U1 associates with gp120 molecules expressed on the cell membrane of infected cells prior to viral budding. If M48U1 dissociated very slowly from its target and were retained after washing, this could cause the production of defective virions. However, if this were the case, one would expect a similar attenuation effect for the tight-binding RT inhibitors UC781 and TMC120. Previous studies have shown that pretreatment of infected cells with UC781 results in a fivefold reduction in the infectivity of de novo produced virus, which was explained by the membrane compartment hypothesis [14,20]. In this hypothesis, NNRTIs, which are sequestered in the cell plasma membrane due to their hydrophobic nature, would be incorporated into the membrane of nascent budding virions. The tight binding to RT would subsequently trap the NNRTI within the virions, rendering them defective. Although different cells (PBMCs versus H9+ cells) and a different read-out of infectivity (Gag p24 and firefly luciferase versus syncytium formation) were used, our results revealed only a very small memory effect for UC781 (relative titer of 75% on TZM-bl cells).
This suggests that the NNRTIs in our assay were removed from the PBMC cultures, making it unlikely that the hydrophilic M48U1, which binds to the gp120 molecules in the plasma membrane, would be retained on its target. Hence, another mechanism must explain the sustained M48U1 activity. More than twenty years ago, soluble CD4 was shown to disrupt the non-covalent association of the gp120 and gp41 envelope glycoproteins when used at high concentrations, rendering virions noninfectious [21,22]. Recently, gp120 shedding was also observed to be induced by the membrane-proximal external region (MPER)-specific antibodies 2F5 and 4E10 [23]. Although the antiviral effect of M48U1 is mainly caused by competitive inhibition, gp120 shedding might occur at high M48U1 concentrations and hence explain the prolonged inhibitory effect. If this is true, then the continuous recruitment of freshly synthesized HIV envelope proteins from the endoplasmic reticulum to the cell membrane would explain why infectivity is restored over time (see model in Figure 3). Here, we provided four separate pieces of evidence to support this hypothesis. First, by BN-PAGE, we saw that treatment of VLPs with M48U1 led to an increase in gp41 stumps coupled with a decrease in native Env trimer (Figure 2A). Second, M48U1-induced exposure of the cryptic 7B2 epitope on gp41 stumps on VLP surfaces is also consistent with gp120 shedding (Figure 2B). Third, the larger proportion of gp120 detected in the virus-depleted supernatant of M48U1-treated cultures is again consistent with gp120 shedding (Figure 2D). Finally, we showed that the recovery of virus infectivity coincides with an increased incorporation of functional Env molecules in the budding viruses (Figure 2E). Together, these observations strongly suggest that M48U1 induces gp120 shedding at the cell membrane, resulting in defective nascent virions. From a microbicide perspective, entry inhibitors with a memory inhibitory effect on CAV are desirable. If infected seminal leucocytes can escape microbicide in the vagina by crossing the mucosal barrier, they could establish a founder population of infected target cells that then expands locally using the influx of new target cells recruited through outside-in signaling [24]. Low tissue concentrations of microbicide are of specific concern when entry inhibitors are used, because their hydrophilic nature might impede compound accumulation in the (sub)mucosa. However, if the virus budding from these migrating leucocytes is rendered defective by prior microbicide exposure in the vagina, a window of opportunity would be provided to eliminate these invading cells before local infection is established. Aside from M48U1, we also observed a decline in the infectivity of virus budding from cells that were pretreated with the PIs lopinavir and saquinavir or the gp120-glycan binder griffithsin, although these effects were modest compared to that of M48U1. The memory inhibitory effect of the PIs most likely results from immature virus particles that are still budding off in the first hours after PI removal. This is supported by the rapid restoration of virion infectivity at later time points. Interestingly, virions from the griffithsin-treated cultures did not completely regain their infectivity 72 h after removal of the ARV.
Although the observed memory inhibitory effect of griffithsin remains the subject of ongoing research, it is not due to gp120 shedding (data not shown), indicating that, even after extensive washing, at least some griffithsin is retained within the treated cell culture, thereby confirming the recent work of Kouokam et al. [25]. Although current clinical trials are mainly testing reverse transcriptase inhibitors as microbicides, there is increasing interest in combining different ARV classes into combination microbicides to increase efficacy and to avoid cross-resistance with first-line therapy. Despite being a miniprotein, the entry inhibitor M48U1 is easy to produce, is poorly immunogenic, and does not induce anti-CD4 antibodies in vivo [26]. Together with its small size (27 amino acids), stable conformation under denaturing conditions (i.e., acidic pH and high temperatures), and relative resistance towards proteases [15], M48U1 thus has a favorable profile as a potential microbicide. This was confirmed in a recent trial with an M48U1-loaded gel showing almost complete protection (5 out of 6 animals) after vaginal challenge in macaques [17]. The relative ease with which HIV escapes inhibition by antibodies or small molecules that target the entry process, including M48U1, argues for its use in a combination microbicide [18]. In conclusion, in this study we report for the first time that a highly potent CD4-mimetic HIV entry inhibitor, M48U1, induces gp120 shedding at high compound concentrations, resulting in the production of defective nascent virions from infected primary cells up to 72 h after their exposure to the drug. This memory effect adds to the already interesting properties of M48U1 as a potential candidate for development into a combination microbicide.
Study of Morphological, Structural, and Strength Properties of Model Prototypes of New Generation TRISO Fuels

The purpose of this work is to characterize the morphological, structural, and strength properties of model prototypes of new-generation TRi-structural ISOtropic particle fuel (TRISO) designed for Generation IV high-temperature gas reactors (HTGR type). The choice of model structures consisting of inner pyrolytic carbon (I-PyC), silicon carbide (SiC), and outer pyrolytic carbon (O-PyC) as objects of research is motivated by their potential use in creating a new generation of fuel for high-temperature nuclear reactors. To fully assess their functional value, it is necessary to understand the mechanisms of resistance to external influences, including mechanical ones, since in operation external factors associated with deformation can destroy the surface of the fuel structures and critically shorten the service life. The objective of these studies is to obtain new data on the fuel properties, as well as on their resistance to the external influences arising from mechanical friction. Such studies are a necessary precursor to testing this fuel for corrosion and irradiation resistance under conditions as close as possible to real reactor conditions. The research revealed that the studied samples have a high degree of resistance to external mechanical influences, owing to the high strength of the upper layer consisting of pyrolytic carbon. The presented results on the radiation resistance of TRISO fuel testify to the high resistance of the near-surface layer to high-dose irradiation.

Introduction

This study is motivated by the increase in energy consumption and the efforts underway to search for alternative energy sources, including active studies in the field of nuclear and thermonuclear energy. In Kazakhstan, in recent years, new materials for future nuclear and nuclear fuel cycle facilities have been widely studied [1-4]; reactor [5-16] and out-of-pile [17-19] experiments simulating their operating conditions have been conducted. One area of such work is corrosion testing of materials for the high-temperature gas reactor (HTGR), a Generation IV concept in the early design stage [20-23]. The concept of Generation IV reactors is to improve safety, reliability, and cost-effectiveness, which will allow them to compete more effectively with conventional power generation methods. One of the promising trends in the design of Generation IV reactors is the creation of high-temperature gas-cooled reactors, whose main purpose is not only to produce process heat but also to produce hydrogen [24-26]. According to the development concept and roadmap for Generation IV power systems, the main questions to be answered before commissioning […]

Research Methods

The test samples of TRISO fuel prototypes were obtained for further research from Nuclear Fuel Industries, Ltd. (Muramatsu, Tokai-mura, Japan), which is developing this type of fuel for testing. The work [55] presents a detailed description of the technology for manufacturing such samples, which were taken as the objects of this study. The objects of study in this work were models of the TRISO fuel prototype in which the uranium oxide core was replaced by silicon carbide. All other coatings were applied in full accordance with the manufacturing technology of the original fuel prototype [55].
The purpose of manufacturing such mock-ups of the fuel prototype is to expand the range of laboratories that can independently test its wear resistance and anti-corrosion properties while observing the principles of non-proliferation of nuclear materials and radiation safety. The manufacturing technology of such a prototype cannot yet be disclosed for reasons of intellectual property protection. Replacing the uranium oxide core with silicon carbide is justified when studying the strength and corrosion properties of the outer protective layers of the test sample: it does not affect the properties of the coating material, and silicon carbide surpasses uranium oxide in mechanical properties. The results obtained in the study of wear resistance are quite interesting and can serve as a reference when conducting full tests of the original fuel. TRISO fuel prototypes with a silicon carbide fuel core and different layers were chosen as the objects of the study. The choice of silicon carbide as the core of the fuel particles is due to its high resistance to heating, as well as its radiation resistance. As a rule, for model objects, either silicon carbide or zirconium dioxide is used as the core material. The choice of these prototypes of nuclear fuel as objects of study, without a core filled with uranium or plutonium fuel, is due to the need to conduct test experiments aimed at studying the stability of the outer shells under mechanical influences. The main technical characteristics of the prototypes under study are presented in Table 1.

The morphological features, as well as the elemental composition, of the studied samples were investigated using scanning electron microscopy (Hitachi TM4000 microscope, Hitachi, Tokyo, Japan) and atomic force microscopy (AIST-NT SPM microscope, AIST-NT, Moscow, Russia). The strength characteristics were determined using wear resistance tests at different loads (100-500 N) and by determining the coefficient of dry friction. Tests were carried out on a series of samples (10 pieces) using the standard GOST methodology [56]. Tests with different loads make it possible to assess the degree of resistance of the material to the various pressures that occur during operation. For studies of resistance to mechanical stress, the samples were placed on special holders and then half-embedded in epoxy resin in order to avoid displacement of the samples during external action: indentation or determination of the dry friction coefficient.

The resistance of the TRISO surface layer to the radiation damage that accumulates during irradiation was determined by irradiating the investigated objects with low-energy He²⁺ ions (40 keV) at fluences of 10¹⁶-10¹⁷ ions/cm² and with high-energy Kr¹⁵⁺ (150 MeV) and Xe²²⁺ ions (220 MeV) at fluences of 10¹³-10¹⁵ ions/cm², at temperatures of 1000 K. Irradiation was carried out at the DC-60 heavy ion accelerator (Nur-Sultan, Kazakhstan), located on the grounds of the Institute of Nuclear Physics. The choice of ions and irradiation conditions was determined by the aim of simulating radiation damage comparable to that in high-temperature nuclear reactors. The irradiation effect was evaluated from the resistance to cracking under single compression of the samples and from the hardness value determined by indenting the samples before and after irradiation.
For irradiation, the special sample holders were capable of heating, making it possible to heat the samples directly during irradiation.

Results and Discussion

Spherical particles with a diameter of 1 mm, representing model TRISO fuel particles, were the objects of the research. During the experiments, it was determined that the initial samples were spheres consisting of three different layers and a core (Figure 1). According to the data obtained, the structure of the studied particles consists of an upper layer of pyrolytic carbon, a second layer of silicon carbide, a thin layer of porous pyrolytic carbon, and a core of silicon carbide and carbon particles. The first pyrolytic carbon layer is a densely packed structure consisting of grains with an average size of 300-400 nm (Figure 2).
The estimate of the porosity value for this layer is 5-5.5%. The second layer, which is 26-27 µm thick, is a ceramic coating based on silicon carbide. The inner core is a mixture of large grains ranging in size from 1 to 3 µm. An amorphous layer of pyrolytic carbon with a thickness of 0.5-1 µm is observed between the core and the ceramic layer of silicon carbide. sented in Figures 4 and 5). At the same time, the inner structure of the layer is a porous structure consisting of spherical grains. The estimate of the porosity value for this layer is 5-5.5%. The second layer, which is 26-27 µm thick, is a ceramic coating based on silicon carbide. The inner core is a mixture of large grains ranging in size from 1 to 3 µm. An amorphous layer of pyrolytic carbon with a thickness of 0.5-1 µm is observed between the core and the ceramic layer of silicon carbide. According to elemental analysis, the inner part of the core is filled with silicon carbide particles with a small amount of silicon oxide impurities (no more than 3-5%). To determine the structural characteristics and phase composition, as well as the porosity of the studied samples, the method of X-ray diffraction was applied. The general The second layer, which is 26-27 µm thick, is a ceramic coating based on silicon carbide. The inner core is a mixture of large grains ranging in size from 1 to 3 µm. An amorphous layer of pyrolytic carbon with a thickness of 0.5-1 µm is observed between the core and the ceramic layer of silicon carbide. According to elemental analysis, the inner part of the core is filled with silicon carbide particles with a small amount of silicon oxide impurities (no more than 3-5%). To determine the structural characteristics and phase composition, as well as the porosity of the studied samples, the method of X-ray diffraction was applied. The general view of the X-ray diffractogram shown in Figure 6 indicates the presence of two phases in the structure of the samples under study. The main phase, characterized by a halo peak in the region of 2θ = 10-17 • , corresponds to the structure of amorphous pyrolytic carbon (PyC). Low-intensity X-ray reflections at 2θ = 36 • , 41 • , 60.5 • , 68.7 • , and 72.8 • correspond to the silicon carbide (SiC) phase with a cubic crystal lattice type and F-43m(216) spatial syngony. Applying the Rietveld full-profile method to evaluate the contributions of the two phases in the structure of the investigated samples, it was found that the phase ratio is PyC:SiC = 73.3:26.7. Analysis of the crystal lattice parameters and volume for the SiC phase made it possible to establish that the integral porosity value of the examined samples is 1.61%, at a phase density of 3.134 g/cm 3 . The degree of crystallinity of the ceramic layer of silicon carbide is more than 95%, which indicates a high degree of structural ordering. According to elemental analysis, the inner part of the core is filled with silicon carbide particles with a small amount of silicon oxide impurities (no more than 3-5%). To determine the structural characteristics and phase composition, as well as the porosity of the studied samples, the method of X-ray diffraction was applied. The general view of the X-ray diffractogram shown in Figure 6 indicates the presence of two phases in the structure of the samples under study. The main phase, characterized by a halo peak in the region of 2θ = 10-17°, corresponds to the structure of amorphous pyrolytic carbon (PyC). 
The strength characteristics were studied by determining the coefficient of dry friction of the samples under different loads in order to determine the strength of the sphere surface. Wear resistance tests were carried out by rolling with 10% slip. The applied loads were 100, 200, and 500 N. The test results are shown in Figure 7. The small changes in the dry friction coefficient are due to small micro-fractures or micro-cracks arising during friction; these create small obstacles and lead to fluctuations in the value of the coefficient. As can be seen from the presented data, at loads of 100-200 N the dry friction coefficient is 0.43-0.44 and stays in this range for a large number of test cycles (more than 15,000). After 15,000 cycles of consecutive rolling, there is a slight increase in the dry friction coefficient, which indicates the beginning of degradation and partial destruction of the particle surface, leading to increased friction. However, these changes do not exceed 1-5%, which indicates high wear resistance. When the load is increased to 500 N, the dry friction coefficient rises to 0.45-0.46, which indicates greater slip resistance of the particles at higher loads. Meanwhile, under successive rolling cycles at 500 N, the dry friction coefficient begins to rise earlier, and the volume loss over long-term tests is greater, than at loads of 100 and 200 N.
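For reference, the dry friction coefficient quoted here is the standard ratio of the tangential (friction) force to the applied normal load, so the reported values imply a resistive force of roughly 43-44 N per 100 N of load:

```latex
\mu = \frac{F_{\text{friction}}}{F_{N}}, \qquad \mu \approx 0.43\text{--}0.44 \ \text{at} \ F_{N} = 100\text{--}200\ \text{N}.
```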
Figure 8 shows AFM images of the surface of the studied particles before and after the tests at a load of 500 N. The data show that in the initial state the sphere has a developed surface consisting of fine-grained inclusions with a high degree of uniformity. The average surface roughness is 51 ± 5 nm. For samples after the wear resistance tests, there is a sharp change in surface topography, with the formation of irregularities and crater-like features indicating partial surface degradation. In this case, the average roughness of the samples after the tests increases more than 2.5 times, to 132 ± 7 nm. The data obtained indicate a high degree of resistance of the studied specimens to wear under low loads over a long period of time.
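As a quick consistency check on the quoted roughness values:

```latex
\frac{R_{a}^{\text{after}}}{R_{a}^{\text{before}}} = \frac{132\ \text{nm}}{51\ \text{nm}} \approx 2.6,
```

in agreement with the stated increase of more than 2.5 times.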
Figure 9 shows the results of measurements of the fracture toughness values of the investigated samples subjected to irradiation with different types of ions and different irradiation fluences. The choice of He²⁺ ions for irradiation is due to the possibility of modeling the processes of helium swelling during irradiation, as well as the formation of gas-filled bubbles during implantation of He²⁺ ions, with the subsequent formation of He-V type complexes, which can agglomerate. The swelling effects are of most interest in the near-surface layer, where degradation can lead to accelerated destruction, a decrease in the particles' resistance to cracking, and accelerated fracture under mechanical stresses. The choice of Kr¹⁵⁺ (150 MeV) and Xe²²⁺ (220 MeV) ions allows for the simulation of the radiation damage caused by uranium fission fragments in nuclear fuel. The fluence ranges were chosen to establish the dynamics of damage accumulation and its effect on the strength properties.

As can be seen from the presented data, for the samples irradiated with He²⁺ ions the main decreases in hardness and crack-formation resistance are observed at fluences greater than 5 × 10¹⁶ ions/cm², which evidences the accumulation of radiation damage caused by the irradiation and the subsequent implantation of He²⁺ ions. On the other hand, at fluences below 5 × 10¹⁶ ions/cm², the changes in cracking resistance and hardness are no more than 1-3%, and the hardness value changes less than the cracking resistance.
According to the estimated data, the implantation of He²⁺ ions in the I-PyC layer at fluences of 5 × 10¹⁶-7 × 10¹⁶ ions/cm² is no more than 0.3-0.5 at.%. At this level, the typical penetration depth of He²⁺ ions is no more than 200-300 nm, which leads to the formation of defective areas in this layer, resulting in lower strength and hardness. A further increase in the irradiation fluence leads to a decrease in the cracking resistance and hardness values of 5-12%, which indicates that the accumulation of radiation damage negatively affects the resistance of the near-surface layer to mechanical influences. For the samples irradiated with Kr¹⁵⁺ and Xe²²⁺ ions, significant changes in the strength characteristics are observed at irradiation fluences above 5 × 10¹³ ions/cm², but, even at the maximum irradiation fluence, the decrease in the strength characteristics does not exceed 15-20%. Moreover, for Xe²²⁺ ions, because their initial energy is higher and, consequently, the energy losses in the material along the ion trajectory are higher, the change in strength properties is more pronounced than for Kr¹⁵⁺ ions. In general, the decrease in cracking resistance is caused by the accumulation of radiation damage in the structure of the near-surface layer as the irradiation fluence increases. As is known, increasing the irradiation fluence for the heavy ions Kr¹⁵⁺ and Xe²²⁺ above 10¹²-10¹³ ions/cm² leads to the formation of overlapping regions of point defects, with the subsequent formation of their conglomerates or complexes; this introduces additional deformation contributions into the structure and increases their pressure on it. As a result, crystalline and chemical bonds are partially destroyed, leading to embrittlement and destruction of the near-surface damaged layer.

Conclusions

The paper presents the results of characterizing the morphological, structural, and mechanical properties of model prototypes of new-generation TRISO fuel designed for Generation IV HTGR-type reactors. The methods used to characterize the properties of the studied samples were scanning electron microscopy, atomic force microscopy, energy dispersive analysis, and X-ray diffraction. In the course of the experiments it was determined that the initial samples are spheres consisting of three different layers and a core. According to the data obtained, the structure of the studied particles consists of an upper layer of pyrolytic carbon, a second layer of silicon carbide, a thin layer of porous pyrolytic carbon, and a core of silicon carbide and carbon particles. It was found that, in the initial state, the sphere surface has a developed structure consisting of fine-grained inclusions with a high degree of uniformity. Using X-ray phase analysis, it was found that, for the SiC phase, the integral porosity of the studied samples is 1.61%, with a phase density of 3.134 g/cm³. The degree of crystallinity of the ceramic layer of silicon carbide is more than 95%, which indicates a high degree of structural ordering. During the studies, it was found that the samples under study have a high degree of resistance to external mechanical influences, which is due to the high strength of the upper layer consisting of pyrolytic carbon.
The results on the radiation resistance of the studied samples showed high resistance of TRISO to the radiation damage caused by irradiation with both low-energy He²⁺ ions and high-energy Kr¹⁵⁺ and Xe²²⁺ ions. Further research will focus on corrosion and degradation resistance tests, as well as on interactions with the gas environment, in order to simulate real reactor conditions more closely. In particular, when studying corrosion resistance, account will be taken of the fact that the studied TRISO particles must be placed in a graphite matrix, which also affects the cooling of the particles and their interaction with the environment.
Feasibility study of robotic hypofractionated lung radiotherapy by individualized internal target volume and XSight Spine Tracking: A preliminary dosimetric evaluation

Aim: To investigate the dosimetric impacts of lung tumor motion in robotic hypofractionated radiotherapy for lung cancers delivered through continuous tracking of the vertebrae by the XSight Spine Tracking (XST) mode of the CyberKnife. Materials and Methods: Four-dimensional computed tomography (4DCT) scans of a dynamic thorax phantom were acquired. Three motion patterns (one-dimensional and three-dimensional) of different range were investigated. Monte Carlo dose distributions were generated with a 4DCT-derived internal target volume (ITV) and a treatment-specific setup margin for 12.6 Gy/3 fractions. Six-dimensional error correction was performed by kV stereoscopic imaging of the phantom's spine. The dosimetric effects of intrafractional tumor motion were assessed with Gafchromic films (Ashland Inc., Wayne, NJ, USA) according to 1) the percentage of measurement dose points having doses above the prescribed (P>Dpres), mean (P>Dm), and minimum (P>Dmin) ITV doses, and 2) the coefficient of variation (CV). Results: All plans attained the prescription dose after three fractions despite marked temporal dose variations. The value of P>Dpres was 100% after three fractions for all plans, but could be smaller (~96%) for one fraction. The values of P>Dmin and P>Dm varied drastically between fractions (25%-2%) and could be close to 0% after three fractions. The average CV ranged from 2.8% to 7.0%. Correlations with collimator size were significant for P>Dmin and P>Dm (P < 0.05) but not for P>Dpres (P > 0.05). Conclusions: Treating lung tumors with the CyberKnife through continuous tracking of the vertebrae should not be attempted without effective means to reduce the amplitude and variability of target motion, because temporal dose variations owing to intrafractional target motion can be significant.

The CyberKnife offers two solutions for treating mobile lung tumors: Fiducial Tracking, which requires radiopaque fiducial markers to be implanted in or near the tumor [1], or XSight Lung Tracking (XLT), which uses the tumor shape for tracking and is hence fiducial-free [3]. Both target tracking methods can be combined with the Synchrony real-time respiratory tracking system (RTS) [4]. The technical basis of the RTS is a correlation model between an external breathing signal and internal target positions determined from stereoscopic x-ray imaging of the implanted fiducials or the tumor, combined with compensation of that motion by the robotic arm. The RTS is most suitable for strongly moving tumors, because the gain in safety margin reduction is proportional to the range of target motion. But for tumors that are attached to rigid structures such as the spinal column and chest wall and that exhibit a small range of motion, fiducial-based RTS may become unjustified considering the additional risks of pneumothorax [5] and fiducial migration [6]. Furthermore, smaller intra- and interfractional variability of the tumor baseline position has been observed with smaller tumor-to-vertebrae distance [7,8].
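The evaluation metrics named in the abstract reduce to simple statistics over the film dose map; a minimal sketch of their computation (array and function names are ours, not from the paper):

```python
import numpy as np

def dose_metrics(dose, d_pres, d_mean, d_min):
    """Percentage of measurement points above the prescribed, mean, and
    minimum ITV doses, plus the coefficient of variation (CV) in percent."""
    dose = np.asarray(dose, dtype=float)
    pct_above = lambda d: 100.0 * np.count_nonzero(dose > d) / dose.size
    cv = 100.0 * dose.std() / dose.mean()
    return pct_above(d_pres), pct_above(d_mean), pct_above(d_min), cv
```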
The XLT method, on the other hand, is not applicable to all lung tumors, as not all tumors are visible on the x-ray images owing to size and location. Alternatively, XSight Spine Tracking (XST) [9], an offline setup correction strategy originally intended for tracking vertebral anatomy in SBRT for spine tumors, may be applied. This treatment setup strategy coincides in concept with the recently available lung optimized treatment option, called 0-view tracking mode, which utilizes the XST of adjacent vertebrae for global patient alignment. Compared to a megavoltage (MV) electronic portal imaging device, kilovoltage (kV) stereoscopic imaging with the XST system offers superior image quality of bony anatomy for accurate auto-registration with the digitally reconstructed radiographs (DRRs). Because XST is not capable of tumor motion tracking and does not account for the interfractional and intrafractional uncertainties of the tumor positions, a larger safety margin is needed compared to real-time correction by direct tumor detection.

Similar to non-gated treatments, XST requires an internal target volume (ITV) to account for the effect of the semi-periodic, respiration-induced organ motion. When a large number of small photon beams are combined to dose paint the tumor volume, it is often assumed that the tumor moves within a spatially invariant dose cloud. Clearly, as the tumor moves in and out of the radiation fields following the respiratory motion, the delivered dose to each voxel of the tumor may not add up to its expected total dose. As shown in a landmark study by Bortfeld et al. [10], the dose variance introduced by tumor motion depends on the delivery technique because of the arbitrary respiratory phase. The dosimetric impacts of intrafractional target motion have been experimentally investigated in conventional linac-based isocentric irradiation by Richter et al. [11] for a single beam, by Nakamura et al. and Huang et al. for coplanar and noncoplanar conformal radiotherapy [12,13], by Jiang et al. [14] for sliding-window and step-and-shoot intensity-modulated radiotherapy (IMRT), and by Ong et al. for volumetric arc radiotherapy [15]. In contrast, our group has performed experimental investigations of intrafractional target motion for the CyberKnife focusing on the RTS [16-20]. In this study, we aimed to evaluate the adequacy of the XST-based strategy by studying the dose delivered to a moving tumor. Experimental measurements were made with Gafchromic films placed inside a thorax phantom with a moving tumor substitute.
MATERIALS AND METHODS

Motion phantom setup

The dynamic thorax phantom (CIRS Inc., Norfolk, VA, USA) used in this experimental study consisted of a moving spherical target with film inserts that can accommodate Gafchromic films (Ashland Inc., Wayne, NJ, USA) in the coronal and axial planes. For our study we used EBT2 film. The tumor substitute has a density of 1.06 g/cc and was embedded in the center of the spherical target. The phantom was programmed to move the target with a fixed period of 4 s and at variable amplitudes: #1) 10 mm in the superior-inferior (SI) direction; #2) 20 mm in the SI, 5 mm in the anterior-posterior (AP), and 2 mm in the lateral (LR) direction; and #3) 10 mm in the SI, 5 mm in the AP, and 2 mm in the LR direction. The motion parameters were chosen according to our institution's analysis, which found that most tumors exhibited motion principally in the SI direction (mean = 8 mm) and less in the AP direction (3 mm) and the LR direction (1 mm). A large motion range of 20 mm was also included as an extreme scenario. The maximum distance between the target's center and the phantom's spine was 6.5 cm. Constant motion was assumed in four-dimensional computed tomography (4DCT) simulation and treatment deliveries.

4DCT simulation, target definition and ITV-to-PTV margin determination

4DCT images of 1.25 mm thickness were acquired on a GE LightSpeed 64-slice computed tomography scanner (General Electric Company, Waukesha, WI, USA) together with the real-time position management system (RPM, Varian Medical Systems, Palo Alto, CA, USA). The 4DCT dataset was then sent to the Advantage Workstation (General Electric Company, Waukesha, WI, USA) for post-processing using the Advantage 4DCT software. For each 4DCT dataset, 10 equally time-binned three-dimensional computed tomography (3DCT) datasets were created, with the 0% image dataset and the 50% image dataset roughly corresponding to the end-inhale phase and end-exhale phase of the respiratory cycle. Additionally, we created two reconstructed datasets using maximum-intensity projection (MIP) and average-intensity projection (AVG). The MIP and AVG created 3DCT images that represented the greatest and average voxel intensity values throughout the 4DCT dataset, respectively.

Both the MIP and the AVG datasets were imported into the MultiPlan v4.0.x (Accuray Inc., Sunnyvale, CA, USA) treatment planning system (TPS). The ITV was produced as the union of the simulated gross tumor volume (GTV) over the motion trajectory on the MIP images. Margins from the ITV to the planning target volume (PTV) were calculated according to the margin recipe of Wolthaus et al. [21], based on published data on inter- and intrafractional variability of tumor baseline shift [7,8]. The resulting total margins were 15.0-20.5 mm in the SI direction, 4.5-5.5 mm in the LR direction, and 7.5-10.0 mm in the AP direction.

Treatment planning and treatment setup

MultiPlan (v4.0.x) was also used for treatment planning. The tracking method to be used for treatment correction was defined prior to plan optimization and dose calculation; in this case we used the XST mode [22].
In the XST mode, a region of interest (ROI) was defined that included a spine volume extending two vertebrae beyond the full PTV's length. For each motion profile, we performed Monte Carlo dose optimization on the AVG images using two different collimators, one with a dimension comparable to the PTV's long axis and the other with a dimension that just matched the planning GTV (15 mm). For motion pattern #1, 20 and 35 mm circular collimators were chosen for treatment planning. For motion pattern #2, 20 and 40 mm collimators were used. For motion pattern #3, 20 and 35 mm collimators were chosen. In addition to the calculated ITV-to-PTV margins, we created two other plans with a fixed 5 mm ITV-to-PTV margin for a given motion profile (motion #3). This aimed to assess the sensitivity of the dose received by the GTV to the ITV-to-PTV margin size. Therefore, a total of four treatment plans, using 12.5, 20, 25, and 35 mm collimators, were created for motion #3. The Monte Carlo dose calculation algorithm of the MultiPlan TPS has been previously described by Ma et al. [23], and basically implements MCDOSE, an EGS4 user code. All Monte Carlo dose calculations were performed at 0.5%-1% relative statistical uncertainty. The dose grid resolution was approximately 1.47 × 1.47 × 1.25 mm. Dose distributions were Gaussian-smoothed to reduce statistical noise. Total doses of 12.6 Gy in 3 fractions were prescribed to the 65%-73% isodose lines (maximum dose = 100%) to achieve >99% target coverage. Table 1 gives a summary of the final treatment plans. The per-fraction dose of 4.2 Gy lies within the dose range applicable to the red channel of the Gafchromic EBT2 films.

Treatment setup was performed with the XST. Briefly, when the phantom loaded with the EBT2 films was placed on the treatment couch, stereoscopic kV image pairs were acquired and compared with synthetic DRRs computed at different angles in the predefined ROI of the spine structure by intensity-based 2D-3D registration [Figure 1] [9]. The registration yielded three translational and three rotational errors, given as the differences of the spine structure between the treatment position and the planned position. These errors were subsequently corrected by movement of the treatment couch until the setup errors were reduced to less than 0.5 mm (translational) and 0.5° (rotational). The residual error of the spine alignment was then corrected by the CyberKnife robot, and the treatment beams were delivered to the moving target according to the spine-tumor relation from the planning CT.

The red-channel images of the EBT2 films were registered using the Image Processing Toolbox of Matlab (The MathWorks, Inc., Natick, MA, USA). We used an in-house Matlab program to analyze the dose distributions.

Dosimetric evaluations

Measured dose distributions were analyzed based on the percentage of dose points that received a dose larger than the calculated prescription dose, minimum dose, and mean dose, denoted as P>Dpres, P>Dmin, and P>Dmean, respectively. We used the coefficient of variation (CV) to evaluate the temporal dose variation. It is defined as CV = σ/d̄, where σ is the standard deviation and d̄ is the average dose in a single pixel over three fractions. A smaller CV indicated smaller dose variation in each pixel.
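The film analysis above amounts to simple per-pixel array operations. Below is a minimal sketch of how the cumulative-dose metrics and the per-pixel CV could be computed, assuming each fraction's measured dose map is available as a NumPy array; the function name, input shapes, and example values are our illustrative assumptions, not the authors' in-house Matlab code.

```python
import numpy as np

def dose_metrics(fraction_doses, d_pres, d_min, d_mean):
    """Film-style dose metrics over repeated fractions.

    fraction_doses: array of shape (n_fractions, ny, nx), one measured
    dose map per fraction (illustrative input format).
    d_pres, d_min, d_mean: planned prescription, minimum, and mean ITV doses.
    """
    cumulative = fraction_doses.sum(axis=0)      # cumulative dose per pixel
    n = cumulative.size
    # Percentage of dose points exceeding each planned dose level
    p_pres = 100.0 * np.count_nonzero(cumulative > d_pres) / n
    p_min = 100.0 * np.count_nonzero(cumulative > d_min) / n
    p_mean = 100.0 * np.count_nonzero(cumulative > d_mean) / n
    # Per-pixel coefficient of variation over fractions: CV = sigma / mean
    mean_dose = fraction_doses.mean(axis=0)
    sigma = fraction_doses.std(axis=0)
    cv = np.divide(sigma, mean_dose, out=np.zeros_like(mean_dose),
                   where=mean_dose > 0)
    return p_pres, p_min, p_mean, 100.0 * cv.mean()

# Illustrative use: three synthetic 4.2 Gy fraction maps with noise
rng = np.random.default_rng(0)
fractions = 4.2 + 0.1 * rng.standard_normal((3, 64, 64))
print(dose_metrics(fractions, d_pres=12.6, d_min=12.0, d_mean=13.0))
```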
RESULTS

Figure 2 presents the cumulative dose distributions measured in the axial and coronal planes cutting through the GTV's center. Except for the dose distribution obtained with the smallest 12.5 mm collimator, Figure 2 shows that all other measured dose distributions (in cGy) decreased from the inside out, a pattern characteristic of the heterogeneous dose distributions produced by larger collimators in SBRT. The cumulative dose distribution of the 12.5 mm collimator showed a reversed pattern in which the dose was colder at the center, characteristic of dose distributions for the CyberKnife with small collimators. More importantly, Figure 2 demonstrates that the target attained the prescription dose of 12.6 Gy after the same plan was delivered three times, provided that the motion pattern remained constant from planning to delivery. Figure 3 illustrates the distributions of measured dose variations (1 SD) over three fractions in the axial and coronal films. Qualitatively, the dose variations tended to be greater in the coronal films than in the axial films. The dose variations were largest in the plan using the 25 mm collimator for motion #3 and smallest in the plan using the 40 mm collimator for motion #2. In general, the dose variations differed from plan to plan without a clear pattern of correspondence to the composite dose distributions in Figure 2. For quantitative analysis, we calculated the percentage of dose points exceeding the prescribed dose (P>Dpres), the calculated minimum dose (P>Dmin), and the calculated mean dose (P>Dmean) as a function of the motion pattern [Table 2].

Table 2 shows that P>Dpres was <100% for only a few single fractions, and the cumulative P>Dpres approached 100% in all treatments. In contrast, the values of P>Dmin and P>Dmean were seen to differ significantly between fractions. For example, P>Dmin could vary from ~25% in the first fraction to ~2% in the remaining fractions and end up at ~0% after three fractions. The cumulative P>Dmin ranged from 0% to 99.4% (74.9 ± 33.5% [mean ± 1 standard deviation]). The cumulative P>Dmean was <5% except for the plans using the 12.5 mm collimator. Figure 4 shows the histograms of the CV for each motion profile; the average CV over all dose points is given on the histograms for the different collimators. For motion #1, the CV ranged from 0.02% to 7.53%; for motion #2, from 0.03% to 8.90%; and for motion #3, from 0.01% to 11.80% with the normal ITV-to-PTV margin and from 0.59% to 11.85% with the reduced margin.

DISCUSSION

In this study, we investigated the feasibility of highly conformal stereotactic body radiation therapy (SBRT) using the CyberKnife for lung tumors that are attached to the rigid spine structure and exhibit small motion. This strategy adapted the XSight Spine Tracking system (XST) for setup correction and employed individualized internal target volumes (ITV) with an additional margin for inter- and intrafractional variability of the tumor baseline. The dosimetric impact of this treatment strategy for SBRT was evaluated with Gafchromic EBT2 films in a lung phantom consisting of a moving target and a static spine structure. Assuming constant target motion during 4DCT scanning and delivery, our results showed that the gross target volume (GTV) received the prescription dose after three fractions despite a marked temporal dose variation.
No serious impact on tumor control probability is expected, because the cumulative P>Dpres was 100% for all plans, even though it could be smaller (~96%) in a single fraction. In contrast, values of P>Dmean for each fraction and after three fractions were <5% for all plans except one, suggesting that the overall effect of target motion was to decrease the delivered dose [Figure 3]. In practice, the values of P>Dpres, P>Dmean, and P>Dmin primarily depended on the planned dose to the ITV and did not differ by margin size, because with only periodic motion the idealized 5 mm margin was enough to compensate for the dose blurring at the planning target volume (PTV) edge. Nonetheless, strong temporal dose variations were evident in the histogram plots of the CV [Figure 4]. The average CV was ~3.5% for small motion (10 mm SI motion) and up to 7.0% for large SI motion (20 mm) with a reduced 5 mm ITV-to-PTV margin. In an experimental study, Jiang et al. [14] found a negligible dose variation of 1%-2% in a chamber measurement made in a moving phantom after 30 fractions, independent of the MLC delivery mode. In another study, Ehler et al. [24] found a CV of 1.14%-5.51% in segmental IMRT and 3.83%-8.25% in dynamic IMRT in measurements made with a moving detector array. Owing to variations in phantom setup (e.g. homogeneous vs. heterogeneous phantom), dose calculation algorithms (e.g. Monte Carlo vs. pencil beam), and fractionation schemes, direct comparisons between the results of these studies are difficult. In our case, the increased dose variations can be explained by the large dose gradients (e.g. 27%-35%) inside the PTV. This is in contrast to conventional 3D conformal radiotherapy and IMRT, where the effect of target motion is generally pronounced at the field edge but negligible at the center of the uniform field.

In SBRT, fractional doses are up to 15-20 Gy, i.e. 3-4.5 times that of this study. If we were to deliver 20 Gy, the dose per fraction, and hence the number of monitor units per beam, would be scaled up roughly 4.5 times accordingly. This may have some impact on the resulting dose, because the larger the number of monitor units (i.e., the longer the treatment time), the higher the probability that the target will be sampled by the treatment beams. The reduced dose error with larger numbers of monitor units has recently been examined by Ong et al. [25] in multileaf collimator-based hypofractionated stereotactic lung radiotherapy. Because issues with the MLC may not be strictly relevant to robotic-based IMRT, more studies are necessary to understand how these factors influence the delivered dose in the present robotic-based treatment scenario.

One of the limitations of this study is the relatively large target-to-spine distance in the phantom, as this technique is aimed at tumors in the immediate vicinity of the spinal column. However, we do not expect the results to be affected by the target-to-spine distance, because of the constant and regular target motion and the overall rigidity of the phantom. Nevertheless, intrafractional and interfractional variability of tumor motion range, period, and baseline are noted frequently in lung radiotherapy [7,26]. Although we explicitly calculated the extra setup margins for these uncertainties, we were unable to assess their dosimetric effects with the present experimental setup. It is expected that increased inter- and intrafractional tumor baseline drifts relative to the tracked spine volume may increase the delivered dose errors. Huang et al. [13]
recently showed that treatment plans created with an inaccurate ITV led to underdosing (10%) in a portion of the PTV when the irregular target motion was large (~20 mm), whereas good agreement between planned and measured dose distributions was observed for irregular motion <8.8 mm. This seems consistent with our preliminary results, in which the measured minimum and mean doses tended to decrease with increasing motion amplitude (10 mm vs. 20 mm).

Because XST represents an offline treatment setup strategy, it is impossible to reduce the setup margin. Yet, there is great potential to reduce the internal margin despite non-real-time tracking. Murphy et al. [27] have demonstrated the effectiveness of breath-holding in reducing and stabilizing tumor motion in hypofractionated radiotherapy. A major concern with such a breath-hold approach is that it prolongs the treatment duration beyond the patient's compliance. Recently, the concept of the time-weighted average tumor position has been proposed by Wolthaus et al. [21] Unlike the concept of the ITV, which aims to provide 100% dose coverage to the clinical target volume (CTV) during the entire breathing cycle, Wolthaus et al. [21] suggested that, if a treatment plan is designed for the tumor at its time-weighted average position during the breathing cycle, good dose coverage is still obtained even though the target is not fully within the PTV during a short portion of the breathing cycle. Guckenberger et al. [28] estimated that margins of 2.4 and 6 mm around the CTV at the time-weighted average position were needed to compensate for motion amplitudes of 10 and 20 mm. This nearly halves the internal margin. If such a margin design were adopted in treatment planning for our proposed strategy, it might be possible to reduce the total safety margin from 15.2 to 11.9 mm and from 20.7 to 15.1 mm for motion amplitudes of 10 and 20 mm, respectively. In addition, stereoscopic images do not provide volumetric information about changes in tumor volume, which have been noted by Britton et al. [29] It is important to repeat the 4DCT simulation to confirm that there is no continuous progressive change in tumor volume and position, particularly for hypofractionated/accelerated regimens that take a few weeks to complete.

CONCLUSIONS

For the first time, a quantitative dosimetric evaluation of target motion in robotic hypofractionated radiotherapy delivered using the XSight Spine Tracking method was performed. Although the target received the prescription dose after three fractions, this technique should be used with caution because the temporal dose variations can be significant. Unless effective means are employed to reduce the safety margin and the variability of tumor motion, we do not recommend the non-real-time spine tracking strategy for treating tumors with motion of more than 10 mm. Finally, only long-term clinical evaluation of this method will demonstrate the efficacy of this treatment strategy.
Figure 1: XSight Spine Tracking registered the bony spine anatomy between the digitally reconstructed radiographs and the corresponding orthogonal stereoscopic images. The registration results, which are circled on the right, are three translational (left-right, superior-inferior, and anterior-posterior) and three rotational errors (yaw, tilt, and roll). The shadow of the target appeared above (upper row) and beneath (bottom row) the spine structure.

Figure 2: Cumulative dose distributions in the axial and coronal planes through the center of the gross tumor volume for different motion patterns and collimators. Doses are in cGy (see color bar).

Figure 3: Distributions of the measured dose variations (1 standard deviation) in the coronal and axial planes of the moving target after the same treatment plans were delivered three times. Dose variations are in cGy (see color bar).

Figure 4: Coefficient of variation histograms for all motion patterns with different collimator sizes. Average coefficient of variation values are given on the histograms as "ave. coefficient of variation".
Relationship between Ocular Deviation and Visual Function in Retinitis Pigmentosa

In retinitis pigmentosa (RP), peripheral visual-field loss starts in early stages, whereas central vision loss occurs in advanced stages. Sensory strabismus gradually occurs in RP. We investigated the relationship between ocular deviation and visual function, and explored risk factors for sensory strabismus, in 119 consecutive patients with RP at various stages. We assessed ocular deviation at far and near distances; the central visual field, using the mean deviation (MD) value and visual acuity (VA); and, in 33 patients, the residual binocular field area, using Goldmann perimetry (GP). The horizontal ocular deviation at near distance was >10° in 30% of patients and correlated with residual visual function. Although there was no effective cut-off value for central visual function, a cut-off residual GP area of 40 cm² distinguished patients with a larger from those with a smaller horizontal ocular deviation at far distance (P = 0.04). Our findings suggest that visual function is negatively associated with ocular deviation in patients with RP and that the risk of sensory strabismus is relatively high for patients with a binocular visual field <40 cm². Thus, screening for ocular alignment may be necessary for patients with RP-associated severe vision loss as part of their comprehensive care.

Horizontal deviation values showed a significant or marginally significant correlation with patient age, VA, MD value, interocular difference in VA, residual GP area, and binocularity status. While VDF exhibited a similar trend, VDN did not show any correlation with the investigated factors (Table 2). Figure 1 shows the correlation of ocular deviation at far distance with the MD value and residual GP area. There was no effective cut-off value for distinguishing patients with a larger ocular deviation from those with a smaller one. However, a cut-off residual GP area of 40 cm², corresponding to a central circular area of approximately 30°, could distinguish patients with a larger ocular deviation from those with a smaller ocular deviation (HDF, 5.1° ± 6.0° vs 1.8° ± 1.5°, P = 0.04; VDF, 0.6° ± 1.0° vs 0.2° ± 0.5°, P = 0.11, respectively). The large-angle group (near distance) exhibited a significantly worse VA and a larger interocular difference in VA than the small-angle group (P = 0.04 for both), while the small-angle group exhibited a worse MD value and residual GP area than the large-angle group, although these differences were not statistically significant (Table 1).

Subanalysis investigating the association between wide visual field and ocular deviation

In our subanalysis investigating the association between wide visual field and ocular deviation, we included 30 patients, after excluding two patients who could not undergo PACT at far distance because of poor VA and one patient with an interocular difference in logMAR ≥0.5 (Table 3). HDN correlated with VA, residual GP area, and binocularity status (P = 0.001, r = 0.56; P = 0.04, r = −0.37; P = 0.001, r = 0.56, respectively), but not with the interocular differences in VA and MD value (Table 4). HDF, VDN, and VDF did not correlate with the residual GP area (P = 0.13, 0.38, and 0.20, respectively). The results of this subanalysis were similar to those obtained with the overall analyses.

Discussion

The present study showed that visual function is negatively associated with ocular deviation in patients with RP.
Considering the correlation coefficients, the peripheral visual field seems to be more important than the central visual field for the stabilisation of ocular alignment. A residual binocular GP area >40 cm², corresponding to a central circular GP area of approximately 30°, may be a practical cut-off value for screening high-risk patients.

Table 2: Correlation between horizontal and vertical ocular deviation at each distance and other parameters. Residual GP area: total residual visual field area for the right and left eyes, as determined by Goldmann perimetry. HDN, horizontal deviation at near distance; HDF, horizontal deviation at far distance; VDN, vertical deviation at near distance; VDF, vertical deviation at far distance; logMAR, logarithm of the minimal angle of resolution; MD value, mean deviation value obtained using the 10-2 Swedish Interactive Threshold Algorithm standard program of the Humphrey field analyser. Axial length and logMAR data are presented as average values for the right and left eyes. Data in a, b, c, and d are missing for 33, 15, 30, and 2 patients, respectively. Ocular deviation was analysed using absolute values, regardless of the strabismus type. *Statistically significant according to Spearman's rank correlation coefficients (P < 0.05).

Figure 1 (caption, in part): The vertical deviation at far distance (VDF) also negatively correlated with the MD value (P = 0.004, r = −0.28). There were no effective cut-off MD values for distinguishing patients with a larger ocular deviation from those with a smaller ocular deviation. (c,d) Both HDF and VDF tend to exhibit a correlation with the residual Goldmann perimetry (GP) area (P = 0.08 and 0.13, respectively). A cut-off residual GP area of 40 cm² can be set to distinguish patients with a larger deviation from those with a smaller deviation.

Furthermore, VA correlated with MD (P < 0.001) and with the interocular difference in VA (P = 0.01), but not with the residual GP area (P = 0.62). Thus, the residual GP area and central visual function, including VA, MD, and the interocular difference in VA, separately affect ocular deviation in patients with RP.

Both the mean VA and the interocular difference in VA showed weak correlations with HDF. In patients with a poor VA, photoreceptors in the macular lesion are probably damaged; therefore, a small interocular difference during disease progression would affect the interocular difference in VA. Indeed, VA correlated with the interocular difference in VA (P = 0.01). Furthermore, a decrease in visual function in one eye also induces strabismus, which is known as sensory heterotropia [7]. Thus, it is possible that the interocular difference in VA causes sensory heterotropia.

In a previous study, VDN was observed in 13% of patients with RP (n = 23) [13]; this rate is similar to that observed in the present study (19%). However, the vertical deviation observed in the present study population was small (VDF, 0.4° ± 1.4° vs. HDF, 3.7° ± 5.2°). In patients with impaired vision, a small vertical deviation will neither cause any binocularity disorder nor lead to complaints about aesthetics. Therefore, vertical deviation may have low clinical relevance for most patients, with the exception of some patients with a large deviation. In the present study, esodeviation at near and far distances was observed in 11% (13% without orthophoria) and 7% (9% without orthophoria) of patients, respectively.
These findings are in agreement with those of a previous report on patients with acquired vision loss (10% without orthophoria) [8]. Consistent with the fact that RP induces acquired vision loss, the number of patients with sensory exotropia was higher than that of patients with esotropia in the present study [7,8]. Furthermore, patients with exodeviation were significantly older than those with esodeviation at near distance, but not than those with esodeviation at far distance. A possible reason for this could be an increase in near exodeviation due to a decrease in convergence ability with age [14].

The lifetime risk of being diagnosed with adult-onset strabismus is approximately 4% [15]. In the present study, 30% of patients exhibited HDN >10°. This incidence rate is clearly higher than that in the general population. The present study showed not only a correlation between visual function and ocular deviation but also a high rate of strabismus with surgical indications in patients with RP. Strabismus negatively affects self-esteem in patients with a deviation ≥25 prism dioptres [16]. Hunter mentioned that strabismus surgery, even in patients with no potential for binocular vision, allows an individual to communicate normally with others [17]. In a previous study, all patients who underwent treatment for strabismus >20 prism dioptres were satisfied even though they could not see the outcome [18]. Thus, cosmetic strabismus surgery should be considered for patients who desire it, even those with advanced RP.

This study has some limitations. First, it had a cross-sectional design. The findings would be more convincing if changes in ocular deviation were demonstrated in accordance with changes in vision loss; further longitudinal studies are necessary to overcome this limitation. Second, ocular deviation at far distance could not be measured in patients with a poor VA. Although ocular deviation at near distance is affected by convergence, which is unstable, and although the Krimsky test is inferior to PACT in terms of accuracy, patients in advanced stages of RP cannot gaze at a fixation target at a far distance. In this study, we also included patients who could not undergo the far-distance test, because the exclusion of such common RP patients would bias the results. Third, we did not assess binocular visual function, as most patients exhibited ambiguous responses in binocular testing using Bagolini striate glasses during the preliminary examinations. Fourth, causative gene mutations were not taken into consideration. At present, approximately 70 genes have been identified as associated with RP, and variations in these genes may have affected our results [19]. Further research to investigate the relationship between gene mutation and strabismus in patients with RP is necessary. Finally, the GP area was measured in only 33 patients. Further studies with a larger sample size are necessary to verify our findings.

In conclusion, our findings suggest that visual function is negatively associated with ocular deviation in patients with RP. The risk of sensory strabismus is relatively high among patients with a binocular visual field <40 cm² on GP images. These findings warrant measures for care regarding ocular alignment in patients with RP and severe vision loss.

Table 5: Comparison between the phoria and tropia groups. Data are presented as means ± standard deviations where applicable. Phoria group: patients with phoria or phoriatropia. Tropia group: patients with tropia.
Residual GP area: total residual visual field area for the right and left eyes, as determined by Goldmann perimetry. logMAR, logarithm of the minimal angle of resolution; VA, visual acuity; MD value, mean deviation value obtained using the 10-2 Swedish Interactive Threshold Algorithm standard program of the Humphrey field analyser. Axial length and logMAR data are presented as average values for the right and left eyes. Ocular deviation was analysed using absolute values, regardless of the strabismus type. # Chi-square test; remaining, t-tests. *Statistically significant (P < 0.05).

Methods

This single-centre, cross-sectional study was approved by the ethics committee of Kyoto University Graduate School of Medicine (Kyoto, Japan). All study protocols adhered to the tenets of the Declaration of Helsinki, and all study participants provided written informed consent. [...] (Heidelberg Engineering, Heidelberg, Germany); and assessment of the MD value with a Humphrey field analyser (Carl Zeiss Meditec, Inc., Dublin, CA), using the 10-2 Swedish Interactive Threshold Algorithm standard program. RP was diagnosed by retinal specialists. From March 2016, the axial length was additionally measured using an IOL Master device (Carl Zeiss Meditec).

Ocular deviation was measured using the prism and alternate cover test (PACT) at far (5 m) and near (0.3 m) distances (HDF, HDN, VDF, and VDN), after confirmation of the binocularity status (phoria, phoriatropia, or tropia) using single cover testing. We considered horizontal and vertical deviation separately because it is unclear whether sensory strabismus mainly induces horizontal deviation. For patients who could not gaze at a fixation target, ocular deviation was measured using the Krimsky test at near distance. The patients were assigned to two groups on the basis of HDN: those with ocular deviation ≥10° formed the large-angle group and those with ocular deviation <10° the small-angle group, and the characteristics of the two groups were determined.

Subanalysis of the peripheral visual field using Goldmann perimetry

GP was performed when clinicians judged that it was clinically required. The binocular visual field area was measured after merging the GP images for both eyes using open-source software (ImageJ; National Institutes of Health, Bethesda, MD; Fig. 3). The radius of the central 90° line was determined as 10.8 cm on standard recording paper, and the measurements on the digital images were calibrated accordingly [20]. The isopter of the V/4e white test light was traced, and the sum of the residual binocular visual field areas for the right and left eyes was determined as the residual GP area. The residual GP area was used for the overall analysis and for the subanalysis investigating the association between wide visual field and ocular deviation in patients who underwent GP, had an adequate VA for undergoing PACT at far distance, and did not show an interocular difference in VA (logMAR) ≥0.5. Furthermore, we assigned the patients included in the subanalysis to phoria (including phoriatropia) and tropia groups and analysed the difference between groups.

Statistical analysis

Data are presented as means ± standard deviations where applicable. All VA values were converted to logMAR units for statistical analysis. Ocular deviation measured in prism dioptres was converted to degrees (°) and analysed using absolute values, regardless of the strabismus type. The average logMAR and MD values for the right and left eyes and the interocular differences in these values were used as statistical parameters.
In accordance with a previous report [21], patients with a VA of count fingers, hand motion, light perception, and no light perception were arbitrarily assigned logMAR values of 2.6, 2.7, 2.8, and 2.9, respectively. For patients who could not undergo visual field examination with the Humphrey field analyser, the minimum MD value according to age was used in the analysis. Phoria, phoriatropia, and tropia were assigned values of 1, 2, and 3, respectively, for analysis. Comparative analyses were performed using the t-test, chi-square test, and chi-square trend test where applicable. Correlation analyses were performed using Spearman's rank correlation coefficients. All statistical analyses were performed using SPSS software (version 21, IBM Corp., Armonk, NY). A P-value of <0.05 was considered statistically significant.
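Two of the conversions described above are mechanical enough to script. The sketch below shows the logMAR assignment for non-numeric acuities and a prism-dioptre-to-degree conversion; the paper does not state which conversion formula it used, so the standard geometric relation (1 prism dioptre deflects a ray 1 cm at a distance of 1 m) is an assumption, as are the function names.

```python
import math

# logMAR values assigned to qualitative acuities, as described above
QUALITATIVE_LOGMAR = {
    "count fingers": 2.6,
    "hand motion": 2.7,
    "light perception": 2.8,
    "no light perception": 2.9,
}

def va_to_logmar(va):
    """Convert a decimal VA to logMAR, or look up a qualitative VA."""
    if isinstance(va, str):
        return QUALITATIVE_LOGMAR[va.lower()]
    return -math.log10(va)

def prism_dioptres_to_degrees(pd):
    """Assumed conversion: the angle whose tangent is PD/100."""
    return math.degrees(math.atan(pd / 100.0))

print(round(va_to_logmar(0.5), 3))              # 0.301
print(va_to_logmar("hand motion"))              # 2.7
print(round(prism_dioptres_to_degrees(25), 1))  # ~14.0 degrees
```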
Rewarding Healthy Behaviors—Pay Patients for Performance

Joanne Wu, MD

Despite a considerable investment of resources into pay for performance, preliminary studies have found that it may not be significantly more effective in improving health outcome measures than voluntary quality improvement programs. Because patient behaviors ultimately affect health outcomes, I propose a novel pay-for-performance program that rewards patients directly for achieving evidence-based health goals. These rewards would be in the form of discounts towards co-payments for doctor's visits, procedures, and medications, thereby potentially reducing cost and compliance issues. A pilot study recruiting patients with diabetes or hypertension, diseases with clear and objective outcome measures, would be useful to examine the true costs, savings, and health outcomes of such a reward program. Offering incentives to patients for reaching health goals has the potential to foster a stronger partnership between doctors and patients and improve health outcomes.

INTRODUCTION

In an effort to improve health care quality, insurance companies and Medicare initiated pay-for-performance (P4P) programs that reward health care clinicians for achieving certain evidence-based performance measures. In theory, it seems as though this system should help improve patient health. Yet, a recent study of 500,000 patients in the United Kingdom found that P4P did not result in better hypertension outcomes over a 7-year period, despite an input of $2.8 billion from the government [1]. In the United States, studies have also shown mostly negative results, and concerns have arisen about the deselection of complicated or low-income patients [2-5].

Why might P4P fail? There are many potential factors, but one major cause is the complexity of effecting behavior change. P4P tries to improve health care based on the assumption that a single physician can successfully change the health-related behaviors of 1,000 patients. Indeed, physician counseling can help prime a patient for change [6], but ultimately it is the patient's behavior change that will directly affect health outcomes. A program that rewards patients financially for reaching measurable health goals has the potential to reduce issues of cost and compliance, two important obstacles to quality care. I therefore propose a new system: pay patients for performance (PP4P).

WHAT PP4P WOULD LOOK LIKE

Under PP4P, patients who meet evidence-based health care goals, such as keeping their blood pressure below 140/90 mm Hg or their glycated hemoglobin (A1c) below 7%, will receive financial incentives in the form of health care credits, which can be used toward discounts on medications, health insurance, procedures, and co-payments. The credits can be added onto the patients' health cards at their doctor's visits. When patients achieve preventive health goals, such as getting cancer-screening examinations at recommended intervals, they also earn credits. When they exercise, they can wear a heart rate monitor and bring the recordings to their employer to add credits to their health card, so employers can share the administrative costs of updating the cards. By rewarding preventive care and healthy behaviors, this system will also benefit patients who have no chronic health issues; they can build up health credits for potential future use. Those who have no health insurance can also participate and build up discounts on medications and doctor's visits.
Because co-payment discounts could be applied at any doctor's office participating in the PP4P plan, the office should not violate the antikickback laws governing Medicaid and Medicare [7]. Offering lower costs may further help perpetuate the cycle of compliance.

Importantly, past insurance and employer programs that provided cash payments or matched participant contributions for quitting smoking and losing weight have produced mixed results. There is a general trend of improved behavior while the participants are actively receiving the rewards, but behaviors tended to regress once the rewards were no longer offered [8-14]. Presumably, to produce long-term behavior change, the financial rewards need to be ongoing. Giving rewards in the form of health credits, which reduce the costs of future health services or medications, rather than cash payments, may allow for a more sustainable program. Incentives that invest in further health improvement, such as reimbursements for gym memberships offered by insurance companies and employers, have the potential to decrease health care costs in the long term. For example, Amway offered a low-cost gym membership and wellness program to its employees in 2007. An analysis of that program in 2010 found that active participants decreased their medical and prescription claims by 8%, compared with a 2% increase among nonparticipants [15].

FINANCING PP4P

The million-dollar question is, who would pay for this reward system? Insurance plans, employers, pharmaceutical companies, and the government all stand to benefit financially in the long term from patients' healthy behaviors and adherence to medication, so they should all contribute. There are already increasing numbers of insurance plans paying physicians for performance, so some of that payment could be shifted toward decreasing costs for patients. PP4P encourages medication compliance, so pharmaceutical companies can use that increased income toward rewarding patients with medication discounts. Government funding can help offset the administrative costs of creating a system to monitor health credits.

Examining the funds spent on P4P by insurance plans alone can give an estimate of what amount might be available to spend on each patient under a PP4P program. In 2003, a health plan offered a physician group with 10,000 patients a potential bonus of $270,000 for that year [2]. It can be assumed that bonus amounts since then have increased to account for inflation, allowing for more than $27 per patient. This amount may not seem substantial, but for some patients, even a $10 medication or co-payment for a doctor's visit represents a considerable deterrent. Using the $30 available for that patient, one could reduce co-payments for doctor's visits every 2 months from $10 to $5, which could boost compliance with follow-up visits. Because not every patient will meet goals, the potential reward amount for a successful patient will be even higher. With the added cost savings that can be offered by pharmaceutical companies for medications and potential contributions from employers and the government, rewards can become even more significant.

ACCEPTABILITY OF PP4P

It will be important to study patient acceptance of incentive programs, particularly because discounts for medications and doctor's visits may have a disproportionate effect on patients living in poverty and those utilizing Medicaid and Medicare.
Studies have found divided opinions regarding financial rewards for changing behaviors such as quitting smoking, losing weight, and controlling blood pressure or blood glucose [16,17]. Among participants in one study, smokers were significantly more likely to think that it is a good idea to pay smokers to quit smoking, and obese participants were more likely to think it is a good idea to pay people to lose weight [16]. Another study showed that rewards for behavior changes are generally more acceptable than penalties for not meeting health goals. Also, participants were more willing to support funding for interventions when they deemed members of the target group to be less responsible for their condition [17]. Both of these studies examined attitudes regarding direct monetary rewards rather than incentives in the form of discounts on further health services or medications, which is what PP4P would offer. People may find PP4P's health credit incentives more acceptable. Also, people tended to be more approving of an incentive program when they were likely to benefit from it themselves, and PP4P has the potential to reward a broad range of patients, even those who are generally healthy.

It will be crucial to communicate to patients how enrollment in the PP4P program works and to fully explain the added benefits of the program. In 2006, West Virginia reformed its Medicaid program, providing reduced basic benefits to most healthy children and adults, but allowing them to opt in for enhanced benefits by signing a form in which they agreed to try to stay healthy and comply with regular check-ups, treatment plans, and preventive screenings [18]. Unfortunately, initial reports on the effects of the program show that it has suffered from poor enrollment because patients do not understand what it offers, whether it costs more money, and how to opt in [19]. Learning from that experience, it is clear that any new patient incentive program will need to be explained carefully to potential participants, and it may have more success with automatic enrollment and a choice to opt out rather than having to opt in.

NEXT STEPS

A pilot project implementing PP4P in a small community would be useful to expose potential flaws and show true costs and savings. It would be practical first to recruit patients with hypertension and diabetes to participate. There are clear and objective evidence-based quality measures for those diseases, such as blood pressure and hemoglobin A1c goals. It would be interesting to see how a reward, such as lower medication costs, may motivate patients to take the initiative in improving diet and exercise, scheduling more frequent appointments, and asking physicians to increase medication doses.

The medical system as it currently operates places all the responsibility and rewards of quality of care on physicians. This system perpetuates a paternalistic approach that is not cost-efficient or sustainable. High-quality health care requires a team approach. Patients deserve a greater role in improving their wellness and reaping the rewards from it.
How do multi-stage, multi-arm trials compare to the traditional two-arm parallel group design – a reanalysis of 4 trials

Background: To speed up the evaluation of new therapies, the multi-arm, multi-stage trial design was suggested previously by the authors.

Methods: In this paper, we evaluate the performance of the two-stage, multi-arm design using four cancer trials conducted at the MRC CTU. The performance of the design at fictitious interim analyses is assessed using a conditional bootstrap approach.

Results: Two main aims are addressed: the error rate of correctly carrying on/stopping the trial at an interim analysis, and the quantification of the gains in terms of resources made by employing this design. Furthermore, we make suggestions for the best timing of this interim analysis.

Conclusion: Multi-arm, multi-stage trials are an effective way of speeding up the therapy evaluation process. The design performs well in terms of the type I and II error rates.

Background

Multi-arm, multi-stage (MAMS) trials, employing an intermediate outcome in the early stages of a multi-stage trial with multiple research arms, have been proposed as a means of speeding up the evaluation of new therapies [1]. The design itself is based on discontinuing randomisation to 'poor' treatments at an early stage, and allowing through to the further stages only those treatments which show a predefined degree of advantage against the control treatment. In the early stages, the experimental arms are compared pairwise with the control according to an intermediate outcome measure. The advantage of using intermediate endpoints at interim analyses in general has been examined by Goldman et al. in simulation studies [2]. Treatment arms that survive this comparison then enter a further stage of patient accrual which ultimately culminates in pairwise comparisons against the control based on the primary endpoint. Thus, the trial is only stopped for lack of benefit, not for evidence of a benefit as in many other designs [3,4]. A major issue for these designs is to assess their operating characteristics and, in particular, to preserve pairwise power at each stage and for the trial overall, whereby power in the second stage is conditional upon a treatment jumping the 'hurdle' at the first stage analysis.

The aim of this study was to determine whether trials which had been conducted as standard parallel group trials would have benefitted from being conducted as multi-stage trials using the MAMS methodology. To achieve this, data from four different trials were used. There were two main objectives: firstly, to assess whether, using the MAMS methodology, trials would have been correctly carried on/stopped at interim analyses; and secondly, to quantify the gains made in terms of use of resources by employing the MAMS methodology. Hence, we were concerned with the error rate at the first stage given a particular outcome at the final stage analysis. An error is defined in this context as either stopping early a trial where the final analysis would have shown an effect, or carrying on a trial at the interim analysis when the final analysis would have shown no evidence of an effect. To accomplish this, a series of analyses of the intermediate outcome measure was mimicked for each of the trials using both the original trial data and bootstrapped samples from the data [5].

Trials

Four trials comparing treatments in different cancer sites were included in the analysis.
Two of these had a positive outcome in favour of the new therapy, one was 'negative', showing no evidence of a difference between the research and control arms, and one was a multi-arm trial with both 'negative' and 'positive' results.

The first trial, RE01, showed considerable evidence of a treatment difference in terms of overall survival [6]. In this trial, patients with metastatic renal cancer were randomised equally to subcutaneous alpha-interferon (experimental) or oral medroxyprogesterone acetate (control). A comparison of overall survival in the two groups showed a 28% reduction in the mortality rate in the alpha-interferon group (hazard ratio = 0.72, 95% confidence interval 0.55-0.94). This trial used a triangular sequential design which allowed for early stopping as soon as results were conclusive.

Two trials in ovarian cancer were also re-analysed. In ICON3 [7], patients were randomised 1:2 to a research arm of paclitaxel plus carboplatin against a control arm of single-agent carboplatin or CAP (cyclophosphamide, doxorubicin, cisplatin). The trial showed no evidence of an improvement of the experimental over the control arm (HR = 0.98, 95% confidence interval 0.86-1.10). The second ovarian cancer trial, ICON4 [8], was designed to assess whether giving paclitaxel with platinum-based chemotherapy would benefit women with relapsed 'platinum-sensitive' ovarian cancer more than conventional platinum-based chemotherapy. The trial showed a marked improvement of the experimental over the control treatment (HR = 0.78, 95% confidence interval 0.65-0.95).

Finally, the multi-arm trial FOCUS [9] in poor-prognosis advanced colorectal cancer was also re-analysed. In this trial, patients were randomised 2:1:1:1:1 to five treatment plans A, B, C, D and E. Regimen A (control) comprised single-agent fluorouracil until clinical disease progression, then single-agent irinotecan. B and C were fluorouracil then, respectively, fluorouracil/irinotecan or fluorouracil/oxaliplatin. D and E were fluorouracil/irinotecan or fluorouracil/oxaliplatin from the outset. Only the comparison of experimental arm C with control was found to be superior using the outcome measure of overall survival. All comparison results are given in table 1.

Design of multi-stage re-analysis

We re-designed all four trials as though they had been run in a two-stage design. To do this, parameters given in table 2 and based on the original trial protocols were used to calculate the necessary number of control arm events e_I for the first stage using the n-stage program for Stata [10]. This program is also available from the authors upon request. For trials ICON3, ICON4 and RE01, these calculations were based on using progression-free survival (PFS) as the intermediate outcome. In FOCUS, however, it was decided that using PFS as an intermediate outcome was not appropriate, since the randomisation was to a package of treatment both in the first-line setting and at progression. Hence, this trial re-analysis employs overall survival as the outcome at both Stages 1 and 2. While the targeted hazard ratio for the intermediate and final outcome (under the alternative hypothesis) was specified as the same value in most trials, in ICON4 the trial protocol had specified a slightly lower hazard ratio for the final outcome; this value was also used for this analysis. The lower final-stage significance level used for FOCUS reflects an adjustment made for multiple comparisons due to the number of arms considered in the trial.
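The stage-wise event targets above were produced with the n-stage program. As a rough illustration of the kind of calculation involved, the sketch below applies the classical log-rank approximation for the number of events needed at a given one-sided significance level and power; it is our own stand-in under stated assumptions, not a reimplementation of n-stage, and the function name is ours.

```python
import math
from statistics import NormalDist

def control_arm_events(hr_alt, alpha, power, alloc_ratio=1.0):
    """Approximate control-arm events for a one-sided log-rank comparison.

    Assumes the classical approximation var(log HR) ~ (1 + R)^2 / (R * e),
    where R is the experimental:control allocation ratio and e the total
    number of events; the control arm contributes roughly e / (1 + R).
    """
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha), z.inv_cdf(power)
    e_total = ((1 + alloc_ratio) ** 2 / alloc_ratio
               * ((z_a + z_b) / math.log(hr_alt)) ** 2)
    return e_total / (1 + alloc_ratio)

# e.g. a stage 1 'hurdle' at one-sided alpha = 0.3 and power = 0.95
# against a targeted hazard ratio of 0.75 (illustrative values)
print(round(control_arm_events(hr_alt=0.75, alpha=0.3, power=0.95)))
```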
An analysis on the intermediate endpoint was conducted at the point where the target number of events for the interim analysis had been 'accrued' in real time in the control group. Patients who had not 'entered' the trial at this time point were excluded from the analysis. At this analysis, a hazard ratio for progression-free survival was calculated and compared with a critical value for the hazard ratio determined at the design stage. This critical hazard ratio is fixed by the design parameters: the number of control arm events targeted in this calculation remains the same under both the null and alternative hypotheses. Thus, the critical value gives the lower bound of a one-sided 1 − α confidence interval around the hazard ratio under H0 of no effect [11] (p.79), where α denotes the stage 1 significance level. This means that the one-sided 1 − α confidence interval around the targeted hazard ratio will just include H0 at a power of ω, the stage 1 power. Hence, if an observed hazard ratio is greater than this critical value, we can reliably exclude, at level α, an effect as large as that targeted under the alternative hypothesis. Please see Additional File 1 for a worked example. Therefore, if the hazard ratio was smaller than this critical value, a trial was counted as continuing to randomise further patients; if it was larger, the trial was counted as 'stopped' in the sense that no further randomisation would be conducted to that particular experimental arm. A hazard ratio at the end of the trial using overall survival was also calculated to obtain correlation values between the test statistics on the intermediate outcome and the primary outcome. All hazard ratios were calculated using the Cox regression model with treatment as the only covariate.

To calculate an error percentage estimate, 5000 trial datasets were created based on each original trial by taking bootstrap samples with replacement from the trial dataset [5] (p.82). If this scheme were employed without any further adjustments, we would obtain a number of trials in which the overall result does not match the original trial result in terms of significance at the two-sided 5% level. However, the question we wanted to answer was whether, given a final result showing evidence of an effect of the experimental treatment, the trial would have been correctly continued at the interim analysis using the intermediate outcome measure. Hence it was decided to discard bootstrap samples in which the treatment effect was non-significant at the 5% level. The same applied to negative trials, for which we discarded samples in which the bootstrapped treatment effect was significant at this level. Therefore, we employed a bootstrap sampling mechanism which was conditional on the result of the final analysis. On examining the mean of the resulting hazard ratios, we found that across the 5000 datasets it was very close to the original trial result.

To obtain the graphical displays, patient accrual into the trial was mimicked as described above and, in addition, a hazard ratio was calculated each time a new event was observed. All analyses were conducted in Stata version 9.2.
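A minimal sketch of the conditional bootstrap described above is given below, assuming patient-level data in a pandas data frame with time, event, and arm columns (an illustrative schema) and using the lifelines Cox model in place of Stata's. Bootstrap samples whose final-analysis significance status disagrees with the original trial result are discarded.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def conditional_bootstrap(df, final_positive, n_boot=5000, seed=1):
    """Resample patients with replacement, keeping only datasets whose
    final analysis matches the original trial's significance status.

    final_positive: True if the original final analysis was significant
    at the two-sided 5% level, False otherwise.
    """
    rng = np.random.default_rng(seed)
    kept = []
    while len(kept) < n_boot:
        sample = df.sample(n=len(df), replace=True,
                           random_state=int(rng.integers(2**31)))
        cph = CoxPHFitter().fit(sample, duration_col="time", event_col="event")
        p_value = cph.summary.loc["arm", "p"]
        # Condition on the final result: discard mismatching samples
        if (p_value < 0.05) == final_positive:
            kept.append(sample.reset_index(drop=True))
    return kept
```

Each retained dataset can then be re-analysed at the mimicked stage 1 time point to estimate the error percentages reported in the tables.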
Results of re-analysis using trial data only

In the first instance, all trials were re-analysed as described in the methods to consider what would have happened hypothetically if they had been designed as two-stage trials. [...] evidence of a difference. The log hazard ratio is below the critical value initially, then moves above it for a short period of time, after which it again moves below it. Only very much later on in the trial does it return to above the critical value.

The situation in FOCUS is similarly ambiguous in the first stages of the trial. In all comparisons, the observed estimate of the hazard ratio is very close to the critical value at all times. This suggests that at best there is a very small effect of the experimental treatment over control, a long way from the targeted hazard ratio.

Tables 5 to 8 give results for a re-analysis of all trials in 2 stages, with the first two tables employing PFS as the intermediate outcome. For comparisons between the control and treatment arms in ICON3, the percentage error relates to those trials where the treatment arm would have been carried on. This is the case because the final trial result showed no evidence of a difference. This error is perhaps not of such great concern, as we are continuing arms which have no clear effect at the end of the trial, and thus it can be argued that this is a 'conservative' error. For the comparison in ICON4, however, the percentage error refers to the number of times the treatment arm would have been stopped at the stage 1 analysis even though the final result was positive. This is an error of greater concern to us, since we would be stopping a trial which would have shown a benefit.

Results of re-analysis using bootstrapped data in 2 stages

Our analyses show that if a treatment is successful on the final outcome at the end of the trial, it has a very good chance of 'jumping' the hurdle at the intermediate stages. ICON4 and RE01 are both trials with a strong positive outcome at the end of the trial, and for these two trials the maximum error lies below 5% using PFS as the intermediate outcome and just above 6% using overall survival (figure 1). The results for ICON3 (table 6) also illustrate that, since the hazard ratio is not constant over time, the trend in the error rate is also non-monotonic and counter-intuitive. In this trial, as illustrated in figure 2, a very small effect size can be observed which is close to but not at H0 and is also near the critical value. Therefore, the bootstrap gives a number of realisations which lie on either side of the critical value.

Tables 7 and 8 give the results for a bootstrap analysis using the primary outcome at all stages of the trial. Interestingly, results for the error rate are not improved (and are sometimes worse) when using this outcome. This suggests that the error rate is not inflated by using a different outcome at the intermediate analysis as compared to the final analysis of the trial. The correlation displayed is the correlation between the test statistic at stage 1 and at the end of the trial, i.e. the correlation between the log hazard ratio for the intermediate outcome at the time of that analysis and the log hazard ratio for overall survival at the end of the trial. In the FOCUS re-analysis, the outcome used at both stages is the final outcome. As the results illustrate, the correlation is surprisingly low for early analyses. The reason is that, at a high significance level, the number of control arm events e_I available for analysis is small and there is a longer time interval between that analysis and the final analysis. Furthermore, the analysis is based on a smaller subset of patients. Tables 6 and 8 additionally give information on the mean time saved if a trial was stopped at an earlier stage, for those trials with a negative overall outcome.
Mean time saved was calculated by subtracting the mean time required for a stage 1 analysis in the bootstrap runs from the estimated time required for a standard parallel group trial under the design assumptions. In this case, it was assumed that had the trial only been conducted with a final analysis on overall survival, this would have been conducted at a two-sided significance level of 5% and 90% power. If an unsuccessful treatment is rejected at an early stage, savings in trial time of 1.5 to 2.3 years can be made.

Simultaneously considering the savings possible in total trial time if a trial is correctly stopped at an earlier stage, and the error rate estimated for trials with a final analysis showing evidence of a difference, we can draw some conclusions on the best placing of the stage 1 analysis. This trade-off between correctly and incorrectly stopping the trial early suggests that ideally the stage 1 analysis should be placed at a significance level of 0.2 or 0.3. At this point, the error rate in both the RE01 and ICON4 re-analyses is negligible. At the same time, a saving in total trial time of on average 1 and 2 years could have been made in FOCUS and ICON3 respectively. However, more extensive studies would need to be conducted to obtain the optimal timing of the stage 1 analysis. As the results in table 7 illustrate, FOCUS comparison A vs C would have been stopped early in some cases even though the overall trial result at the end of the trial could be viewed as being at the 'margins' of statistical significance. To circumvent this problem, it is anticipated that in a MAMS trial design, patients in those arms which were dropped at an earlier stage will still be followed up and the data analysed at a later stage.

Conclusion

Currently, according to FDA estimates [12], about 90% of agents entering Phase I do not succeed at Phase III. This is set against advances in our knowledge about pathways connected to cancer development and metastases, and hence the availability of more agents for testing in clinical trials. After passing Phase I/II, the success rate of cancer Phase III trials is still only about 50%. However, if we employ, for example, a multi-arm, multi-stage design with four experimental regimens and one control, the probability of having at least one successful agent at the end of the trial increases to 87%. This calculation assumes independence of all experimental arms (a quick check of the arithmetic is sketched below). Therefore, this type of trial design uses resources more efficiently.

In this design we propose to target the event rate in the control arm rather than the number of events in all arms combined. There are two reasons for this approach. Firstly, an event rate different from that anticipated for the trial overall could arise either from a different underlying event rate in all arms or from a hazard ratio different from that targeted initially. This ambiguity is removed by using the control-arm event rate as the deciding factor for when to conduct the analysis. Secondly, when more than one experimental arm is recruited to, it is unlikely that we shall observe the same hazard ratio in all comparisons, giving different total numbers of events for each comparison. However, the calculation for the overall number of events assumes the same event rate in all comparisons in the experimental arms. We have demonstrated that using the MAMS design, significant savings can be made in terms of trial time if a treatment does not prove to be effective over the course of the trial.
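The multi-arm probability quoted above can be checked with one line of arithmetic. Note that the per-arm success probability is not stated explicitly in the text; the values below simply show that a per-arm probability of roughly 0.4 reproduces the quoted 87% under the independence assumption.

# P(at least one of k independent experimental arms succeeds).
def p_at_least_one(p_arm, k=4):
    return 1.0 - (1.0 - p_arm) ** k

for p in (0.3, 0.4, 0.5):
    print(f"per-arm success {p:.0%} -> P(at least one of 4) = {p_at_least_one(p):.1%}")
# per-arm success 40% -> P(at least one of 4) = 87.0%, matching the text.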
In this case, the trial re-analyses have demonstrated that around 50% of all bootstrapped trials would have been rejected at stage 1, regardless of when this intermediate stage was conducted. Greater savings can thus be made if a MAMS trial is designed with three or more stages, as the probability of early rejection increases. Trials with good evidence of an effect on overall survival jump all of the intermediate hurdles with very high probability; for ICON4, this was found to be as high as 100%. However, these methods do come at a cost. As the re-analysis of FOCUS comparison A/C has shown, trials or treatment arms with very small treatment effects do risk being discontinued earlier on during the trial. To alleviate this problem when employing this design at the MRC CTU, we follow patients up further on discontinued arms while randomisation to these arms is stopped. Thus, these arms will also be analysed on the primary outcome at a later date, albeit with reduced power in some instances depending on the maturity of the data, so that at least an estimate of the effect size can be obtained.

Our re-analyses not only demonstrate the efficiency of the MAMS design in general, but also explore the best timing of the first-stage analysis. The results suggest that the first stage is ideally placed at a significance level between 0.2 and 0.3 when the trade-off between the errors for correctly and incorrectly stopping is considered. This was the point at which the error rate in the RE01 re-analysis, for example, became negligible, both for PFS and overall survival as the intermediate endpoint. However, this is a practical recommendation only and does not reflect an optimal design as could be obtained from a simulation study. A further observation in this study was the behaviour of the correlation between the test statistics on the intermediate and primary outcome. If the first stage is conducted early on with a very small number of control-arm events, this correlation coefficient is generally low, around 0.3. It increases over time and reaches values in the range of 0.5 to 0.7 for a very late first-stage analysis.

This type of analysis would ideally be done as a simulation study, taking account of all possible trial scenarios. However, we believe that the much simpler conditional bootstrap approach that we employed is adequate. In this analysis, we needed to be able to distinguish between the error of stopping a trial early conditional on the final result showing good evidence of an effect, and the error of continuing a trial conditional on the final result showing no evidence of an effect. Hence we used a conditional bootstrap rather than the standard version, since our interest was in the error conditional on a particular final outcome.

In this design no adjustment for multiple testing is made to the type I error at the interim stages. There are a number of reasons for this: (i) early stopping for efficacy, where the issue of type I error is perhaps more acute, is not incorporated in the design; (ii) the significance level at each stage has a screening role only and is set close to 0.5 to ensure an early first-stage look; and (iii) the overall significance level is bounded above by the significance level chosen for the final stage. If desired, this final-stage significance level may be adjusted according to the number of experimental arms considered. In fact, as the methods we present are for stopping arms for lack of benefit, it may be appropriate to adjust the type II error for multiple comparisons at each stage, as sketched below.
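One way such a type II error adjustment could look is sketched here. This is not the authors' method: it combines the standard Schoenfeld approximation for the number of events in a two-arm survival comparison (1:1 allocation, two-sided alpha) with a simple Bonferroni-style split of the type II error across the experimental arms.

import numpy as np
from scipy.stats import norm

def events_required(target_hr, alpha=0.05, beta=0.10, n_arms=1):
    """Total events for one pairwise comparison under the Schoenfeld
    approximation, with the type II error split across n_arms
    experimental arms (Bonferroni-style: beta / n_arms)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta / n_arms)
    return 4 * z**2 / np.log(target_hr) ** 2

print(round(events_required(0.75)))            # single comparison: ~508 events
print(round(events_required(0.75, n_arms=4)))  # beta/4 per comparison: ~743 events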
However, in our experience this issue is typically not considered. When analysing the trial at the intermediate stages, power may be increased by including covariates. Since the early stages will contain few patients, the trial population across the arms is more likely to be unbalanced in terms of potentially confounding covariates such as age. Including these known influential covariates in the analysis may increase the robustness of the results.
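A covariate-adjusted interim analysis of the kind suggested here might look like the following sketch (synthetic data; the lifelines library is again assumed, and the age effect and exponential survival times are purely illustrative):

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),        # 0 = control, 1 = experimental
    "age": rng.normal(60, 10, n),        # a known influential covariate
})
# Exponential survival times with a treatment effect and an age effect.
rate = 0.1 * np.exp(-0.4 * df["arm"] + 0.03 * (df["age"] - 60))
df["time"] = rng.exponential(1.0 / rate)
df["event"] = 1                          # all events observed, for simplicity

unadjusted = CoxPHFitter().fit(df[["time", "event", "arm"]],
                               duration_col="time", event_col="event")
adjusted = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Adjusting for age typically tightens the standard error of the treatment
# effect, i.e. increases power at a small interim analysis.
print(unadjusted.summary.loc["arm", ["coef", "se(coef)", "p"]])
print(adjusted.summary.loc["arm", ["coef", "se(coef)", "p"]])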
Detection of Tuberculosis Disease Using Image Processing Technique

Machine learning is a branch of computing that studies the design of algorithms with the ability to "learn." A subfield is deep learning, a family of techniques that makes use of deep artificial neural networks, that is, networks with more than one hidden layer, to computationally imitate the structure and functioning of human organs and related diseases. The analysis of images of health interest with deep learning is not limited to clinical diagnostic use; it can also, for example, facilitate surveillance of disease-carrying objects. There are other examples of recent efforts to use deep learning as a tool for diagnostic use. Chest X-rays are one approach to identifying tuberculosis; by analysing the X-ray, abnormalities can be spotted. A method for detecting the presence of tuberculosis in medical X-ray imaging is provided in this paper. Three different classification methods were used to evaluate the method: support vector machines, logistic regression, and nearest neighbors. Cross-validation and the formation of training and test sets were the two classification scenarios used. The results obtained allow us to assess the method's practicality.

Introduction

Since the origin of artificial intelligence in the 1940s, with some incipient work [1], researchers have sought to use computers as a tool to help solve problems of interest to humans [2]. From the influential work of Alan Turing in 1950 on determining whether a machine is intelligent, using the test that bears his name, artificial intelligence and the number of scientific contributions related to this area increased significantly, thus giving rise to machine learning, whose objective is to develop techniques that allow computers to learn. One technique used for machine learning is artificial neural networks, which consist of a set of units, called artificial neurons, connected to each other to transmit signals. The evolution of learning methods in conjunction with neural networks gave rise to deep learning (DL), which is made up of a set of machine learning algorithms that attempt to model high-level abstractions in data using computational architectures that support multiple non-linear and iterative transformations of data expressed in matrix or tensor form. These techniques are used in a large number of projects, among which we find digital image processing.

Digital image processing has, in recent years, acquired an important role in information and computing technologies. Today, it is the basis for a growing variety of applications including medical diagnostics, remote sensing, space exploration, and computer vision, among many others. Digital image processing (DIP) is the set of techniques applied to digital images with the aim of improving their quality or facilitating the search for information, using a computer as the main tool. Today, DIP is a very specific research area in computing [3]. During the last 15 years, a growing number of techniques related to digital images and their processing in digital format have been introduced into medical practice. As in the case of the present work, digital image processing is used here for the detection of tuberculosis.

Tuberculosis (TB), also known as consumption, is a chronic infectious disease caused by a germ called Mycobacterium tuberculosis. The bacteria usually attack mainly the lungs but can also damage other organs of the human body.
TB spreads through the air when an infected person coughs or sneezes, spreading the disease in this way. It is preventable and curable if detected early; otherwise, it can cause the death of the patient. Tests such as a chest X-ray or a culture of a sputum sample can be done to find out whether a person has TB disease [4]. In Iraq, one of the main causes of mortality is tuberculosis, with a rate of 9.24% per 100,000 inhabitants, according to data from the Statistical and Death System of the General Directorate of Epidemiology [5]. According to the World Health Organization, tuberculosis is one of the top 10 causes of mortality in the world; a person can contract tuberculosis by inhaling airborne droplets from an infected person's cough or sneeze, which is called primary tuberculosis (TB) [6]. In 2015, about 10.4 million people became ill with tuberculosis and 1.8 million died from this disease. More than 95% of tuberculosis deaths occur in third-world countries [6]. In Figure 1, we can see four X-ray images; the two images on the left show healthy patients and the two on the right show patients with the disease detected. Although the methods of diagnosis and treatment of the disease have improved, the goal of the End TB Strategy for 2020 will not be met: globally, an estimated 10 million individuals contracted tuberculosis (TB) in 2020, of whom 5.6 million were men, 3.3 million women, and 1.1 million children. Tuberculosis is seen in all nations and in all age groups. TB, however, is both treatable and avoidable [5,6]. Undoubtedly, this public health problem continues to be a great challenge for the health systems of countries, mainly developing countries. A very interesting review of the evolution of medical image analysis and processing techniques since the 1980s can be found in [4].

The task of extracting classes of information from an image is referred to as image classification. There are two forms of classification: supervised and unsupervised. Supervised classification starts from a set of known classes; these classes must be characterized according to a set of variables, measured in individuals whose membership of one of the classes is not in doubt. Unsupervised classification does not establish any classes in advance, although it is necessary to determine the number of classes we want to establish, and a statistical procedure is left to define them. In the present work, artificial intelligence is applied to the automatic classification of chest X-ray images of patients with and without tuberculosis.

Review of Literature

Image processing is widely used. For example, in [6], image processing is used to extract regions of interest with properties that can potentially be related to the medical diagnosis of Parkinson's; it uses computer-aided diagnosis technology to process the images, extract textures, segment the image, and find the area of interest. In [7], we find the use of image processing, pattern recognition, and artificial intelligence to help detect clusters of microcalcifications in digitized mammography images. Few articles [8,9] used free databases, making it difficult to compare new techniques and even to replicate results. The first results of their use were presented in the literature after the consolidation of two open and free databases of radiographic images [9], also considering the use of computer vision techniques for lung region segmentation [9].
Although the latter takes a different approach from the one discussed in this paper, it encourages the use of these freely available databases for model training and testing. Most of the studies that looked into the use of multilayer perceptron neural networks for TB detection did not consider the use of medical images to feed such ANNs. Instead, they used laboratory parameters (cholesterol, creatinine, blood pressure, amylase level, and so on) and data from office exams (body temperature, cough, and difficulty breathing) as the input provided to the ANNs. Although these studies demonstrate the feasibility of using ANNs to detect tuberculosis from real-world data, they require medical examinations and trained professionals to provide the input parameters to the neural networks, which may not be available or feasible in certain circumstances, particularly given the majority profile of TB patients. To address these limitations, the approach proposed in this paper uses only radiographic images of the lungs, a low-cost and widely available exam that is thus more appropriate for realistic scenarios. Some works of interest containing a compendium of techniques to improve medical images are [8][9][10], and techniques to eliminate noise from images, ranging from erosion to extraction and others commonly used in the state of the art, can be found in [11]. Other research on image processing is found in [9], which, like [12][13][14][15], supports the early diagnosis of breast cancer using image processing; this research uses texture-based segmentation, and the images used as evidence come from a database containing images of cancerous masses and microcalcifications manually labelled by experts. In [16][17][18], the authors identify breast cancer using thermal images, performing digital processing of the images and using texture analysis to identify and extract all the regions of interest.

Proposed Method

The proposed method was developed as follows. The features of the images that are used as classification attributes are extracted with Keras. Keras is an open-source neural network library written in Python that provides the ResNet50 architecture; this architecture helps to extract the features of the images as arrays. In the present work, three classification methods were used:

(i) The first method is based on support vector machines (SVMs), a supervised learning model with associated algorithms that analyse the data and recognize patterns. The data points closest to the hyperplane, or the elements of a data set that, if deleted, would change the location of the dividing hyperplane, are called support vectors. As a result, they can be regarded as essential components of the data set. An SVM is a supervised machine learning technique that may be used for classification and regression; SVMs are more typically utilized in classification problems, so that is where we concentrate our efforts in this paper. When a dataset is divided into two groups, the SVM can perform both linear and nonlinear classification; a kernel function is used to accomplish nonlinear classification, and the kernels used for nonlinear classification include homogeneous polynomials, among others [10].
(ii) The second method is based on logistic regression (LR), a classification machine learning algorithm used to predict the probability of a categorical dependent variable that is dichotomous; that is, it contains data that can be classified into one of two possible categories (dead or alive, sick or healthy, yes or no, and so on). The logistic model is one of the most important statistical models for modelling the probability of a particular category or event, such as success or failure. Logistic regression employs a number of predictor variables that can be either numeric or categorical. It can be used to model a variety of events, such as determining whether an image contains a cat, tiger, fish, or another animal: each detected object in the image is assigned a probability between 0 and 1, with the probabilities summing to one. The logit model, or maximum-entropy classifier, is another name for logistic regression. It is one of the supervised machine learning algorithms for classification tasks, and it has developed a particularly positive reputation in the financial sector over the last two decades as a result of its exceptional ability to detect embezzlers. A logistic regression requires that the dependent variable be binary, and the level-1 factor should represent the "desired" value. Only significant variables should be included as independent variables, which, in turn, should be independent of each other [14].

(iii) The third method is based on nearest neighbors (KNN, K-Neighbors Classifier), a supervised, instance-based machine learning algorithm. The reason why nearest neighbor approaches have remained popular in practice is primarily their empirical success over time. This explanation, however, may be unduly simple. We focus on four elements of nearest neighbor approaches that we feel are important to their continuing popularity. First, the ability to choose what "near" means in nearest neighbor prediction allows us to handle ad hoc distances easily, or to use existing representation and distance-learning machinery, such as deep networks or decision-tree-based ensemble learning approaches. Second, the computational efficiency of a number of approximate nearest neighbor search algorithms allows nearest neighbor prediction to scale to the large, high-dimensional datasets that are typical in current applications. Third, nearest neighbor approaches are nonparametric, making minimal modelling assumptions and instead letting the data drive predictions directly. Finally, nearest neighbor approaches are interpretable: they show the nearest neighbors discovered as evidence for their predictions. This method is particularly useful for classifying new samples (discrete values) or for predicting or estimating future values (regression, continuous values). It basically searches for the most similar data points (by proximity) learned during the training stage and makes suggestions for new points based on that classification [2].
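Putting the three classifiers together with the feature-extraction step described in the next section (224 × 224 RGB inputs, 2048-dimensional ResNet50 features) gives a pipeline along the following lines. This is a minimal sketch, not the authors' code: paths and labels stand in for the Montgomery image files and their 0/1 readings, and default hyperparameters are used throughout.

import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import classification_report

# ResNet50 without its classification head; global average pooling yields
# one 2048-dimensional feature vector per image.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(paths):
    """Load X-ray files, resize to 224x224 RGB and extract ResNet50 features."""
    batch = np.stack([
        image.img_to_array(image.load_img(p, target_size=(224, 224)))
        for p in paths
    ])
    return extractor.predict(preprocess_input(batch), verbose=0)

# `paths` and `labels` (1 = TB, 0 = normal) come from the Montgomery set.
X, y = extract_features(paths), np.array(labels)

classifiers = {
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
}

# Scenario 1: cross-validation on the full feature matrix.
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())

# Scenario 2: 80/20 training and test sets; the test images are never seen
# during training. classification_report covers accuracy, precision, recall
# and F-measure, the four metrics recorded in this work.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
for name, clf in classifiers.items():
    print(name)
    print(classification_report(y_te, clf.fit(X_tr, y_tr).predict(X_te)))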
For the present work, the Montgomery database was used. The X-ray images of this database were collected from the tuberculosis control program adopted by the UNDP, Iraq, and the set contains 138 radiographs, of which 80 correspond to healthy patients (normal) and 58 show manifestations of tuberculosis (abnormal). This database is available; all images have been de-identified and are in DICOM format. The set includes a wide variety of abnormalities, such as effusion patterns. The dataset contains radiological readings in the form of a text file. Each image carries a label that aids image identification: the labels can be with TB (label "1", success) or normal, without TB (label "0", failure).

Preprocessing was carried out on the images used. There are two main parts to preprocessing: (1) padding and (2) resizing. Both stages are carried out after the images are extracted; they result in a matrix for each input image with dimensions of 224 × 224 and values from 0 to 255, corresponding to a 224 × 224 image in 3 channels (RGB). The purpose of this process is to provide the network with a matrix of these dimensions. Once these steps have been completed, the matrices enter the ResNet50 network. Figure 2 shows the process carried out by the network. The last stage of the preprocessing is when the matrices enter the network to extract the features of the images that will be used as classification attributes. The network takes as input the generated 224 × 224 × 3 matrix and, in each layer, performs convolutions on the matrix, generating feature maps. In the penultimate layer of the network, a vector of dimension 2048 is obtained, which contains the general characteristics of the image, such as saturation, luminosity, and intensity, among others.

Processing for Cross-Validation. Once the arrays with the features of the TB and normal images are obtained, labels are created in a text document to name each of the images that will be used for training. Within the processing program, labels and features are loaded and relationships between labels and features are created, to be converted later into arrays. When the system finishes sorting the data for its best interpretation, it converts the relations into 0s and 1s in order to be able to interpret them. Figure 3 shows the diagram of the processing carried out for the cross-validation scenario, with each of the automatic classification methods.

Processing for Training and Test Set. For the second scenario, training and test sets were created: 80% of the images were used for training, and the remaining 20% were used for testing, so that the test images are never seen by the training set. For each classification performed, we recorded the following evaluation metrics: Accuracy, Precision, Recall, and F-Measure (the F statistic); in the state of the art, it is common to refer to these metrics by their English names. The results obtained are shown in the next section.

Results

The graph in Figure 4 shows a comparison of results between both scenarios. (i) Cross-Validation (CV).
A methodological error is learning the parameters of a prediction function and evaluating it on the same data (x, y). A model that just repeats the labels of the samples it has just seen would get a perfect score but would fail to predict anything useful on yet-unseen data. Overfitting is the term for this circumstance. To avoid it, it is usual practice when doing a (supervised) machine learning experiment to set aside a portion of the available data as a test set (X_test and y_test). It is worth noting that the term "experiment" is not just for academic purposes; even in commercial contexts, machine learning frequently begins as an experiment. A typical cross-validation approach in model training is depicted in the flowchart below. Grid search strategies may be used to find the optimal parameters.

(ii) Training Sets (TS). The study and creation of algorithms that learn from and make judgments on data is a typical job in machine learning. These algorithms work by constructing a computational model from incoming data in order to make data-driven predictions or judgments. In most cases, the input data needed to develop the model are split into different data sets. Three data sets, in particular, are often employed at distinct phases of the model's development: training, validation, and test sets.

For each of the two scenarios, the four recorded evaluation metrics are shown. In the graph, it can be seen that the best result in both scenarios is obtained when using SVM as the learning method (Figure 4), and the worst values obtained were those of nearest neighbors, for both classification scenarios and in practically all the metrics. Table 1 shows the results obtained in the cross-validation scenario for the four evaluation metrics, while Table 2 shows the results obtained with the training and test sets formed. The scenario that shows the better mean performance is that of the training and test sets, which is also the most desirable classification scenario when there are enough instances to form both sets, because the training set never sees the test set, thus avoiding any kind of influence or bias when assigning the category to the image under study. It can also be observed that in both scenarios the classifier that shows the best performance is SVM.

A test harness is required when constructing a framework for a predictive modelling problem. The test harness specifies how the domain's sample of data will be used to assess and compare potential models for a predictive modelling challenge. There are several ways to organize a test harness, and there is no one-size-fits-all solution for all applications. A popular strategy is to use one piece of the data for training and tuning the models and another portion for giving an objective assessment of the tuned model's skill on out-of-sample data. A training dataset and a test dataset are created from the data sample. The model is assessed using a resampling approach such as k-fold cross-validation on the training dataset, and the training set may be further split to provide a validation set for tuning the model's hyperparameters.

Conclusions

In the present work, results of automatic classification of medical images are presented in two categories: with and without tuberculosis. To carry out the classification, features are extracted using deep learning and the ResNet50 neural network. Cross-validation and the formation of training and test sets were the two classification scenarios used.
The scenario with the best results was the one in which training and test sets were formed, with an accuracy greater than 85%. The classification method that shows the best performance in the two scenarios implemented in this work is SVM. As can be seen, the results obtained in the present work far exceed chance and allow the classification of images to be carried out efficiently. Computed tomography (CT) of the abdomen, CT of the head, magnetic resonance imaging (MRI) of the brain, and MRI of the spine were all used in this investigation. Our suggested CNN architecture could automatically categorize these 4 sets of medical images by image modality and anatomic location after converting them to JPEG (Joint Photographic Experts Group) format. In both the validation and test sets, we achieved outstanding overall classification accuracy (>99.5 percent). The collected results allow us to assess the viability of the methods adopted. They also allow us to identify the best classification scenario and machine learning method to carry out the classification of radiographs with and without tuberculosis.

Data Availability

The data underlying the results presented in the study are included within the manuscript.

Disclosure

This work was performed as a part of the employment of the institutions.
Salicylic Acid Induces Resistance in Rubber Tree against Phytophthora palmivora

Induced resistance by elicitors is considered to be an eco-friendly strategy to stimulate plant defense against pathogen attack. In this study, we elucidated the effect of salicylic acid (SA) on induced resistance in rubber tree against Phytophthora palmivora and evaluated the possible defense mechanisms involved. With SA pretreatment, rubber tree exhibited a significant reduction in disease severity of 41%. Consistent with the occurrence of induced resistance, pronounced increases in H2O2 level and in catalase (CAT) and peroxidase (POD) activities were observed. As defense reactions, exogenous SA promoted increases in H2O2 and in CAT, POD and phenylalanine ammonia lyase (PAL) activities, as well as in lignin, endogenous SA and scopoletin (Scp) contents. However, SA had different effects on the activity of each CAT isoform in the particular rubber tree organs. In addition, three partial cDNAs encoding CAT (HbCAT1, HbCAT2 and HbCAT3) and a partial cDNA encoding PAL (HbPAL) were isolated from rubber tree, and the expressions of HbCAT1, HbPAL and HbPR1 were induced by SA. Our findings suggest that, upon SA priming, the elevated H2O2, CAT, POD and PAL activities and lignin, endogenous SA and Scp contents, together with the up-regulated HbCAT1, HbPAL and HbPR1 expressions, could potentiate the resistance of rubber tree against P. palmivora.

Introduction

Rubber tree (Hevea brasiliensis Muell. Arg.) is one of the economically important crops in Thailand. Its products are currently exported worldwide and produce major revenue for the country. Phytophthora palmivora, an aggressive hemibiotrophic oomycete pathogen, is a causal agent of the abnormal leaf fall and black stripe diseases of the rubber tree. It infects petioles, causing extensive defoliation of the mature leaves within a few weeks, and attacks tapping surfaces, leading to dark vertical lines in the panels, thereby reducing plant growth and latex productivity and subsequently leading to significant economic loss [1]. Upon sensing invading pathogens, plants deploy multitudinous constitutive and induced basal defense mechanisms to protect themselves against pathogen attack, for example, oxidative bursts, defense signaling pathways, and transcriptional expression of pathogenesis-related (PR) genes encoded by a multigene family [23,24]. The PAL activity can be induced in response to developmental processes and various stress conditions, including wounding, pathogen attack, UV light, and phytohormones such as SA [25].

To date, the use of chemical fungicides is the most effective tool in disease management. However, repeated use has led to serious problems, such as environmental pollution, fungicide resistance, and residual toxicity. Exploiting the plant's own potential to fight pathogens, induced resistance by natural or synthetic compounds may minimize the use of fungicides for disease control, and can therefore be considered an alternative and environment-friendly approach to plant protection in sustainable agricultural production. Salicylic acid (SA), an endogenous elicitor, is an important defense hormone mediating signal transduction, which can stimulate both localized acquired resistance (LAR) and systemic acquired resistance (SAR) [26]. SA is predominantly correlated with resistance against biotrophic and hemibiotrophic pathogens [27].
Exogenous application of SA results in resistance against certain pathogens, which is related not only to the induction of SAR and the expression of PR genes [28], but also potentiates the plant to respond abruptly and efficiently with diverse defensive strategies against subsequent challenge by pathogens [29]. The present study was conducted to investigate the effect of exogenous SA on the accumulation of H2O2, the activities of defense-related enzymes (CAT, POD and PAL), lignin deposition, and the amounts of total phenolic compounds, endogenous SA and Scp; we also analyzed the expressions of HbCAT1, HbPAL and HbPR1, since these components might function synergistically in priming defenses in rubber tree against subsequent invasion by P. palmivora.

Induced Resistance of SA-Pretreated Rubber Tree against P. palmivora

In this study, we investigated the effect of SA priming on induced resistance in rubber tree against P. palmivora infection. In a preliminary study, the effect of SA on the leaves of rubber tree seedlings was tested at various concentrations (0, 5, 10 and 20 mM). After 1 day of treatment, the leaves of some SA-treated plants at 10 mM and 20 mM showed leaf shrinkage, while there was no shrinking effect on leaves treated with 5 mM SA (Figure 1A). Henceforth, we selected 5 mM SA for further experiments.

Figure 1. Effect of exogenous SA treatment on rubber tree against P. palmivora. (A) Physiomorphological traits of rubber tree leaves at 1 day after treatment with SA at different concentrations of 0, 5, 10 and 20 mM; (B) disease symptoms; and (C) disease severity (%) of rubber tree leaves pretreated with either DW or 5 mM SA for 1 day prior to subsequent inoculation with P. palmivora zoospore suspensions (1 × 10^5 zoospores mL^-1) at 4 dpi. The columns and vertical bars represent means ± standard errors (SE) of three independent replicates of 10 seedlings. Pretreated plants and non-pretreated controls differ significantly at p ≤ 0.05, according to a paired-samples t-test. All leaves were detached from the seedlings at 4 dpi for better pictorial representation.

The exogenous 5 mM SA could induce resistance in rubber tree when compared to the control (Figure 1B,C). Leaf symptoms were evaluated at 0, 1, 2, 3 and 4 dpi; increased necrotic lesion numbers and rapidly expanding P. palmivora growth were evident on the infected rubber tree leaves, in contrast to the 5 mM SA pretreatment, which showed remarkably decreased necrotic lesion numbers and a reduction in disease severity of 41.67% (Figure 1B,C). The results indicated that SA pretreatment could effectively inhibit P. palmivora growth on rubber tree leaves.

2.1.2. Effect of SA Priming on H2O2 Content, Total Protein Content, CAT and POD Activities and Total Phenolic Content in Rubber Tree Leaves after P. palmivora Challenge

Priming rubber tree seedlings with 5 mM SA before P. palmivora inoculation (SA + P. pal) caused a significant increase in H2O2 content by 9.44-fold, total protein content by 4.45-fold and CAT activity by 2.76-fold, whereas inoculation with P. palmivora (DW + P. pal) or treatment with 5 mM SA (SA + DW) only slightly affected their levels, as compared to the control (Figure 2A-C). In addition, the activity band, visualized by native-PAGE, showed patterns similar to those of the CAT activity determined by spectrophotometry (Figure 2C,D).
Figure 2. Effect of exogenous SA pretreatment on H2O2 content, total protein content, CAT and POD activities and total phenolic content in rubber tree leaves after inoculation with P. palmivora (P. pal). The leaves of rubber tree seedlings were sprayed with either distilled water (DW) or 5 mM SA. After 1 day of the treatment, leaves were subsequently treated with either DW or P. palmivora zoospore suspensions at 1 × 10^5 zoospores mL^-1. At 4 dpi, the leaf samples were collected for determining (A) H2O2 content; (B) total protein content; (C) CAT activity; (D) CAT activity staining; (E) POD activity; and (F) total phenolic content. The columns and vertical bars represent means ± standard errors (SE) of three independent replicates of 10 seedlings. Different letters represent significant differences, according to Tukey's HSD test at p ≤ 0.05. Arrows indicate the position of the CAT activity band.

For POD, the activity was not significantly increased after triggering with P. palmivora or SA. However, in the leaves treated with 5 mM SA prior to P. palmivora inoculation,
the POD activity was increased by 1.26-fold when compared to the control (Figure 2E). Differently, the treatment with SA (SA + DW) resulted in an increase of total phenolic content by 1.10-fold in rubber tree leaves as compared to the control (DW) (Figure 2F). In the leaves that were treated with SA before P. palmivora inoculation, the total phenolic content was higher than with P. palmivora inoculation alone (DW + P. pal), even though it was not significantly different from the control (DW).

To determine the effect of SA on the induction of H2O2 in rubber tree, we used the DAB staining method to detect the distribution of H2O2 in rubber tree leaf cells. The results revealed that the SA treatment caused a great accumulation of H2O2 in leaf cells from 24 h to 72 h, predominantly along the veins, appearing as dark brown spots, whereas the control leaves showed a low level of H2O2, appearing as a weak background of DAB staining (Figure 3). The results indicated that SA could increase the rate of generation of H2O2 in rubber tree leaves.

Figure 3. In situ detection of H2O2 using 3,3′-diaminobenzidine (DAB) staining on rubber tree leaves after treatment with 5 mM SA. The rubber tree leaves were sprayed with either DW as a control or 5 mM SA. Leaf pieces were collected and stained after treatment at different time points (0, 3, 6, 12, 24, 48, 72, 96 and 120 h). Pictures represent three independent biological replicates. The dark brown precipitates indicate the presence and distribution of H2O2 in plant cells, which could be visualized using a light microscope (scale bars = 100 μm).

Effect of SA on CAT, POD and PAL Activities

The effect of SA on the activities of CAT, POD and PAL was investigated in rubber tree leaves after treatment with 5 mM SA over a time course.
The results showed that CAT activity was continuously increased at 3-6 h, culminated at 12 h (1.85-fold), slightly decreased at 24-48 h, then increased at 72 h and peaked again at 96 h (1.69-fold) when compared to the control, and abruptly declined thereafter (Figure 4A). For POD, the activity was slightly increased from 3-12 h, subsequently remained at the same level as the control at 24 h, and then decreased to a level lower than the control at 48-120 h (Figure 4B). For PAL, the activity was continuously increased from 3 h to 48 h, reached its highest level at 72 h (3.48-fold) when compared to the control, and started to decline from 96-120 h (Figure 4C).

Figure 4. Effect of SA on (A) CAT, (B) POD and (C) PAL activities, (D) lignin deposition, and (E) endogenous SA and (F) Scp contents in rubber tree. The leaves of rubber tree seedlings were sprayed with either distilled water (DW) as control or 5 mM SA and harvested at different points of time (3, 6, 12, 24, 48, 72, 96 and 120 h) for enzyme activity measurements by spectrophotometry and endogenous SA and Scp detection by HPLC. All data represent means ± SE of three biological replicates of 10 seedlings. Asterisks indicate statistically significant differences in the level of Scp content in the SA treatment compared to its content in the control at the same time point (** for p ≤ 0.005, *** for p ≤ 0.001), according to a paired-samples t-test. Lignin deposition was visualized by phloroglucinol-HCl staining of the rubber tree leaves treated with 5 mM SA at 48 h.

Effect of SA on Lignin Deposition

After rubber tree seedlings were treated with either distilled water (DW) or 5 mM SA for 48 h, the leaves of the seedlings were harvested and stained with phloroglucinol-HCl to detect lignin deposition. The results indicated that SA-treated rubber tree leaves revealed an increase in lignin, appearing as red spots, while the control leaves were only slightly stained (Figure 4D).
The results suggested that SA could induce lignin deposition in rubber tree leaves.

Effect of SA on Endogenous SA and Scp Levels

To determine whether exogenous SA application could induce the biosynthesis of endogenous SA and Scp in rubber tree, we sprayed 5 mM SA on the leaves of rubber tree seedlings and collected the treated leaves at different times. All leaves were washed thoroughly three times with distilled water and dried with absorbent paper to remove any SA remaining on the leaf surface before analysis. The results showed that an extremely high content of endogenous SA was detected in the SA-treated leaves at 3 h (138.11 ng g^-1 FW); this high value might be due to some penetration of exogenous SA at the early time points. In the SA-treated leaves, the endogenous SA content diminished continuously at 6-24 h, slightly increased at 48 h (17.09 ng g^-1 FW) and then decreased from 72 h to 120 h. In contrast, endogenous SA was barely detected in the control leaves (less than 0.21 ng g^-1 FW) throughout the experimental period of 120 h (Figure 4E). Furthermore, the Scp content in the SA-treated leaves increased continuously at 6-24 h, reached its highest level at 48 h (1.13 ng g^-1 FW), and declined thereafter (Figure 4F). Interestingly, the prominent increase in Scp content at 48 h was correlated with the increased endogenous SA in the SA-treated leaves (Figure 4E,F). The results indicated that exogenous SA could induce the biosynthesis of endogenous SA and Scp in rubber tree.
Effect of SA on CAT Activity in Different Rubber Tree Organs

To determine the effect of SA on CAT activity in different rubber tree organs, crude protein was extracted from the leaves, stems, hypocotyls and roots of in-vitro rubber tree plantlets grown in MS medium supplemented with 5 mM SA and subjected to CAT activity determination. CAT activity was higher in the leaves than in the stems and hypocotyls, respectively, and barely detected in the roots of the in-vitro plantlets. After treatment with SA, the CAT activity in the leaves started to decrease at 24 h and decreased even more at 48 h. On the other hand, the CAT activity in the stems, hypocotyls and roots remarkably increased at 48 h. The results suggested that SA led to different changes in the CAT activity of each organ of the rubber tree plantlets (Figure 5B).

Cloning and Sequence Analysis of Three Partial cDNAs Encoding Catalases

By RT-PCR amplification, three 834-bp nucleotide sequences were obtained from total RNA samples of the leaf and the root of in-vitro rubber tree plantlets. All nucleotide sequences, encoding 278 amino acid residues, showed similarities to other plant catalases (Figure 6). Two sequences obtained from the leaf were named H. brasiliensis catalase 1 (HbCAT1) and H. brasiliensis catalase 2 (HbCAT2), and the one obtained from the root was named H. brasiliensis catalase 3 (HbCAT3). The three partial cDNA sequences of HbCAT1, HbCAT2 and HbCAT3 have been deposited in the National Center for Biotechnology Information (NCBI) GenBank under accession nos. MF383167, MH010572 and MH010573, respectively. The amino acid sequences of HbCAT1, HbCAT2 and HbCAT3 are shown in Figure 6.

Cloning and Sequence Analysis of a Partial cDNA Encoding Phenylalanine Ammonia Lyase

A 1701-bp nucleotide sequence was obtained from total RNA of rubber tree seedling leaves. This sequence, named HbPAL, exhibited similarities to the PAL genes of other plants. The partial cDNA of HbPAL has been deposited in NCBI under accession no. MG992015. The deduced HbPAL amino acid sequence (567 amino acid residues) showed a high similarity with M. esculenta phenylalanine ammonia lyase 2 at 96% (AAK60275.1), J. curcas phenylalanine ammonia lyase at 94% (XP_012082374.1), and R. communis phenylalanine ammonia lyase at 93% (AGY49231.1) (Figure 7). In the alignments, conserved amino acid residues are shaded dark and highly conserved residues are shaded grey.

Effect of SA on HbCAT1, HbPAL and HbPR1 Expressions in H. brasiliensis

To investigate the effect of SA on rubber tree defense gene expression, the expressions of HbCAT1, HbPAL and HbPR1 were determined by semi-qRT-PCR. The expression of HbCAT1 was induced by 2.07-fold and 2.48-fold at 3 h and 6 h, respectively, in the SA-treated plants; the expression then decreased and was suppressed at 48 h when compared to the control plants. Subsequently, the expression was induced by 1.82-fold and 2.66-fold at 72 h and 120 h, respectively (Figure 8A).
For HbPAL, the expression was slightly up-regulated at 6 h and reached its highest level at 12 h, at 2.03-fold of the control; thereafter the expression was suppressed to below the level of the control (Figure 8B). The expression of HbPR1 was slightly induced from 3 h to 24 h and greatly induced, by 8.67-fold of the control, at 72 h; the expression then declined rapidly at 96 h and was suppressed to below the level of the control at 120 h (0.65-fold) (Figure 8C).

Figure 8. The leaves of rubber tree seedlings were sprayed with either distilled water or 5 mM SA. Total RNA was extracted from the leaves taken at various time points (3, 6, 12, 24, 48, 72, 96 and 120 h), converted to cDNA, and subjected to semi-qRT-PCR. Relative transcript levels were calculated relative to the expression of Hbmitosis mRNA. The expression levels of the HbCAT1, HbPAL and HbPR1 genes were expressed as relative transcript fold changes with respect to their controls. All data show the average of three replications. Error bars indicate standard errors. Different letters represent significant differences, according to Tukey's HSD test at p ≤ 0.05.

Discussion

P. palmivora is one of the most destructive plant pathogens that severely threaten rubber tree cultivation and latex production [1]. Traditional methods for disease control depend mainly on the application of chemical fungicides; however, their use has various hazardous effects on the environment and human health, including the emergence of highly resistant fungal strains. Induced resistance, exploiting the intrinsic defense mechanisms of plants via specific biotic or abiotic elicitors, is proposed as an eco-friendly approach to protect plants against disease. Its introduction into agricultural practice may reduce chemical applications, therefore contributing to the development of sustainable agriculture [30]. SA has long been recognized to play a central role in the regulation of priming defense responses against various pathogenic infections [31] and induces SAR in plants [32]. Several studies have supported that the application of SA can induce resistance to various types of pathogens [26,27,30]. Pretreatment with SA primes the plant cells to react more rapidly and effectively to subsequent pathogen attack [33]. In this study, we found that pretreatment with 5 mM SA provided significant protection to rubber tree against P. palmivora infection. The biochemical changes of rubber tree treated either with SA alone or with SA prior to subsequent inoculation with P. palmivora were then investigated. In the priming study, SA could reduce necrotic lesion numbers and mitigate the expansion of P. palmivora mycelium on the infected leaves. In addition, the resistance-stimulating effects in this study lasted for at least 5 days in rubber tree leaves (Figure 1). Our finding is similar to that of Zhang et al. (2016), who reported that SA could induce resistance in 'Gala' apple leaves against Glomerella cingulata [30]. Moreover, Mandal et al. (2009) reported that the application of SA reduced disease severity and conferred resistance to Fusarium oxysporum f.
One of the earliest responses in plant-pathogen interactions is the rapid accumulation of ROS at the site of pathogen attack (a phenomenon called the oxidative burst), which plays an important role in the plant immune system [34]. These ROS can destroy invading pathogens directly and participate in the orchestration of the hypersensitive response (HR). H2O2 is a stable ROS intermediate and acts as a diffusible selective signal for inducing the expression of genes encoding proteins involved in defensive and antioxidant processes [35]. After rubber tree seedlings were primed with SA for 1 day and then challenged with P. palmivora for 4 days, the level of H2O2 and the activities of CAT and POD were elevated (SA + P. pal of Figure 2C-E) [36]. However, an excess of H2O2 is also harmful to plant cells and must simultaneously be detoxified by antioxidant enzymes such as CAT and POD [2]. Our data showed that CAT activity was enhanced much more strongly than POD activity in rubber tree pretreated with SA before inoculation with P. palmivora (Figure 2C-E), which may have resulted from the degrading function of CAT at relatively high H2O2 concentrations. The massive accumulation of H2O2 during the pathogen-induced oxidative burst caused little damage, which might be due to the induction of these detoxifying enzyme activities in rubber tree. In addition to its detoxifying function, POD is one of the key regulatory enzymes in the biosynthesis of a variety of plant secondary metabolites [37]. It is well known that plant secondary metabolites are involved in plant defense responses against pathogens and herbivores [38]. Our current study also showed an increase of total phenolic content in rubber trees treated with SA before inoculation with P. palmivora (SA + P. pal of Figure 2F) compared with the unprimed plants (DW + P. pal of Figure 2F). We speculate that SA pretreatment triggered a consequent chain of defense responses that subsequently resulted in induced resistance of rubber tree against P. palmivora infection. Similarly, a previous study in tomato proposed that SA pretreatment decreased the percentage of vascular browning and plant wilting, leading to a reduction in bacterial wilt disease incidence, and induced total phenolic compounds and defense-related enzymes in tomato leaves inoculated with Ralstonia solanacearum [39]. The effect of SA as an abiotic elicitor was also investigated in rubber tree. We found that exogenous application of SA could induce the accumulation of H2O2 (Figure 3) and changes in the levels of CAT and POD activities (Figure 4A,B). CAT and POD are considered the main antioxidant systems protecting cells against oxidative damage [40]. It has been shown that SA increases the activities of CAT, POD and PAL, as well as the levels of defensive compounds [20,41,42]. Our results also revealed a significant increase in PAL activity with a concomitant increase in lignin content, as well as in endogenous SA and Scp (Figure 4C-F). Enhancement of POD, PAL and lignin in SA-treated rubber tree might result in reinforcement of the cell wall and the formation of a physical barrier restricting pathogen penetration [42]. These results are supported by Mandal (2010), who suggested that SA acts as an elicitor inducing POD and PAL activities and lignin deposition, leading to cell wall strengthening in eggplant roots [43]. Scp is a phytoalexin found in rubber tree that shows effective fungitoxicity, inhibiting the mycelial growth of P. palmivora and other pathogens [44].
In addition, Dorey et al. (1999) reported that the production of H2O2 from the oxidative burst could elicit PAL activity and the consequent stimulation of endogenous SA and Scp in cultured tomato cells [45]. Although there is considerable evidence that SA increases CAT activity [30,45,46], CAT activity in rice, wheat and cucumber decreased after SA treatment [47]. We found two peaks of CAT activity after SA treatment; however, CAT activity decreased at 24 h and 48 h (Figure 4A), while endogenous SA was detected at 48 h (Figure 4E). Furthermore, it has been proposed that SA can bind to iron-containing enzymes (SA-binding proteins, including CAT), inhibiting CAT activity, possibly through a conformational change [48]. It has been suggested that the reduction of CAT activity would maintain the level of H2O2 for signaling purposes in plant defense [49]. Consistent with our results, previous studies reported that SA increased H2O2 levels while CAT activity declined [50,51]. For POD kinetics, our study showed that the activity decreased at 24-120 h (Figure 4B). The reduction of POD in our results and in other plants [11,50] might be due to the binding of SA to heme enzymes, including POD [48]. In this work, CAT isozyme activities were detected in different organs of in-vitro rubber tree plantlets. We found that the activities of CAT isozymes in rubber tree leaves were inhibited by SA, whereas induction of CAT isozyme activities was observed in the stems, hypocotyls and roots. Our findings suggest that SA can differentially regulate individual CAT isozyme activities (Figure 5B). SA has been proposed to inhibit CAT and thereby enhance the production of H2O2, which may act as a secondary messenger in plant defenses [52]. In support, Rao et al. (1997) found that SA treatment of A. thaliana elevated the H2O2 content and H2O2-metabolizing enzymes in a manner dependent on SA dose and treatment duration [50]. In plants, CAT is present as multiple isoforms with different activities in diverse organs [53-55]. CAT isozymes of chickpea differed in their sensitivity to SA, which acted as a specific inhibitor of CAT activity in both shoot and root [56]. In particular, inhibition of CAT activity by SA may be involved in SA-mediated induction of SAR in plants [8]. Three partial cDNAs of catalase genes were obtained from rubber tree leaves (HbCAT1 and HbCAT2) and root (HbCAT3). The three HbCAT fragments consisted of 834 bp and encoded 278 amino acid residues. The amino acid sequence of HbCAT1 showed maximum similarity (97%) with catalase 2 of M. esculenta (Figure 6). Drory and Woodson (1992) reported that the nucleotide sequence of CAT1 from tomato was 1822 bp (492 amino acids) [57]. In this study, we also obtained a fragment of HbPAL (1701 bp), which encoded 567 amino acid residues and showed high similarity (96%) with phenylalanine ammonia lyase 2 of M. esculenta (Figure 7). The full-length cDNA of PAL isolated from Juglans regia contained 1935 bp and encoded a 645-amino-acid protein [58]. These results indicated that the obtained HbCAT1 and HbPAL are similar to the catalases and phenylalanine ammonia lyases of other plants. The response of defense-related genes to exogenously applied SA was examined in rubber tree leaves.
Our results showed that the expression of the HbCAT1, HbPAL and HbPR1 genes was significantly induced by SA (Figure 8A-C). We also observed a significant up-regulation of HbCAT1 (Figure 8A) that correlated with the increase of CAT activity (Figure 4A) upon SA treatment. Correspondingly, a previous report showed that catalase genes (CAT1, CAT2 and CAT3) from maize responded differently to exogenous SA application [55]. This supports the idea that SA may cause oxidative stress via an increase of the H2O2 level [59]; however, pretreatment of plants with suitable concentrations of SA might induce defense-related genes involved in scavenging H2O2 [36]. The increased expression of three different catalase genes in hot pepper and small radish can play an important role in the response to environmental stresses [60,61]. In addition to the induction of catalase, SA may have other functions in plants [8]. PAL is considered a key enzyme in plant defense mechanisms, since it catalyzes the biosynthesis of various defense-related metabolites, including phenolic compounds and lignin. The PAL transcript is differentially expressed during plant growth and development [62], and regulation of PAL activity is required because it is a rate-limiting enzyme of the phenylpropanoid pathway [63]. Silencing of PAL genes resulted in a reduction of the SA level, as the PAL enzyme did not accumulate [64]. In addition, previous research reported that SA could activate PAL gene expression and PAL activity in grape berry [65]. Our results showed a significant induction of the HbPAL gene (Figure 8B) and PAL activity (Figure 4C), as well as enhanced amounts of lignin, endogenous SA and Scp (Figure 4E,F), in SA-treated rubber tree leaves. It is widely accepted that the accumulation of lignin and Scp supports mechanical resistance to pathogen penetration by reinforcing the cell wall and providing a potential antimicrobial agent, thereby forming non-permeable barriers against pathogen attack [17,21,22]. Moreover, SA is also known to be a natural transduction signal in the plant defense response. Application of exogenous SA induced PR gene expression and increased resistance to plant disease [66]. The elevated expression of the PR1 gene reported herein (Figure 8C) is consistent with a previous study showing that PR1 expression was highly induced by SA treatment of wheat seedling leaves and correlated with increased resistance of wheat plants to infection by Blumeria graminis f. sp. tritici [67]. SA treatment induced the production of PR proteins, which act as molecular markers for the establishment of the defense response [26,67]. Recently, foliar spraying of SA on apple prior to inoculation with G. cingulata stimulated the up-regulation of the PR1 gene and significantly reduced Glomerella leaf spot disease [30]. Besides, the accumulation of endogenous SA was correlated with the induction of PR1 gene expression and the activation of SA signaling in defense mechanisms against Colletotrichum in strawberry plants [68]. Glazebrook (2001) reported that NahG-transgenic plants and Arabidopsis mutants impaired in SA production showed increased susceptibility to various pathogens, indicating the importance of SA for SAR establishment [69]. In the present investigation, exogenous SA resulted in increased endogenous SA accumulation and up-regulation of the PR1 gene, probably contributing to the increased resistance of rubber tree to P. palmivora through the onset of SAR.
From our results and the aforementioned reports, we suggest that SA-induced HbCAT1, HbPAL and HbPR1 gene expression leads to the biosynthesis of defense-related enzymes and secondary metabolites, thereby inducing resistance in rubber tree against P. palmivora infection.
Materials and Methods
Phytophthora palmivora Culture
P. palmivora was routinely grown on potato dextrose agar (PDA) plates at 25 °C. Zoospores were produced from actively growing P. palmivora mycelium on V8 agar, following the method described earlier [70]. In brief, 1-cm agar plugs of P. palmivora mycelium from a 7-day culture were transferred to 10% unclarified V8 agar (10% V8 juice, 0.1% CaCO3, 1.5% agar) and cultured at 25 °C under fluorescent light for a week. The culture was covered with 10 mL of cold (4 °C) sterile distilled water, incubated at 4 °C for 15 min, and then shaken at 50 rpm for 15 min at room temperature to release the motile zoospores from the sporangia. The zoospore concentration was counted using a hemocytometer under a light microscope. A concentration of 1 × 10^5 zoospores mL^-1 was used for the inoculation of rubber tree leaves.
Plant Materials and Treatments
Twenty-one-day-old bud-grafted rubber tree (H. brasiliensis, cultivar RRIM600) seedlings were propagated in a climate-controlled room at 25 °C under a 12 h/12 h light/dark photoperiod. For treatments, leaves at the B2C developmental stage from plants with uniform growth were sprayed with either distilled water (DW) or salicylic acid (SA) at a concentration of 5 mM. The treated seedlings in each treatment group were then separately covered with plastic bags to maintain high humidity and incubated in a climate-controlled room. Each treatment was conducted in triplicate and each replicate contained 10 plants. Six leaves were detached from two petioles in the same layer of the seedling stem for assays at 0, 3, 6, 12, 24, 48, 72, 96 and 120 h, respectively. In-vitro culture of rubber tree was carried out according to the method described earlier [70]. For SA treatment, 45-day-old plantlets were transplanted into semi-solid Murashige and Skoog (MS) [71] medium containing growth hormones, supplemented with or without 5 mM SA, and maintained in a climate-controlled room at 25 °C under a 12 h light/12 h dark cycle. The leaves, stems, hypocotyls and roots of in-vitro rubber tree plantlets were collected at each time point (0, 24 and 48 h) of treatment and subjected to CAT activity staining after native-PAGE separation.
Induced Resistance Bioassays
One day after treatment of the leaves with either DW or SA (5 mM), the seedlings were sprayed with 1 × 10^5 zoospores mL^-1 of P. palmivora. The inoculated seedlings were then maintained in a climate-controlled room, and disease evolution was evaluated for 4 days after P. palmivora inoculation. The treated leaves at 4 dpi were collected for assays. Disease severity (DS) was recorded using a scale of 0-4, where 0 = no disease symptoms, 1 = less than 10%, 2 = 11-30%, 3 = 31-50% and 4 = more than 50% of the leaf area showing lesions [30]. DS was expressed as a percentage (%) calculated according to the equation of Kranz [72].
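For orientation, a Kranz-style disease severity index is commonly computed as a weighted sum over the rating classes. The minimal Python sketch below illustrates this common form; the exact equation in [72] is not reproduced in the text above, and the leaf counts used here are illustrative placeholders.

# Disease severity (DS) as a percentage, using the common Kranz-style
# weighted index: DS% = sum(scale_i * n_i) / (N_total * max_scale) * 100.
# The leaf counts per rating class below are illustrative only.

RATING_CLASSES = [0, 1, 2, 3, 4]  # 0 = no symptoms ... 4 = >50% of leaf area with lesions
MAX_SCALE = 4

def disease_severity_percent(counts):
    """counts[i] = number of leaves assigned rating class i."""
    if len(counts) != len(RATING_CLASSES):
        raise ValueError("one count per rating class expected")
    n_total = sum(counts)
    weighted = sum(scale * n for scale, n in zip(RATING_CLASSES, counts))
    return 100.0 * weighted / (n_total * MAX_SCALE)

# Example: 30 leaves scored at 4 dpi (illustrative numbers).
print(disease_severity_percent([12, 9, 5, 3, 1]))  # -> ~26.7% severity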
Protein Extraction, H2O2 Content and Enzyme Activity Assays
Leaf samples (0.5 g fresh weight) were frozen immediately in liquid nitrogen, ground to a fine powder with a chilled mortar and pestle, and then homogenized with 1 mL of cold 0.1 M Tris-HCl buffer, pH 7.0, containing 0.25% (v/v) Triton X-100 and 3% (w/v) polyvinylpolypyrrolidone (PVPP). The extracts were then centrifuged at 12,000 rpm for 20 min at 4 °C. The supernatant was used for determining H2O2 content, enzyme activities and total protein content. H2O2 content was measured by monitoring the reaction of undecomposed H2O2 with ammonium molybdate to generate a yellowish color, following the method of Hadwan and Abed [73]. The reaction mixture contained 200 µL of 50 mM sodium-potassium phosphate buffer, pH 7.4, and 20 µL of enzyme extract, and was incubated at 37 °C for 3 min. The reaction was terminated by adding 800 µL of 64.8 mM ammonium molybdate and then centrifuged at 12,000 rpm for 10 min. The absorbance of the obtained supernatant was recorded at 415 nm. The absorbance values were calibrated against an H2O2 standard curve and expressed as µmole per gram fresh weight (µmole g^-1 FW). CAT activity was determined by measuring the initial rate of H2O2 disappearance according to the method of Hadwan and Abed [73], with slight modifications. The reaction mixture contained 20 µL of enzyme extract and 200 µL of 100 mM H2O2 in 50 mM sodium-potassium phosphate buffer, pH 7.4. The reaction was incubated at 37 °C for 3 min and then stopped by adding 800 µL of 64.8 mM ammonium molybdate. The decrease in H2O2 was followed as a decline in absorbance at 415 nm. One unit of CAT activity was defined as 1 µmole of H2O2 consumed per min. The activity was expressed as units per gram fresh weight (U g^-1 FW). POD activity was assayed following the method of Liu et al. [74] with slight modifications. The reaction contained 34 µL of enzyme extract, 33 µL of 0.25% (v/v) guaiacol, 33 µL of 0.1 M H2O2 and 900 µL of 10 mM phosphate buffer, pH 7.0. The reaction was assayed by monitoring the increase in absorbance at 470 nm. One unit of POD activity was defined as the amount of enzyme giving a change of 0.01 in absorbance per min per gram fresh weight (U g^-1 FW). PAL activity was measured according to the method of D'Cunha et al. [75], with some modifications. The reaction was started by adding enzyme extract (70 µL) to 50 mM Tris-HCl, pH 8.9, containing 1 mM β-mercaptoethanol and 0.1 M L-phenylalanine, and the mixture was incubated at 37 °C for 1 h. The reaction was stopped by adding 160 µL of 6 N HCl and then centrifuged at 12,000 rpm for 10 min. The absorbance of the obtained supernatant was monitored at 290 nm. One unit was defined as the amount of enzyme causing an absorbance increase of 0.01. PAL activity was expressed as units per gram fresh weight (U g^-1 FW). Protein content was analyzed by the method of Bradford [76] using bovine serum albumin (BSA; Sigma Chem. Co., St. Louis, MO, USA) as a standard protein.
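As a worked illustration of the CAT unit definition above (1 U = 1 µmol H2O2 consumed per min, expressed per gram fresh weight), the Python sketch below converts a pair of A415 readings into activity via a standard-curve slope. The slope and readings are illustrative placeholders, not values from this study.

# Converting raw A415 readings into CAT activity (U g^-1 FW).
# 1 U = 1 umol H2O2 consumed per min; the standard-curve slope and all
# absorbance readings below are illustrative placeholders.

def h2o2_umol(delta_a415, slope_a415_per_umol):
    """Map an absorbance difference to umol H2O2 via the standard curve."""
    return delta_a415 / slope_a415_per_umol

def cat_activity_u_per_g_fw(a415_start, a415_end, minutes,
                            slope_a415_per_umol, fw_grams, dilution=1.0):
    consumed = h2o2_umol(a415_start - a415_end, slope_a415_per_umol)
    return consumed * dilution / (minutes * fw_grams)

# Example with made-up readings: 3-min assay, extract from 0.5 g FW of leaf.
print(cat_activity_u_per_g_fw(0.82, 0.55, 3.0,
                              slope_a415_per_umol=0.045, fw_grams=0.5))
# -> 4.0 U g^-1 FW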
Histochemical Detection of H2O2
In situ detection of H2O2 in rubber tree leaf cells was performed by staining with 3,3'-diaminobenzidine (DAB; Sigma Chem. Co., St. Louis, MO, USA) solution, according to the method of Thordal-Christensen et al. [77] with some modifications. Leaf samples were incubated in DAB solution (1 mg/mL; 10 mg DAB in 10 mL of 0.01 M phosphate-buffered saline (PBS), pH 7.2) for 12 h in the dark and then boiled in 95% (v/v) ethanol for 10 min. H2O2 was visualized as dark brown spots under a light microscope.
CAT Activity Staining after Native Polyacrylamide Gel Electrophoresis (Native-PAGE)
Crude protein extract (5 µL per lane) was electrophoresed on a 10% polyacrylamide gel without sodium dodecyl sulfate (SDS) under non-denaturing conditions [78] for 4 h at 100 V. CAT bands were visualized by activity staining, according to the procedure of Woodbury et al. [79]. Briefly, after electrophoresis, the gel was placed in the dark and soaked in 1.3 mM H2O2 for 25 min. The solution was then poured off, and a 1:1 mixture of 1% (w/v) K3Fe(CN)6 and 1% (w/v) FeCl3 was applied to the gel. The gel was incubated with gentle shaking for 4 min and then rinsed with distilled water. Areas corresponding to CAT activity were visualized as yellow to light-green bands against a dark-green background, owing to the depletion of H2O2 by the CAT enzyme.
Total Phenolic Content and Lignin Detection
Leaf samples (0.2 g fresh weight) were frozen immediately in liquid nitrogen, ground to a fine powder with a chilled mortar and pestle, and then homogenized with 1 mL of sterile distilled water. The homogenate was centrifuged at 12,000 rpm for 20 min at room temperature. The total phenolic content of the extract was determined by the method of Torres et al. [80]. In brief, 50 µL of the supernatant was mixed with 0.5 mL of 1 N Folin-Ciocalteu's phenol reagent and incubated with gentle shaking at room temperature for 5 min. The mixture was combined with 1 mL of 20% (w/v) Na2CO3 solution and allowed to stand for 10 min, and the absorbance was measured at 730 nm. Total phenolic content, determined against a gallic acid calibration curve, was expressed as mg gallic acid equivalents (GAE). Lignin deposition was determined following the method of Jensen [81]. The leaves were soaked in 2% (w/v) phloroglucinol containing 20% (v/v) HCl for 20 min. The accumulation of lignin was visualized as red spots.
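The Folin-Ciocalteu readout above maps A730 onto a gallic acid calibration curve. A minimal sketch of that conversion follows; the standard concentrations, absorbances, and the per-gram normalization (1 mL extract from 0.2 g FW) are illustrative assumptions, not the study's data.

# Total phenolics as gallic acid equivalents (GAE) from A730 readings.
# A linear standard curve is fitted to gallic acid standards; all
# absorbance values below are illustrative placeholders.
import numpy as np

std_conc_mg_ml = np.array([0.00, 0.05, 0.10, 0.20, 0.40])  # gallic acid standards
std_a730      = np.array([0.02, 0.11, 0.21, 0.40, 0.79])   # illustrative readings

slope, intercept = np.polyfit(std_conc_mg_ml, std_a730, 1)

def gae_mg_per_g_fw(sample_a730, extract_volume_ml=1.0, fw_grams=0.2):
    conc_mg_ml = (sample_a730 - intercept) / slope          # invert the curve
    return conc_mg_ml * extract_volume_ml / fw_grams        # mg GAE per g FW

print(round(gae_mg_per_g_fw(0.35), 3))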
SA and Scp Measurements
The contents of SA and Scp in the leaves of rubber tree seedlings treated with 5 mM SA were measured by high-performance liquid chromatography (HPLC; Agilent 1100, Waldbronn, Germany). Leaf samples (0.5 g fresh weight) were ground to a fine powder with a mortar and pestle and then homogenized with 1 mL of 90% methanol, according to the method of Ederli et al. [82]. The homogenate was vortexed vigorously and then centrifuged at 12,000 rpm for 10 min at 4 °C. The supernatant was adjusted with 50% (w/v) trichloroacetic acid (TCA) to a final concentration of 5% (w/v) TCA and subsequently filtered through a 0.2 µm cellulose acetate fiber membrane. The chromatographic separation was achieved on a C18 reverse-phase column (ZORBAX Eclipse XDB-C18, 250 mm × 4.6 mm; 5 µm). The compounds in the sample (20 µL) were separated with a mobile phase containing acetonitrile (ACN) and 0.1% (v/v) formic acid. The gradient condition was programmed as follows (time in min/percentage acetonitrile): 0-2/80, 8.5-10/60, 12/55, 13/40 and 15/15, with a flow rate of 1 mL min^-1 and the column temperature controlled at 40 °C. The separated compounds were identified with a fluorescence detector, using an excitation wavelength of 294 nm and an emission wavelength of 426 nm for SA, and an excitation wavelength of 337 nm and an emission wavelength of 425 nm for Scp. Each sample was analyzed by HPLC in three independent replicates.
Total RNA Isolation and cDNA Synthesis
Rubber tree samples (0.2 g of leaves, stems, hypocotyls and roots of in-vitro rubber tree plantlets, as well as leaves of rubber tree seedlings) were flash-frozen in liquid nitrogen and subsequently ground to a fine powder with a pre-chilled mortar and pestle. RNA isolation was done using the RNeasy® Plant Mini Kit (Qiagen, Valencia, CA, USA), according to the manufacturer's guidelines. Contaminating DNA was eliminated during the total RNA purification step using an on-column RNase-free DNase I digestion set (Qiagen, Valencia, CA, USA), following the RNeasy® Plant Mini Kit procedure. The RNA samples were assessed for quality and quantity from the 260/280 nm absorbance ratio, and their integrity was evaluated by visualizing the bands on a 1% agarose gel. Total RNA (2 µg) was reverse-transcribed into cDNA using SuperScript III Reverse Transcriptase (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's guidelines. The remaining RNA was eliminated from the cDNA products with RNase H (Invitrogen, Carlsbad, CA, USA). The first-strand cDNA was stored at -20 °C until use. The PCR reaction contained 7.5 µL of Emerald Amp® GT PCR Master Mix (Takara, Otsu, Shiga, Japan), 0.67 µM of each degenerate primer and 0.5 µL of first-strand cDNA (~100 ng). RT-PCR was performed with a thermal cycler (TECHNE; TC-512 model), under the following conditions: preheating at 94 °C for 1 min; 35 cycles of denaturation at 94 °C for 1 min, annealing at 60 °C for 1 min and extension at 72 °C for 1 min; and a final elongation at 72 °C for 10 min. The amplicons were analyzed by electrophoresis on a 1.5% (w/v) agarose gel, visualized under a UV transilluminator and photographed with a gel documentation system. PCR products of the expected size were gel-purified using the Gel/PCR DNA Fragments Extraction Kit (Geneaid, New Taipei City, Taiwan), ligated into the pGEM®-T Easy Vector (Promega, Madison, WI, USA) and then transformed into Escherichia coli JM109 competent cells (Promega, Madison, WI, USA) by the heat shock method. Transformants were selected on MacConkey agar plates containing 50 µg mL^-1 carbenicillin. The recombinant plasmid was purified from 3 mL of bacterial culture using the E.Z.N.A.® Plasmid Mini Kit I (OMEGA, Bio-Tek, Norcross, GA, USA) and subjected to sequencing by the Macrogen DNA sequencing service (Seoul, Korea).
Gene Expression Analyses of HbCAT1, HbPR1 and HbPAL by Semi-Quantitative Reverse Transcription Polymerase Chain Reaction (Semi-qRT-PCR)
Changes in the transcript levels of the HbCAT1, HbPR1 and HbPAL genes were evaluated by semi-qRT-PCR, and the expression of the H. brasiliensis mitosis protein YLS8 gene (HbMitosis), a constitutively expressed housekeeping gene, was examined as an internal control [83]. The specific primers were designed based on the obtained HbCAT1, HbPR1 and HbPAL sequences, respectively. The PCR reaction was carried out using 10 µL of Emerald Amp® GT PCR Master Mix (Takara, Otsu, Shiga, Japan), 0.25 µM of each specific primer (Table 2) and 0.5 µL of cDNA template (~100 ng). The semi-qRT-PCR conditions comprised an initial denaturation at 94 °C for 4 min, followed by 25 cycles (for the HbCAT1 and HbPR1 genes) or 35 cycles (for HbPAL) of 94 °C for 1 min, annealing at 60 °C for 1 min and extension at 72 °C for 1 min, with a final elongation at 72 °C for 10 min. The PCR products were analyzed by electrophoresis on a 1.5% agarose gel, visualized under a UV transilluminator and photographed with a gel documentation system. The band intensity on the gel was measured using the VisionWorksLS software (UVP BioSpectrum® MultiSpectral Imaging System, Cambridge, UK). Table 2. Specific primers for semi-qRT-PCR.
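Tying the quantification together: the Figure 8 legend states that band intensities were normalized to HbMitosis, expressed as fold changes over the control, and compared across time points by Tukey's HSD at p ≤ 0.05 with three replicates. A minimal Python sketch of that workflow follows, using statsmodels' pairwise Tukey test; all band intensities and fold-change values are illustrative placeholders.

# Semi-qRT-PCR band intensities -> relative fold changes -> Tukey's HSD.
# All numbers below are illustrative placeholders, not the study's data.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def fold_change(gene_band, mitosis_band, gene_ctrl, mitosis_ctrl):
    """Normalize the target band to HbMitosis, then express relative to control."""
    return (gene_band / mitosis_band) / (gene_ctrl / mitosis_ctrl)

print(round(fold_change(350, 180, 140, 175), 2))  # e.g. ~2.43-fold of control

# Three replicates per time point (illustrative HbPR1-like fold changes).
data = {
    "3h":   [1.2, 1.3, 1.1],
    "72h":  [8.4, 8.9, 8.7],
    "120h": [0.60, 0.70, 0.65],
}
values = np.concatenate([v for v in data.values()])
groups = np.concatenate([[k] * len(v) for k, v in data.items()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))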
Development and evaluation of human AP endonuclease inhibitors in melanoma and glioma cell lines
Aims: Modulation of DNA base excision repair (BER) has the potential to enhance response to chemotherapy and improve outcomes in tumours such as melanoma and glioma. APE1, a critical protein in BER that processes potentially cytotoxic abasic sites (AP sites), is a promising new target in cancer. In the current study, we aimed to develop small molecule inhibitors of APE1 for cancer therapy. Methods: An industry-standard high-throughput virtual screening strategy was adopted. The Sybyl8.0 (Tripos, St Louis, MO, USA) molecular modelling software suite was used to build inhibitor templates. Similarity searching strategies were then applied using ROCS 2.3 (OpenEye Scientific, Santa Fe, NM, USA) to extract pharmacophorically related subsets of compounds from a chemically diverse database of 2.6 million compounds. The compounds in these subsets were subjected to docking against the active site of the APE1 model, using the genetic algorithm-based programme GOLD2.7 (CCDC, Cambridge, UK). Predicted ligand poses were ranked on the basis of several scoring functions. The top virtual hits with promising pharmaceutical properties underwent detailed in vitro analyses using fluorescence-based APE1 cleavage assays and were counter-screened using endonuclease IV cleavage assays, fluorescence quenching assays and radiolabelled oligonucleotide assays. Biochemical APE1 inhibitors were then subjected to detailed cytotoxicity analyses. Results: Several specific APE1 inhibitors were isolated by this approach. The IC50 for APE1 inhibition ranged between 30 nM and 50 μM. We demonstrated that APE1 inhibitors lead to accumulation of AP sites in genomic DNA and potentiated the cytotoxicity of alkylating agents in melanoma and glioma cell lines. Conclusions: Our study provides evidence that APE1 is an emerging drug target and could have therapeutic application in patients with melanoma and glioma. Monofunctional alkylating agents are routinely used for the treatment of patients with advanced melanoma and glioma. However, the response rate to chemotherapy is modest and the overall prognosis is poor. The cytotoxicity of alkylating agents is directly related to their propensity to induce genomic DNA damage. However, the ability of cancer cells to recognize this damage and initiate DNA repair is an important mechanism of therapeutic resistance that negatively impacts upon therapeutic efficacy. Pharmacological inhibition of DNA repair, therefore, has the potential to enhance the cytotoxicity of alkylating agents and improve patient outcomes (Madhusudan and Middleton, 2005). The DNA base excision repair (BER) pathway is critically involved in the repair of bases that have been damaged by alkylating agents such as temozolomide and dacarbazine (Hoeijmakers, 2001). Although there is more than one sub-pathway of BER, in most cases base excision is initiated by a DNA glycosylase, which recognizes a damaged base and cleaves the N-glycosidic bond, leaving a potentially cytotoxic apurinic/apyrimidinic (AP) site intermediate (Hickson et al, 2000). This product is a target for the human AP endonuclease (APE1). The DNA repair domain of APE1 cleaves the phosphodiester backbone on the 5' side of the AP site, resulting in a single-strand break that is further processed by proteins of the BER pathway. AP endonuclease 1 accounts for over 95% of the total AP endonuclease activity in human cell lines (Demple et al, 1991).
In addition to its DNA repair activity, APE1 also performs functions such as redox regulation (mediated through a separate redox domain) and transcriptional regulation (Xanthoudakis et al, 1992; Okazaki et al, 1994; Bhakat et al, 2003). AP endonuclease 1 is a member of the highly conserved exonuclease III family of AP endonucleases, named after the E. coli homologue of APE1. The endonuclease IV family of AP endonucleases, the prototypical member of which is E. coli endonuclease IV (Ramotar, 1997), is structurally unrelated to APE1, despite being able to carry out the comparable AP site incision reaction (Gorman et al, 1997; Hosfield et al, 1999). Using either antisense oligonucleotide or RNA interference approaches, several groups have reported that depletion of intracellular APE1 sensitizes mammalian cells to a variety of DNA damaging agents (Chen and Olkowski, 1994; Walker et al, 1994; Silber et al, 2002). In melanoma cell lines, APE1 downregulation led to increased apoptosis, whereas APE1 overexpression conferred protection from chemotherapy- or hydrogen peroxide-induced apoptosis (Yang et al, 2005). Antisense oligonucleotide-directed APE1 depletion in SNB19, a human glioma cell line lacking O(6)-methylguanine-DNA methyltransferase, led to potentiation of MMS and temozolomide cytotoxicity (Silber et al, 2002). In patient tumours, APE1 expression may have prognostic and/or predictive significance. We have recently shown that APE1 expression has prognostic significance in ovarian, gastro-oesophageal and pancreatico-biliary cancers (Al-Attar et al, 2010). AP endonuclease 1 is also aberrantly expressed in other human tumours, and strong nuclear expression has consistently been observed in these studies (reviewed in ). In head and neck cancer, nuclear localisation of APE1 was associated with resistance to chemoradiotherapy and poor outcome (Koukourakis et al, 2001), and in cervical cancer, an inverse relationship between intrinsic radiosensitivity and levels of APE1 has been demonstrated (Herring et al, 1998). Preclinical and clinical studies suggest that APE1 is a viable anticancer drug target. We recently initiated a drug discovery programme to identify small molecule inhibitor lead compounds of APE1. Fluorescence-based high-throughput screening of a chemical library, as well as biochemical and cellular investigations, were undertaken. We reported the identification and characterisation of CRT0044876 (7-nitro-1H-indole-2-carboxylic acid), the first small molecule inhibitor of APE1 that potentiated the cytotoxicity of alkylating agents such as temozolomide. The ability of CRT0044876 to block BER has also been demonstrated independently by other investigators (Guikema et al, 2007; Koll et al, 2008). In a recent study, BER inhibition using CRT0044876 was shown to confer selectively enhanced cytotoxicity in an acidic tumour microenvironment (Seo and Kinsella, 2009). However, the ability of CRT0044876 to block BER has not been consistently demonstrated by other groups (Fishel and Kelley, 2007), implying that further work needs to be done before a genuine lead inhibitor can emerge. Here, we report on a new structure-based drug design strategy to identify APE1 inhibitors. This approach has allowed us to identify several novel APE1 inhibitors that potentiate the cytotoxicity of alkylating agents and that have potential as lead compounds for further optimisation and development.
We also present preclinical data that support APE1 modulation as a particularly promising new strategy in melanoma and glioma, where alkylating agents remain an important treatment modality.
Materials and Methods
Enzymes, oligonucleotides and chemicals
Human APE1, uracil-DNA glycosylase and E. coli endonuclease IV were obtained from New England Biolabs (Ipswich, MA, USA).
Virtual screening strategy
Virtual screening was done against the high-resolution crystal structure of APE1 (PDB accession code 1BIX). Sybyl8.0 was used to build inhibitor templates based on the previously reported APE1 inhibitor, together with three new pharmacophore templates (M1, M2 and M3) designed in silico from the structural features of the APE1 active site (see Results and Discussion). Using these templates, ROCS 2.3 (OpenEye Scientific, Santa Fe, NM, USA) (Hawkins et al, 2007) was used to extract pharmacophorically related (Tanimoto cut-off between 0.6 and 0.75) subsets of compounds from the ZINC database (http://zinc.docking.org/; 2008 version, with ca. 2.6 million drug-like compounds) (Irwin and Shoichet, 2005). The 1679 filtered ligands were docked into the APE1 active site pocket using GOLD2.7 (Hartshorn et al, 2007). Predicted ligand poses were ranked on the basis of two fitness scoring functions: GOLDScore (Jones et al, 1997) and ChemScore (Verdonk et al, 2003). A total of 100 docking runs were performed for each ligand.
Fluorescence-based AP site cleavage assay
A fluorescence-based AP site cleavage assay was performed as described previously, with slight modifications. Briefly, APE1 (50 nM) (New England Biolabs) was incubated in a buffer system consisting of 50 mM Tris-HCl, pH 8.0, 1 mM MgCl2, 50 mM NaCl and 2 mM DTT at 37 °C for 10 min. 5'-F-GCCCCCXGGGGACGTACGATATCCCGCTCC-3' and its complementary Q-labelled oligonucleotide (see above) were annealed in a buffer containing 100 mM Tris-HCl, 50 mM NaCl and 1 mM EDTA. AP-site cleavage was initiated by addition of the annealed substrate (25 nM) to the reaction mix. Fluorescence readings were taken at 5 min intervals during a 30 min incubation at 37 °C using an Envision Multilabel reader from PerkinElmer (Cambridge, UK) with 495 nm excitation and 512 nm emission filters. If the DNA is cleaved at the abasic site at position 7 from the 5' end by APE1, the 6-mer fluorescein-containing product dissociates from its complement by thermal melting. As a result, the quenching effect of the 3' dabcyl (which absorbs fluorescein fluorescence when in close proximity) is lost, and APE1 activity is measured indirectly as an increase in fluorescence signal (Figure 2B). Similar assays were developed for monitoring the AP endonuclease activity of endonuclease IV, using a buffering system containing 10 mM HEPES-KOH, pH 7.4, and 100 mM KCl, and 60 ng of endonuclease IV (Trevigen, Abingdon, UK). The final DMSO concentration was maintained at 1.2% in all assays. APE1 wild type and the D148E polymorph were quantified using a NanoDrop 2000c spectrophotometer (Thermo Scientific, Wilmington, NC, USA), and 50 nM of protein was used in all assays. The D148E polymorph was generated as described previously (Hadi et al, 2000). Experiments were repeated at least five times.
Screening of virtual APE1 inhibitor candidates
APE1 was incubated with the candidate inhibitors at 100 µM (final DMSO concentration, 1.2%) before initiating the AP site cleavage assay described in the previous section. Those candidates that showed >90% inhibition of APE1 activity were subjected to serial dilution experiments for IC50 calculations. In addition, screening of potential inhibitors for their specificity (at 100 µM) was performed using endonuclease IV cleavage assays.
IC50 value estimations
To estimate the IC50 for APE1 inhibition, the ability of the compounds to inhibit APE1 over a range of concentrations (10 nM-100 µM) was evaluated in black 384-well plates. The reactions were set up as before and fluorescence intensity was measured every 5 min for 30 min following reaction initiation. Using the initial rate values from the assay, percent activity was calculated for each sample relative to a negative DMSO-only control. The data were fitted to a sigmoidal dose-response model using GraphPad Prism 3.0 (GraphPad Software, La Jolla, CA, USA) and IC50 values were determined using the formula: % Activity = 100/(1 + 10^(log[I] - log IC50)).
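A minimal sketch of fitting that dose-response equation with SciPy follows. The concentrations and percent-activity readings are illustrative placeholders, and the Hill slope is fixed at 1, as implied by the formula above.

# Fitting % Activity = 100 / (1 + 10^(log[I] - log IC50)) to screening data.
# Concentrations and activities below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def activity_percent(conc_molar, ic50_molar):
    return 100.0 / (1.0 + 10.0 ** (np.log10(conc_molar) - np.log10(ic50_molar)))

conc = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4])   # 10 nM to 100 uM
act  = np.array([97.0, 90.0, 62.0, 18.0, 3.0])    # % activity vs DMSO control

(ic50_fit,), _ = curve_fit(activity_percent, conc, act, p0=[1e-6])
print(f"IC50 ~ {ic50_fit * 1e6:.2f} uM")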
Fluorescence quenching assay
To investigate the possibility that compounds might possess intrinsic quenching activity, fluorescence quenching assays were performed. Briefly, the 5'-F-oligonucleotide (see above) and 3'-CGGGGGCCCCCTGCATGCTATAGGGCGAGG-5' were annealed as described previously. The double-stranded oligonucleotide (5 nM) was incubated with 100 µM of potential APE1 inhibitor in a buffer consisting of 50 mM Tris-HCl, pH 8.0, 1 mM MgCl2, 50 mM NaCl and 2 mM DTT at 37 °C for 30 min. Fluorescence intensity was measured every 5 min. Any hits that showed a decrease of more than 50% in fluorescence intensity were considered quenchers and discarded from further analyses.
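The two triage rules stated so far (keep >90% APE1 inhibition; discard compounds that quench the substrate-only signal by more than 50%) can be expressed in a few lines. A minimal sketch follows; compound names and all fluorescence readings are illustrative placeholders.

# Triage of primary-screen hits per the rules above: keep compounds with
# >90% APE1 inhibition, discard those whose intrinsic quenching drops the
# substrate-only signal by >50%. All readings are illustrative placeholders.

def percent_inhibition(signal_compound, signal_dmso):
    return 100.0 * (1.0 - signal_compound / signal_dmso)

def is_quencher(fluorescence_with_compound, fluorescence_alone):
    return fluorescence_with_compound < 0.5 * fluorescence_alone

hits = {
    # name: (APE1-assay signal, quench-assay signal); DMSO control = 1000, probe alone = 800
    "cmpd_A": (60.0, 760.0),
    "cmpd_B": (45.0, 310.0),  # strong apparent inhibition, but quenches the probe
}
for name, (ape1_signal, quench_signal) in hits.items():
    inhib = percent_inhibition(ape1_signal, 1000.0)
    keep = inhib > 90.0 and not is_quencher(quench_signal, 800.0)
    print(name, f"{inhib:.0f}% inhibition", "keep" if keep else "discard")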
Radiolabelled oligonucleotide-based APE1 cleavage assay
This basic assay was performed as described previously. Briefly, a radiolabelled uracil-containing oligonucleotide (5'-CTCGCAAGUGGGTACCGA-3') was annealed to a complementary oligonucleotide. To generate AP sites, the annealed DNA substrate was pretreated with uracil-DNA glycosylase, and the resulting AP site was chemically reduced by the addition of sodium borohydride. The AP site cleavage reaction consisted of 50 nM APE1 and 0.75 ng of reduced AP-site double-stranded oligonucleotide incubated at 37 °C for 1 h. The sample was resolved on a 15% TBE Criterion Pre-Cast Gel (Bio-Rad, Hemel Hempstead, Herts, UK) and the radiolabelled substrate and reaction products were visualised using a PhosphorImager (Molecular Dynamics, Buckinghamshire, UK).
Whole-cell extract AP-site cleavage assay
HeLa cells, maintained in DMEM with 10% fetal bovine serum and 1% penicillin-streptomycin, were harvested, washed with 1× PBS, and the pellet was resuspended in cold 222 mM KCl plus protease inhibitors (0.5 mM PMSF and 1 µg ml^-1 each of leupeptin and pepstatin A), incubated on ice for 30 min, and clarified by centrifugation at 12,000 × g for 15 min at 4 °C (Simeonov et al, 2009). The supernatant whole-cell extract (WCE) was retained, the protein concentration was determined using the Bio-Rad Bradford reagent, and aliquots were stored at -80 °C. AP endonuclease activity assays using 18-mer radiolabelled oligonucleotide substrates (see above) were performed. In brief, all potential APE1 inhibitors were incubated at 100 µM with 30 ng of HeLa WCE at room temperature for 15 min in incision buffer consisting of 50 mM Tris-HCl, pH 8, 1 mM MgCl2, 50 mM NaCl and 2 mM DTT. After incubation, 0.5 pmol of 32P-radiolabelled THF-containing 18-mer double-stranded DNA substrate was added. Incision reactions were then carried out immediately at 37 °C for 5 min in a final volume of 10 µl, after which the reaction was terminated by the addition of an equal volume of stop buffer (0.05% bromophenol blue and xylene cyanol, 20 mM EDTA, 95% formamide), followed by denaturation of the samples at 95 °C for 10 min. The radiolabelled substrate and product were separated on a standard denaturing polyacrylamide gel and quantified by phosphorimager analysis.
Kinetics analysis
APE1 protein (80 ng) was incubated at room temperature for 30 min without or with APE1 inhibitor (5, 10 and 20 µM). Fluorescent DNA substrate was then added to a final concentration of 100, 200 or 500 nM (in 40 µl final volume), and enzyme activity was allowed to proceed for 30 min at 37 °C. The percentage APE1 cleavage activity was plotted. Lineweaver-Burk plots and kinetic parameters (kcat and KM) were determined from eight independent data points.
Quantification of AP sites in genomic DNA
AP sites were quantified as described previously. Genomic DNA was extracted from a pellet of 1 × 10^6 cells using the guanidine/detergent lysis method. Briefly, 0.5 ml of DNAzol (Helena Biosciences, Gateshead, UK) was added to the pellet and the cell lysate was gently passed several times through a pipette. The resultant viscous solution was centrifuged at 10,000 × g for 10 min at 25 °C. DNA was precipitated from the supernatant using 0.25 ml of 100% ethanol by gently inverting the tube 5-8 times at room temperature for 1-3 min. The DNA was washed twice in 0.4 ml of 75% ethanol, then solubilized in TE buffer (pH 8.0), and the final concentration was adjusted to 100 µg ml^-1 (using a GeneQuant pro spectrophotometer). AP-site determinations were performed on the genomic DNA using an aldehyde reactive probe assay kit, following the protocol provided by the manufacturer (BioVision Research Products, Mountain View, CA, USA). Untreated cells were compared with cells exposed to MMS alone, APE1 inhibitor alone or a combination of MMS and APE1 inhibitor. DNA was extracted at 90 min and AP sites were quantified as described previously. All experiments were performed in triplicate.
Aqueous non-radioactive cell proliferation assay (MTS assay)
To evaluate intrinsic cytotoxicity and the potentiation of cytotoxic agents by APE1 inhibitors, MTS assays were performed per the manufacturer's recommendation (Promega, Southampton, UK). Briefly, 2000 cells per well (in 200 µl of medium) were seeded into a 96-well plate. For HUVEC cells, 5 µl of 2% type 2 gelatine (Sigma) was added to the wells and the plates were preincubated for 20 min at 37 °C before seeding of cells. For intrinsic cytotoxicity assessments, cells were incubated with varying concentrations of APE1 inhibitors and the MTS assay was performed on day 5. For potentiation experiments, cells were preincubated with a relatively non-toxic concentration of APE1 inhibitor for 24 h and then exposed to MMS, temozolomide or doxorubicin. The non-radioactive cell proliferation assay was conducted as described previously.
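For the potentiation experiments just described, survival of cells treated with inhibitor alone is later taken as 100%, and the inhibitor-plus-cytotoxic combination is expressed relative to it (see the figure legends below). A minimal sketch of that calculation; the MTS absorbances and the blank value are illustrative placeholders.

# Relative-survival calculation for the potentiation experiments:
# survival with inhibitor alone = 100%; combination expressed relative to it.
# Absorbances below are illustrative placeholders.

def relative_survival(abs_combo, abs_inhibitor_alone, abs_blank=0.05):
    return 100.0 * (abs_combo - abs_blank) / (abs_inhibitor_alone - abs_blank)

# Example: APE1 inhibitor alone vs inhibitor + temozolomide.
print(f"{relative_survival(abs_combo=0.42, abs_inhibitor_alone=1.10):.1f}% survival")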
Virtual screening
The virtual screening process requires the precise definition of the ligand-binding site in the target protein. The DNA repair domain active site was localised on the basis of 10 previously reported critical amino acid residues that are essential for the AP endonuclease activity of APE1 (D70, D90, E96, Y171, D210, N212, D219, D283, D308 and H309) (Rothwell and Hickson, 1996; Erzberger and Wilson, 1999; Fritz et al, 2003; Mundle et al, 2004). The active site is a well-defined, deep V-shaped cleft, with a Mg2+ ion at its 'elbow' (Figure 1A). Our virtual screening strategy was to take a known 'first-generation' APE1 inhibitor, plus prototypical molecular scaffolds designed on the basis of the shape of the ligand-binding site, and perform a rapid structure-based similarity search of a large virtual library of drug-like molecules. 'Hits' from this search were then subjected to the more computationally costly process of docking-based evaluation. We used Sybyl8.0 to build molecular models for the previously reported APE1 inhibitor, CRT0044876 (Figure 1B), and for three prototypical scaffolds (M1, M2 and M3) (Figure 1B) that were predicted to fit well into the APE1 binding-site cleft and to interact with key residues. Template M1 features a central tetrahedral centre bearing a potential Mg2+-interacting carboxylate group plus two heteroaromatic branches whose dimensions and relative orientations are designed to fit snugly into the active-site groove. Template M2 bears the same key features, but the heteroaromatic substituents are extended to interact with more of the groove. Template M3 bears an additional heteroaromatic sidechain that can access a subsidiary cleft in one branch of the ligand-binding groove (Figure 1B). Using these templates, a shape-based similarity search with ROCS 2.3 (OpenEye Scientific) (Hawkins et al, 2007) was performed to extract pharmacophorically related compounds from the ZINC database. The conformations of these compounds were then energy minimised and subjected to docking against the active site of the APE1 model. A consensus score plot was constructed for each virtual hit by adding the GOLDScore and ChemScore (Figure 2A). The top-ranking 25% of the compounds were shortlisted from the consensus plot and subjected to detailed biochemical analyses.
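The consensus step above is, in essence, a rank-and-cut over two per-ligand docking scores. A minimal sketch follows; the ligand IDs and score values are illustrative stand-ins for real GOLD output, and in practice the two scores sit on different scales (the paper simply adds them, as reproduced here).

# Consensus ranking as described above: sum GOLDScore and ChemScore per
# docked ligand, rank, and shortlist the top-ranking 25% for biochemistry.
# The scores below are illustrative stand-ins for real GOLD output.

ligand_scores = {
    # ligand_id: (GOLDScore, ChemScore)
    "ZINC0001": (52.4, 28.1),
    "ZINC0002": (61.0, 31.7),
    "ZINC0003": (47.9, 35.2),
    "ZINC0004": (58.3, 22.6),
}

consensus = {lig: g + c for lig, (g, c) in ligand_scores.items()}
ranked = sorted(consensus, key=consensus.get, reverse=True)
top_n = max(1, len(ranked) // 4)  # top-ranking 25%
print("shortlist:", ranked[:top_n])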
Inhibitory activity of compounds against the D148E polymorphic variant of APE1
The D148E polymorphic variant of APE1 has been implicated in cancer predisposition, including melanoma (Li et al, 2006; Farkasova et al, 2008; Gu et al, 2009). In addition, the D148E polymorph may also alter ionising radiation sensitivity (Hu et al, 2001). We tested whether our isolated inhibitors would have differential activity against the variant compared with the wild-type protein. Figure 2. (A) Consensus score plot constructed by plotting GOLDScore (x-axis) against ChemScore (y-axis) for the 1679 virtual APE1 inhibitor candidates. The top-ranking 25% of the compounds were shortlisted from the consensus plot and subjected to detailed biochemical analyses. (B) Primary screening: the fluorescence-based APE1 cleavage assay. If the DNA is cleaved at the abasic site at position 7 from the 5' end by APE1, the 6-mer fluorescein-containing product dissociates from its complement by thermal melting. As a result, the quenching effect of the 3' dabcyl (which absorbs fluorescein fluorescence when in close proximity) is lost, and APE1 activity is measured indirectly as an increase in fluorescence signal. For the detailed protocol, see Materials and Methods. (C) APE1 inhibition by CRT0044876 (control = no APE1 in reaction). (D) APE1 inhibition by compound 4 (IC50 = 11 µM). Although the AP-site cleavage activity of the D148E variant was similar to that of the wild type (Figure 4C), consistent with a previous report (Hadi et al, 2000), Figure 4D demonstrates that, for compound 4, the IC50 for APE1 inhibition was significantly reduced, to 50.5% of the wild-type value, for the D148E protein (5.56 µM vs 11 µM). The preferential inhibitory activity of compound 4 towards the D148E protein was also confirmed in radiolabelled oligonucleotide assays (data not shown). We were not able to demonstrate preferential inhibitory activity for the other compounds in either fluorescence or radiolabelled assays.
Kinetics analyses
To evaluate the potential mechanism of action of the APE1 inhibitor, kinetic analysis was performed (Figure 5). As compound 4 had the strongest inhibitory activity (>90% inhibition) in whole-cell extracts, we selected this compound for kinetic analysis. Lineweaver-Burk plots and kinetic parameters were determined from eight independent data points. KM and kcat decreased at each inhibitor concentration (compared with no inhibitor), and kcat/KM decreased with increasing inhibitor concentration. The data are consistent with uncompetitive inhibition. However, we cannot exclude the possibility that compound 4 operates as a weak uncompetitive inhibitor (meaning that it binds the protein-DNA substrate complex), as we observed a reproducibly lower KM in the presence of the compound, though this is unlikely.
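The uncompetitive interpretation can be checked numerically: in that model the inhibitor binds the enzyme-substrate (here protein-DNA) complex, so apparent Vmax and KM fall together, v = Vmax·S / (KM + S·(1 + I/Ki)). A minimal sketch of a global fit of that model follows; all rate values and parameters are illustrative placeholders, not the paper's data.

# Global fit of the uncompetitive-inhibition model to (substrate, inhibitor)
# rate data. All values below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def uncompetitive(X, vmax, km, ki):
    s, i = X
    return vmax * s / (km + s * (1.0 + i / ki))

S = np.array([100, 200, 500] * 3, dtype=float)    # substrate, nM
I = np.repeat([5.0, 10.0, 20.0], 3) * 1000.0      # inhibitor, nM (5-20 uM)
v = uncompetitive((S, I), vmax=100.0, km=250.0, ki=8000.0)
v_noisy = v * (1 + 0.03 * np.random.default_rng(0).standard_normal(v.size))

popt, _ = curve_fit(uncompetitive, (S, I), v_noisy, p0=[80.0, 200.0, 5000.0])
print("Vmax, KM, Ki estimates:", np.round(popt, 1))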
Genomic AP site accumulation in cells
In order to test the biological activity of the APE1 inhibitors under physiological conditions, analyses were undertaken in melanoma cell lines (MeWo, SKMel and MM418) and glioma cell lines (U89MG and SNB-19). Figure 5. Kinetic analysis of compound 4. Lineweaver-Burk plots and kinetic parameters were determined from eight independent data points (note: error bars are in some cases too small to see). The APE1 inhibitor was tested at three dose levels (5, 10 and 20 µM) and the oligonucleotide substrate was evaluated at three concentrations (100, 200 and 500 nM). The reaction was performed as described in Methods. KM and kcat decreased at each inhibitor concentration (compared with no inhibitor) and kcat/KM decreased with increasing inhibitor concentration. The data are consistent with uncompetitive inhibition. We initially tested whether these cell lines expressed APE1 protein; robust APE1 expression was seen by western blot analysis (Figures 6A and 7A). To confirm that the isolated inhibitors block APE1 function in living cells, the aldehyde reactive probe assay, which allows quantification of genomic AP sites, was utilised in this study. Figure 6B shows that, compared with untreated cells, glioma cells exposed to compound 4 accumulated AP sites, confirming target inhibition in vivo. Figures 6 and 7. (C) Potentiation of the cytotoxicity of MMS by compound 4 (10 µM) in the U89MG cell line. (D) Potentiation of temozolomide by compound 4 (10 µM) in the U89MG cell line. The survival fraction of cells exposed to the inhibitor alone was taken as 100%; the percentage survival of cells exposed to both inhibitor and temozolomide was plotted relative to cells exposed to the inhibitor alone. Compound 4 was relatively non-toxic to HUVEC cells. As AP sites are obligatory intermediates during the repair of MMS-induced base damage, accumulation of AP sites was also demonstrated in cells exposed to MMS alone. Moreover, AP-site accumulation in cells exposed to a combination of APE1 inhibitor and MMS was greater than in cells exposed to either agent alone. Similar accumulation of AP sites was also demonstrated in melanoma cells.
Cytotoxicity analysis in melanoma, glioma and HUVEC endothelial cell lines
Potentiation of cytotoxicity was also demonstrated with other APE1 inhibitors that showed moderate to strong WCE AP-site cleavage inhibition (compounds 2, 5 and 6), but not with mild WCE AP-site cleavage inhibition (compound 1). Compound 7, a non-specific inhibitor (i.e., one that blocked both APE1 and endonuclease IV), did not show any potentiation of cytotoxicity, and compound 3, a specific APE1 inhibitor with no activity in the WCE assay, also did not show any potentiation of cytotoxicity (data not shown). To exclude non-specific activity and potentiation, we performed toxicity studies using doxorubicin. Compound 4 did not potentiate the cytotoxicity of doxorubicin in melanoma (SK-Mel30) or glioma (U89MG) cell lines (Figures 8A and B). Similar results were seen for MeWo, MM418 and SNB-19 cells. In order to investigate whether the APE1 inhibitor was toxic to non-cancer cells, we conducted toxicity analyses in HUVEC endothelial cells. Figure 7D shows that compound 4 was relatively non-toxic to HUVECs compared with melanoma (SK-Mel30) and glioma (U89MG) cell lines. Similar results were seen for MeWo, MM418 and SNB-19 cells.
DISCUSSION
The overall prognosis of advanced melanoma and glioma remains poor, and strategies to improve tumour response to chemotherapy remain a high priority. Blocking DNA repair may enhance cell kill in cancer and improve outcomes (Madhusudan and Middleton, 2005). APE1, a critical protein in BER, is involved in the pathogenesis of glioma and melanoma. Elevated AP endonuclease activity is frequently seen in human glioma tumours (Bobola et al, 2001). Moreover, in preclinical studies, antisense oligonucleotide-directed APE1 depletion in SNB19, a human glioma cell line lacking O(6)-methylguanine-DNA methyltransferase, led to potentiation of MMS and temozolomide cytotoxicity, implying that pharmacological modulation of APE1 is a promising strategy in glioma (Silber et al, 2002). A recent study demonstrated that microphthalmia-associated transcription factor (MiTF), a key transcription factor for melanocyte lineage survival, regulates APE1 expression; MiTF-positive melanoma cell lines accumulated high levels of APE1 (Liu et al, 2009). In a separate study, downregulation of APE1 using antisense constructs promoted apoptosis in melanoma cell lines (Yang et al, 2005). Interestingly, the APE1 genetic polymorphism D148E may also alter melanoma predisposition (Li et al, 2006). These studies therefore suggest that APE1 is also a novel target in melanoma. In this investigation, we have focussed on the development of novel APE1 small molecule inhibitors and have provided the first evidence that blocking APE1 is a promising strategy in melanoma and glioma cells. Our previous study provided the first evidence that small molecule inhibition of APE1 is a viable anticancer strategy. In order to develop novel drug-like chemotypes, we recently adopted a virtual screening approach.
The architecture of the active site of APE1 in the absence and presence of bound AP-DNA indicates that there is little or no remodelling of the active site upon substrate binding, a feature that is suitable for a virtual screen (Gorman et al, 1997). We have exploited the structural features of APE1 to develop an enhanced virtual screening strategy and identified several novel small molecule inhibitors for further drug development. Three new pharmacophore templates were designed in silico (M1, M2 and M3) and a total of 1679 virtual hits with similarities to the templates were identified (CRT template = 359, M1 template = 373, M2 template = 459 and M3 template = 488). Detailed biochemical screening showed that the majority of the compounds conformed to the M3 template, which bears an additional heteroaromatic sidechain that can access a subsidiary cleft in one branch of the ligand-binding groove (Figure 1B). Although the structural details of M3-template binding to the APE1 active site are unknown, cocrystallization trials may provide structural insight to guide a rational drug-design strategy. In this study, we also provide evidence for the first time that certain APE1 inhibitors may be more effective in blocking the endonuclease activity of the D148E polymorph (a common polymorph associated with cancer predisposition) than that of the wild type. The inability of six of the seven compounds examined to inhibit the activity of endonuclease IV provides presumptive evidence that the compounds indeed act by interacting with APE1 rather than by obscuring the abasic site on the DNA substrate. Moreover, the kinetics analysis has provided insight into the mechanism of action of the inhibitor. We have shown that compound 4 decreased KM and kcat (compared with no inhibitor) and decreased kcat/KM, implying uncompetitive inhibition. Future cocrystallization experiments in the presence of DNA are likely to provide further information regarding the exact mechanism of action of this compound. To assess the potency and specificity of our compounds, we screened their ability to block AP-site cleavage activity using WCE. This is a good system to screen for compounds that may bind non-specifically to other cellular proteins. Compound 4 exhibited more than 90% inhibition in the WCE assays, implying strong potency and specificity. Although compound 3 blocked APE1-directed AP-site cleavage activity in the purified APE1-based assay, it had no effect in the WCE assay. This implies that the compound has 'off-target', non-specific protein-binding effects and suggests that it is unlikely to be a good development candidate. In order to provide further preclinical evidence that blocking the repair domain of APE1 is a potential treatment strategy, we conducted studies in glioma and melanoma cell lines. We confirmed APE1 expression in these cancer cell lines. We then confirmed the accumulation of AP sites in vivo in cells exposed to inhibitor, providing direct evidence of target inhibition in vivo. Intrinsic cytotoxicity for several of the inhibitors was demonstrated in glioma and melanoma cell lines, a finding consistent with the observation that APE1 downregulation in melanoma cell lines promotes apoptosis (Yang et al, 2005), although non-specific toxicity at higher doses of the compound cannot be excluded in our study. Interestingly, the inhibitors were relatively non-toxic to HUVEC cells, implying selectivity for cancer cells.
In a recent study, BER inhibition using CRT0044876 was shown to confer selectively enhanced cytotoxicity in an acidic tumour microenvironment (Seo and Kinsella, 2009), suggesting a further novel opportunity to target tumours. We then showed potentiation of MMS and temozolomide cytotoxicity in melanoma and glioma cell lines. We did not observe potentiation of doxorubicin toxicity in these cell lines, implying that the APE1 inhibitor potentiates chemotherapeutic agents that induce base damage repaired through BER. Moreover, potentiation of cytotoxicity was not demonstrated in HUVEC cells, again implying selectivity for cancer cells. These studies indicate that APE1 inhibitors, either alone or in combination with chemotherapy, may be a promising strategy in cancer. Following our initial report, other investigators have identified various APE1 inhibitors for potential pharmaceutical application (Seiple et al, 2008; Simeonov et al, 2009; Zawahir et al, 2009). In conclusion, these studies and our two reports (including this one) confirm the validity of APE1 as an emerging anticancer drug target.
Driving and Fostering Strategic University and Industry Collaboration: Perspective from the Food Manufacturers

Introduction
Universities were considered as the ivory towers which were less concerned about the world outside, as they were more concerned with intellectual pursuits, decades ago (Ismail & Abas, 2010). However, the social and egalitarian awakening has caused universities to gradually move away from this traditional concept and to display a stronger commitment to the welfare of society. Universities are fostering links and facilitating technology transfer with industries. Malaysia is heading toward a new era of a knowledge-based economy. Knowledge has become the main driver of Malaysia's economy. Numerous researchers have concluded that, in order for a country to succeed in this knowledge-based economy, industry should know how to acquire, use and leverage knowledge effectively (Sohail, 2009).

There are a total of 44 food processing companies in the Batu Pahat township. UTHM, with a student population of more than 16,000 and a workforce of 2,000, is considered an anchor institution in this town. It is important to target UTHM and the food companies for this research, as they are critical players in the region for building linkages between university, industry and commerce in order to contribute to the economic development of the region. The performance of UIC can be particularly affected by geographical proximity, as this facilitates frequent direct contact between academicians and industrial partners. Dodd and Patra (2002) suggested that networking processes are particularly beneficial when the network partners are geographically close to each other. Firms are more likely to collaborate with a university near to where they are located, although the prestige of the university mitigates the effect of geographical proximity (Laursen et al., 2001; Piva & Rossi-Lamastra, 2013). The theory of anchor institutions (Agrawal and Cockburn, 2003) claims that anchor institutions, e.g. universities and large firms, have important economic impacts due to their employment, revenue-garnering and spending patterns, thus benefiting the regional economy. It is therefore significant to determine what drives the UIC success of UTHM and the anchor institutions, to benefit Batu Pahat's economic and social development.

Through UIC, the interactions between university and industry can aid knowledge transfer and stimulate the generation of new knowledge. However, there are barriers and challenges in understanding the collaboration between university and industry. In this case, understanding the key drivers can help to increase the chances of success in UIC. The key drivers can be identified based on the STEEPV analysis, which stands for Social, Technological, Economic, Environmental, Political and Values. Drivers act as the forces of change. STEEPV is used to create an overview of the current situation, which is then brought to further investigation (Pillkahn, 2008). In this concern, the purpose of this study is to determine the key drivers of change in UIC between UTHM and the food processing industry in Batu Pahat, a town located in the state of Johor, Malaysia, using a foresight approach.
Literature Review
The roles of university and industry in innovation have evolved from the traditional model to a new model. University and industry continue to produce their traditional outputs, as they have for decades. Due to the new demands for sustenance and survival in this competitive business environment, university and industry have developed real-time relationships and direct technology transfer. They are creating new opportunities such as new ventures and new knowledge (Dasher, 2004).

Reviews and works on UIC in Malaysia and other countries indicate that forging close collaborations is a very active research area attracting a lot of attention (Pecas & Henriques, 2006; Hermans & Castiaux, 2007; Esham, 2008; Kondo, 2011; Bjerregaard, 2009; Gertner et al., 2011). The Strategic Enhancement Plan for University and Community Collaboration stated that universities today are not confined only to the generation of new knowledge, but also encompass the creation of applicable and economically useful knowledge for the well-being of society, as universities are supposed to be the nation's economic and intellectual engines (Ismail & Abas, 2010). Most universities in Malaysia have taken the initiative to set up a liaison office to further strengthen their strategic alliances with industry. A liaison office in a university is similar to the marketing department of a company (Fassin, 2000).

Collaboration between university and industry can yield synergy. It begins with the university providing professional education, which results in producing the competent workforce needed by industry. Besides producing competent human capital, the university conducts industry-based research, which yields more customized study programmes. On the other hand, industry can provide funds and help to enhance and develop curricula by certifying study programmes (Junaini et al., 2008). As a whole, this synergy model yields mutual benefits for both university and industry.

The major events in the history of the food industry have conveyed an important message to everyone in the food chain: technological advancement, creativity, innovation, and research and development (R&D) have played a major role in the evolution of the food industry. From the perspective of industry, increasing market competitiveness has made it harder for the players in the industry to get a piece of the pie. For this reason, collaboration with a university is an alternative for the food industry to gain access to sources of funds, skills, expertise, talents and facilities for R&D. Vauterin (2012) reviewed that these sources are crucial in increasing competitive advantage. A better understanding and adequate management of boundary roles between university and industry will help to decrease the perceived market demand uncertainty surrounding university and industry. He further suggested that in-depth research is needed to develop a holistic understanding of how partnering is experienced by university and industry.

The drivers of change are the key factors that support the important trends and issues. The drivers are identified using the STEEPV analysis, which examines the uncertain terms of possible development and implication. The drivers are the trends, technologies and issues that act as driving forces for future changes. Table 1 tabulates the categories of the STEEPV analysis of UIC from the works of previous researchers.
Economic
The need to spread the costs and risks of innovation is one of the motives of strategic alliance (Mowery, Oxley, & Silverman, 1996). Partnership is one way of developing a market, as it increases customers' awareness (Foryszewski, 2010). Many small businesses are looking for partnership as a way to reduce costs (Holland, 2011). Partnership is also an important part of the globalization process, as the "Trans-Pacific Partnership" demonstrates (Bolton, 2011). The economic drivers identified are therefore: sharing innovation costs and risks, market development, cost reduction, and globalization.

Environmental
Pallot (2009) mentioned that geographical dispersion is one of the barriers to collaboration. Partnership is also a way of sharing facilities; for example, in the automobile industry, Renault utilized Nissan's factory in Mexico to produce its own model (Kang & Sakai, 2000). Arora and Cason (1996) and King and Lenox (2000) suggested two major motives for industry to voluntarily participate in green partnerships. Firstly, industry responds to environmentally conscious investors and consumers and develops a "green" reputation that allows it to take a competitive position in markets (Arora & Cason, 1996). Secondly, industry deals with institutional and regulatory pressures (King & Lenox, 2000). The environmental drivers identified are therefore: geographical dispersion, facility sharing, and green partnership.

Methodology
Respondents from 5 different representative companies were chosen to be interviewed. These companies comprise big, medium and small companies, allowing company size to be treated as a controlled factor in this study, because the size of a company affects UIC performance. The university chosen is UTHM, as it is located in Batu Pahat town. This is a new university that has started to provide courses in food technology, which makes it suitable for exploring collaboration between the food processing industry and the university. The data collection of this study relied on Phase 1, base data collection (secondary data), and Phase 2, main data collection (primary data). The primary data was collected through interviews and questionnaires. The interview questions are in a semi-structured format.

The drivers of change are the key factors that support the important trends and issues. These drivers were identified using the STEEPV analysis, which examines the uncertain terms of possible development and implication. The drivers are the trends, technologies and issues that act as driving forces for future changes. The second step in scenario building is the impact-uncertainty analysis. This step is used to determine the uncertainties in the future of UIC between the food processing industry and UTHM. The list of drivers was arranged according to the level of uncertainty and the level of impact on UIC in the year 2025. The drivers with the greatest impact that are also most uncertain are the key drivers. The impact-uncertainty axis is used to find the most uncertain drivers that might have the highest influence over the future of UIC in the food processing industry (a minimal sketch of this screening step is given below).

Data Analysis
This section covers the profile of respondents, the categories of driver determination, and the impact-uncertainty analysis used to derive the key drivers of UIC.
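Before turning to the results, the screening step can be made concrete with a minimal Python sketch. The driver names and the 1-5 impact/uncertainty scores below are hypothetical placeholders, not the study's survey data; only the quadrant-selection logic mirrors the method described above.

```python
# Minimal impact-uncertainty screening, as used in scenario building.
# Driver names and scores are hypothetical placeholders, not the study's data.

drivers = {
    "D1: industry R&D funding":        {"impact": 3.8, "uncertainty": 2.1},
    "D2: rate of technology adoption": {"impact": 4.6, "uncertainty": 4.4},
    "D3: government policies":         {"impact": 4.5, "uncertainty": 4.1},
    "D4: geographical dispersion":     {"impact": 2.9, "uncertainty": 1.8},
}

def key_drivers(drivers, impact_cut=4.0, uncertainty_cut=4.0):
    """Return drivers in the high-impact, high-uncertainty quadrant."""
    return [
        name
        for name, score in drivers.items()
        if score["impact"] >= impact_cut and score["uncertainty"] >= uncertainty_cut
    ]

print(key_drivers(drivers))
# ['D2: rate of technology adoption', 'D3: government policies']
```

With these placeholder scores the helper returns D2 and D3, the same two key drivers the study derives from its primary data.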
Profile of Respondents
The characteristics of the sampled firms are tabulated in Table 2.

Categories of Drivers
Table 3 tabulates the categories of drivers obtained from the interviews. Data reduction and data display were applied to derive inferences from the interview contents. Table 4 tabulates the distribution of the drivers based on the degree of impact, and Table 5 tabulates the distribution of the drivers based on the degree of uncertainty.

Impact-uncertainty Analysis
Based on the drivers derived in Table 4 and Table 5, an impact-uncertainty analysis was worked out, as shown in Figure 1. The list of key drivers coded in the figure is shown in Table 6. At this stage, the most uncertain and the most impactful drivers were identified. There are two key drivers, as shown in the figure, derived from analysis of the primary data collected. In this case, the rate of technology adoption and government policies are the key drivers which will shape the future of UIC between UTHM and the food manufacturers of Batu Pahat. These are D2 and D3, as highlighted in Table 6.

Discussion
Malairaja and Zawdie (2008) reviewed that companies that collaborate with universities are usually more productive and more competitive in their market share, quality of products and services, and cost. This is reflected by many higher education institutions and governments around the world reforming their education policies and strategies to foster and commercialize innovation, so as to reap the benefits of university and industry collaboration for economic and social gains (Harman, 2005; Etzkowitz and Leydesdorff, 2000).

Recommendation
More studies on how to reduce the gap between academicians (university) and industry are needed, as this gap is a main barrier to successful UIC. Another suggestion is to find a suitable model of collaboration which can be used by UTHM and the food processing industry in Batu Pahat. There is much to learn from bringing together key experts, experienced leaders and industry players to highlight, disseminate and share knowledge on issues that will be significant in building links and bridging the gaps between universities and private industries, so as to work towards a successful, win-win knowledge and economic benefits partnership. Sharing ideas and practices, and connecting the different sectors, leads to building a thriving model for regional university-industry collaboration (Kim-Soon et al., 2014). A model can help both parties to increase the chances of UIC success. Reyhani and Mazzarol (2012) proposed that government and other relevant institutions focus on strategy adoption at the macro level, while academics and industrial practitioners work together at the micro level in a conducive business environment for economic growth. As described earlier, Ssebuwufu et al. (2012) state that there are three theoretical models of academia-industry-government collaboration. However, there are also top-down, bottom-up and mixed-mode approaches. Thus, depending on government initiative, the governance mode of the university, incentive systems (e.g. those related to university ownership of intellectual property), market competition and other environmental factors, a relevant and efficient model will serve UIC success well.
Conclusion
This study found that the rate of technology adoption and government policies are the two key drivers of change in UIC between UTHM and the food processing industry in Batu Pahat. The future of UIC is shaped by government initiative in policy making and by the rate of technology adoption of UTHM and the food processing industry in Batu Pahat. However, depending on government initiative, the governance mode of the university, incentive systems (e.g. those related to university ownership of intellectual property), market competition and other environmental factors, a relevant and efficient model will serve UIC success well.

Figure 1: Impact-uncertainty analysis (codes of the drivers are listed in Table 6).

Acknowledgements
... its Grant Number C043 to Dr. Ng Kim-Soon. This work was also done in collaboration with MIGHT on the use of the STEEPV and impact/uncertainty analysis methodology propagated by MIGHT. The authors wish to thank the respondents who were interviewed for their willingness to share their experiences, and for their time and patience in participating in this project.

Glossary of abbreviations
CEOs - Chief Executive Officers
R&D - Research and Development
STEEPV - An analysis which stands for Social, Technological, Economic, Environmental, Political and Values
UIC - University-Industry Collaborations
UTHM - Universiti Tun Hussein Onn Malaysia (a public university located in Batu Pahat township)

Table 2. The profiled respondents from the industry are the CEOs of identified regional food industry players.
The Effect of Slurry Wet Mixing Time, Thermal Treatment, and Method of Electrode Preparation on Membrane Capacitive Deionisation Performance

Capacitive deionisation (CDI) electrodes with identical composition were prepared using three deposition methods: (1) slurry infiltration by calendering (SIC), (2) ink infiltration dropwise (IID), and (3) ink deposition by spray coating (IDSC). The SIC method clearly showed favourable establishment of an electrode with superior desalination capacity. Desalination results showed that electrodes produced from slurries mixed longer than 30 min displayed a significant reduction in the maximum salt adsorption capacity, due to the agglomeration of carbon black. The electrodes were then thermally treated at 130, 250, and 350 °C. Polyvinylidene difluoride (PVDF) decomposition was observed when the electrodes were treated at temperatures higher than 180 °C. The electrodes treated at 350 °C showed contact angles of θ = 0°. The optimised electrodes showed a salt adsorption capacity value of 24.8 mg/g (130 °C). All CDI electrodes were analysed using specific surface area by N₂ adsorption, contact angle measurements, conductivity by the four-point probe method, and salt adsorption/desorption experiments. Selected reagents and CDI electrodes were characterised using thermogravimetric analysis coupled with mass spectrometry (TGA-MS) and differential scanning calorimetry (DSC), as well as scanning electron microscopy energy dispersive X-ray spectroscopy (SEM-EDS). Electrode structure and the development of the critical balance between ion- and electron-conductive pathways were found to be a function of the electrode slurry mixing procedure, the slurry deposition technique, and the thermal treatment of the electrodes.

Introduction
The scarcity of fresh drinking water is a growing concern. Desalination and wastewater recycling are becoming increasingly important as the shortage of drinking water continues to grow. Currently, electro-dialysis (ED), multi-stage flash (MSF) desalination, reverse osmosis (RO) membrane desalination, and multi-effect distillation (MED) desalination are the dominant technologies that deliver drinkable water to millions of people around the globe. Alternative, less energy-intensive technologies exist but have not progressed to industrial application due to perceived high upfront capital installation costs. Capacitive deionisation (CDI), including membrane CDI (MCDI), is one of these technologies. The term capacitive deionisation is derived from the words capacitive and deionisation. Deionisation refers to the removal of charged molecules or atoms in the form of ions (cations and anions), and capacitive refers to the capacitor used to facilitate the removal of ions. A capacitor is a device constructed of one or multiple pairs of electrodes that are oppositely charged. An electric potential is applied over the electrodes, resulting in positively and negatively charged poles. Salts dissolved in water exist as positively and negatively charged ions. An effective CDI electrode requires:

• A large total surface area with appropriate pore size distribution and porosity. Such large surface areas are created using materials with a large specific surface area. Several commercially available activated carbons meet the requirements and are currently used as a cost-effective substance in commercial CDI systems [12,13].
• Appropriately developed multi-dimensional electron-conductive links between this large specific surface area material and an external power source.
Carbon black is typically mixed with the active material and the mixture is deposited onto a current collector. These current collectors are eventually connected to an external power source [14,15].
• Electrode stability. As long as the CDI electrode structure provides continuous, low-resistance pathways for both ions (from the entire surface area of the activated carbon to the feed water source) and electrons (from the entire surface area of the activated carbon to the external power source), the electrode is of value. The quantity and type of binders play an important role in maintaining the structural integrity of the electrode [16,17].

Several researchers over the years have prepared electrodes for CDI research and tested these electrodes for their salt adsorption performance [18,19]. Generally, the electrodes are prepared by mixing the active material (activated carbon), electrically conductive material (carbon black), and binder, polytetrafluoroethylene (PTFE), in optimised ratios, typically 80, 10, and 10%, respectively. Subsequently, such a mixture is deposited onto a current collector using a variety of tools and techniques. The "doctor's blade" technique seems most common. Lu et al. described three methods for CDI electrode preparation, namely evaporation casting, roller coating, and spray coating [20]. For all methods, a carbon mixture consisting of activated carbon, carbon black, ethanol, and PTFE solutions was prepared. During the evaporation casting method, the carbon mixture was added drop-wise onto carbon fiber paper placed on a heated plate to facilitate solvent evaporation. During the roller coating method, the carbon mixture was pressed into the carbon fiber paper with a spoon. For the spray coating method, a spray gun was filled with the carbon mixture, which was sprayed onto the carbon fiber paper. The available literature addresses neither the mixing procedures of the activated carbon, carbon black, solvent, and binder for CDI electrodes, nor thermal studies of these electrodes. In Li-ion battery research, electrode slurry mixing procedures appear to be important and have been studied in detail [21-23]. The Li-ion positive electrode may have a different active material than CDI electrodes, but they share the same binder, polyvinylidene difluoride (PVDF), to enhance mechanical integrity, and the same carbon black to enhance electron conductivity. Both CDI and Li-ion electrodes consist of porous structures to be filled with electrolyte/feed solution to facilitate conduction of ions/electrons. The authors used the slurry mixing procedures developed for the Li-ion electrode process and studied their impact on the performance of CDI electrodes. The authors of this paper believe that the electrode structure and the development of the critical balance between ion- and electron-conductive pathways are both a function of the mixing procedure and the deposition technique. In addition, the optimal thermal treatment process was established. A series of CDI electrodes were prepared with an identical composition of 80% activated carbon, 10% carbon black, and 10% PVDF on a common carbon fiber current collector (JNT45). Three different techniques were used in the production process to transfer the active material onto or into the current collector: (1) slurry infiltration by calendering (SIC); (2) ink infiltration dropwise (IID); (3) ink deposition by spray coating (IDSC).
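Since every electrode in this work targets the same 80/10/10 wt% solids composition, a small batch-scaling helper may be useful. The sketch below is illustrative only and is not taken from the paper; the 0.90 g batch size is chosen so that the output matches the AC and CB masses weighed out in the slurry preparation described below.

```python
# Dry-solids masses for the 80/10/10 wt% AC/CB/PVDF electrode composition.
# Illustrative helper only; not code from the study.

TARGET_WT_FRACTIONS = {"AC": 0.80, "CB": 0.10, "PVDF": 0.10}

def component_masses(total_dry_mass_g: float) -> dict:
    """Mass (g) of each dry component for a given total solids mass."""
    return {name: round(frac * total_dry_mass_g, 3)
            for name, frac in TARGET_WT_FRACTIONS.items()}

print(component_masses(0.90))
# {'AC': 0.72, 'CB': 0.09, 'PVDF': 0.09} -- cf. the 0.72 g AC and 0.091 g CB
# weighed out in the slurry preparation section.
```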
All CDI electrodes were analysed using specific surface area by N₂ adsorption, conductivity by the four-point probe method, wettability by determination of the contact angle, and, lastly, the maximum salt adsorption capacity (mSAC). Selected reagents and CDI electrodes were additionally characterised using thermal analysis (TGA-MS and DSC), as well as SEM-EDS. After the most effective electrode production technique was identified, the effect of the slurry mixing time on electrode performance was investigated too.

Electrode Preparation
The electrode preparation process starts by preparing a mixture of all the necessary ingredients, called a slurry or an ink, before it is deposited onto the substrate. In this paper, "slurry" refers to the composition prepared for the infiltration by calendering method, while "ink" refers to the composition used for the infiltration dropwise and spray coating methods. The slurry preparation procedure is discussed below, followed by a description of the three different slurry/ink deposition techniques. All electrodes were prepared in pairs with JNT45 (JNTG Co. LTD, Hwaseong, Korea) as the current collector.

Slurry Preparation
Slurries were prepared by adding the dry ingredients, activated carbon (YP80F, 0.72 g) and carbon black (Super-P CB, 0.091 g), into a syringe (20 mL). To this mixture, a zirconia ball (10 mm) was added, and the syringe was placed into an in-house prepared holder. The zirconia ball aids the thorough dry mixing of the activated carbon (YP80F) and carbon black (Super-P CB). The holder with the syringe was placed inside a devil shaker (Red Devil Equipment Co, Plymouth, MN, USA) and was mixed for 60 min to complete the dry mixing process. After dry mixing, a previously prepared binder solution was added (3 mL). This binder solution was prepared by adding PVDF (1.81 g) to N,N-dimethylacetamide (DMAC) (Sigma Aldrich, Modderfontein, South Africa; 23.50 g) heated to and kept at 110 °C while vigorously stirring until all the PVDF powder was dissolved. After addition of the binder solution, the syringe was placed back in the devil shaker and mixed for 30, 60, or 120 min to complete the wet mixing step.

Slurry/"Ink" Deposition
The following three modified methods were utilised to produce the electrodes in this study. All electrodes were prepared at 25 °C, unless otherwise stated.

Slurry Infiltration by Calendering
A slurry was squirted onto the substrate from the syringe. Using a metal blade, the slurry was spread to cover the entire surface of the substrate. Subsequently, the substrate with the slurry was passed through a rolling press (MSK-HRP-MR100A, MTI Corporation, Richmond, CA, USA) six times to ensure appropriate distribution of the slurry throughout the electrode. Three electrode pairs were produced applying this method, using slurries wet mixed for 30, 60, and 120 min, respectively.

Ink Infiltration Dropwise (Wet Mixing Time 60 min)
For this method, the ink was added drop-wise to the substrate placed in a glass petri dish. Two electrode pairs were produced applying this method, keeping the surface of the petri dish at 25 and 130 °C, respectively.

Ink Deposition by Spray Coating (Wet Mixing Time 60 min)
For the spray coating experiments, a SONO-TEK ExactaCoat Benchtop Coating System (Sono-Tek, Milton, NY, USA) was used. Additional solvent (15 mL) was mixed with the slurry to reach the flow properties required for this method; the resulting mixture is referred to as "ink".
The ink was sprayed onto the substrate using the following parameters: 0.86 psi, path speed 50 mm/s, area spectrum of 4 mm, and a spray head height of 48 mm. The substrate temperature was kept at 150 °C to evaporate the solvent (DMAC). The ink was magnetically stirred for 60 min and, before spray coating commenced, sonicated for 60 min. All the electrodes were dried at 130 °C overnight and cut to size (79 × 39 mm).

2.5. Electrode Characterisation
2.5.1. Scanning Electron Microscopy Energy Dispersive X-ray Spectroscopy (SEM-EDS)
All samples were sputter coated with a carbon layer, using an EMITECH K950X Turbo Evaporator (Kent, United Kingdom), to enhance the conductivity of the electrodes and thus augment image visibility. Electrode characterisation was performed using a Zeiss Auriga field emission gun scanning electron microscope (FEG-SEM, Carl Zeiss Microscopy GmbH, Oberkochen, Germany) operated at an accelerating voltage of 5 kV using an in-lens high-resolution detector.

2.5.2. Thermogravimetric Analysis Coupled with Mass Spectrometry (TGA-MS)
TGA experiments were performed on a TGA/SDTA851e (Mettler Toledo, Zürich, Switzerland) coupled to a ThermoStar (Pfeiffer, Hessen, Germany) vacuum mass spectrometer to identify m/z values of the liberated gases. Instruments were initialised and purged with nitrogen gas 24 h in advance of experimentation. Successive blank curves (no sample) were run on an empty 60 µL aluminium oxide crucible until reproducible curves were obtained. Sample sizes for all tested compounds were 2-8 mg and were weighed on a five-decimal Mettler XS205 dual range electrobalance (Mettler Toledo, Zürich, Switzerland). Samples were heated at a rate of 10 °C per minute in the temperature range 30 °C < T < 700 °C. A constant flow of nitrogen was maintained over all experiments, unless stated otherwise.

2.5.3. Differential Scanning Calorimetry (DSC)
DSC measurements were performed on a Mettler DSC822e (Mettler Toledo, Zürich, Switzerland) differential scanning calorimeter with an FRS5 detector and robotic auto-sampler. The maximum temperature range for the instrument is −50 °C < T < 500 °C. Sample sizes for all tested compounds were 2-8 mg and were weighed on a five-decimal Mettler XS205 dual range electrobalance (Mettler Toledo, Zürich, Switzerland), then sealed in standard 40 µL aluminium crucibles. The aluminium cap was perforated with a needle, creating a hole of ~1 mm in diameter, to allow any gases liberated during potential decomposition to escape without deformation of the crucible. All experiments were performed under nitrogen. The scan rate in all cases was 10 °C per minute for both heating and cooling (unless specifically stated otherwise). Isothermal segments of one minute prior to and at the final temperature between dynamic heating and cooling segments were included to allow for instrumental overshoot and settling. Curve analyses were performed using STARe SW 9.0 software (Mettler Toledo, Zürich, Switzerland, 2019).

2.5.4. Specific Surface Area by Nitrogen Adsorption
The surface area of the electrodes was determined using a Micromeritics 3Flex Surface Characterization instrument, Version 5.00 (Micromeritics, Norcross, GA, USA). All specific surface areas (SSA) of the prepared electrodes were calculated from nitrogen adsorption isotherms at 77 K using the Brunauer-Emmett-Teller (BET) method. All samples were weighed (0.05 g) and placed in separate BET test tubes for analysis.
These samples were degassed overnight at 130 °C using nitrogen gas and weighed again after degassing. See Table S1 for the BET data.

2.5.5. Electrode Conductivity
An in-house-produced four-point probe setup was used to determine the electrical conductivity of the electrode samples, presented in Figure S1. A power supply (EA-PSI 8032-10T, RS Components SA, Midrand, South Africa) was used to provide a current through the outermost electrode pair, while a DT9205A digital multimeter was used to measure the electric potential between the inner electrode pair. The applied current was increased in set intervals from 0.1 to 0.3 A, with increments of 0.05 A, to yield 5 measurements per sample. The distance between the inner electrodes was 25 mm. The electrode thickness was measured using a micrometer screw gauge (electronic digital caliper, RS Components SA, Midrand, South Africa). The current, the potential, the distance between the inner electrodes, and the sample thickness were used to calculate the electrode conductivity.

2.5.6. Contact Angle Measurements
All contact angle measurements were obtained with instruments designed and built in-house, as presented in Figure S2. For the contact angle testing, an electrode sample was placed on the base of the contact angle setup. A micropipette with its tip positioned 10 mm above the electrode was used to dispense 50 µL of a 1 g/L NaCl solution (KIMIX, Cape Town, South Africa) onto the electrode. A HUAWEI P20 cellular phone was attached to the setup, assuring the correct position of the camera. Lighting conditions were optimized to ensure a clear contrast between the droplet outline and the background. Using the "lite camera function", an image was captured within 5 s after the droplet contacted the surface of the electrode. This process was repeated five times using the same electrode at different positions to obtain a more representative measurement. The same procedure was repeated for the back side of the electrode. ImageJ software was utilised to analyse the images and to measure the contact angle.

2.6. Salt Adsorption Capacity
A description of the setup and procedures used to establish the specific salt adsorption capacity of the electrodes follows.

2.6.1. Cell Assembly Procedure
Assembly of the in-house-designed MCDI cell, Figure 1, entailed placement of a gold-plated electrical contact disk (ECD) (Goldfinger Electroplating, Cape Town, South Africa) inside the high-density polyethylene (HDPE; Maizey Plastics, South Africa; machined in-house) bottom and top plates. A graphite current collector (Mersen South Africa, machined in-house) was placed on the gold-plated ECD, followed by fixing of the silicone gasket (Cape Town Rubber, South Africa, machined in-house) on the HDPE bottom plate of the cell. Subsequently, the electrode was placed on top of the graphite with its active side facing upwards. Next, a cation exchange membrane (Fumasep E-620-PE, Fumatech, Bietigheim-Bissingen, Germany) was fixed onto the cell. The PVC/PET spacer (Fumatech, Bietigheim-Bissingen, Germany) was placed on the cation exchange membrane, ensuring that the influent and effluent holes were not blocked. The anion exchange membrane (Fumasep FAA-3-PE-30, Fumatech, Bietigheim-Bissingen, Germany) was positioned next, followed by the electrode with the active side facing down towards the spacer. The graphite current collector was then placed on the electrode and the silicone gasket was positioned cautiously, without blocking the influent and effluent holes.
The HDPE top plate was positioned onto the stack of components, ensuring that the flow paths were not obstructed. The top and bottom plates were then fastened using stainless steel bolts, washers, and nuts. A torque wrench (Wiha Werkzeuge GmbH, Schonach im Schwarzwald, Germany) was used to apply 0.2 Nm of torque to the gold-coated ECD. The influent and effluent feed tubes were connected and the conductivity meter (Portavo 904 X with Knick SE 204 4-electrode sensor, measuring range 0.05-500 mS/cm, Mecosa Pty Ltd., Randburg, South Africa) was inserted into the conductivity and temperature port of the cell. Lastly, the terminals of the power source were connected through the ECDs using Hirschmann crocodile clips (RS Components SA, Midrand, South Africa).

2.6.2. Experimental Desalination Setup
All desalination experiments were conducted in single-pass mode using a flow-by cell configuration. Refer to Table S2 for the experiments performed and their standard deviations. To transfer the feed saline water, with a concentration of 1000 ppm NaCl, from the 3-litre feed reservoir to the MCDI cell, a Watson-Marlow Sci-Q 300 Series peristaltic pump was used. The feed water was introduced into the cell through a silicone tube (4.8 mm). Utilising the Nova 1.9 software package, the Autolab PGSTAT320N was programmed to supply a constant voltage with alternating polarity (1.2 and −1.2 V) to the MCDI cell for 90 min, while the resulting current was recorded. Simultaneously, the conductivity and temperature of the discharged water were measured utilising a Knick SE 204 4-electrode sensor, with Paraly SW112 software used to collect the data. The water exiting the cell during desalination and regeneration mode was collected in a separate reservoir.

2.6.3. Determination of mSAC
The maximum salt adsorption capacity (mSAC) is a measure of the maximum amount of salt adsorbed on an electrode in mg per gram (mg/g) of electrode used in the cell. The mSAC was calculated by integrating the reduction in salt concentration multiplied by the flow rate as a function of time, multiplied by the molecular weight of NaCl and divided by the mass of both electrodes (Equation (1)):

$$\mathrm{mSAC} = \frac{M_w \int_0^{t} \varphi\,\left(C_i - C_o\right)\,dt}{m_e} \tag{1}$$

where:
Mw is the molecular weight of NaCl (58.443 g/mol)
C_i and C_o are the influent and effluent concentrations (mM), respectively
φ is the flow rate (mL/min)
t is the time during which charging occurred (min)
m_e is the mass of the active material in the electrode pair (g)

2.6.4. Pre-Conditioning and Test Procedure
Prepared electrodes were pre-treated by wetting (overnight in saline solution) before any adsorption-desorption measurements were executed. A feed solution of NaCl (1 g/L) was prepared at the start of each desalination test. Nitrogen gas was used to flush the feed solution for 15 min. The MCDI cell was assembled utilising the cell assembly procedure described in Section 2.6.1. A constant voltage of 1.2 V was utilised for the adsorption-desorption measurements using the Autolab instrument (Metrohm Autolab BV, the Netherlands). The flow rate was set to 13 mL/min, which corresponds to 9 rpm on the Watson-Marlow Sci-Q 300 peristaltic pump. All tests were conducted using a single-pass method. To accurately evaluate each electrode pair, seven adsorption-desorption cycles were conducted to pre-condition the MCDI electrodes. Each pre-conditioning cycle ran for 360 s, with 180 s adsorption (1.2 V between anode and cathode) and 180 s desorption (−1.2 V between anode and cathode).
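As a numerical companion to Equation (1), the sketch below evaluates the mSAC integral with the trapezoidal rule. The concentration trace is a synthetic placeholder, not data from this study; the flow rate of 13 mL/min matches the test procedure above, while the electrode mass of 0.9 g is assumed purely for illustration.

```python
import numpy as np

# Equation (1): mSAC = (Mw / m_e) * integral of phi * (C_i - C_o) dt.
# Units: C in mM (mmol/L), phi in mL/min, t in min, m_e in g.
# phi[mL/min] / 1000 -> L/min, so the integral comes out in mmol;
# Mw in g/mol is numerically mg/mmol, giving adsorbed mass in mg.

MW_NACL = 58.443  # g/mol

def msac(t_min, c_in_mM, c_out_mM, flow_mL_min, m_e_g):
    """Maximum salt adsorption capacity in mg NaCl per g of active material."""
    rate = (np.asarray(c_in_mM) - np.asarray(c_out_mM)) * flow_mL_min / 1000.0
    # Trapezoidal integration of the adsorption rate over time (mmol adsorbed).
    mmol = float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t_min)))
    return MW_NACL * mmol / m_e_g

# Synthetic 15 min adsorption half-cycle (placeholder values only).
t = np.linspace(0.0, 15.0, 91)                        # min
c_in = np.full_like(t, 17.1)                          # 1 g/L NaCl is ~17.1 mM feed
c_out = c_in - 6.0 * np.exp(-((t - 3.0) / 3.0) ** 2)  # transient effluent dip

print(f"mSAC = {msac(t, c_in, c_out, flow_mL_min=13.0, m_e_g=0.9):.1f} mg/g")
```

In practice the effluent concentration would be derived from the logged conductivity trace via a calibration curve before being passed to this function.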
The test to determine maximum salt adsorption capacity consisted of a 900 s desorption and a 900 s adsorption period, to permit the electrodes to be fully regenerated at the start of the salt adsorption cycle, as illustrated in Figure S3. The conductivity and current data for the substrate JNT45 and electrodes E4, E6 and E7 can be found in Figures S3-S6.

Scanning Electron Microscopy Energy Dispersive X-ray Spectroscopy (SEM-EDS)
SEM coupled with EDS was used to probe the surface morphology of the reagents and prepared electrodes. Figure 2 shows the SEM images of the reagents: (A) activated carbon (AC), which consists of irregularly shaped microscale structures; (B) carbon black (CB), which will provide an electric pathway between particles; (C) PVDF, the binder, which has a spherical microstructure; and lastly (D) JNT45, the carbon fiber substrate, which consists of intertwined carbon microfibers. These carbon microfibers form a porous, electrically conductive network that hosts the active materials to form an effective CDI electrode.

The activated carbon and carbon black were dry mixed for 60 min. Figure 3A depicts an SEM image in which the carbon black is spread evenly on the activated carbon. This dry mixing process has an impact on the microstructure of the electrode and, in particular, the distribution of carbon black on the activated carbon [24]. It is important to note that the carbon black appears to be completely de-agglomerated into its primary spherical particles of roughly 100 nm, dispersed over and attached to the surface of the active material. This dispersion of CB on the active material will provide a sufficient electric pathway between the AC particles. After dry mixing, a solution of PVDF and DMAC was added, and this suspension was wet mixed for 30 min. This wet mix was dried at 130 °C for 12 h, and SEM images of the dried powder, presented in Figure 3B, were obtained. Notable differences are observed between the dry- and wet-mix SEM images. It appears that the dispersion of carbon black achieved during dry mixing is to a great extent undone during wet mixing. Figure 3B reveals that the primary particles regrouped into larger CB aggregates. This re-agglomeration of the carbon black resulted in undesirable large lumps of carbon black on the activated carbon, rendering the electrode inefficient for CDI application due to the lack of a conducting and binding network.

The activated carbon depicted in Figure 3B is covered with spherical and non-spherical structures. Differentiation between PVDF and carbon black with the aid of SEM images is impossible. For this reason, EDS analysis was performed on this sample, as presented in Figure 4. Carbon, oxygen, and fluoride are detected by the EDS analysis. As a reference, EDS was performed on the pure PVDF powder, presented in Figure S4. From the figures, a significant difference in the fluoride concentration is observed. The carbon/binder mixture has a lower fluoride concentration because the activated carbon and carbon black "dilute" the PVDF concentration, which results in a smaller peak. The physical appearance of the CB and PVDF changes as a result of the wet mixing procedure, which leads to the adhesion of non-spherical structures to the activated carbon fiber and the presence of thin films of PVDF around the CB and activated carbon. The slurries and "ink" were deposited into-onto the substrate using three methods: (1) slurry infiltration by calendering (SIC), (2) ink infiltration dropwise (IID), and (3) ink deposition by spray coating (IDSC).
Figure 3C shows the SEM image of electrode E2b, which was prepared by IID with a wet mixing time of 60 min and thermal treatment at 130 °C for 16 h. This image shows the adhesion of active material to the substrate. As expected, the concentration of fluoride decreases when the slurry is deposited into-onto the substrate, as illustrated in Figure S5, since the active material is distributed into and over the entire electrode. Lastly, Figure 3D depicts electrode E4, which was prepared by slurry infiltration by calendering (SIC) with a wet mixing time of 30 min and heat treatment at 130 °C. Figure 5A,B shows the difference between calendered and non-calendered electrodes E4 and E2b, respectively. The non-calendered electrode has large cavities between the substrate and active material, while the calendered electrode is denser, with fewer cavities between the active material and substrate. Figure S6 illustrates the zoomed-out electrodes, showing the difference between calendered and non-calendered electrodes.

Maximum Salt Adsorption Capacity (mSAC)
The membrane capacitive deionisation (MCDI) performance of each electrode pair was evaluated as described in the experimental Section 2.6.2. The conductivity and electric current resulting from the change in potential during the pre-treatment and the mSAC cycles are shown in Figure 6. The dashed line (orange) indicates the conductivity of the feed solution, constant at 1766 µS/cm, and the solid lines (conductivity: blue; current: green) indicate the conductivity and current of the produced water. The salt adsorption and salt desorption half-cycles are clearly visible and show the typical conductivity changes expected during an MCDI experiment. The mSAC value was determined by integration of the adsorption peak surface area during the 900 s adsorption half-cycle. Results for the electrodes produced via slurry infiltration by calendering (SIC), ink infiltration dropwise (IID), and ink deposition by spray coating (IDSC) are listed in Table 1. The results clearly indicate that electrode E1, produced using the SIC method, showed (1) a relatively high electrical conductivity, (2) the lowest thickness, and (3) the highest desalination capacity of the three deposition methods. All electrodes were heat treated at 130 °C. The JNT45 substrate, listed in Table 1, has a maximum salt adsorption capacity of 0.42 mg/g. This proves that the substrate makes a minimal contribution to the desalination performance of the electrode. Electrode E1, which was produced by the slurry infiltration by calendering method, outperformed E2a, E2b, and E3 in terms of mSAC value, despite possessing a lower specific surface area. The literature reports a direct relationship between the surface area of an electrode and its corresponding salt adsorption capacity, indicating that an electrode with a high BET surface area is predicted to have a higher salt adsorption capacity [25,26]. Some reports, however, describe electrodes with a low BET surface area (~500 m²/g) exhibiting superior salt adsorption capacity (12.27 mg/g) in comparison to an electrode with a BET surface area of ~1600 m²/g and a corresponding mSAC value of 7.0 mg/g [27]. E3, produced via the spray coating method, exhibited the lowest mSAC, with a value of 12.2 mg/g. Lu et al. produced a spray-coated electrode with an mSAC value of 5.6 mg/g [20].
Of the three electrodes they produced, the electrode made by spray coating showed the lowest desalination performance, a finding that accords with the results of this paper, as summarised in Table 1. During the spray coating method, ink is sprayed onto the substrate as a fine mist. Due to the high substrate temperature of 150 °C, the solvent evaporates almost instantaneously, leaving the solid residue of activated carbon, carbon black, and binder immobilised on the surface of the substrate. Since these solids do not fill the pores in the substrate, the electrode thickness increases, and the electrode shows a lower conductivity, a lower surface area, and the lowest mSAC value.

Table 1 also shows the specific surface area of the electrode normalised to the weight of the active material, since 99% of the surface area of the electrode is provided by the activated carbon. According to the specific surface area measurements, electrodes E2a and E2b, both produced applying the IID method, showed the highest specific surface area normalised to AC content. Despite the higher values for the specific surface area, these electrodes did not show the highest desalination capacity, meaning that the surface area accessible to N₂ may not necessarily contribute to the desalination capacity. This may occur if a portion of the activated carbon is not electrically linked to the current collectors. The compacting action of the calendering process assisted with the formation of a denser electrode. Calendering of electrodes forces the active particles closer together, resulting in improved electrical conductivity. While improved conductivity may assist in obtaining improved salt adsorption rates, it may not necessarily improve mSAC values.

Influence of Wet Mixing Time on Electrode Performance
The slurry infiltration by calendering (SIC) method was applied to produce two additional electrodes, E4 and E5. Apart from a change in the slurry wet mixing time, the electrodes were identical to E1. The electrodes were evaluated and the results are listed in Table 2. A comparison of electrodes E1, E4, and E5 reveals a distinct reduction of mSAC as a function of the increase in wet mixing time. While the electrical conductivity and thickness of electrodes E1, E4, and E5 are very similar, the electrode with the highest mSAC (E4: 24.8 mg NaCl/g AC) also shows the highest normalised specific surface area, which is close to the specific surface area of the virgin activated carbon (YP80F) material.

Table 2. Electrodes were produced using slurry infiltration by calendering (SIC) and were thermally treated at 130 °C. Wet mixing times were applied as follows: E1: 30 min, E4: 60 min, and E5: 120 min.

Based on the SEM images presented in Figure 3, the authors suggest that the carbon black particles re-agglomerate upon increasing wet mixing time. When the solvent evaporates, lumps of carbon black are left on the surface of the activated carbon, which results in less effective coverage of the activated carbon surface area and a subsequent reduction of the mSAC value.

Influence of Thermal Treatment on Electrode Performance
After the remarkable improvement of the mSAC value for electrode E4, an attempt was made to further improve the performance of this electrode by optimising the thermal treatment procedure. Table 3 shows the results of four electrodes that were produced using the SIC method, whereby the slurry was wet mixed for 30 min. The electrodes were exposed to different temperatures prior to their routine analysis.
Electrode JNT45 is an unmodified piece of substrate that was cut to size. Comparison of the mSAC results of E4, E6, E7, and E8 shows that the optimal temperature at which to treat an electrode produced with the SIC method is 130 °C. Electrode E8, which was dried at 25 °C overnight, showed an mSAC value of 16.0 mg NaCl/g AC. A significant portion of the extended surface area of electrode E8 is likely occupied by DMAC. Both the normalised specific surface area and the mSAC value are reduced for E6 and E7, heated to 250 and 350 °C respectively, compared to E4. Thermal treatment of the electrodes above the melting point of PVDF (177 °C) could be the reason for the decline in the normalised specific surface area and the mSAC values. Heating PVDF above its melting temperature induces partial decomposition, as evident from the EDS and DSC analyses presented in Figures S7 and S9.

To accurately determine the optimal temperature for the thermal treatment of the electrodes that could improve the mSAC values, selected reagents were probed with thermogravimetric analysis coupled with mass spectrometry (TGA-MS). A pre-dried 30 min wet-mix powder was subjected to thermal analysis using TGA-MS. Results are provided in Table 4 for MS and Figure 7 for TGA. Step 1 corresponds to the evaporation of DMAC (boiling point 165 °C), with a weight loss of 2.97% and a detected m/z ratio of 87 (molar mass of DMAC, 87.12 g/mol). The onset of the DMAC weight loss is at approximately 129-131 °C. Slow solvent evaporation from the pores of the electrodes, below the solvent's boiling point, is preferred to minimise damage; this is supported by probing the electrodes using TGA-MS and DSC. Step 2 corresponds to the decomposition of PVDF (liberation of gaseous hydrogen fluoride, HF), with a weight loss of 7.62% and a detected m/z ratio of 20 (molar mass of HF, 20.01 g/mol). The liberated gases clearly correspond to evaporated DMAC and liberated HF. Further evidence that step 2 corresponds to liberated HF is a comparison of the TGA of pure PVDF, shown in Figure S8, with the TGA of the 30 min wet mixture shown in Figure 7: in both figures, the HF mass loss is initiated at approximately 400 °C.

The highest mSAC value achieved by electrodes reported in the literature, as summarised in Table 5, is 14.4 mg/g, while the values reported in this paper exceed 20 mg NaCl/g AC, with a maximum of 24.8 mg NaCl/g AC. Slurry infiltration by calendering therefore appears to be the best option for synthesising CDI electrodes from slurries comprising activated carbon, carbon black, PVDF, and DMAC on the carbon substrate JNT45 [18,20,28].

Contact Angle
The contact angle (θ) is a measure of the hydrophilicity or wettability of an electrode: θ = 0°-5° is considered perfect wetting; θ = 6°-90° is considered highly wettable; θ = 91°-150° is considered low wettability; and θ = 151° or larger is considered perfectly non-wettable [29]. The contact angles were established as described in the experimental Section 2.5.6 and the results are listed in Table 3. A hydrophilic electrode will allow the salty feed water solution to enter the electrode, providing the desired access of ions to the extended surface area of the electrode [30,31]. Electrode JNT45 (unmodified substrate) shows the highest contact angle, at 116°, and is classified as having low wettability. With such low wettability, access of ions in solution may be blocked by trapped gas bubbles.
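The wettability bands quoted above map directly onto a small classification helper. The snippet below is only an illustrative encoding of the cited scheme [29], not code used in the study; the sample angles are values reported in this section.

```python
def wettability_class(theta_deg: float) -> str:
    """Classify a contact angle (degrees) using the bands cited in the text [29]."""
    if theta_deg < 0 or theta_deg > 180:
        raise ValueError("contact angle must lie in [0, 180] degrees")
    if theta_deg <= 5:
        return "perfect wetting"
    if theta_deg <= 90:
        return "highly wettable"
    if theta_deg <= 150:
        return "low wettability"
    return "perfectly non-wettable"

# Angles reported in this section: E7 (0), E6 (23), E4 (47), JNT45 (116).
for theta in (0, 23, 47, 116):
    print(theta, "->", wettability_class(theta))
```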
The front side of the electrodes is the side to which the slurry is applied. It is evident from Table 3 that the application of slurry improves the hydrophilicity. It is also clear that an increase in the treatment temperature reduced the contact angle, even to the extent that the contact angle became θ = 0° on both the front and back sides when electrode E7 was exposed to a temperature of 350 °C. The authors of this paper postulate that heat treatment of electrodes with PVDF as the binder increases the hydrophilicity. PVDF melts at 177 °C, allowing it to cover a greater portion of the active materials. The decomposition of PVDF then proceeds by liberating hydrogen fluoride gas, as discussed previously with reference to Table 4 and Figure 7, which explains why only a minute amount of fluoride is still present on the surface of electrode E7 (Figure S7) compared to the electrodes treated at 130 °C (Figure 4). In accordance with research published by Nguyen [32], hydrocarbons with double bonds between the carbons of the newly formed products remain. As seen in Figure 8, the electrode treated at 350 °C has perfect wettability (θ = 0°), as the saline water droplet completely soaked into the surface of the CDI electrode, while electrodes E4 and E6 have contact angles of θ = 47° and θ = 23°, respectively. Lu et al. obtained contact angles of θ = 0°, θ = 60°, and θ = 127° for electrodes synthesised by the evaporation casting method, the roller coating method, and the spray coating method, respectively. The substrate JNT45 (θ = 116°) has a lower contact angle, and is thus more wettable, than the electrode prepared by spray coating, which indicates that the spray coating method is not well suited to the synthesis of carbon-based CDI electrodes [20].

Application and Future Perspective of CDI Electrodes
For CDI technology to compete with well-established water purification technologies like reverse osmosis, the combination of capital expenses (CAPEX) and operational expenses (OPEX) of the selected technology needs to be competitive. Improved production processes, yielding optimised CDI electrodes, will reduce both CAPEX and OPEX, allowing wider application of CDI as a water treatment technology. Furthermore, the optimised electrode production method presented in the current work can be combined with recent advances in CDI technology, such as the addition of metal additives as demonstrated by Byles et al. [33]. They obtained mSAC values as high as 44.4 mg/g with manganese (Mn) as additive for a KCl solution, and an mSAC value of 40.8 mg/g was obtained for a NaCl solution with FeFe(CN)₆ as additive.

Conclusions
Electrodes were prepared using three different electrode preparation techniques. The measured salt adsorption capacity showed that the method of electrode preparation had a significant impact on performance, even though the composition of the dried electrodes was identical. Slurry infiltration by calendering (SIC), ink infiltration dropwise (IID), and ink deposition by spray coating (IDSC) produced electrodes with mSAC values of 17.0 mg NaCl/g AC, 14.1 mg NaCl/g AC, and 12.2 mg NaCl/g AC, respectively. After the identification of the most effective electrode production technique, the electrode's mSAC value was improved by optimising the ink processing procedure. The wet mixing time of the ink was specifically investigated.
It was found that wet mixing for 30 min resulted in an electrode with an mSAC of 24.8 mg NaCl/g AC, which is significantly higher than other reported mSAC values for purely capacitive electrodes. Prolonged wet mixing led to a decrease in the salt adsorption capacity, an observation that has not been described in any prior publication. Finally, the mSAC values for electrodes exposed to different temperatures were studied, together with the correlation between mSAC and the surface area established by N₂ adsorption. Nitrogen adsorption analysis enables measurement of the loss in available surface area during electrode production. A thermal treatment at 130 °C, sufficient to remove the DMAC solvent, yielded an electrode in which 80% of the original surface area of the electrode material is still accessible. Drying temperatures of 250 and 350 °C resulted in a loss of accessible surface area that correlated with a reduction of the maximum salt adsorption capacity (mSAC). An increase in temperature was accompanied by a decrease in the contact angle. The impact of the reduced contact angle on adsorption rate will be studied in a subsequent paper. To conclude, this paper highlights important aspects of CDI electrode production not described before in the available literature. The authors are convinced that implementation of their findings will assist in the creation of CDI systems with a reduced overall cost of water treatment, which will reduce the cost and improve the availability of water worldwide.
Nanoemulsion Gel Formulation Optimization for Burn Wounds: Analysis of Rheological and Sensory Properties

Background: Despite the variety of treatment methods for wounds and scars after burns, there are still few effective preparations that can be used in a non-invasive therapy. Recent years have seen significant development of nanomedicine and nanotechnology in the treatment of infection in burn wounds. Proposal: The aim of this work was to develop a formula of a nanoemulsion gel for skin regeneration after burns, and to compare its rheological and sensory properties, as well as its effectiveness in post-burn skin regeneration, with preparations available on the market. Methods: In the first stage of the studies, the composition and preparation parameters of a sea buckthorn oil-based O/W (oil-in-water) nanoemulsion containing hyaluronic acid and aloe vera gel as the active ingredients were optimized. Then, the nanoemulsion was added to a gel matrix composed of carbomer (1%) and water, which resulted in a nanoemulgel. The physicochemical parameters of the obtained samples were characterized by means of the dynamic light scattering method and scanning electron microscopy. Rheological, sensory and skin-condition analyses were conducted for selected market products and the developed nanoemulgel. Results: The nanoemulsion gel (d = 211 ± 1.4 nm, polydispersity index (PDI) = 0.205 ± 0.01) was characterized by a semi-solid, non-sticky consistency, a porous structure, low viscosity, good "primary" and "secondary" skin feelings and pleasant sensory properties. It improves the condition of burned skin by creating a protective layer on the skin and increasing the hydration level. Conclusion: Because the obtained nanoemulsion gel combines the advantages of an emulsion and a gel formulation, it can be a promising alternative to the medical cosmetics available on the market as a form of formulation used in skin care after burns.

Introduction
Burn wound healing is a multi-stage biological process that involves a number of tissue interactions to restore skin integrity and homeostasis after an injury. This process can be divided into four overlapping stages: homeostasis, inflammation, proliferation, and reconstitution of tissues [1-3]. If the basement membrane is not damaged, the epidermis regenerates slowly and normally without leaving a scar. One of the factors contributing to scar formation is the depth of the wound. If the basement membrane of the epidermis is damaged, together with partial damage to the dermis, non-hypertrophic scars can form during healing. If the dermis is deeply damaged, it is necessary to remove dead tissue and use surgical methods, cell therapies and/or tissue engineering procedures. In such cases, healing is a long-term process, which leaves hypertrophic scars. The treatment of a burn wound is usually carried out in three ways: treatment without a dressing, treatment with a dressing, and early surgical removal of dead tissue with reconstruction [4]. Despite the variety of treatment methods for wounds and scars after burns, there are still few effective preparations that can be used in a non-invasive therapy. These restrictions are related to cytotoxicity and adverse reactions caused by the medicinal substances present in preparations. As a result, more recent studies are devoted to alternative propositions of plant origin, which could potentially represent safer therapeutic agents with minimal side effects.
These plant-derived preparations could be used in support of skin hydration and overall protection as medical cosmetics, the so-called cosmeceuticals, which support skin regeneration [5]. Out of many preparations that are applied topically to the skin after burns, the most common forms available on the cosmetics and pharmaceutical market are classic emulsions (including lotions and balsams), gels, ointments and aerosols. Emulsions Emulsions are widely used as cosmetic and pharmaceutical formulations because of their excellent capacity to solubilize lipophilic and hydrophilic active ingredients and their application acceptability. Emulsions are heterogeneous systems comprising at least two immiscible liquid phases, where one liquid is dispersed as globules (dispersed phase) in the other liquid (continuous phase) [6]. The emulsions available on the market are preparations that both support burn treatment and provide skin care after the burn. The market offers a wide variety of emulsions (o/w and w/o types) at relatively low prices and in various consistencies: liquid (lotions, which include less than 20-30% fatty substances), semi-liquid (balsams, which include more than 20-30% of the oil phase), or semi-solid (creams, in which the share of the internal phase is between 30-70%) [7]. Emulsions are easy to apply and spread well on the skin. They are also characterized by a high rate of absorption and do not leave as oily a film on the skin as ointments do. What makes them a favorable option is the phase system, whose structure resembles the water-lipid coat of the epidermis, effectively transporting active components into the skin. The oil phase components included in an emulsion form a protective layer, which shields the epidermis against external factors [7,8]. Gels Gels are semisolid formulations, usually made up of two components, a solvent and a gelator phase. The three-dimensional network of gelator molecules physically entraps the solvent phase, resulting in a viscoelastic gel. Gels have a broad range of applications in food, cosmetics, biotechnology, pharmaceutical technology, etc. Typically, gels can be distinguished according to the nature of the liquid phase. For example, organogels (oleogels) contain an organic solvent and hydrogels contain water. Recent studies have reported other types of gels for dermal drug application, such as proniosomal gels, emulgels, bigels and aerogels [9]. Gels are characterized by a semi-solid consistency which allows a longer contact time of the preparation with a burn. Due to the content of moisturizing and cooling (e.g., menthol) substances which soothe pain and irritation, they may act as first-line formulations for the treatment of skin burns [10,11]. Hydrogels are a type of gel in which the dispersed phase is water and the gelling agents are polymers. Unlike gels, hydrogels are not absorbed from the surface of the burn, but leave a semi-transparent layer on the skin to protect it against external factors and microbes. They also have absorption properties and, as a result, can absorb exudate from wounds and accelerate healing by regenerating the damaged tissues [9]. Ointments Ointments are semi-solid preparations and, out of all formulations, they have the highest percentage of fatty substances in their composition. On the one hand, they inhibit the multiplication and penetration of microbes into the wound by creating an occlusive barrier on the skin. On the other, they have a heavy consistency and a long absorption time.
If the substrate in which the active substances are dispersed has been properly selected, ointments have good spreadability and adhesion. Once they are applied, they leave an oily film that creates a barrier to protect and moisturize the skin after injury [12,13]. Aerosols Aerosols exist in the form of liquid or solid particles with diameters of 0.01-100 µm, stably suspended in air. The aerosol form of a preparation allows easy and quick application to a large area of damaged skin and eliminates pain and the unpleasant burning sensation, which is particularly important in the case of sunburns. The structure of the dosing packaging prevents the cosmetic mass from being infected with microbes. The foam that is created after the application is not sticky. Aerosols also do not leave a greasy deposit on the skin, as is the case with ointment preparations. Although aerosols allow quick healing of burns, their prolonged use on damaged skin is inadvisable because of the propellants present in the recipe, as they can cause irritation and sensitization [14,15]. Nanostructured Systems In recent years, there has been significant development of nanomedicine and nanotechnology in the treatment of infection in burn wounds. Nanostructured systems are effective media for the active and medicinal substances used to treat bacterial infections and to stimulate wound healing. The nanoformulations used in the treatment of skin burns can be divided into two classes: organic nanostructures (polymer nanoparticles, nanoemulsions, nanogels, liposomes and lipid nanoparticles) and inorganic nanostructures (nanoparticles of gold, copper or silver) [16]. Nanoemulsions are colloidal systems composed of an aqueous phase and an oil phase, stabilized with surfactants and sometimes with the addition of auxiliary surfactants. They are characterized by the very small size of the dispersed phase droplets (20-500 nm), a large interphase surface and low interphase and surface tension [17]. Due to these properties, a nanoemulsion increases the absorption rate and eliminates variability in absorption, helps in solubilizing lipophilic drugs, and hence increases their bioavailability. However, the low viscosity of nanoemulsions makes their direct topical application inconvenient and modifies the skin permeation profile. These problems can be solved by incorporating the nanodispersion into a polymer solution to form an in situ nanoemulgel. In a nanoemulgel, hydrophobic drugs are loaded in the oil cores and the droplets of the emulsion are entrapped in the cross-linked hydrogel network [18,19]. The incorporation of nanoemulsions into hydrogel systems, commonly referred to as nanoemulgels (NEGs), has improved the topical efficacy of various, otherwise poorly permeable, therapeutics [20]. It also solves the problem of the hydrophilicity of hydrogels, which limits their applications for the delivery of hydrophobic drugs [18]. The combination of a nanoemulsion and a hydrogel provides sustained/controlled drug delivery and easy administration, thus enhancing patient compliance. A nanoemulgel, as a prolonged release system, makes it possible to keep the substance concentration within the therapeutic range for longer, while also reducing the dose of the active component. The internal phase (nanoemulsion) contains an active substance that is released into the skin by the dispersing phase.
This phase is further cross-linked and, as a result, the particles of the penetrating substance are captured and slowly released by the cross-linked structure. This prolongs the skin exposure to the medicine [19]. Moreover, the combination of both of these forms improves the rheological and sensory properties of the nanoemulsion and facilitates its application to the skin. The addition of rheological modifiers makes it possible to prolong the contact time of the preparation with the skin, and to increase the skin hydration level by forming a hydrophilic film on the skin surface and reducing transepidermal water loss. This is a very important aspect of skin regeneration. A great advantage of a nanoemulgel is the increased content of hydrophilic ingredients, which distinguishes it from creams and ointments. Usually, this physicochemical form of a drug delivery system also does not show stability problems such as phase inversion (emulsions) or rancidity (ointments). In addition, a nanoemulgel formula protects active ingredients from degradation due to external factors such as light or temperature [16,[21][22][23]. The widespread use of a given physicochemical form of a preparation is closely related to its effectiveness, market availability and price, as well as a number of characteristics which result from the rheological and sensory properties of a product. The most important ones include simple application, consistency, spreadability, absorption rate and the sensation on the skin after application. The aim of this work was to develop the formula of a nanoemulgel for skin regeneration after burns and to compare its rheological and sensory properties, as well as the effectiveness of post-burn skin regeneration, with other physicochemical forms of preparations (emulsions, ointments, gels) available on the market. Characterization of the Obtained Nanoemulsions In the first stage of the study, a number of parameters were investigated with regard to their impact on the stability of the basic emulsions. These parameters were: emulsifier concentration (4% w/w or 6% w/w), oil phase concentration (3% w/w, 5% w/w, or 10% w/w), stirrer speed (300 rpm or 500 rpm), emulsification time during pre-emulsification (10 min or 15 min), and ultrasonication time (60 s or 120 s). The oil phase of the obtained oil-in-water type (O/W) nanoemulsions was sea buckthorn oil, while decyl glucoside acted as the emulsifier. The quantitative and qualitative composition of the basic nanoemulsions and the process parameters are shown in Table 1, which lists, for each formulation, the mixing speed (300 or 500 rpm), the pre-emulsification time (10 or 15 min), the ultrasonic homogenization time (60 or 120 s), and the observed stability, graded as: creaming or phase separation (−), stable after 7 days (+/−), or stable after 14 days (+). As can be seen from the data in Table 1, a stable basic nanoemulsion was obtained by increasing the concentration of the emulsifier to 6%, reducing the concentration of the oil phase to 3% and extending the ultrasonication time to 120 s. According to the literature [24][25][26], in most cases, an extended ultrasonication time, an increased concentration of the emulsifier and a reduced concentration of the oil phase have a positive effect on obtaining stable formulations.
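For orientation, the screening described above spans a small full-factorial design space; the following minimal Python sketch enumerates it. The parameter levels are taken from the text above, while the study itself reports results for a subset of selected combinations in Table 1, so the enumeration is purely illustrative:

```python
from itertools import product

# Parameter levels screened during pre-emulsification and ultrasonication,
# as listed in the text above.
emulsifier_conc = [4, 6]       # % w/w decyl glucoside
oil_conc = [3, 5, 10]          # % w/w sea buckthorn oil
stirrer_speed = [300, 500]     # rpm
premix_time = [10, 15]         # min
sonication_time = [60, 120]    # s

grid = list(product(emulsifier_conc, oil_conc, stirrer_speed,
                    premix_time, sonication_time))
print(f"full factorial: {len(grid)} combinations")  # 2*3*2*2*2 = 48

# The stable formulation identified above corresponds to this combination:
stable = (6, 3, 500, 10, 120)
assert stable in grid
```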
However, it should be noted that the process parameters have to be optimized for each formulation and selected according to its particular qualitative and quantitative composition. It should also be noted that if the concentration of a surfactant is too high, it may result in a lower diffusion rate of the surfactant and may cause the coalescence of emulsion droplets [27,28]. Additionally, increasing the sonication time may result in a monomodal size distribution, but the energy input should be kept at the optimum level. Researchers explain this phenomenon by the fact that an increase in the energy density causes greater droplet deformation and the disruption of the interfacial layer of the stabilizing emulsifier [25,[29][30][31]. Moisturizers are one of the most important classes of cosmetic products, because they prevent xerosis, delay premature ageing and help in dermatological therapies of a wide variety of skin disorders [32]. Accordingly, the next stage was an attempt to add the following moisturizing substances to the basic nanoemulsion: hyaluronic acid, allantoin, aloe vera gel and D-panthenol. The recipe of the nanoemulsion was also enriched with a preservative (sodium benzoate) and an antioxidant (vitamin E). In the final stage of the recipe optimization, the nanoemulsion was thickened with an aqueous carbomer solution to obtain the desired consistency, and the pH was adjusted to 6 with a citric acid solution. In the case of each formulation, the emulsification parameters established in the previous stage were used: the pre-emulsification time was 10 min, the pre-emulsion stirring speed was 500 rpm, and the ultrasonic homogenization time was 120 s. The qualitative and quantitative composition of the emulsions enriched with other ingredients is shown in Table 2. The data in Table 2 show that stable nanoemulsions were obtained by adding hyaluronic acid (N14-C) or aloe vera gel (N14-E) to the aqueous phase, and by combining these moisturizing ingredients (N14-G). The addition of the moisturizing agents, antioxidant and preservative did not influence the emulsified particle size, polydispersity index (PDI) or stability over time (Table 3). Nanoemulgel Characteristics The desired nanoemulgel consistency was obtained by adding a carbomer to the recipe at a concentration of 1% (Table 2). As a result of the study, a stable nanoemulgel containing sea buckthorn berry oil, hyaluronic acid, and aloe vera gel as the active ingredients was obtained. It was characterized by pH = 6.0. The slightly acidic environment of a formulation applied to a skin burn is advisable, because it reduces the skin susceptibility to microbes and, thus, accelerates the regeneration of the damaged tissue [33]. The addition of a carbomer to the N14-G formulation changed the nanoemulsion from a liquid to a semi-solid consistency (NG-Carb-1.0). The scanning electron microscope (SEM) images of the nanoemulgel (Figure 1) show interconnected pores with a random size distribution. This porous structure may provide sufficient space for high drug loading and movement of the drug throughout, and may enhance the drug release rate [18].
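The droplet sizes and PDI values discussed above come from dynamic light scattering (DLS, described in the Methods), which infers a hydrodynamic diameter from the measured translational diffusion coefficient via the Stokes-Einstein relation. A minimal sketch of that conversion follows; the diffusion coefficient used is a hypothetical value, not a measurement from the study:

```python
import math

def stokes_einstein_diameter(D_m2_s, temp_k=298.15, viscosity_pa_s=0.00089):
    """Hydrodynamic diameter (m) from the translational diffusion coefficient,
    d = kT / (3 * pi * eta * D); eta defaults to water at 25 degrees C."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * temp_k / (3 * math.pi * viscosity_pa_s * D_m2_s)

# Hypothetical diffusion coefficient of the dispersed droplets:
D = 2.3e-12  # m^2/s
d_nm = stokes_einstein_diameter(D) * 1e9
print(f"hydrodynamic diameter ~ {d_nm:.0f} nm")  # ~213 nm for this D
```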
Rheological Properties of the Preparations Supporting Burn Healing Skin care after burns is often a long and complicated treatment process, which depends on the condition of the epidermis basement membrane and is affected by several aspects. One of them is the application of multi-component preparations with a wide spectrum of activity. In order to be effective, the skin-applied preparations which support the healing process after burns should have a semi-solid consistency and plastic properties at human body temperature to allow easy spreading and adequate skin adhesion. The longer the preparation remains on the injured skin, the more the burned area is protected against external factors and the better the preparation itself can form a protective layer. These characteristics can be established by determining the apparent viscosity and the yield stress of the sample. Products with an exceedingly high yield stress show a heavy consistency and difficult spreadability on the skin. These may cause irritation and lower the regularity and frequency of product use, which is necessary for burn wound healing, and, as a result, decrease the therapeutic and/or care effect. Such preparations are also less efficient, which is important for consumers in the case of long treatment [34][35][36]. In practice, an inadequate apparent viscosity of the preparation means a shorter contact time between the preparation and the skin, more difficult spreading, and, in certain cases, no occlusive layer will be created as a result [34,35,37,38]. By taking the physicochemical form as a criterion, the studied market preparations which support the regeneration of burned skin (Table 4) were divided into three groups: emulsions, ointments and gels.
Due to the fact that the obtained nanoemulgel combines the characteristics of an emulsion and a gel, the flow curves for NG-Carb-1.0 are included in both of these groups for comparative purposes. The results of the flow curve analyses (Figures 2-4) show that the tested preparations were characterized by a complex rheological behavior. They belonged to the group of non-Newtonian liquids with a clear yield stress. The application of cosmetic and medicinal preparations on the skin is accompanied by physical operations that are closely related to a specific shear rate: draining under gravity for lotions on hands (shear rates between 0.01-10 s−1), cream spooning and pouring (shear rates between 10-100 s−1), and cream rubbing (shear rates around 1000 s−1) [35,[39][40][41]. The numerical values of the apparent viscosity of the tested preparations at shear rates of 1, 10, 100 and 500 s−1, at temperatures of 25 °C and 32 °C, together with the yield stress and dissipation energy, are shown in Table 5. In the course of the study, the parameters of the Herschel-Bulkley model (1) were determined (R² > 0.990). This equation is used to describe the flow curves of non-Newtonian liquids that are shear-thinned with a yield stress: τ = τy + k·γ^n (1), where τ is the shear stress (Pa), τy the yield stress (Pa), k the consistency parameter (Pa·s^n), γ the shear rate (s−1), and n the flow behavior index (dimensionless). The test results are presented in Table 6. A decrease in apparent viscosity together with an increase in shear rate was observed for all tested preparations. The different apparent viscosities of the tested preparations at given shear rates determine their different skin application and sensory properties, and, as a result, healing effect.
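To make the use of Equation (1) concrete, the sketch below fits the Herschel-Bulkley model to a flow curve with SciPy's curve_fit. The data points are hypothetical and only illustrate how parameters of the kind reported in Table 6 can be extracted from measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau_y, k, n):
    """Shear stress (Pa) as a function of shear rate (1/s)."""
    return tau_y + k * gamma_dot**n

# Hypothetical flow-curve data: shear rate (1/s) vs. shear stress (Pa)
gamma_dot = np.array([1, 5, 10, 50, 100, 250, 500], dtype=float)
tau = np.array([23.1, 26.8, 29.4, 38.9, 45.2, 58.7, 71.3])

popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau,
                    p0=[20.0, 1.0, 0.5], bounds=(0, np.inf))
tau_y, k, n = popt
print(f"tau_y = {tau_y:.2f} Pa, k = {k:.2f} Pa*s^n, n = {n:.2f}")

# n < 1 indicates shear-thinning behavior, consistent with the
# non-Newtonian flow with a yield stress described in the text.
```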
In the literature [34][35][36], it was shown that the product flow properties, established by rheological measurements, may be correlated with the empirical subjective assessment of skin feeling. "Primary" and "secondary" skin feelings are distinguished for products applied topically to the skin. The primary skin feeling refers to the sensation that occurs when the preparation is taken out of its container, applied and spread on the skin. This feeling is closely related to the yield stress and the apparent viscosity in the lower ranges of the shear rate. The secondary feeling refers to the moment when the applied layer of the care/medicinal product covers the skin evenly. The secondary feeling is the absorption capacity perceptible on the skin and how the product feels on the skin after the application. The absorption capacity perceptible on the skin increases as viscosity decreases. This is related to the apparent viscosity determined, in this case, for the upper range of shear rate (γ = 500 s−1), and allows an assessment of the final degree of spreading of the test sample on the skin. The study method proposed by Brummer [34] was used to determine the primary skin feeling caused by the studied preparations supporting skin healing after burns. The correlation of the assessments by sensory testing panels with the values measured for the flow onset and maximum viscosity gave the initial "window of measured values", shown as rectangles in Figure 5. The window (rectangle) boundaries are determined on the basis of the preparation of a given physicochemical form with the lowest sensory assessment. The values measured with this method for the shear stress and viscosity at flow onset provide the upper and lower limits for the respective measured variable. The limits include the values measured for the preparations assessed as good. Products falling within these rheology boundaries were assessed as good when they were first spread on the skin, whereas products with a less acceptable skin feeling fell outside of these parameters [34][35][36]. In the literature, the window boundaries were determined only for o/w emulsions [34,36] and for petroleum jelly samples with Fischer-Tropsch wax [36]. In our study, we defined them for the following types of systems: ointments, gels and gel nanoemulsions. In the case of the nanoemulsion, the results of our previous studies [42] were used. These boundaries are, respectively: ointments 120-320 Pa, 120-230 Pa·s; emulsions 120-505 Pa, 120-530 Pa·s; gels 20-55 Pa, 40-55 Pa·s; nanoemulsions 3.5-23 Pa, 2.3-26 Pa·s. The window boundaries for the ointments were within the window boundaries for the emulsions. Our study showed that the boundaries of good sensory assessments depended on the physicochemical form and consistency of the tested preparations, which is in line with the results published by Beeker et al. [36]. The boundaries are highest for emulsions and ointments, and lowest for nanoemulsions; the viscosity values are about 4.6 times lower than for emulsions and ointments, while for gels the factor is about 1.5. The data in Figure 5 show that, in the case of emulsions, the products G and F are within the boundary window for good primary skin feeling; for ointments: H; for gels: D and M. The obtained nanoemulsion was also within the boundary window for nanoemulsion products.
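Applying the window criterion reduces to a simple interval check on the two measured variables. A minimal sketch using the boundary values quoted above follows; the example products classified here are hypothetical:

```python
# Window boundaries for good primary skin feeling, per physicochemical form:
# (shear stress at flow onset in Pa, viscosity at flow onset in Pa*s),
# as reported in the text above.
WINDOWS = {
    "ointment":     ((120, 320), (120, 230)),
    "emulsion":     ((120, 505), (120, 530)),
    "gel":          ((20, 55),   (40, 55)),
    "nanoemulsion": ((3.5, 23),  (2.3, 26)),
}

def good_primary_feel(form, onset_stress_pa, onset_viscosity_pas):
    """True if the product falls inside the rectangle for its form."""
    (s_lo, s_hi), (v_lo, v_hi) = WINDOWS[form]
    return s_lo <= onset_stress_pa <= s_hi and v_lo <= onset_viscosity_pas <= v_hi

# Hypothetical measured values for two gel-type products:
print(good_primary_feel("gel", 45.0, 50.0))  # True: inside the window
print(good_primary_feel("gel", 80.0, 50.0))  # False: yield stress too high
```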
The viscosity value which characterizes the secondary skin feeling is determined at a very high shear rate, at which the apparent viscosity of the preparation no longer changes. The products with the lowest viscosity, such as the nanoemulsion and gels D and M, gave the best secondary feeling on the skin. Sensory Assessment Profile of Selected Preparations The obtained gel nanoemulsion combines the advantages of emulsions and gels. Therefore, products of both of these physicochemical forms, i.e., an emulsion (preparation G) and a gel (preparation M), were chosen for a detailed sensory analysis and an assessment of the influence of the preparations on burned skin condition. The chosen preparations (G and M), as well as the nanoemulgel, also had the best primary feeling on the skin. The preparation M and the nanoemulgel also had the best secondary skin feeling. Eleven characteristics of the preparations were evaluated in the sensory assessment. The assessed parameters were color, fragrance, consistency, uniformity, spreadability on the skin, adhesion, smoothing, pillow effect, stickiness, oiliness and absorption. The sensory assessment profiles obtained are shown in Figure 6. A large variation in the assessments of the individual preparations was seen in the following assessed characteristics: consistency (difference of 1.75 points), spreadability (difference of 2.5 points), absorption (difference of 1.5 points) and oiliness (difference of 1.5 points). According to the assessors, the preparations differed least among themselves (difference of 0.25 points) in uniformity, adhesion, smoothing and stickiness. The obtained results of the sensory analysis of the preparations are consistent with their rheological properties. The obtained nanoemulgel has the lowest values of the yield stress and apparent viscosity, and, out of the assessed preparations, it received the best evaluations in: color (4.5), consistency (3.75), spreadability (4.5), smoothing (4.5), pillow effect (4.0), stickiness (4.25), oiliness (4.5) and absorption (5). It received equally high values for uniformity (4.75) and adhesion (4.25). The lowest evaluated nanoemulgel property was fragrance (3.5), which was probably due to the absence of fragrance substances in the composition.
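The attribute values in a profile like this are panel means, and the quoted "differences of points" are the spreads of those means across products. A small sketch with made-up panel scores for one attribute illustrates the arithmetic:

```python
import numpy as np

# Hypothetical 5-point spreadability scores from a 10-person panel.
scores = {
    "nanoemulgel": [5, 4, 5, 4, 5, 4, 5, 4, 5, 4],
    "gel M":       [4, 3, 4, 3, 3, 4, 3, 4, 3, 4],
    "cream G":     [2, 2, 3, 2, 2, 1, 2, 3, 2, 1],
}

means = {name: float(np.mean(s)) for name, s in scores.items()}
spread = max(means.values()) - min(means.values())
print(means)                            # per-product panel averages
print(f"spread = {spread:.2f} points")  # the 'difference of points' quoted above
```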
According to the assessors, the nanoemulgel as a skin burn preparation was characterized by a light consistency, was quickly absorbed, and did not leave a feeling of oiliness or stickiness after application. The gel form allows good spreadability and skin adhesion of the product. According to the assessors, the preparation G, which has the highest yield stress and viscosity, had the worst spreadability. It also had a heavy consistency and medium oiliness. As a result of the combination of these properties, the preparation showed medium absorption, as it left a medium-oily film on the skin after application. Nevertheless, the assessors noted its good adhesion, smoothing and low stickiness. The preparation M obtained results intermediate between the cream G and the nanoemulgel. In comparison with the preparation G, it had better absorption, while its consistency and oiliness were heavier than those of the nanoemulgel. Even though the yield stress for the preparation M was lower than for the preparation G, the assessors noted that it was difficult to spread, which might be the result of its thin consistency. At the significance level of p < 0.1, statistically significant differences were found only for the spreadability property. The variance analysis results are shown in Table 7. The cream G was the preparation with the lowest average, and the gel nanoemulsion was the preparation with the highest average; the differences between these two preparations are significant. However, no such relationships were found between the preparation M and the nanoemulgel, or between the preparation M and the preparation G. Effect of the Gel Nanoemulsion and Market Preparations on the Skin after Burns Skin wound healing is a systematic process, traditionally including four overlapping classic phases: hemostasis, inflammation, proliferation and maturation. Several factors influence skin healing after burn injuries, e.g., the cause, the degree and size of the burn, the patient's general condition and the types of graft or materials used for covering burn wounds [43]. In our study, we focused on the effect of the preparations on the skin after first-degree thermal burns and sunburns. Thermal burns result from tissue exposure to an external heat source [44]. Sunburns, on the other hand, differ significantly from thermal burns, as they result from ultraviolet radiation rather than heat. Although infrared radiation gives sunlight its warmth, it is not the heat of the sun that burns the skin. Over the course of the conducted studies, the obtained nanoemulgel NG-Carb-1.0 and the market preparations (cream G and gel M) were examined for their impact on the skin burns of four assessors. Assessors 1 and 2 had first-degree burns caused by the contact of the skin with a hot iron and an electric heater, respectively. In the case of assessors 3 and 4, the effectiveness of the above-mentioned preparations was examined for first-degree sunburns. The examined preparations were applied to the burn area twice a day for 7 days. Tables 8 and 9 present the obtained results of the effect of the gel nanoemulsion and the two market preparations on skin affected by thermal burns and sunburns.
Taking into account the examination results for thermally burned skin presented in Table 8, it can be concluded that improved hydration, elasticity and smoothness of the skin were observed for each of the examined preparations for assessors 1 and 2 in comparison with the control sample. Similar observations were made after the examination of skin with first-degree sunburns. On the basis of the data in Table 9, it can be concluded that all the analyzed preparations had a positive impact on the condition of the skin after sunburn, as also shown in Figure 7. Discussion of the Rheological Results The significant differences in the rheological properties (apparent viscosity, yield stress) of the analyzed samples depended on their physicochemical formulation, and on the concentration of bodying substances, emollients and emulsifiers in the formulation (Table 4). These differences are also the result of the presence of various modifiers of rheological properties in their formulations, as these show different mechanisms of thickening or gelling. Solid lipids, such as waxes, fatty alcohols or butters, create crystal networks, which increase emulsion rigidity [45,46]. In the case of hydrocolloids (e.g., xanthan gum, carbomers), a hydration reaction as well as polymer chain swelling occur, and a network structure is formed [47][48][49]. All the analyzed ointment products had a semi-solid, heavy and sticky consistency, as their recipes included lanolin and petroleum jelly, non-polar emollients with a thick and very sticky form. Out of the examined physicochemical forms, ointments were characterized by some of the highest values of yield stress and apparent viscosity (Figure 3, Table 5). Moreover, the high consistency of hydrophobic substances with high melting temperatures in their composition results in poorer spreadability [50,51]. The market of products supporting burn healing includes emulsions with various consistencies and types (o/w and w/o). The tested lotions and balsams were semi-liquid, while the creams were semi-solid. Out of the examined creams and all the other preparations, the cream G (a w/o-type emulsion) showed the highest yield stress and apparent viscosity, as it included beeswax and sodium stearate, which regulate the consistency and viscosity of the product [52]. Beeswax also served as an emulsifier in this preparation. The preparation E included a mixture of cetearyl alcohol and sodium lauryl sulfate (emulsifying wax), which thickened the product to improve its application properties, and, as a result, it had the lowest apparent viscosity (in the examined range of shear rate) and yield stress (52.7 Pa) out of all the analyzed creams. The lotion K had an almost two-times higher yield stress (83.2 Pa) than the lotion L (43.18 Pa). This is probably due to the use of a thickening complex of sodium acrylate, sodium acryloyldimethyl taurate/copolymer/isohexadecane/polysorbate 80 and shea butter. The consistency of this lotion was heavier, and the preparation had lower spreadability on the skin. The lotion L contains dimethicone, which lowers the viscosity of preparations and lowers the surface tension to improve the applicability of the product.
The values of the yield stress and viscosity of the balsams were similar to those of the analyzed lotions (K, L). The balsams A and B contained a rheological modifier (thickening complex): polyacrylamide, C13-14 isoparaffin, Laureth-7. Additionally, the balsam B contained a fatty substance (caprylic/capric triglyceride) to facilitate glide during application, thus improving its utility capabilities and skin adhesion. This resulted in a slight (approx. 14 Pa) reduction of its yield stress in relation to the product A (Figure 2, Table 5). The gel nanoemulsion obtained by us (NG-Carb-1.0) showed the lowest yield stress (21.75 Pa) and apparent viscosity out of all the analyzed physicochemical forms. It has a low content of fatty substances (sea buckthorn berry oil, 3.0% by mass) and, therefore, it had a light and non-sticky consistency and very good spreadability on the injured skin. The tested hydrogel preparations (C, D) showed similarly low yield stress values (approx. 21-24 Pa) and apparent viscosity (as in the case of the nanoemulgel). The gels (N, M) had higher yield stress values (approx. 39-55 Pa) and viscosity in comparison with the analyzed hydrogels. A common feature of all these systems was a carbomer. In both the nanoemulgel and the gel preparations, the carbomer made it possible to obtain products with low values of the yield stress and apparent viscosity (Figure 4, Table 5). Taking into account the rheological characteristics of the studied products, it can be concluded that, just like gels, the nanoemulgel obtained by us can be a first-choice preparation in the treatment and/or healing support of the skin after burns. Correlation Relating Rheological and Sensory Measurements With regard to application, consistency, spreadability, absorption, adhesion and stickiness were considered to be the most important properties of a preparation for skin treatment after burns. In order to determine the relationship between the sensory assessment parameters and specific rheological parameters, an attempt was made to develop a correlation between the results of the rheological and sensory studies. A linear relationship was determined between the sensory assessment parameters and the rheological parameters (k, n in the Herschel-Bulkley model) [53,54]. According to the results obtained (Table 10), a strict relationship was obtained for the following pairs: spreadability and k; spreadability and n. Both the analysis of variance (ANOVA) and the correlation combining the results of the rheological and sensory examinations confirm that assessors with skin burns regard spreadability on the skin as the most important property, which may determine their willingness to regularly use a preparation for post-burn skin regeneration, and thus also its effectiveness. In our opinion, this is related to the fact that the sensations after the product application to the skin are different for burned skin in comparison with healthy skin. It may also be important that, in order to ensure its effectiveness, it is necessary to touch the damaged skin several times when applying the preparation. As a result, the application process should not increase the discomfort or cause pain [55].
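A linear correlation of this kind can be computed with scipy.stats.linregress. The paired values below are hypothetical, chosen only to illustrate relating a sensory score to one Herschel-Bulkley parameter:

```python
from scipy.stats import linregress

# Hypothetical pairs: consistency parameter k (Pa*s^n) vs. panel
# spreadability score (5-point scale) for several preparations.
k_values = [0.8, 2.5, 5.1, 9.7, 14.2]
spreadability = [4.8, 4.2, 3.6, 2.9, 2.1]

fit = linregress(k_values, spreadability)
print(f"slope = {fit.slope:.3f}, r = {fit.rvalue:.3f}, p = {fit.pvalue:.4f}")
# A strongly negative slope would mirror the finding that higher-consistency
# products are judged harder to spread on burned skin.
```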
The Active Components of the Nanoemulgel Supporting Burned Skin Regeneration Our nanoemulgel recipe includes sea buckthorn (Hippophae rhamnoides) berry oil, which is one of the few vegetable oils containing the rare palmitoleic acid (ω-7, omega-7), a component of skin lipids. Palmitoleic acid stimulates regenerative processes in the epidermis and supports wound healing. In other words, this oil has the ability to activate physiological skin functions for skin regeneration and the minimization of scarring. The regeneration of burned skin is also positively affected by the oil's content of unsaturated ω-3 and ω-6 fatty acids, carotenoids and tocopherols, which stimulate fibroblast proliferation and collagen biosynthesis and induce tissue reparation and angiogenesis [56][57][58]. Out of the analyzed market preparations, this ingredient was found in only one cream (preparation G, Table 4), which was intended for irritated skin after sunbathing and recommended for accelerated wound healing, the stimulation of regeneration and the restoration of the epidermis. The most important moisturizing substances found in the formulas of the analyzed market preparations (Table 4) intended for skin regeneration after burns include: dexpanthenol (present in the composition of seven products), allantoin (present in the composition of six products), aloe vera gel (present in the composition of two products), and hyaluronic acid (present in the composition of two products). Dexpanthenol is an alcohol derivative of pantothenic acid, a component of the B complex vitamins. It acts like a moisturizer, improving stratum corneum hydration, reducing transepidermal water loss and maintaining skin softness and elasticity. Activation of fibroblast proliferation, which is of relevance in wound healing, has been observed both in vitro and in vivo with dexpanthenol. Dexpanthenol has also been shown to have an anti-inflammatory effect on experimental ultraviolet-induced erythema [59]. Allantoin, a derivative of urea, stimulates cell division and epithelization, thus accelerating the regeneration of damaged skin and soothing inflammation. It also has strong moisturizing properties, and it is highly safe during therapy: it is not allergenic and it does not cause skin irritation. Thanks to allantoin, the proliferation of epithelial cells is accelerated, which leads to a quicker recovery of the skin. Allantoin makes the skin retain additional water easily, and can therefore rebuild the protective hydro-lipid coat. Pharmaceutical and cosmetic products with allantoin are also recommended as an adjunct in the treatment of chronic skin diseases with impaired keratosis or skin damage, such as atopic dermatitis, contact dermatitis, psoriasis, ichthyosis, ulcers or burns [60]. Aloe vera is an herbaceous and perennial plant that belongs to the Liliaceae family, and it is used for many medicinal purposes. Scientific studies have shown that the gel can increase the flexibility and reduce the fragility of the skin, since 99% of the gel is water. Additionally, the mucopolysaccharides, along with the amino acids and zinc present in aloe vera, can have a positive effect on skin integrity, moisture retention and erythema reduction, and they also help in preventing skin ulcers. Aloe vera is known for its anti-tumor, anti-inflammatory, skin protection, anti-diabetic, anti-bacterial, anti-viral, antiseptic and wound healing properties [61]. Gels obtained from aloe vera include, among others, polysaccharides, amino acids, plant sterols, enzymes, vitamins, minerals and sugars, as well as some 75 other components. Due to its antibacterial, anti-inflammatory and moisturizing effects, aloe vera is successfully used to treat wounds and skin burns [62].
Hyaluronic acid (hyaluronan) is a naturally occurring polysaccharide belonging to the glycosaminoglycan family. The hyaluronic acid molecule is readily soluble in water, producing a viscous liquid or gel that behaves like a lubricant. It enhances keratinocyte proliferation and migration, as well as the angiogenic response from the wound bed. Topical application of hyaluronic acid derivatives stimulates the healing of not only fresh wounds, but also ulcers and chronic wounds [63]. It was possible to add only the hyaluronic acid (N14-C), the aloe vera gel (N14-E), and the combination of both ingredients (N14-G) to the nanoemulsion recipe (Table 2). This is probably due to the fact that hyaluronic acid and aloe vera gel could additionally act as agents that thicken the aqueous phase and create a steric barrier [64]. As reported in the literature, water-soluble polysaccharides are frequently employed as texture modifiers, e.g., thickening agents or gelling agents. Thickening agents typically comprise soluble polymers displaying extended structures, and they can achieve a higher solution viscosity since they can modify the fluid flow profile. Gelling agents are capable of generating chemical or physical cross-links with their neighbors and impart solid-like properties to a nanoemulsion solution. The improved stability of nanoemulsions with texture modifiers can be attributed to the inhibition of droplet movement and the resulting retardation of gravitational separation [65]. Therefore, apart from decreasing the droplet size, gravitational separation can also be reduced by adding thickeners to increase the viscosity of the aqueous phase, or by adding weighting agents to decrease the density differences. Droplet aggregation due to flocculation and coalescence can be limited by making sure that the steric and electrostatic interactions of droplets exceed their attractive interactions (e.g., hydrophobic, van der Waals). This is often accomplished by changing the aqueous phase composition [65,66]. Preparations intended for burned skin should contain non-irritating and non-allergenic components. Therefore, decyl glucoside was used as the nanoemulsion emulsifier. Decyl glucoside is a non-ionic surfactant made from renewable, plant-derived feedstocks, and an effective alternative to polyethoxylated/sulfate-containing formulations. It belongs to the group of alkyl glucosides (APG), surfactants synthesized through the condensation of long-chain fatty alcohols and glucose, extracted from renewable vegetal sources. They are used in various leave-on and rinse-off cosmetics and are considered to have low irritancy and allergenicity. Due to its invaluable mildness, this surfactant is also a perfect choice for sensitive skin and baby products [67,68]. Effectiveness of Skin Regeneration after Burns-Analysis of Functional Parameters of the Skin Our criterion for the effectiveness of the nanoemulgel in skin regeneration after burns was the increase in skin functional parameters such as hydration, elasticity and smoothness, compared to the control test, during 7 days of use of the preparations on skin areas after thermal burns or sunburns. For comparative purposes, the products G (cream) and M (gel) were selected, due to their physicochemical form but also due to the results of the rheological and sensory examinations.
The improvement in hydration, elasticity and smoothness of the skin after a thermal burn was observed in assessor 1 and assessor 2 for each of the tested preparations in comparison with the control group. This indicates that preparations for the regeneration of burned skin require moisturizing ingredients and ingredients that affect skin hydration and elasticity. The emulsion (the preparation G) and the nanoemulgel had a greater impact on the improvement of skin elasticity in comparison with the gel preparation M. This is most likely related to the emollients contained in those preparations, among them sea buckthorn oil. The results for the skin parameters after the use of the preparation G were also influenced by the betulin present in its recipe. Betulin is known for its effective action in the treatment of burns. According to Frew et al. [69], betulin allows wounds to heal faster and reduces scar formation. On the other hand, where the improvement of skin hydration is concerned, the gel M and the nanoemulgel were more effective. The good hydration of burned skin is probably associated with the gel form of these preparations and the content of moisturizing ingredients, such as brine water, aloe vera (aloe barbadensis extract), propylene glycol, glycerin, symphytum officinale extract, panthenol and allantoin (in the case of the preparation M), and hyaluronic acid and aloe vera gel (in the case of the nanoemulgel). Moreover, the gel matrix makes it possible to prolong the contact time of the preparation with the skin, and to increase the skin hydration level by forming a hydrophilic film on the skin surface and reducing the transepidermal water loss. Altogether, this results in adequate nutrition, hydration and skin tension, as reflected in the results obtained by us. After the tested products were applied, an increase in the hydration and elasticity levels of the skin after sunburns was observed, which indicates good regeneration of the epidermis. After long exposure to sunlight, the damaged epidermis develops erythema, which increases in proportion to the overall dose; if exposure is excessive, the outer dead skin cells desquamate after a few days. Out of the analyzed products, the highest hydration level was achieved by the application of the obtained gel nanoemulsion. In the analyzed cases, the nanoemulgel prevented skin exfoliation and softened the reddened areas. The light formula and consistency, as well as the increased rate of absorption of the obtained preparation, allow the preparation to be applied to the skin several times a day and reduce discomfort during application to the damaged skin. All the results of our analysis of the rheological and sensory properties of the products with a new physicochemical form for the regeneration of burned skin indicate that a nanoemulgel, like gel preparations, can be a first-line product to treat first-degree thermal burns and sunburns. Materials Based on our previous studies [42], in order to obtain the nanoemulsion gel, Plantacere 2000UP (International Nomenclature of Cosmetic Ingredients (INCI): Decyl Glucoside, HLB = 12.8) from the BASF Company (Warsaw, Poland) was used as the emulsifier. The oil phase was composed of sea buckthorn oil (INCI: Hippophae rhamnoides Berry Oil) and tocopheryl acetate as the antioxidant (Ecospa Company, Warsaw, Poland).
The aqueous phase included deionized water and moisturizing agents such as: Aloe Barbadensis leaf juice, a 1 wt% sodium hyaluronate solution, allantoin, and D-panthenol. All of them were purchased from the Ecospa Company, Warsaw, Poland. OPTASENSE™ G-40 (INCI: Carbomer) was used as a rheology modifier and purchased from the Croda Company (Cracow, Poland). Sodium benzoate played the role of a preservative (Brenntag Company, Kędzierzyn-Koźle, Poland). The material for the comparative analysis included the obtained nanoemulgel and 14 market preparations (manufactured in the European Union (EU)) for the treatment of first- and second-degree burns, scars after burns and surgical procedures, sanitizing wounds after burns, and skin regeneration after burns. The detailed characteristics of the market preparations and their scope of use, in accordance with the manufacturers' declarations, are set out in Table 4. Nanoemulsion Gel Preparation and Characterization The preparation process of the nanoemulsion by the ultrasound method was described in our previous study [42]. At the initial stage of the process, a basic emulsion was prepared, which consisted of an emulsifier, sea buckthorn oil and water. For this purpose, part of the aqueous phase with the emulsifier and the oil phase was heated to 40-50 °C in a water bath. In turn, both phases were combined by adding the oil phase to part of the aqueous phase, and they were mixed with a PHOENIX Instrument RSM-10HS magnetic stirrer for 10 min or 15 min, at 300 rpm or 500 rpm. Then, the prepared pre-emulsion was processed with an ultrasonic homogenizer UP200Ht (Hielscher), where the emulsification time was 60 s or 120 s, the ultrasound power was 40 W (40%), and the amplitude was 89% (Table 2). The stability assessment of the prepared samples was carried out visually. In the next stage, the pre-emulsion was prepared and enriched with the following ingredients: aloe vera juice, 1% hyaluronic acid solution, allantoin, D-panthenol, as well as an antioxidant (vitamin E) and a preservative (sodium benzoate). For all the enriched nanoemulsions, the pre-emulsification time was 10 min, the pre-emulsion stirring speed was 500 rpm, and the ultrasonic homogenization time was 120 s (Table 3). The stability assessment of the prepared samples was carried out visually and by means of the dynamic light scattering (DLS) method. To prepare the nanoemulgel, the procedure described by Mao et al. was applied. First, the hydrogel matrix was prepared by soaking the carbomer in water to form the carbomer gel matrix. Then, the nanoemulsion was added to the gel matrix with slow continuous stirring. The final nanoemulgel was obtained by adjusting the pH to 6 with the addition of a citric acid aqueous solution [18]. Determination of Nanoemulsion Droplet Size The measurement of the nanoemulsion droplet size was made with a Zetasizer Nano ZS analyzer by Malvern Instruments, using dynamic light scattering (DLS) technology. The device is equipped with a laser which emits light of a wavelength equal to λ = 633 nm and measures scattered light at an angle of 173°. In order to avoid multiple scattering effects, the samples were diluted to 1 wt% with deionized water before the analysis. The measurements were carried out at a temperature of 25 °C and a relative humidity of 60%. pH Measurements The measurement of pH was made with a SevenMulti pH-meter (by Mettler Toledo Inc., Columbus, OH, USA) equipped with an InLab® Expert Pro electrode.
The measurement was made by immersing the electrode in the resulting nanoemulsion. The results presented are the arithmetic mean of three measurements, carried out at 25 °C. SEM Analysis The morphology of the obtained nanoemulgel was observed with a scanning electron microscope (Mira3 FEG-SEM, Tescan, Brno-Kohoutovice, Czech Republic) with field emission (Schottky emitter), equipped with an X-ray energy dispersive spectrometer (EDX, Oxford Instruments) and a cooling (Peltier) stage operating at temperatures down to −30 °C. The microscope allows work in high, low and variable vacuum modes. For the SEM investigations, the samples were prepared by rapid freezing in liquid nitrogen followed by freeze-drying for 48 h [18,70]. Rheological Studies The flow curves and viscosity curves of the examined market products used in supporting burn healing and of the obtained nanoemulgel were determined with a rotational rheometer, model R/S Plus (Brookfield). The cone-plate C-25-2 measurement system was used. The shear rate range was 1-500 s−1, the measurement time 30 s, and the number of measurement points 30. The measurements were carried out at a constant temperature (25 °C) maintained with a Huber Ministat 125 thermostat, and RheoWin 3000 rheometer software was used to automate and save the measurements. The presented results for each sample are the average of three measurements. Sensory Analysis The conducted sensory analysis focused on the assessment of the examined products, beginning with their appearance and the accompanying sensations during and after the application of the preparation on burned skin. The analysis consisted of two stages. The first stage was the empirical assessment of the primary and secondary feeling on the skin left by the analyzed preparations (the nanoemulgel and 14 market products) [34,35]. The assessors were asked to evaluate (on a 5-point scale) what sensations the tested product produced on the skin during and after the application. The obtained results of the assessors' evaluations (number averages) were then compared with the results of the rheological measurements. On this basis, the window boundaries of good primary and secondary sensations on the skin were determined, as defined by Brummer et al. [34]. The second stage of the study was a detailed sensory assessment of three selected preparations, combined with the assessors' evaluation of the effect of the preparations on skin affected by thermal burns and sunburns. The selected preparations were the cream G (which contained sea buckthorn oil), the gel M (with an aloe vera extract) and the nanoemulgel obtained by us, which combined the form and active ingredients present in these preparations. The preparations selected for the second stage of the sensory assessment produced the best sensations on the skin after application, according to the criteria defined by Brummer et al. In the first stage, the assessors were 10 women, aged 23-25, who had no previous experience in the evaluation of cosmetic preparations. In order to train the assessors and to obtain correct and repeatable results, the assessments were conducted under standardized conditions. The standardization of the conditions concerned the definition of the characteristics selected for the evaluation and the establishment of the sensory assessment conditions.
In the first part, the assessors were informed about the general concept of the study, and then the test procedure was described in detail, as well as the assessment method for the different characteristics of the tested preparations. The individual characteristics are defined in accordance with the ASTM D1490 Standard Guide for descriptive analysis of creams and lotions and our previous works [35,71]. The assessors gave their informed and written consent to participate in the study. The assessment was conducted in a laboratory with controlled temperature and humidity and appropriate lighting conditions. The temperature of all the tested samples was the same, 21 ± 0.5 °C. The tested preparations were placed in tight plastic containers. The assessors were asked to evaluate the individual characteristics of the tested preparations on a 5-point scale, where 1 was the lowest and 5 the highest score. The numerical value given for each evaluated characteristic of each sample is the average of the scores from all assessors. Eleven characteristics of the preparations were assessed in the test. The evaluated parameters were color, fragrance, consistency, uniformity, spreadability on the skin, adhesion, smoothing, pillow effect, stickiness, oiliness and absorption [35,72]. Test of the Effect of the Nanoemulsion and Market Preparations on the Skin Condition after Burns The assessment of the effect of the gel nanoemulsion and the two market preparations on the skin regeneration rate after burns was made with an ARAMO® TS device (by EURTEX, Warsaw, Poland) and the synchronized ARAMO Skin XP PRO Diagnostic System software. The device allows the analysis of the skin condition with sensors which measure, among others, elasticity, the hydration level, the amount and size of discoloration, and wrinkle depth. The study of the impact on the burn healing process and the comparative evaluation of the prepared nanoemulgel and the two market preparations were conducted on four assessors, aged 23-25, who had thermal burns and/or sunburns on the skin of the upper limbs. The assessors gave their informed and written consent to participate in the study. The examined preparations were applied to the burned skin twice a day for 7 days. An analysis of the burn areas was carried out before the preparations were applied and after seven days of use. The measure of the effectiveness of a regenerating preparation was its effect on factors such as elasticity, hydration and smoothness. The study also covered an area of burned skin that was not treated with any of the preparations (control test). The results given are the differences in the measurements of the above skin condition parameters before and after the use of the tested preparations. Negative values mean deterioration and positive values mean improvement in a given skin parameter. All subjects gave their informed consent for inclusion before they participated in the study. The scope of the study is in line with the Regulation (…). Statistical Analysis All data concerning the mean droplet size of the nanoemulsions, the PDI, and the skin parameters are presented as the mean of three different experiments ± SD. Analyses of variance (ANOVA) were conducted to check the differences between the sensory attributes of the samples. A value of p < 0.1 was considered statistically significant. In the case of a significant difference at the 90% level, Tukey's HSD test for post hoc comparisons was carried out.
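The described procedure, one-way ANOVA at p < 0.1 followed by Tukey's HSD when significant, can be reproduced with SciPy and statsmodels. The sensory scores below are hypothetical, not the panel data of the study:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical 5-point spreadability scores for the three preparations.
cream_g = [2, 2, 3, 2, 1, 2, 3, 2, 2, 1]
gel_m   = [4, 3, 4, 3, 3, 4, 3, 4, 3, 4]
neg     = [5, 4, 5, 4, 5, 4, 5, 4, 5, 4]

f_stat, p = f_oneway(cream_g, gel_m, neg)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

if p < 0.1:  # significance level used in the study
    scores = np.concatenate([cream_g, gel_m, neg])
    groups = ["cream G"] * 10 + ["gel M"] * 10 + ["nanoemulgel"] * 10
    # Post hoc pairwise comparisons at the same 90% confidence level.
    print(pairwise_tukeyhsd(scores, groups, alpha=0.1))
```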
Conclusions

The nanoemulsion gel combines the advantages of an emulsion and a gel: it has a light, non-oily, semi-solid consistency and can carry hydrophobic substances while, at the same time, creating a protective layer on the skin and increasing the hydration level. The obtained formulation showed the lowest yield stress and apparent viscosity of all the analyzed physicochemical forms. According to the sensory analysis, the nanoemulgel as a skin-burn preparation had a light, non-sticky consistency, was quickly absorbed and did not leave a feeling of oiliness or stickiness after application. The gel form gives the product good spreadability and adhesion on injured skin. The nanoemulsion gel is a promising alternative to the medical cosmetics available on the market as a formulation for skin care after burns.

Funding: This work was financed by the project "Innovation Incubator 2.0".

Conflicts of Interest: The authors declare no conflict of interest.
Resting energy expenditure and substrate oxidation rates correlate to temperature and outcome after cardiac arrest: a prospective observational cohort study

Introduction: Targeted temperature management improves outcome after cardiopulmonary resuscitation. Reduction of resting energy expenditure might be one mode of action. The aim of this study was to correlate resting energy expenditure and substrate oxidation rates with targeted temperature management at 33°C and with outcome in patients after cardiac arrest.

Methods: This prospective, observational cohort study was performed at the department of emergency medicine and a medical intensive care unit of a university hospital. Patients after successful cardiopulmonary resuscitation undergoing targeted temperature management at 33°C for 24 hours, with subsequent rewarming to 36°C and standardized sedation, analgesic and paralytic medication, were included. Indirect calorimetry was performed five times within 48 h after cardiac arrest. Measurements were correlated to outcome with repeated-measures ANOVA and linear and logistic regression analysis.

Results: In 25 patients, resting energy expenditure decreased by 20 (18 to 27)% at 33°C compared to 36°C, without differences between outcome groups (favourable vs. unfavourable: 25 (21 to 26)% vs. 21 (16 to 26)%; P = 0.5). In contrast to the protein oxidation rate (favourable vs. unfavourable: 35 (11 to 68) g/day vs. 39 (7 to 75) g/day, P = 0.8), patients with favourable outcome had a significantly higher fat oxidation rate (139 (104 to 171) g/day vs. 117 (70 to 139) g/day, P < 0.05) and a significantly lower glucose oxidation rate (30 (−34 to 88) g/day vs. 77 (19 to 138) g/day; P < 0.05) compared to patients with unfavourable neurological outcome.

Conclusions: Targeted temperature management at 33°C after cardiac arrest reduces resting energy expenditure by 20% compared to 36°C. Glucose and fat oxidation rates differ significantly between patients with favourable and unfavourable neurological outcome.

Trial registration: Clinicaltrials.gov NCT00500825, registered 11 July 2007. The online version of this article (doi:10.1186/s13054-015-0856-2) contains supplementary material, which is available to authorized users.

Introduction

Targeted temperature management improves neurological outcome after cardiopulmonary resuscitation (CPR), although the target temperature is a matter of discussion [1-3]. According to the guidelines of the European Resuscitation Council and the American Heart Association, comatose patients resuscitated after cardiac arrest should undergo therapeutic hypothermia with a target temperature of 32 to 34°C for 24 hours [4,5]. Reduction of resting energy expenditure (REE) might be one of the mechanisms underlying the protective effects of hypothermia, provided that the counter-regulatory mechanism of shivering is prevented by adequate medication [6,7]. The extent of REE reduction is expected to be approximately 8% per °C [6,7]. In critically ill patients with fever, indirect calorimetry showed REE reductions of 6 to 12% per °C with external cooling [8,9]. In patients with traumatic brain injury, no reduction of REE was found below 35°C, in contrast to temperatures higher than 35°C [10]. Studies evaluating the influence of hypothermia on substrate metabolism are rare. Most of them are animal investigations or studies of non-sedated humans and can, therefore, not be compared to the patients in the present study [11-14].
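To make the cited per-degree figures concrete, a reduction of 6 to 12% per °C implies the following expected REE drops for cooling from 36°C to 33°C. This is a back-of-the-envelope illustration only, assuming a simple linear model.

```python
# Expected REE reduction for 36 C -> 33 C under a linear per-degree model.
# The 6-12% range is from the cited fever studies; ~8% is the textbook value.
for pct_per_deg in (0.06, 0.08, 0.12):
    reduction = pct_per_deg * (36 - 33)
    print(f"{pct_per_deg:.0%} per degree C -> {reduction:.0%} expected reduction at 33 C")
```

For comparison, the 20% reduction reported below corresponds to roughly 6.6% per °C.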
The influence of therapeutic hypothermia on REE and substrate metabolism in patients resuscitated after cardiac arrest has not been investigated so far. The objective of this study was therefore to investigate the correlation of REE and substrate oxidation rates with temperature management at 33°C after cardiac arrest and with outcome.

Methods

This was a prospective, observational study using indirect calorimetric assessments in patients undergoing targeted temperature management at 33°C after cardiac arrest as part of routine clinical care according to the established protocols of our Department of Emergency Medicine [15]. The study was investigator-initiated and investigator-driven. The study protocol conformed to the ethical guidelines of the Declaration of Helsinki and was approved by the research ethics committee of the Medical University of Vienna. In accordance with Austrian law, the Institutional Review Board for human studies approved the protocol with an exception from informed-consent guidelines. Consent was not refused by next of kin or by any recovered patient contacted after awakening. The study was performed in the intensive care units (ICUs) of the Department of Medicine III, Division of Gastroenterology and Hepatology (Intensive Care Unit 13H1), and the Department of Emergency Medicine at the Medical University Hospital of Vienna. Patients were eligible for inclusion if they were older than 18 years of age and hospitalized within 6 hours after resuscitation from cardiac arrest.

Treatment

General: Patients were stabilized according to the treatment protocols [15]. Standards of post-resuscitation care included airway management and mechanical ventilation, treatment of haemodynamic instabilities and arrhythmias, blood glucose control and temperature control with a temperature probe in the oesophagus. For every patient, age, sex, presumed cause of cardiac arrest, initial rhythm, duration of CPR and comorbidities, as well as laboratory data and medication, were documented. Patients were followed up at 1, 6 and 12 months according to the Utstein style using the following scoring instruments: Glasgow coma scale (GCS), cerebral performance category (CPC) and overall performance category (OPC) [16].

Targeted temperature management: All patients were cooled using either the Alsius Coolgard 3000 (Zoll Medical Corporation, Chelmsford, MA, USA) or the Arctic Sun (Medivance, Inc., Louisville, CO, USA) at the maximal cooling rate until the target core temperature of 33°C was reached. The cooling period lasted 24 hours, and rewarming was performed at a rate of 0.4°C/hour until a core temperature of 36°C was reached.

Medication: All patients received sedation and analgesia with midazolam and fentanyl as well as muscle paralysis with rocuronium according to the therapeutic standards of the Department of Emergency Medicine at the Medical University of Vienna [15]. The depth of neuromuscular blockade was assessed using the TOF-Watch SX (Organon Medical Systems, Roseland, NJ, USA), which monitors neuromuscular transmission during surgery or intensive care by means of acceleromyography. Two electrodes are placed above the ulnar nerve, and the response to nerve stimulation is measured with a small piezoelectric acceleration transducer placed distally on the volar side of the thumb. Four pulses are given, and the measurement is based on the ratio of the amplitude of the fourth evoked mechanical response to that of the first.
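A minimal sketch of that ratio calculation is shown below; the function name and example amplitudes are ours, and a real monitor additionally reports the TOF count (the number of detectable twitches) during deep blockade.

```python
def tof_ratio(amplitudes):
    """Train-of-four ratio: fourth evoked response amplitude divided by the
    first, as produced by acceleromyographic monitors such as the TOF-Watch.
    Returns 0.0 when the first response is absent (complete blockade)."""
    t1, _, _, t4 = amplitudes
    return t4 / t1 if t1 > 0 else 0.0

# Example: deep blockade gives a low ratio; recovery approaches 1.0.
print(tof_ratio([1.0, 0.8, 0.5, 0.2]))   # 0.2
print(tof_ratio([1.0, 1.0, 0.95, 0.9]))  # 0.9
```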
Sedation as well as analgesic and paralytic medication was stopped after rewarming to 36°C. Patients did not receive any intravenous fluids containing dextrose during the study period. Crystalloid Ringer's lactate solution was preferred; some patients received isotonic saline solution. Medications were dissolved or diluted using isotonic saline, Ringer's lactate solution or distilled water, as specified.

Indirect calorimetry

Respiratory gas exchange was measured by computerized open-circuit indirect calorimetry (Deltatrac II Metabolic Monitor, Datex-Ohmeda Instruments, Helsinki, Finland) as previously described [17,18]. Oxygen consumption and carbon dioxide production were measured at 1-minute intervals, and the average over a 30-minute period was calculated. According to the manufacturer, the paramagnetic oxygen sensor allows inspiratory oxygen concentrations up to 65%. Measurements were performed over a period of 1 hour; data for further calculation were taken from the second half of each measurement to assure steady-state conditions. Ventilator settings remained unchanged for at least half an hour before and during each measurement. Patients did not receive any caloric intake during the cooling and rewarming period, and they underwent an 8-hour fasting period before the last calorimetry. To obtain an overview of REE and substrate metabolism in the different phases of post-resuscitation care (stable cooling phase, passive rewarming, active rewarming, rewarmed stable phase), indirect calorimetry was performed five times: 12 to 24 hours after achieving the target temperature of 33°C (stable phase), during rewarming at temperatures of 34.5°C, 36°C and 36.5 to 37.5°C, and 48 hours after cardiac arrest. Sedation as well as analgesic and paralytic medication was stopped after the measurement at 36°C.

Calculations

REE was expressed in kJ (kcal)/day/m2 body surface area. REE and oxidation rates for glucose, fat and protein were calculated according to Ferrannini et al. [19]. It was assumed that for each 1 g of nitrogen produced, 5.923 L of oxygen were consumed and 4.754 L of carbon dioxide were produced (respiratory quotient for protein: 0.803) [20]. For the calculation of urea nitrogen appearance rates, changes in plasma urea concentration were taken into account (measured at the beginning and at the end of each calorimetry) [21]. Urine production was measured along with each calorimetric measurement, and urinary urea nitrogen was measured colorimetrically [22]. The protein oxidation rate (g/day) was calculated as 6.25 × the 24-hour urea nitrogen production (g/day) [23].

Statistical analysis

Owing to the pilot character of the study and the data characteristics, we regarded a sample size of 25 patients as feasible and adequate to yield sufficient precision of our estimates. Continuous data are presented as median and 25 to 75% interquartile range (IQR) or mean ± standard deviation, as appropriate by distribution. Categorical data are presented as counts and relative frequencies. Twenty-five independent patients each contributed data from five occasions defined by certain temperature levels; accordingly, we treated the data as panel data. Linear effects were generally assessed across categories of temperature and not across actual temperature on a °C scale. To assess the influence of temperature on metabolic variables we used linear random intercept models: the outcome was the metabolic variable, temperature was modelled as a linear variable, and the clustering variable was a patient identifier.
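The study computed REE and net substrate oxidation with Ferrannini's coefficients; the sketch below uses the closely related abbreviated Weir equation and Frayn-type stoichiometric equations instead (an assumption on our part), which recover the same quantities from VO2, VCO2 and urinary nitrogen.

```python
def ree_weir(vo2_l_min, vco2_l_min):
    # Abbreviated Weir equation: energy expenditure in kcal/day from gas
    # exchange in L/min (1440 minutes per day).
    return (3.94 * vo2_l_min + 1.11 * vco2_l_min) * 1440

def oxidation_rates(vo2_l_min, vco2_l_min, urinary_n_g_day):
    # Frayn-type net oxidation rates in g/day; negative glucose values
    # indicate net gluconeogenesis, as reported in the Results.
    n = urinary_n_g_day / 1440  # urinary nitrogen in g/min
    glucose = (4.55 * vco2_l_min - 3.21 * vo2_l_min - 2.87 * n) * 1440
    fat = (1.67 * (vo2_l_min - vco2_l_min) - 1.92 * n) * 1440
    protein = 6.25 * urinary_n_g_day  # same 6.25 factor as above
    return glucose, fat, protein

# Example with plausible post-arrest values (VO2 0.24 L/min, RQ ~0.83):
print(round(ree_weir(0.24, 0.20)))        # ~1681 kcal/day
print(oxidation_rates(0.24, 0.20, 8.0))   # (glucose, fat, protein) in g/day
```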
To assess a potential relation between metabolic variables and sedation, we grouped data according to temperature ≤36°C and >36°C, because the sedation protocols were linked to core temperature. We then repeated the above regression, replacing temperature by the dichotomous variable sedation. To assess a potential association between metabolic variables and neurological outcome, we calculated temperature level-wise independent t tests adjusted for multiplicity by the Bonferroni method. We then developed a logistic random intercept model in which favourable neurological outcome was the outcome variable and each of the metabolic variables was entered as a predictor, whilst allowing for the panel structure of the data. To assess the influence of norepinephrine dose on metabolic variables, we used linear random intercept regression models with each metabolic variable as the outcome and norepinephrine therapy as the predictor, allowing for clustering as described above. Likewise, we investigated the influence of temperature on insulin dose (U/hour) and serum creatinine (mg/dl); both variables were handled as continuous outcomes. We also tested for an interaction of norepinephrine on the relation between neurological outcome and metabolic variables. To reassess our models, we employed standard regression models using robust standard errors; results from both approaches were comparable for all calculations. For data management we used Excel 2008 for Mac (Microsoft Corporation, Redmond, WA, USA); for data analysis we used Stata 9.0 for Mac (Stata Corp, College Station, TX, USA). A two-sided P value less than 0.05 was generally considered statistically significant.

Results

Patient characteristics and outcome are given in Table 1. The median sequential organ failure assessment (SOFA) score was 10 (9.25 to 10.75) in patients with unfavourable neurological outcome and 10 (9 to 11) in patients with favourable neurological outcome. Targeted temperature management was performed using surface cooling in 15 patients and intravascular cooling in 10 patients. We compared C-reactive protein, fibrinogen and leukocyte count as markers of inflammation, as well as platelet count, D-dimer, prothrombin time and activated partial thromboplastin time as markers of coagulation, between the patient groups undergoing the two cooling methods; no differences were observed (data not shown). The mean train-of-four (TOF) value was 0. In total, 114 indirect calorimetric measurements were performed. Two patients died after the first measurement, and in one patient indirect calorimetry had to be stopped after the third measurement because of a technical failure of the monitor. In one patient the last measurement could not be performed because of a fraction of inspired oxygen (FiO2) of 100% due to adult respiratory distress syndrome after aspiration. In the case of premature study termination, the data collected until the patient left the study were used for analysis. The time points of the indirect calorimetric measurements resulted in temperature categories of 1.5°C below 36°C and 0.8°C above 36°C (as presented in Figures 1 and 2 and Table 2). The overall mean oxygen consumption (VO2) was 241 ml/min, with a range from 125 to 662 ml/min over all measurements. Within individuals, the mean standard deviation (SD) was 13 ml/min (5% of the mean), representing measurement variability.
Between individuals, the SD was 68 ml/min on average (28% of the mean), representing the variability between patients. During the initial stable conditions (with sedation, analgesia and paralysis at 33°C) the variability was smaller: the overall range was 125 to 476 ml/min, the within-individual SD was 9.00 ml/min (4.5% of the mean) and the between-individual SD was 52.13 ml/min (26% of the mean). The calorimetric measurements and substrate metabolism of all patients are given in Table 2. Negative glucose values reflect net gluconeogenesis, whereas positive values indicate glucose oxidation. The fat oxidation rate showed a temperature dependency (10 g/day per °C category; P < 0.0001), while the glucose and protein oxidation rates did not depend significantly on temperature (P = 0.07 and P = 0.06, respectively). In contrast to the protein oxidation rate (favourable vs. unfavourable outcome: 35 (11 to 68) g/day vs. 39 (7 to 75) g/day, P = 0.8), patients with favourable outcome had a significantly higher fat oxidation rate (139 (104 to 171) g/day vs. 117 (70 to 139) g/day, P < 0.05) and a significantly lower glucose oxidation rate (30 (−34 to 88) g/day vs. 77 (19 to 138) g/day; P < 0.05) compared to patients with unfavourable neurological outcome (Figure 2). REE was not associated with neurological outcome (odds ratio (OR) 0.88, 95% confidence interval (CI) 0.63 to 1.22 for favourable neurological outcome per unit increase in REE quartile, P = 0.4; Figure 2). The probability of unfavourable neurological outcome increased by 69% (OR 1.69, 95% CI 1.15 to 2.49; P < 0.01) with every increase of the glucose oxidation rate from one quartile to the next, and decreased by 38% (OR 0.62, 95% CI 0.44 to 0.87; P < 0.01) with every increase of the fat oxidation rate from one quartile to the next. For all patients, the dosages of sedation and analgesic medication (midazolam, fentanyl), neuromuscular blocker (rocuronium), norepinephrine and insulin at the different measurement points are given in the electronic supplementary file (Table S1 in Additional file 1). One patient received dobutamine during the rewarming period (measurements 2, 3 and 4), and another patient received levosimendan during the first two measurements. Although both medications might influence metabolism, subgroup analysis was not possible owing to the small number of patients. Even though we did not find an association between norepinephrine therapy and REE (P = 0.7) or the protein oxidation rate (P = 0.6), norepinephrine therapy was associated with an increased glucose oxidation rate (P < 0.05) and a reduced fat oxidation rate (P < 0.05). However, there was no significant interaction of norepinephrine on the relation between neurological outcome and metabolic variables, and norepinephrine therapy was not associated with outcome (P = 0.7). Temperature was not associated with insulin therapy (P = 0.6) or serum creatinine levels (P = 0.8), and we did not find an association between outcome and serum creatinine levels (P = 0.6). The dosages of all administered medications and the blood glucose values at the different calorimetric measurements are given in Table S1 in Additional file 1 for all patients.

Discussion

Our study showed that in patients undergoing targeted temperature management at 33°C after cardiac arrest and successful return of spontaneous circulation, REE was reduced by 20% (18 to 27%) compared to 36°C, with a linear relation to temperature alterations but without differences between outcome groups.
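The random intercept analysis behind these estimates, which separates the within-patient measurement variability from the between-patient variability reported above, can be sketched as follows. The synthetic data, effect sizes and use of statsmodels are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Synthetic panel: 25 patients x 5 ordered temperature categories.
patient = np.repeat(np.arange(25), 5)
temp_cat = np.tile(np.arange(5), 25)
ree = (5500 + 120 * temp_cat                   # fixed temperature effect
       + np.repeat(rng.normal(0, 500, 25), 5)  # between-patient intercepts
       + rng.normal(0, 300, 125))              # within-patient noise

df = pd.DataFrame({"patient": patient, "temp_cat": temp_cat, "ree": ree})
fit = smf.mixedlm("ree ~ temp_cat", df, groups=df["patient"]).fit()
print(fit.summary())  # slope per category plus the variance components
```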
Besides temperature, sedation, analgesia and neuromuscular blockade (NMB) were associated with a reduction in REE. Only the fat oxidation rate was associated with temperature, sedation and analgesia. Glucose and fat oxidation rates differed significantly between patients with favourable and unfavourable neurological outcome. An interaction between norepinephrine therapy, neurological outcome and metabolic variables could be excluded. The use of medication (sedatives, analgesics, neuromuscular blockers, insulin and norepinephrine) did not differ between patients with favourable and unfavourable neurological outcome. The effect of cooling on REE has been described before. In hyperthermic patients, cooling reduced REE by 6 to 12% per 1°C temperature reduction, depending on sedation, analgesia and NMB [7,8]. In neurosurgical patients, a significant reduction of REE by cooling was detectable only above a temperature of 35°C with sedation and neuromuscular blockers [10]. Generally, a reduction of 6 to 9% of REE per 1°C decrease in temperature is accepted [6,7]. Our study showed a 6.6% reduction of REE per 1°C decrease at temperatures below 36°C. A limitation of our study is that the effects of sedation, analgesia, NMB and hypothermia cannot be separated, owing to the study design: all indirect calorimetric measurements at 36°C and below were performed with concomitant sedation, analgesia and NMB. In surgical ICU patients, increasing the Ramsay sedation scale using midazolam resulted in significantly decreased REE and oxygen consumption [24]. NMB in combination with controlled mechanical ventilation reduced oxygen consumption by approximately 20% [25], and in critically ill children NMB reduced REE by approximately 10% [26]. We found a median reduction of REE of 20% at 33°C in adult patients after cardiac arrest. Our data suggest that the effects of sedation, analgesia, NMB and hypothermia on REE might not be additive. This assumption is supported by another study, which showed that in normothermic patients sedation has a major effect on REE, whereas in sedated patients temperature was the main determinant of REE [27]. Owing to the chosen medication regimen, with continuous midazolam, fentanyl and rocuronium administration during the cooling and passive rewarming periods, the effect of shivering, which is known to strongly influence REE, can be disregarded [28,29]. In our study, patients with favourable neurological outcome after cardiac arrest and return of spontaneous circulation (ROSC) had significantly different fat and glucose oxidation rates compared with patients with unfavourable neurological outcome: an increase in quartiles of the fat oxidation rate was associated with favourable, and an increase in quartiles of glucose oxidation with unfavourable, neurological outcome. Data on substrate metabolism during hypoxia and hypothermia were available only from animal experiments or experiments with healthy subjects. In six healthy males resting at a temperature of 5°C, free fatty acid turnover as well as lipid and carbohydrate oxidation increased remarkably; however, this increase was accompanied by a significant increase in REE [11]. The increase of REE in that experiment was caused predominantly by shivering and is thus not comparable to our analysis. Hypoxia seems to be associated with remarkable changes in substrate utilization, especially in the brain. In cooled anaesthetized dogs, hypoxic conditions increased glucose utilization in the brain [12].
Further animal experiments showed that glucose transporter (GLUT) 3 expression increases up to nine-fold in both affected and non-affected neurons of rat brains 48 hours after the onset of ischaemic conditions [13]. This finding is in accordance with data from a rat model of traumatic brain injury, which showed a 300% increase of GLUT 3 expression up to 48 hours after the event [14]. In traumatic brain injury, a remarkable increase in glucose utilization of the whole brain (hyperglycolysis) can be found up to 7 days after the trauma [30,31]. We therefore hypothesize that in patients with unfavourable neurological outcome, increased cerebral glucose utilization was present due to severe hypoxia-induced brain injury during cardiac arrest, and we assume that this increase accounted for the elevated glucose oxidation rates measured by indirect calorimetry. Since brain glucose utilization normally accounts for 25% of whole-body glucose metabolism [32], changes in the glucose oxidation rates of the brain are evidently detectable by indirect calorimetry evaluating whole-body substrate metabolism. We cannot rule out an influence of other post-cardiac arrest organ injury, especially of the liver and kidney, on REE and substrate metabolism in our study patients [33,34]. However, apart from brain injury, other organ injury seemed to be equally distributed in both patient groups and is therefore unlikely to have caused the effect shown in our study. Norepinephrine therapy was significantly associated with glucose and fat oxidation rates but did not interact with the relation between neurological outcome and metabolic values and was not associated with outcome in the present study. An influence of norepinephrine on fat utilization has been shown before in healthy subjects [35]. Although hypothermia is associated with increased insulin resistance and glucose intolerance, insulin therapy was not associated with temperature in our study [6,36]. Insulin was associated neither with REE nor with the substrate oxidation rates; the influence of insulin on substrate oxidation rates and energy expenditure seems to be limited [37,38]. A further limitation of our study is the technique of indirect calorimetry itself, which has the advantage of being non-invasive but is prone to multiple errors. Although the measurements were performed according to the manufacturer's instructions and with the greatest diligence, errors influencing our results cannot be fully excluded [39]. With indirect calorimetry, only whole-body net balances of substrate metabolism can be described, without knowledge of localized substrate oxidation or biosynthesis rates. We can substantially rule out bias from anaerobic metabolic pathways because lactate levels were in the normal range throughout the study period; for this reason, measurements started in the second half of the 24-hour cooling period, when conditions had already stabilized and anaerobic metabolism could be excluded. Besides the small number of patients included, a major limitation of this study is the substantial variability of the measurements of substrate metabolism themselves. The factors that affect substrate metabolism, including the specific blend of organ failures and the host's global and organ-specific inflammation and function, are multitudinous.
Accordingly, we cannot discount the possibility of confounding by variables not measured in this study, such as interleukin levels or organ-specific substrate metabolism, but our study nonetheless raises an interesting hypothesis for future studies that might control for the most important confounders. The results have to be reproduced, and further investigation is necessary to elucidate the pathophysiological processes during the post-cardiac arrest period that result in detectable metabolic differences between patients with favourable and unfavourable neurological outcome.

Conclusions

Targeted temperature management at 33°C after cardiac arrest reduces REE by 20%, with a linear relation to temperature variations. Only the fat oxidation rate was temperature dependent. A significant difference in glucose and fat oxidation rates was found between patients with favourable and unfavourable neurological outcome: an increase of the fat oxidation rate and a decrease of the glucose oxidation rate were associated with favourable outcome.
Incidence of Kikuchi-Fujimoto Disease After Vaccination with SARS-CoV-2: A Case Report

A 30-year-old Iranian woman referred to Yasrebi Hospital in Kashan, Iran, developed fever and swollen neck lymph nodes after receiving the Sinopharm vaccine. An ultrasound-guided needle biopsy of the patient's lymph nodes confirmed the diagnosis of Kikuchi-Fujimoto disease (KFD), and her fever and swelling resolved 22 days after treatment. Although the exact cause of this disease arising after COVID-19 vaccination is unknown, it can be said that vaccination against COVID-19 can be a cause of KFD.

Introduction

Kikuchi disease, or necrotizing histiocytic lymphadenitis, is more common in Asians than in other populations, with young people being more susceptible than the elderly. The incidence of Kikuchi-Fujimoto disease (KFD) is currently unknown. Although all age groups may be affected, the disease usually occurs in people under 30. It has been considered more common in women than in men, although recent studies have shown equal proportions between the two sexes. Several infectious and autoimmune agents, including human herpesviruses 6 and 8, parvovirus B19, human immunodeficiency virus (HIV) and infectious mononucleosis, have been proposed as causes of the disease. The immune response of T cells and histiocytes to infectious agents is likely to play a role in the pathogenesis of KFD (1) (Figure 1). One of the most common clinical manifestations of KFD is unilateral neck lymphadenopathy. It can lead to multiple and widespread adenopathies in other body parts with a firm and rubbery consistency, rarely larger than 2 cm in diameter and sometimes painful to the touch. Fever is reported in 20-50% of patients. Other symptoms include nausea, vomiting, sweating, weight loss, chills, diarrhea, abdominal pain, headache, myofascial pain and peripheral neuropathy (2). The prevalence of leukopenia and atypical lymphocytosis in KFD is 25-50% and 25%, respectively. Increases in CRP, ALT and LDH and a slight increase in the erythrocyte sedimentation rate (ESR) are also observed. Antinuclear antibody, rheumatoid factor (RF) and systemic lupus erythematosus (SLE) serology tests are usually negative. The disease is benign, usually resolves spontaneously within a few weeks to a few months, and rarely recurs (3). The clinical and paraclinical features of this disease are very similar to those of lymphoma and lupus. In addition to these two diseases, Kawasaki disease, tuberculosis, cat scratch disease, Yersinia, toxoplasma and infectious mononucleosis should also be considered. This disease should be considered in patients with persistent fever and cervical lymphadenopathy (4). Pathology of the lymph nodes is the key to diagnosing this disease. Ultrasound findings, including the size, shape and boundaries of the lesion, can help distinguish Kikuchi lymphadenopathy from lymphoma. One of the most important differential diagnoses of this disease is lupus, which should be ruled out in all patients diagnosed with KFD; female patients should therefore be monitored for autoimmune diseases. The disease has no specific treatment; treatment is supportive with antipyretic and anti-inflammatory drugs, but glucocorticoids are used in severe cases (5, 6). The incidence of KFD after vaccination is relatively low. With this report, we therefore aim to raise awareness among physicians and pathologists of the rarity of this disease and of how to deal with similar cases.
It should be noted that the purpose of this paper is not to describe the type and dosage of treatment.

Case Presentation

A 30-year-old woman with painless cervical lymphadenopathy visited Yasrebi Hospital, Iran, in October 2022. Clinical examination revealed that she had received the first dose of the Sinopharm vaccine in her right arm 30 days earlier with no side effects; the second dose was injected into the left arm 24 days after the first one. Five days after the second injection, she experienced fatigue and a painless left cervical nodule of 8 × 6 cm behind the sternocleidomastoid muscle. The nodule was soft and mobile, without tenderness or redness. A computerized tomography scan showed multiple highly hypoechoic lymph nodes behind the sternocleidomastoid muscle on the right side of the neck, each measuring approximately 1.5 × 1.6 cm (Figure 2). Abdominal and pelvic ultrasound revealed only slight spleen enlargement. A serological test for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA was negative. Preliminary tests at the hospital showed Hb 12.5 g/dL, WBC 3600/µL (neutrophils 67%) and platelets 235,000/µL. C-reactive protein (CRP), rheumatoid factor (RF) and purified protein derivative (PPD) tests were negative, and liver, thyroid and urine tests were normal. After treatment with broad-spectrum antibiotics, the ESR increased to 100 with a WBC of 2600/µL, LDH and liver enzymes increased, and HCV Ab, HBs Ag and ANA were negative; consultation with a hematologist was therefore recommended to investigate lymphoma, and a lymph node biopsy was finally suggested. In the pathology report, the structure of the lymph nodes was severely necrotic, with abundant karyorrhectic debris, fibrin deposits and accumulations of mononuclear cells, whereas the numbers of neutrophils and plasma cells were low; the diagnosis of necrotizing lymphadenitis (Kikuchi's disease) was thereby confirmed (Figure 3). On the basis of the pathological results, the patient was prescribed prednisolone and, after 15 days of hospitalization, was discharged.

Discussion

The first case of Kikuchi's disease in Iran was reported in Kashan in 2014 (7, 8), and several cases have since been reported in different cities across the country. Most patients were female, except in two studies involving two 16-year-old boys and a 24-year-old man from Mashhad, Iran (9, 10). Our patient was a 30-year-old woman with no history of the disease who developed it after injection of the Sinopharm vaccine. After admission, the patient's ESR increased from 35 to 98. The symptoms of fever, adenopathy, leukopenia and a high ESR in our patient were unresponsive to antibiotic treatment. KFD was confirmed as the diagnosis on lymph node biopsy after ruling out lymphoma (11). Kikuchi-Fujimoto disease should generally be considered in patients with long-term fever and cervical lymphadenopathy after ruling out tuberculosis, lymphoma and autoimmune diseases, especially after COVID-19 vaccination. Lymph node biopsy is the only definitive diagnosis of the disease, and most cases are benign and require only supportive care. The most striking histological features of KFD are coagulative necrosis, apoptosis associated with karyorrhexis, and marked nuclear and phagocytic activity (12). However, in some cases laboratory tests have shown reductions in hemoglobin and bilirubin and slight changes in SGOT and SGPT (13). In another similar study, this disease was observed in an 18-year-old male from Qatar with a history of two
episodes of KFD, who developed fever and swelling on the left side of the neck within ten days of receiving the vaccine (11).

Figure 2. CT scan of the neck showing multiple deep cervical lymph nodes in the right posterior triangle, the largest of which measures 1.6 cm with central necrosis.

In addition, KFD after vaccination has been reported in a woman, in an 18-year-old Asian man without a history of the disease at 35 days, and in a 34-year-old man with diabetes and hypertension at 23 days after vaccination (14-16). In our study, the disease occurred after the patient was vaccinated with the Sinopharm vaccine. However, it cannot be said with certainty that this complication is related to the vaccine. Given that KFD affects the cervical lymph nodes with a frequency of 60-90%, we can assume that vaccines may be one of the risk factors and inducers of this disease (17). The mechanism of KFD is unclear, but three main theories have been proposed. The first theory is viral infection: several studies have found a link between viral infections (e.g., herpes, Epstein-Barr and varicella-zoster viruses) and KFD, but no study has established a clear connection between them. The second hypothesis is autoimmunity, as KFD leukocytes and macrophages have a tubular network structure similar to that seen in SLE; accordingly, SLE is one of the important differential diagnoses of KFD. In addition, there is evidence that patients carrying the human leukocyte antigen HLA-SB mount a CD8+ T cell-mediated immune response (14).

Figure 3. A close-up view of lymph node biopsy specimens from the patient, showing numerous macrophages stained brown.

The third hypothesis concerns intramuscular vaccine injection: the COVID-19 vaccine may stimulate the production of CD8+ T cells and antibodies, triggering the production of pro-inflammatory factors at the injection site; these inflammatory factors irritate the lymph nodes draining the injected area and cause KFD. This hypothesis, however, needs further investigation (18-20).

Conclusions

Kikuchi-Fujimoto disease is an important differential diagnosis for physicians, who must be careful not to confuse it with lymphoma, so that it does not lead to unnecessary treatment. Lymph node biopsy provides the definitive diagnosis of KFD. Moreover, health institutions must ensure the effectiveness and safety of vaccines. According to reports, KFD may have occurred in several cases after COVID-19 vaccination, particularly in patients with inflammatory symptoms after vaccination. Consequently, doctors should be aware of this issue and conduct the necessary tests to ensure an accurate diagnosis.
Physical Exercise Enhanced Heat Shock Protein 60 Expression and Attenuated Inflammation in the Adipose Tissue of Human Diabetic Obese

Heat shock protein 60 (HSP60) is a key protein in the crosstalk between cellular stress and inflammation. However, the status of HSP60 in diabetes and obesity is unclear. In the present study, we investigated the hypothesis that HSP60 expression levels in the adipose tissue of human obese adults with and without diabetes differ and that physical exercise might affect these levels. Subcutaneous adipose tissue (SAT) and blood samples were collected from obese adults without and with diabetes (n = 138 and n = 92, respectively, at baseline; n = 43 for both groups after 3 months of physical exercise). Conventional RT-PCR, immunohistochemistry, immunofluorescence and ELISA were used to assess the expression and secretion of HSP60. Compared with obese adults without diabetes, HSP60 mRNA and protein levels were decreased in the SAT of the diabetic obese, together with increased inflammatory marker expression and glycemic levels but lower VO2 Max. More interestingly, 3 months of physical exercise differentially affected HSP60 expression and the heat shock response but attenuated inflammation in both groups, as reflected by decreased endogenous levels of IL-6 and TNF-α. Indeed, HSP60 expression levels in SAT were significantly increased by exercise in the diabetes group, whereas they were decreased in the non-diabetes group. These results were further confirmed using immunofluorescence microscopy with an anti-HSP60 antibody in SAT. Exercise had only marginal effects on HSP60 secretion and on HSP60 autoantibody levels in plasma in both obese groups, with and without diabetes. Physical exercise differentially alleviates cellular stress in obese adults with and without diabetes despite concomitant attenuation of the inflammatory response.
Keywords: cellular stress, heat shock response, heat shock protein 60, physical exercise, adipose tissue

Introduction

Obesity and type 2 diabetes (T2D) are global public health problems affecting both people's quality of life and socioeconomics around the globe (1). The pathophysiology of these metabolic diseases is closely linked, with the resulting insulin resistance (IR) being the cause of several health comorbidities (2). IR has been demonstrated to be associated with various micro- and macrovascular complications (3). In obesity, however, the risk for these complications differs among individuals, as a significant proportion of obese people are metabolically healthy (4). Thus, the degree of metabolic dysregulation is a determinant of future complications in obese people. The heat shock response (HSR) is a major stress-adaptation mechanism that prevents insults to tissues through a set of highly conserved proteins called heat shock proteins (HSPs) (5, 6). Some HSP members are ubiquitously expressed, whereas others are expressed upon stress insults, highlighting the critical role of HSPs in maintaining cellular homeostasis. HSPs can also be released into the circulation and exert an immunostimulatory effect by interacting with pattern-recognition receptors, such as Toll-like receptors, and consequently activate the host inflammatory response (7, 8). Previous research demonstrated that the HSR is attenuated in patients with T2D; in particular, heat shock protein 72 (HSP72) expression was decreased in patients with diabetes (9). Moreover, HSP72 induction resulted in protective effects in humans with diabetes and in diabetic animal models (10, 11); specifically, HSP72 induction led to improved lipid accumulation in the liver and adipose tissue, reduced inflammatory signals and improved insulin sensitivity. By contrast, we recently observed increased expression of major HSPs, including HSP72, in both adipose tissue and blood cells from obese people without diabetes (12). This finding suggests that in this population the HSR can resolve the metabolic stresses attributable to obesity, highlighting differences in molecular pathophysiology between obese subjects with and without diabetes even though both conditions are associated with IR. Another key member of the HSR, HSP60, is notable for its role as a mediator of immunity in several inflammatory diseases such as cancer, atherosclerosis, adjuvant arthritis, obesity and diabetes (13, 14). HSP60 is mainly a mitochondrial chaperone, but its translocation to the cytosol and cell membrane and its secretion into blood have been reported (15). Furthermore, the ability of HSPs to induce different stress-related responses according to their subcellular localization has been reported (16). Circulating HSPs are reported to have immunostimulating or immunosuppressive effects, in an apparently contradictory manner, depending on the context and the types of interacting partners (17-20). Accumulating evidence suggests that circulating HSP60 may contribute to the cardiovascular disease associated with diabetes, supporting earlier observations regarding the association between HSP60 and atherosclerosis (13, 21). Recent findings demonstrated that autoimmunity to HSP60 contributes to metabolic dysregulation in a murine obesity model and was partially reversed by HSP60 peptide treatment (22).
In contrast, human HSP60 displayed protective effects against adjuvant arthritis and contributed to remission in juvenile idiopathic arthritis in humans (14). Likewise, another recent study revealed that HSP60 promotes tissue regeneration and wound healing by regulating inflammation in animal models such as db/db mice and zebrafish (23). Furthermore, HSPs, including HSP60 and its derived peptides, can protect allografts from ischemia-reperfusion injury and improve graft survival through IL-10 induction (24). Finally, in a recent study, morbidly obese subjects displayed a sustained decrease in circulating HSP60 levels after bariatric surgery, concomitant with a decrease in CRP but not in IL-6 (25). However, the biological significance of extracellular HSP60 remains to be elucidated. Intracellular HSP60 has a complex function, given that it inhibits caspase-3 but facilitates the maturation of pro-caspase-3 to its active form (13). Furthermore, HSP60 is implicated in mitochondrial biogenesis, and this capacity to promote the folding of mitochondrial proteins appears crucial for its cytoprotective function (26). Conversely, it was reported that HSP60 levels are decreased in the heart but increased in the kidneys and liver of diabetic rats, highlighting the tissue specificity of the alteration of HSP expression in diabetes (27). However, the effect of different degrees of adiposity and related IR on the variation in intra- and extracellular HSP levels across individuals, and its influence on metabolic diseases, remains to be clarified. Therefore, this study was designed to investigate the status of HSP60 in obese subjects with and without diabetes and to assess the effects of physical activity on its levels in these two groups.

Materials and Methods

Study Population

The study consisted of obese (30 kg/m2 ≤ BMI < 40 kg/m2) adult men (n = 120) and women (n = 110) (non-diabetes group, n = 138; diabetes group, n = 92). Informed written consent was obtained from all subjects before their participation in the study, which was approved by the Review Board of the Dasman Diabetes Institute and conducted in line with the principles of the Declaration of Helsinki. Participants who had performed any physical exercise within the 6 months prior to study entry and those with prior histories of major illness or use of medications and/or supplements known to influence body composition or bone mass were excluded from the study. The physical, clinical and biochemical characteristics of the participating subjects are shown in Table 1.

Exercise Protocol and Anthropometric Measurements

All eligible subjects were enrolled in a supervised exercise program at the Fitness and Rehabilitation Center (FRC) of the Dasman Diabetes Institute, as previously reported (28). Briefly, prior to exercise, each subject underwent an initial physical assessment to determine his or her maximum heart rate (max HR) as well as the response to aerobic exercise as measured by maximum oxygen consumption (VO2 Max). The exercise regimen involved a combination of moderate-intensity aerobic exercise and resistance training using either a treadmill or a stationary bicycle. Each exercise session included 10-min warm-up and cool-down steps at 50-60% of max HR and 40 min of the prescribed exercise program at 65-80% of max HR. For the duration of the 3-month period, participants exercised three times per week.
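As an aside, the prescribed intensity bands translate into per-subject heart-rate targets as sketched below; the helper name is ours, and the age-based estimate in the example is only a stand-in for the measured max HR from the initial assessment.

```python
def session_hr_zones(max_hr):
    """Target heart-rate ranges used in the supervised program: warm-up /
    cool-down at 50-60% of max HR and the main 40-min block at 65-80%."""
    return {
        "warm-up/cool-down": (round(0.50 * max_hr), round(0.60 * max_hr)),
        "main exercise":     (round(0.65 * max_hr), round(0.80 * max_hr)),
    }

# Example with the common age-based estimate (220 - age) for a 45-year-old:
print(session_hr_zones(220 - 45))
# {'warm-up/cool-down': (88, 105), 'main exercise': (114, 140)}
```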
All sessions were supervised by qualified fitness professionals at the FRC to ensure that participants reached and maintained the recommended HR range. Anthropometric measurements were taken at baseline and after 3 months of exercise, and the intensity and duration of exercise as well as blood pressure were recorded for each session. Whole-body composition was also determined.

Blood and Tissue Sampling

Venous peripheral blood and subcutaneous adipose tissue (SAT) biopsies were obtained at baseline and after 3 months of exercise. Plasma samples were prepared using EDTA Vacutainer tubes, aliquoted and stored at −80°C. Subcutaneous superficial adipose tissue biopsies (approximately 0.5 g) were obtained from the periumbilical area via surgical biopsy after local anesthesia. Once removed, each biopsied tissue was rinsed in cold PBS, divided into four pieces and stored appropriately until assayed.

Blood Inflammatory and Metabolic Markers

Glucose and lipid profiles were measured using a Siemens Dimension RXL chemistry analyzer (Diamond Diagnostics, Holliston, MA, USA). Hemoglobin A1c (HbA1c) levels were determined using the Variant device (BioRad, Hercules, CA, USA). Insulin and high-sensitivity CRP (hsCRP) levels were determined using a Mercodia Insulin ELISA kit (Mercodia AB, Uppsala, Sweden) and an hsCRP ELISA kit (Biovendor, Asheville, NC, USA), respectively. Plasma levels of inflammatory and metabolic markers were measured using bead-based multiplexing technology on a Bioplex-200 system (BioRad). All of the aforementioned assays were performed according to the manufacturers' instructions. The Homeostatic Model Assessment of Insulin Resistance (HOMA-IR) index was calculated using the following formula: HOMA-IR = (glucose × insulin)/22.5.

Quantitative Real-Time (qRT)-PCR

Total RNA was extracted from frozen adipose tissue using an RNeasy Lipid Tissue Mini Kit (Qiagen, Inc., Valencia, CA, USA). cDNA was synthesized from the total RNA samples using High-Capacity cDNA Reverse Transcription Kits (Applied Biosystems, Foster City, CA, USA). Conventional qRT-PCR was performed on a Rotor-Gene Q-100 system using SYBR Green (Qiagen). Relative gene expression between the groups was assessed using the ΔΔCT method (29), with GAPDH as the internal control for normalization. The primers used for validation are displayed in Table S1 in the Supplementary Material.

Quantification of Circulating Proteins by ELISA

Plasma levels of HSP60 were measured using a sandwich immunoassay EIA kit (ADI-EKS-600, Enzo, PA, USA), and plasma levels of anti-HSP60 IgG/IgA/IgM were measured using an ELISA kit (ADI-EKS-650, Enzo). Samples were diluted 1:2 before analysis for HSP60; after optimization, undiluted serum samples were used to measure anti-HSP60 IgG/IgA/IgM levels. All assays were performed according to the manufacturer's instructions. Absorbance was measured at 450 nm on an H4 Synergy plate reader (Biotek, Winooski, VT, USA).

Statistical Analysis

Statistical analyses were performed using SPSS software (v22.0; SPSS Inc., Chicago, IL, USA). Unless otherwise stated, all descriptive statistics for the study variables are reported as the mean ± SD. Normality tests were run to assess the data distribution. A parametric t-test was used for normally distributed variables to assess the significance of differences in means between the groups before exercise, whereas the non-parametric Mann-Whitney test was used for skewed variables.
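The two simple calculations named in these methods, the HOMA-IR index and the ΔΔCT relative expression, can be sketched as follows; the function names, example values and unit conventions (fasting glucose in mmol/L, insulin in µU/mL) are assumptions for illustration.

```python
def homa_ir(glucose_mmol_l, insulin_mu_l):
    """HOMA-IR as used in the paper: (glucose x insulin) / 22.5, with
    fasting glucose in mmol/L and insulin in microU/mL."""
    return glucose_mmol_l * insulin_mu_l / 22.5

def ddct_fold_change(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Relative expression by the delta-delta-CT method with GAPDH as the
    internal control, as described for the qRT-PCR analysis."""
    ddct = (ct_target - ct_gapdh) - (ct_target_ref - ct_gapdh_ref)
    return 2 ** (-ddct)

print(homa_ir(5.5, 10.0))                        # ~2.4
print(ddct_fold_change(24.0, 18.0, 25.5, 18.2))  # ~2.5-fold vs. reference
```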
A paired t-test was used to determine the significance of differences in means within the non-diabetic and diabetic groups before and after exercise. To evaluate the effects of group and exercise intervention, as well as their combination, we conducted two-way repeated-measures analysis of variance (ANOVA). Effect sizes and homogeneity for the ANOVA outcomes were examined using partial eta-squared, Box's M test and Levene's test of equality. For all analyses, differences were considered statistically significant at p < 0.05.

Results

Baseline Characteristics of the Study Population and the Effects of Physical Exercise

The anthropometric, clinical and metabolic characteristics of the subjects are summarized in Table 1. There were no significant differences between the two groups regarding gender, age, waist or hip circumference, BMI, percent body fat (PBF) or blood pressure. Subjects in the diabetes group had a significantly higher resting HR and a significantly lower VO2 Max than those in the non-diabetes group. Concerning lipid profiles, the diabetes group had higher triglyceride (TG) levels but lower HDL levels, whereas total cholesterol and LDL levels were similar between the two groups. Although fasting blood glucose (FBG), HbA1c and HOMA-IR values were significantly higher in the diabetes group, there was no difference between the two groups regarding serum insulin or C-peptide concentrations in the blood. Furthermore, GLP-1 levels were higher in the diabetes group (p = 0.02), whereas leptin, glucagon and GIP levels were similar between the two groups. Finally, no significant difference was detected between the two groups for any of the inflammatory markers assayed (Table 1). Physical exercise is considered the first-line non-pharmacologic treatment for preventing and managing lifestyle-related diseases, and our group and others have previously demonstrated its beneficial effects on the expression and secretion of stress proteins (12, 30). In this study, we performed pairwise comparisons of the physical, clinical and metabolic parameters in the diabetes and non-diabetes groups (n = 43 each) before and after physical exercise; the results are displayed in Tables 2 and 3, respectively. For the non-diabetes group, significant decreases were observed in adiposity markers (BMI, waist circumference and PBF) after exercise (p ≤ 0.01). Likewise, we detected significant decreases in systolic and diastolic blood pressure (p < 0.01 and p < 0.05, respectively), along with an improvement in VO2 Max (p < 0.001). Furthermore, physical exercise decreased glycemic index markers such as insulin, HOMA-IR and C-peptide values (p < 0.01, p = 0.05 and p < 0.05, respectively), in addition to a significant decrease in GIP levels (p = 0.005). Finally, our results revealed a trend toward an increase in some circulating inflammation markers after physical exercise (Table 2). In the diabetes group, physical exercise had limited effects on the physical parameters, as only waist circumference and VO2 Max were significantly improved (p < 0.05) (Table 3). However, superior improvements in metabolic markers were recorded in this group: exercise significantly decreased metabolic markers such as cholesterol, HbA1c, C-peptide, glucagon, GIP and GLP-1 levels (p < 0.05), whereas no effects were observed on the inflammatory markers. To further assess the effects of diabetes and the exercise intervention, as well as their combined effect, we used two-way repeated-measures ANOVA.
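A minimal sketch of the within-subject part of such a repeated-measures analysis is shown below. The synthetic BMI data are invented, and note that statsmodels' AnovaRM handles only within-subject factors, so the between-group (diabetes) factor of the paper's two-way design would need a mixed model instead.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
n = 43  # subjects per group, as in the paper's paired comparisons
before = rng.normal(34.0, 2.0, n)         # invented baseline BMI
after = before - rng.normal(0.6, 0.4, n)  # small exercise-induced drop

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "time": np.tile(["before", "after"], n),
    "bmi": np.ravel(np.column_stack([before, after])),
})
res = AnovaRM(df, depvar="bmi", subject="subject", within=["time"]).fit()
print(res.anova_table)  # F test for the before/after (time) effect
```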
As displayed in Table 4, the separate effects of exercise and diabetes were in agreement with the results obtained using the paired t-test, in particular for the adiposity and glycemic index markers. Interestingly, in the ANOVA analysis, exercise significantly increased circulating inflammatory markers (IL-1β, IL-6, TNF-α and IL-10) and decreased WBC, while diabetes displayed a significant effect on HR and TNF-α. The combined effect of both disease and intervention, however,

HSP60 Is Differentially Expressed and Modulated by Physical Exercise in Obese Subjects with and without Diabetes

Decreased expression of HSPs, especially HSP72, has been widely reported in both human and animal models of IR and diabetes. By contrast, in previous work using SAT biopsies and PBMCs from obese patients without diabetes and their lean controls, we unexpectedly observed significant increases in HSP expression in obese subjects (12). As HSP60 is also involved in inflammation, a hallmark of diabetes, we assessed HSP60 expression levels in obese adults with and without diabetes. Our results revealed decreased expression of HSP60 at both the protein (Figure 1A) and mRNA levels (Figure 1B) in SAT, along with decreased HSP72 expression (Figure S1 in Supplementary Material), in the diabetes group. Using SAT and confocal immunofluorescence (IF) microscopy, the differential HSP60 patterns between the groups were confirmed (Figure 1C). Interestingly, the downregulation of HSP60 in adults with diabetes was concomitant with increased expression of the tissue inflammatory cytokines produced by macrophages upon TLR or Th1 activation, IL-6 and TNF-α, as shown in Figure 2. HSP60 levels in blood serum were lower in the diabetes group, whereas HSP60 autoantibody levels did not differ significantly between the two groups. We further examined the effects of physical exercise on the expression and secretion of HSP60, and our results illustrated that exercise affected HSP60 expression differentially depending on the presence of diabetes. Indeed, HSP60 levels were increased in the diabetes group together with an increase in HSP72 levels, whereas clear decreases in IL-6 and TNF-α levels were noted in this group; an opposite pattern was observed in the non-diabetes group for HSP60 and HSP72, in addition to a decrease in inflammatory marker levels (Figures 1 and 2). Similarly, confocal IF microscopy confirmed the differential effect of physical exercise on HSP60 expression between the two groups. Finally, our physical exercise protocol did not significantly change the levels of circulating HSP60 or its autoantibodies in either study group, as shown in Figure 3. It is worth noting that the expression pattern of HSP60 was not related to gender, as both males and females showed similar trends for HSP60 levels in SAT as well as in blood before and after the exercise intervention (data not shown).

Discussion

Obese patients with diabetes have increased risks of morbidity and mortality compared with their non-diabetic counterparts, some of whom are metabolically healthy (31). HSP60 is a key protein involved in the crosstalk between metabolic stress and inflammation, as it participates in both the HSR and pro-inflammatory/anti-inflammatory processes. The aim of the present study was to assess the differential expression of HSP60 in the adipose tissue of obese adults with and without diabetes and its changes in response to physical exercise.
Our main findings were as follows: (i) HSP60 levels were decreased in the diabetes group, together with increased inflammatory and glycemic marker levels and lower fitness, compared with the findings in the non-diabetes group; and (ii) moderate physical exercise differentially modulated HSP60 and the HSR but attenuated inflammation in both groups, suggesting different beneficial effects between obese patients with and without diabetes. The status of the HSR and the differential expression of its major components between obese people with and without diabetes remain to be investigated, especially in adipose tissue. We previously reported that obesity increased the expression of HSR components in obese people without diabetes compared with their levels in normal-weight controls (12). Recently, we demonstrated that GRP78, another heat shock-induced chaperone participating in the unfolded protein response (UPR), was upregulated in obese people without diabetes, but its upregulation was more pronounced in obese people with diabetes (30). By contrast, other groups previously observed decreased levels of HSPs in obese people (9,11,32,33). However, these studies mainly used muscle tissue from obese people with diabetes and animal models, thus highlighting the possibility of tissue-specific expression patterns. In this study, in agreement with other findings, we found that HSP72 levels in SAT were clearly attenuated, with a concomitant decrease in HSP60 expression, in obese people with diabetes compared with the findings in obese people without diabetes. Another study illustrated that the ratio of HSP60 levels between visceral adipose tissue and SAT was higher in obese people with diabetes than in obese people without diabetes (34). This finding reflected either a decrease in HSP60 levels in SAT or an increase in its levels in visceral adipose tissue. Interestingly, in vivo and in vitro heat treatment differentially affected HSP expression patterns across adipose tissue depots, underscoring the fact that the HSR is also depot-specific (35). Moreover, Marker and colleagues (34) used primary adipocytes, and thus differences in experimental procedures and types of samples, in this case biopsies versus primary cell culture, must be considered when attempting to reach a consensus concerning HSR response signaling. Moreover, our results indicated that blood HSP60 levels were lower in obese people with diabetes than in their counterparts without diabetes (Figure 3), in line with a previous report suggesting that lower blood HSP60 levels were associated with an increased diabetes risk in male patients (36). In our current study, we included both sexes, and our controls were obese people without diabetes. This attenuation of HSP60 expression and secretion into blood in obese people with diabetes might have resulted from chronic glucolipotoxicity rather than changes in insulin secretion. Indeed, our two study groups exhibited similar levels of insulin secretion markers (insulin and C-peptide), whereas people with diabetes exhibited higher glucose and TG levels despite receiving treatment for the disease. In support of this finding, we further compared HSP60 expression levels in SAT from lean people with and without diabetes using samples available from our previous study (12). Our results revealed that HSP60 levels were attenuated in lean subjects with diabetes, along with increased expression of the inflammatory markers IL-6 and TNF-α, as observed in our obese subjects (Figure S2 in Supplementary Material).
The fact that, despite the clear increase in inflammatory marker levels in the adipose tissue of diabetic subjects, we did not observe a significant difference in soluble inflammatory markers in the blood might be related to the treatment taken by the diabetic patients, which is known to impact cellular stress and inflammation (37). The WBC levels, however, were significantly higher in the obese subjects with diabetes (Table 1), which further supports the hypothesis that inflammation is primarily cellular. In obesity, an increased HSR is an adaptive response to chronic stress, concomitant with increased local inflammation in SAT but not in the circulation (12). Similarly, obesity-induced inflammation could be an initial protective and adaptive mechanism in response to fat storage. Accordingly, inflammation is considered a catabolic process facilitating energy expenditure (38). In obese people with diabetes, however, the evidence illustrates that inflammation is worsened (39). This was explained by persistent oxidative stress due to glucolipotoxicity, hormone dysregulation, and inflammation, leading to downregulation of the HSR and the transcription factor HSF1 through a diverted UPR (40). In line with this finding, we previously reported an increased UPR in obese people with diabetes compared with the findings in their counterparts without diabetes (31).

In this context, the association between HSP60 and diabetes is complex. Indeed, several lines of evidence suggest that HSP60 induces both pro-inflammatory and anti-inflammatory cytokines (41). It was reported that when HSP60 acts as a pro-inflammatory mediator, it plays a role in unresolved vascular inflammation, which is strongly associated with diabetes, thus highlighting the regulatory role of HSP60 in modulating the inflammatory processes in diabetes and linking mitochondrial stress to inflammation. Furthermore, reduced levels of HSP60 in diabetic patients might reflect lower mitochondrial content in adipose tissue and thus less mitochondrial biogenesis, which is required for adipogenesis and lipid metabolism. In support of this hypothesis, diabetic (db/db) mice displayed lower HSP60 levels and mitochondrial capacity than obese (ob/ob) mice, which were corrected by rosiglitazone, a PPARγ agonist that alleviates IR and lowers glucose levels in type 2 diabetic rodents (42,43) as well as in human patients (44). Finally, due to its broad function, HSP60 may have a direct role in the development of IR, as reported in mice with heterozygous deletion of HSP60, which displayed IR and reduced mitochondrial capacity, along with increased inflammation (45).

Figure 1 | Decreased expression of HSP60 and modulation of its expression by exercise in the subcutaneous adipose tissue (SAT) of obese subjects with diabetes. (a) Immunohistochemical analysis of HSP60 expression in SAT sections from obese people without (ND) and with diabetes (D) before and after a 3-month physical exercise intervention (n = 10 for each group). (b) mRNA levels were measured by quantitative real-time PCR using SAT from obese subjects without (ND) and with diabetes (D) (n = 10 for each group) and normalized using GAPDH. Data are presented as fold changes in obese people with diabetes compared with the findings in the counterparts without diabetes. (c) Representative confocal immunofluorescence images illustrating HSP60 expression and localization in SAT from obese people without and with diabetes (n = 3 for each group). Densitometry quantification of the staining in SAT slides was performed as mentioned in Section "Materials and Methods." The p-value was determined using the Mann-Whitney test for comparisons between the groups and using a paired t-test for intragroup comparisons before and after exercise. * denotes p < 0.05 between the diabetes and non-diabetes groups, and # denotes p < 0.05 between before and after exercise.

Figure 2 | Increased inflammation and its modulation by exercise in the subcutaneous adipose tissue (SAT) of obese subjects with diabetes. Immunohistochemical analysis of (a) IL-6 and (b) TNF-α expression in SAT sections from obese people without (ND) and with diabetes (D) before and after 3 months of physical exercise (n = 10 for each group). Data are presented as fold changes in the diabetes group compared with the findings in the non-diabetes group. The p-value was determined using the Mann-Whitney test for comparisons between the diabetes and non-diabetes groups and using a paired t-test for intragroup comparisons before and after exercise. * denotes p < 0.05 between the diabetes and non-diabetes groups, and # denotes p < 0.05 between before and after exercise.

Figure 3 | Circulating levels of (a) HSP60 protein and (b) HSP60 auto-Abs were measured by ELISA using plasma samples from obese people without (ND) and with diabetes (D) before and after a 3-month physical exercise intervention (n = 43 for each group). The p-value was determined using the Mann-Whitney test for comparisons between the diabetes and non-diabetes groups and using a paired t-test for intragroup comparisons before and after exercise. * denotes p < 0.05 between the diabetes and non-diabetes groups.

Several observational studies reported a marked reduction in the incidence of diabetes among physically active individuals, suggesting that a healthy lifestyle remains an important non-pharmacologic intervention for preventing diabetes (46). One of the beneficial effects of exercise is the modulation of inflammation and metabolic stress (47). Thus, understanding the effect of exercise on the crosstalk between the HSR and inflammation in obesity and diabetes would clarify its molecular rationale. From this perspective, we investigated the effect of 3 months of exercise on HSP60 expression in our study population, and our results interestingly demonstrated that HSP60 expression was differentially modulated in SAT depending on the presence of diabetes. Our previous study revealed that in obese people, HSP expression was decreased relative to that in normal-weight controls (12). In the current study, we confirmed our previously published results, specifically for HSP60 and HSP72, using obese people without diabetes as a control group. However, in obese subjects with diabetes, exercise increased HSP60 and HSP72 levels. This upregulation was concomitant with decreased inflammation in the SAT of both groups due to the exercise intervention. The hypothesis that this differential response was due to greater compliance with the physical exercise protocol in one group or the inability of the other group to appropriately respond to the exercise training program was eliminated, as, obviously, the effect was opposite and our exercise protocol was similarly prescribed to both groups under the supervision of experts at our FRC.
Thus, the differential effects of physical exercise between the two groups might be explained by differences in metabolic flexibility and adaptation between the groups. Indeed, it was previously reported that the metabolism of free fatty acids (FFAs) during physical exercise differed between obese people with and without diabetes, as the utilization of plasma FFAs was reduced in the latter group (48). Moreover, people with diabetes exhibit an increased flux of FFAs and glucose, which is associated with the excessive production of reactive oxygen species in adipocytes (49). These effects might lead to a decrease in the differentiation capacity of preadipocytes in subjects with diabetes, as previously reported (50), and thus a reduced response to physical exercise. Furthermore, we observed that exercise more effectively improved the expression of molecular markers of inflammation and metabolism in obese people without diabetes, even though no major change in body weight was observed in either group.

The fact that the ANOVA analysis displayed a significant increase in circulating inflammatory markers does not contradict the decreased levels of IL-6 and TNF-α in SAT. An initial increase in those circulating cytokines due to exercise has been suggested to be an adaptive process to exercise stress, highlighting the good side of a subclinical inflammation (38,51). The beneficial effects of exercise are further supported by the decreased levels of WBC, which are known to be increased in diabetic and CVD subjects, as previously reported (52).

Figure 4 | Status of HSP60 in subcutaneous adipose tissue (SAT) of obese subjects with and without diabetes and its modulation by physical exercise. In the adipose tissue of lean subjects, most resident macrophages are of the M2 phenotype, which contributes to insulin sensitivity. Metabolic overload and lack of physical activity increase body weight, hypertrophy of adipocytes, and the number of M1 macrophages, which increase the secretion of pro-inflammatory cytokines such as TNF-α and IL-6, leading to obese, inflamed adipose tissue. This contributes to the chronic subclinical metaflammation causing insulin resistance locally, and probably in the liver, which amplifies the inflammation by secreting other pro-inflammatory mediators, including IFNγ (53). At this stage, HSR, in particular HSP60, levels are increased to cope with this cellular stress. However, in obese subjects with diabetes, this metaflammation process is amplified due to high oxidative stress, which decreases mitochondrial function and HSP60 levels, and finally there is a failure to control such hyper-inflamed adipose tissue. Regular physical exercise intervention decreases stress levels and inflammation in the adipose tissue of both diabetic and non-diabetic obese subjects. While the HSR is consequently decreased in non-diabetic obese subjects, in diabetic subjects the HSR, and thus HSP60, is increased, which might reflect an increase in mitochondrial capacity to reduce excessive metabolic stress. Cell pictures were adapted from Servier Medical Art.

As summarized in Figure 4, progression from a normal healthy status toward obesity and subsequently diabetes, with increased fat accumulation and metabolic dysregulation, appears to be associated with the coordinated upregulation of the HSR and immune response in non-diabetic obese subjects toward the development of an adaptive mechanism to cope with increased cellular stress. This HSR pattern is, however, reversed in diabetes, leading to an impaired response to exercise.
A potential explanation of this differential effect is as follows: (i) in the case of obese subjects without diabetes, the exercise intervention decreased the stress load on the SAT and the overall body, and thus the HSR levels were attenuated, whereas (ii) in the case of obese subjects with diabetes, the HSR, as reflected by HSP60, was enhanced to cope with the persistently dysregulated cellular status despite the apparent decrease in inflammation. Another potential explanation of the observed HSR in the adipose tissue of obese subjects with diabetes would be linked to cell senescence and necrosis (Figure 4). For instance, those processes are known to amplify inflammation by attracting more monocytes and pro-inflammatory mediators into the SAT. It was also reported that the adipose tissue of obese and diabetic patients displays a compromised HSR in adipocytes as well as in hepatocytes, with adipose tissue displaying cellular senescence that spreads to all the metabolic tissues, thereby determining a failure to resolve inflammation (40,54). Furthermore, our previous observation that HSP expression in obese subjects was "unexpectedly" increased in relation to lean volunteers (12) might be just a question of timing and context, as the HSR is enhanced when the tissues are under homeostasis-threatening situations (early stages of T2DM in obese people), but this is progressively reversed with time or lifestyle intervention.

On the other hand, and as expected, HbA1c levels were higher in obese subjects with diabetes (Tables 1-3), but the moderate exercise program was not able to reverse these levels. Indeed, HOMA-IR values, despite being higher in those subjects, indicated just a moderate level of IR (ranges: 0.84-3.27 and 0.81-1.88 in subjects with and without diabetes, respectively). In this regard, exercise reduced HOMA-IR levels in subjects without diabetes but not in those with diabetes (ranges after exercise: 0.38-1.08 and 0.81-2.66, respectively). These observations can be explained by the fact that there is great variability between different geographic areas in the threshold of HOMA-IR levels used to define IR and that HOMA-IR does not adequately predict IR in all individuals, in particular those with confirmed diabetes. Furthermore, it has been reported that HOMA-IR and insulin action do not clearly correlate, particularly in individuals with impaired glucose tolerance (55-57). Furthermore, the hypothesis that our subjects with diabetes were not so metabolically jeopardized could be ruled out, as all our diabetic subjects were clinically confirmed with diabetes, most of them for more than 5 years.

Finally, despite the clear differential HSR patterns between obese people with and without diabetes, our study had some limitations, including the lack of access to visceral adipose tissue or hepatic function markers. Indeed, these tissues are more reflective of metabolic events, and people with diabetes are known to have more visceral and intramuscular fat than those without diabetes (58). Another limitation of our study was the absence of any diet intervention, which might have increased the efficacy of physical exercise. However, our subjects were instructed to maintain a stable diet during the 3-month exercise program, but we did not monitor their compliance. Moreover, we chose to study the direct effects of exercise alone because we believe that moderate exercise is an attractive behavioral approach to improve global health without drastic diet restriction.
Further cellular work is also warranted to elucidate in detail the source and the function of HSP60 in the SAT. In summary, our data illustrated that obese subjects with diabetes had decreased expression and secretion of HSP60. This decrease in expression was reversed by physical exercise in parallel with decreased expression of inflammatory markers in SAT, despite marginal changes in BMI. Our results provide further molecular evidence of the beneficial effects of physical exercise for restoring cellular stress defenses through improving the HSR in diabetes.

ETHICS STATEMENT

Informed written consent was obtained from all subjects before their participation in the study, which was approved by the Review Board of the Dasman Diabetes Institute and conducted in line with the principles of the Declaration of Helsinki.

AUTHOR CONTRIBUTIONS

AK, MD, and AT designed the study. AK and AT wrote the manuscript. AK, JA, MD, and AT supervised data collection and analysis. AK, MD, and AT revised the manuscript. SK, PC, and SW participated in data collection and analysis.

ACKNOWLEDGMENTS

We would like to thank the Fitness and Rehabilitation Center and the Tissue Bank and Clinical Laboratory at the Dasman Diabetes Institute for their assistance throughout this study. The authors also thank Enago (www.enago.com) for the English language review.

Figure S1 | Immunohistochemical analysis of HSP72 expression in SAT sections from obese people without (ND) and with diabetes (D) before and after a 3-month physical exercise intervention (n = 10 for each group). Data are presented as fold changes in the diabetes group compared with that in the non-diabetes group. The p-value was determined using the Mann-Whitney test for comparisons between the diabetes and non-diabetes groups and using a paired t-test for intragroup comparisons before and after exercise. * denotes p < 0.05 between the diabetes and non-diabetes groups, and # denotes p < 0.05 between before and after exercise.

Figure S2 | Expression of HSP60, IL-6, and TNF-α in the subcutaneous adipose tissue (SAT) of lean subjects with diabetes. Representative confocal immunofluorescence images illustrating HSP60 (a), IL-6 (b), and TNF-α (c) expression and localization in SAT from lean people with and without diabetes (n = 3 for each group). Quantification of the staining in SAT slides was performed as mentioned in Section "Materials and Methods."
A Risk Scoring System Utilizing Machine Learning Methods for Hepatotoxicity Prediction One Year After the Initiation of Tyrosine Kinase Inhibitors

Background: There is currently no method to predict tyrosine kinase inhibitor (TKI)-induced hepatotoxicity. The purpose of this study was to propose a risk scoring system for hepatotoxicity induced within one year of TKI administration using machine learning methods.

Methods: This retrospective, multi-center study analyzed individual data of patients administered different types of TKIs (crizotinib, erlotinib, gefitinib, imatinib, and lapatinib) selected in five previous studies. The odds ratio and adjusted odds ratio from univariate and multivariate analyses were calculated using a chi-squared test and a logistic regression model. Machine learning methods, including five-fold cross-validated multivariate logistic regression, elastic net, and random forest, were utilized to predict risk factors for the occurrence of hepatotoxicity. A risk scoring system was developed from the multivariate and machine learning analyses.

Results: Data from 703 patients with grade II or higher hepatotoxicity within one year of TKI administration were evaluated. In a multivariable analysis, male sex and liver metastasis increased the risk of hepatotoxicity by 1.4-fold and 2.1-fold, respectively. The use of anticancer drugs increased the risk of hepatotoxicity by 6.0-fold. Patients administered H2 blockers or PPIs had a 1.5-fold increased risk of hepatotoxicity. The area under the receiver-operating curve (AUROC) values of the machine learning methods ranged between 0.73 and 0.75. Based on the multivariate and machine learning analyses, male sex (1 point), use of an H2 blocker or PPI (1 point), presence of liver metastasis (2 points), and use of anticancer drugs (4 points) were integrated into the risk scoring system. In a training set, patients with 0, 1, 2-3, and 4-7 points showed approximately 9.8%, 16.6%, 29.0%, and 61.5% risk of hepatotoxicity, respectively. The AUROC of the scoring system was 0.755 (95% CI, 0.706-0.804).

Conclusion: Our scoring system may be helpful for patient assessment and clinical decisions when administering the TKIs included in this study.

INTRODUCTION

Tyrosine kinase inhibitors (TKIs) are a prominent cancer treatment. Tyrosine kinase is a major enzyme involved in cell signaling, growth, and division during cell signal transduction (1). TKIs inhibit the tyrosine kinases involved in cancer (2). Since the U.S. Food and Drug Administration (FDA) approved imatinib for the treatment of chronic myeloid leukemia in 2001, over 30 TKIs have been developed (3,4). Hepatotoxicity is a major safety concern when using tyrosine kinase inhibitors (5). The FDA requires five TKIs (lapatinib, pazopanib, ponatinib, regorafenib, and sunitinib) to carry black box warnings for liver damage (4,6). Several studies have investigated TKI-induced hepatotoxicity, mostly in patients experiencing grade I-IV hepatotoxicity (7). However, it is difficult to find clinically significant grade I cases, as these include mild and asymptomatic patients. Since there are no reliable markers for the detection of drug-induced hepatotoxicity, it is important to exclude other possible causes (8,9). The follow-up period should be limited, as longer observation periods make it difficult to detect drug-induced hepatotoxicity because other factors may come into play (7,10,11).
The period from TKI initiation to hepatotoxicity onset varies widely, with the latency to the onset of hepatotoxicity reported to be within two months for crizotinib and several days to several months for lapatinib (12). A proper observation period for hepatotoxicity has not been established, but one year (365 days) may be appropriate. Machine learning establishes computational modeling for automatic learning based on existing data (13). Since the machine learning approach can devise learning algorithms to inform clinical action and decision making, it has been applied in various ways in the field of health science, including risk prediction (14,15). Utilizing various machine learning methods may build models with higher risk predictability that can explain risk factors. Risk scoring systems, such as the GerontoNet ADR risk score for elderly patients and the TIMI risk score for cardiovascular disease, allow a rapid assessment of patients for medical decision-making and patient management (16). They reveal the relationship between patient risk factors and the incidence of an adverse event or a disease (17). Although it could help clinicians predict hepatotoxicity after TKI administration, such a risk scoring system has not yet been investigated. Although TKI-induced hepatotoxicity is a significant clinical concern, there is currently no tool to predict its development. The purpose of this study is to identify risk factors for TKI-induced hepatotoxicity of grade II or higher occurring within one year of TKI initiation using machine learning methods and to propose a risk scoring system for TKI-induced hepatotoxicity.

Dataset

The dataset was constructed from five previous studies that demonstrated factors affecting the hepatotoxicity of selected TKIs (gefitinib, erlotinib, crizotinib, imatinib, and lapatinib). The detailed methodology was reported in the five published studies. In the gefitinib study, patients with non-small cell lung cancer (NSCLC) were orally administered 250 mg of gefitinib per day (18). Patients with NSCLC or pancreatic cancer were administered 150 mg or 100 mg of erlotinib, respectively (19). Patients with NSCLC containing an anaplastic lymphoma kinase (ALK) rearrangement or c-ros oncogene 1 (ROS1) rearrangement were orally administered 250 mg of crizotinib twice per day (20). Patients with Philadelphia chromosome-positive acute lymphoblastic leukemia (ALL), chronic myeloid leukemia (CML), gastrointestinal stromal tumors (GIST), or other malignancies were orally administered imatinib (100-800 mg/day) (21). Patients with metastatic breast cancer were orally administered lapatinib (750-1250 mg/day) (22). In all five studies, aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels were measured before initiation of TKI therapy and then every two to three months thereafter. Eligible patients were those who were followed up for one year.

Assessment of Hepatotoxicity

Serum AST and ALT values were assessed according to the severity of hepatotoxicity. The hepatotoxicity grade was determined using the Common Terminology Criteria for Adverse Events (CTCAE) version 4.0. The CTCAE defines grade I, grade II, grade III, and grade IV toxicity levels of AST and ALT as 1-3 times, 3-5 times, 5-20 times, and more than 20 times the upper limit of normal, respectively. In this study, hepatotoxicity was defined as grade II or higher.
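As a concrete illustration of the grading rule above, the following is a minimal Python sketch of a CTCAE v4.0-style grader. The handling of exact boundary values (whether a value of exactly 3 times the upper limit of normal falls into grade I or grade II) and the default ULN values are assumptions, not taken from the paper.

```python
def ctcae_liver_grade(value, uln):
    """CTCAE v4.0-style grade for an AST or ALT value against its upper limit of normal (ULN)."""
    ratio = value / uln
    if ratio > 20:
        return 4
    if ratio > 5:
        return 3
    if ratio > 3:
        return 2
    if ratio > 1:
        return 1
    return 0

def is_hepatotoxic(ast, alt, ast_uln=40.0, alt_uln=40.0):
    """Study definition: hepatotoxicity is grade II or higher on either enzyme.

    The ULN defaults of 40 IU/L are illustrative, not from the paper.
    """
    return max(ctcae_liver_grade(ast, ast_uln),
               ctcae_liver_grade(alt, alt_uln)) >= 2
```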
Statistical Analysis

The chi-squared or Fisher's exact test was performed to compare categorical variables between patients with and without hepatotoxicity. Multivariate logistic regression analysis was performed to identify independent risk factors for hepatotoxicity. Factors with a P-value < 0.05 in the univariate analysis, together with strong confounding factors (age, BSA, and sex), were included in the multivariate analysis. The odds ratio (OR) and adjusted OR were calculated by univariate and multivariate analyses, respectively. Machine learning models were developed to predict the risk factors for hepatotoxicity. Classification methods such as five-fold cross-validated multivariable logistic regression, elastic net, and random forest (RF) were utilized with the R package caret. For cross-validation, the dataset was randomly split into five equal folds. After partitioning the data sample into five subsets, four subsets were used to construct the machine learning models and the remaining subset was used for model validation. Each cross-validation iteration was repeated 100 times. The area under the receiver-operating curve (AUROC) was used to assess the prediction of hepatotoxicity. A risk scoring system was developed from the multivariate and machine learning analyses. We randomly divided the data at a ratio of 7:3. Among the total of 703 samples included in this study, data from 503 patients were used to construct the risk scoring system, and the other 200 were used to validate it. For the risk score, each coefficient from the logistic regression model was divided by the smallest one and rounded to the nearest integer. P-values less than 0.05 were considered statistically significant. Univariate and multivariate analyses were performed with the Statistical Package for the Social Sciences (SPSS) version 20.0 for Windows (SPSS Inc., Chicago, Illinois, USA). Machine learning models were developed using R software version 3.6.0 (R Foundation for Statistical Computing, Vienna, Austria).
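The authors' pipeline was implemented in R with caret (Supplementary File 1, not reproduced here); the sketch below is a rough Python/scikit-learn analogue on synthetic data, illustrating five-fold cross-validated AUROC for the three classifier families and the coefficient-to-points rule described above. The predictor set, generated effect sizes, and all hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 703
# Binary predictors mirroring the final model: male sex, H2 blocker/PPI use,
# liver metastasis, concomitant anticancer drugs (synthetic values only)
X = rng.integers(0, 2, size=(n, 4)).astype(float)
logit = -2.0 + X @ np.array([0.35, 0.40, 0.75, 1.40])
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "elastic net": LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, C=1.0, max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUROC = {auc.mean():.3f}")

# Risk score: divide each logistic coefficient by the smallest one and round;
# the paper reports points of 1, 1, 2, and 4 for these four factors
lr = LogisticRegression(max_iter=1000).fit(X, y)
coefs = lr.coef_[0]
points = np.rint(coefs / np.abs(coefs).min()).astype(int)
print("points per factor:", points)
score = X @ points  # a patient's total score is the sum of their factor points
```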
RESULTS

Among the 999 patients eligible for this study, patients were excluded if they did not have AST/ALT results before TKI administration (n = 72), if they had elevated AST/ALT before TKI administration (n = 123), or if they already had underlying liver disease (n = 101). We analyzed data from 703 patients. For the excluded patients, the mean age, proportion of patients ≥ 60 years, and proportion of males were 60.6 ± 13.1 years, 56.3%, and 47.8%, respectively. There were no significant differences in mean age or sex distribution between the included and excluded patients. Multivariate analysis demonstrated that male patients and patients with liver metastasis had a 1.4-fold and 2.1-fold increased risk of TKI-induced hepatotoxicity, respectively. The use of anticancer drugs increased the risk of hepatotoxicity by 6.0-fold. Patients using H2 blockers or PPIs had a 1.5-fold increased risk of hepatotoxicity (Table 2). Machine learning methods were utilized to construct a prediction model for TKI-associated hepatotoxicity. The AUROC values (mean, 95% CI) across 100 random iterations using the five-fold cross-validated multivariate logistic regression, elastic net, and RF models were 0.75, 0.75, and 0.73, respectively (Table 3). The ROC curves for the five-fold cross-validated multivariate logistic regression, elastic net, and RF models are shown in Figure 1. The hyperparameters and R code that we used are shown in Table 4 and Supplementary File 1, respectively. For the construction of the risk scoring system, male sex (1 point), use of H2 blockers or PPIs (1 point), presence of liver metastasis (2 points), and use of anticancer drugs (4 points) were integrated into the analysis. In the training set, patients with 0, 1, 2-3, and 4-7 points showed approximately 9.8%, 16.6%, 29.0%, and 61.5% risk of hepatotoxicity, respectively. The respective values in the validation set were 10.2%, 19.3%, 30.8%, and 57.1%. Although there were only two patients who scored 8 points (100% risk), they were both included in the training set. The logistic regression curve obtained by mapping the scores to risk is presented in Figure 2, and the risk probability according to score using logistic regression is shown in Table 5. The AUROC of the scoring system was 0.755 (95% CI, 0.706-0.804).

DISCUSSION

This study demonstrated that the use of H2 blockers or PPIs and of anticancer drugs increased the risk of hepatotoxicity induced by the TKIs selected in this study (crizotinib, erlotinib, gefitinib, imatinib, and lapatinib) by 1.5-fold and 6.0-fold, respectively. Patients with liver metastasis and male patients had a 2.1-fold and 1.4-fold increased risk of TKI-induced hepatotoxicity, respectively. Machine learning analyses indicated good performance (higher than 0.7) of the constructed model. In our study, the presence of liver metastasis was a significant factor, with a two-fold increase in hepatotoxicity by the TKIs included in this study. Because patients with elevated AST and ALT were excluded, all patients had normal AST/ALT values at the start of the study. The relationship between liver metastasis and drug-induced hepatotoxicity has rarely been reported. However, a retrospective observational study of pembrolizumab-induced liver injury showed that patients with pre-existing liver metastasis were at a 3.6-fold higher risk of developing hepatotoxicity compared to patients with no liver metastasis (23). As the main metabolic site for most TKIs is the liver, the presence of liver metastasis may lead to asymptomatic liver damage before TKI use and may amplify the effects of TKI-induced hepatotoxicity. TKIs are often used in combination with other anticancer drugs. Previous studies have reported hepatotoxicity for many anticancer drugs, including methotrexate, cisplatin, gemcitabine, and paclitaxel (24). Thus, anticancer drugs used in combination with TKIs not only affect hepatotoxicity by themselves but may further aggravate the severity of hepatotoxicity caused by TKIs. For the construction of the risk scoring system, we included all factors that remained in the final multivariate analysis model, regardless of statistical significance. In addition to liver metastasis and anticancer drugs, male sex and the use of H2 blockers/PPIs were included in the risk scoring system. Contrary to our expectations, male sex increased the risk of TKI-induced hepatotoxicity in our study. Several studies have demonstrated that female patients generally have a higher risk of adverse drug reactions compared to male patients, and these results were similar for drug-induced hepatotoxicity (25,26). Physiological or biological differences that can affect drug toxicity may contribute to these gender differences (27). Our unexpected result is probably due to the effect of alcohol history. Male patients accounted for the majority (70%) of patients with a history of alcohol use, and 70% of these individuals had hepatotoxicity. Considering that female patients accounted for more than half of our study population, alcohol history may be an influencing factor in the higher incidence of hepatotoxicity in male patients. Concomitant use of PPIs or H2 blockers increased the risk of hepatotoxicity compared to non-users.
ATP-binding cassette superfamily G member 2 (ABCG2) and ATP-binding cassette subfamily B member 1 (ABCB1) are drug efflux transporters situated in the liver (28). Since PPIs are known ABCG2 inhibitors, concomitant use of ABCG2 substrates and PPIs can increase the blood concentration of drugs that are ABCG2 substrates (18,19). Among the five drugs included in our study, gefitinib and erlotinib are substrates of ABCG2. Since half of the total study population comprised patients administered these drugs, this may have affected the analysis of PPIs as a hepatotoxicity factor. Both H2 blockers and TKIs are ABCB1 substrates. Coadministration of two ABCB1 substrates can cause competitive efflux transport, meaning that ABCB1 substrates such as TKIs remain in the liver while the H2 blockers exit. This increases the risk of TKI-induced hepatotoxicity. TKIs with different mechanisms were included in this study as a class. As with the differences between epidermal growth factor receptor (EGFR) TKIs and non-receptor TKIs, differences in mechanisms may affect the occurrence of TKI-induced toxicity (29). However, this was not found in this study, probably because many TKIs have multiple targets; for example, imatinib mainly targets BCR-ABL but also affects a receptor tyrosine kinase, the platelet-derived growth factor receptor (PDGFR). The TKIs included in this study were used at a single daily dose (gefitinib 250 mg, crizotinib 250 mg, and lapatinib 1250 mg) except for imatinib and erlotinib, and an effect of drug dose on hepatotoxicity was not found for either drug. In the case of imatinib, the dose range was 100 to 800 mg daily; dose was not a significant factor for imatinib-induced hepatotoxicity in the multivariate analysis. For erlotinib, the daily dose was either 100 mg or 150 mg, and statistical significance was not found. Since three of the five TKIs in this study were used at a single dose, the effect of drug dose on clinical efficacy and safety should be further investigated. The AUROC values of the machine learning methods ranged between 0.73 and 0.75. The machine learning methods that showed the best AUROC values were the five-fold multivariable logistic regression model and the elastic net model, a penalized linear regression model that combines the penalties of the lasso and ridge methods (30). The constructed risk scoring system showed good performance, with an AUROC value of 0.75. There are several limitations to this study. The main limitation is its retrospective design. It was impossible to obtain the patients' drug concentrations to assess the relationship with the onset of hepatotoxicity, or the patients' tissue to analyze the pattern of hepatotoxicity. In addition, not all TKIs were included in this study; in particular, only one of the five TKIs with a black box warning for hepatotoxicity was analyzed. Therefore, caution is needed when applying these results to other TKIs. Since a relatively large number of patients were excluded according to the exclusion criteria, it is possible that real-world data could differ. However, the characteristics of the included and excluded patients were not significantly different. Despite several shortcomings, our study is significant because it is the first to develop a risk scoring system for the hepatotoxicity caused by the selected TKIs in cancer patients. Furthermore, machine learning models were used to predict the increased risk of hepatotoxicity.
In conclusion, our study demonstrated that the presence of liver metastasis and the concurrent use of PPIs or H2 blockers were related to TKI-induced hepatotoxicity. Male patients and patients administered anticancer drugs experienced an increased risk of hepatotoxicity. Before applying these results to clinical settings, it is necessary to consider other factors that may affect the efficacy and safety of the TKIs, such as daily dose, drug interactions, and genetic factors. Considering our retrospective study design and the fact that only five selected TKIs were included in this study, further prospective studies are needed to validate our findings.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

All procedures performed in the five studies involving human participants were in accordance with the ethical standards of the relevant ethics committees, which approved the studies. The ethics committees were as follows: the Clinical Research Ethics Committee of the Seoul National University Hospitals, the Asan Medical Center Clinical Research Ethics Committee, and the Institutional Review Board of the National Cancer Center, Korea. Written informed consent for participation was not required due to the retrospective nature of this study.

AUTHOR CONTRIBUTIONS

All the authors have made substantial contributions to the conception of the study. All the authors contributed to designing the study. JH, SC, MK, JM, DJ, and JK contributed to material preparation and data collection. JH, JY, and HG performed data analysis and interpretation. JH contributed to drafting of the manuscript. HG contributed to critical revision of the manuscript. All authors approved the final manuscript.
Access Points Placement as a Discrete Optimization Problem

In this work, we consider a method of searching for the direction of wireless network development (the places of new access points or base stations, etc.) optimized with the criteria of coverage of important territories and minimum cost of equipment and additional needed infrastructure. The method does not need the execution of special field testing or determination of the exact geometry of the elements of the RF-propagation medium and their RF-absorbing properties, but takes into account the minimum accessible information obtained from the built-in measuring instruments of the wireless hardware and approximate data on the shape of the medium elements. The problem of searching for the disposition and types of the infrastructure elements of the growing network is formulated as a multicriteria discrete constrained optimization problem solvable with the variant probability method [1]. The problem of modeling the RF-propagation properties of the medium is also formulated and solved as a discrete optimization task.

Properties of the medium acquired from the working wireless network hardware are able to help with the search for the optimal configuration of new elements of the growing network. The same is true for wireless networks which are developed chaotically over large built-up territories (for example, reconstructed territories of former industrial enterprises).

RELATED WORKS

The considered problem of optimal access point placement is often a subject for engineers and network administrators in Internet forums and scientific publications. Some of them offer ready-to-use software able to forecast the signal coverage in accordance with the physical properties of the environment elements. Other scientific works offer various methods of searching for new access point placements with optimal signal coverage [2,3] or (a more complex case) placements which provide for the best localization of a mobile WLAN device position (calculation of its coordinates) within the area of coverage of the network [4,5]. The prognosis of the signal level at each point, taking into consideration the path loss of walls, ceilings, and other elements, is a well-known engineering problem [7]. The recent literature proposes and approves many techniques to analyze the indoor and outdoor signal coverage problem and different deterministic and stochastic optimization methods to find the optimal placement. In [2], the author applies a version of the Nelder-Mead simplex method and a pattern algorithm that considers the minimization of the ratio of covered points in a mesh. In [3], there is a method of design of a large-scale indoor network where the placement of the access points guarantees the absence of coverage gaps. The optimal signal coverage area of an AP is considered as a cylinder, and the AP positions are optimized with the usage of geometrical schemes. Using the empirical Motley-Keenan indoor wave propagation model, which takes into consideration the types of walls and ceilings to calculate the path loss, [6] proposes an AP placement technique for optimal indoor signal coverage. A genetic algorithm is used to find a configuration with a lower value of maximum path loss for each point. In [8], the authors also propose strongly typed genetic programming to solve the access point configuration problem with an optimal disposition of the access points providing the best coverage. In [4], they use genetic algorithms to find the disposition of access points which gives the optimal localization of a mobile device within some area covered by the access points.
In [5], they propose a ready-to-use system for the localization of such a device. In our case, the optimal coverage is one of the objectives. The second important one is the minimum cost of the wireless equipment and the infrastructure needed to make this equipment work. The results of the above methods are hard to interpret when the 'optimal' placement of an access point is situated far from the electric wiring, walls, or other constructions where we are able to install the equipment. The transferral of the resulting position of the access point to the nearest possible place may significantly change the forecast of coverage. Erection work is possible only during a very short period of the year in Siberian weather conditions, but we do not know when the necessity to change our network configuration will arise. Most of the above works are based on a signal propagation model which considers the logarithmic loss of each obstacle (walls, ceilings) taken from information tables, but often we do not know even the exact building material of our environment elements or their exact geometry, and the real absorption may differ very significantly. In [4], they propose to use a least-squares fit on experimental data to determine the real path loss produced by the obstacles. But we may or may not have enough experimental data for the least-squares method, and it may be difficult to obtain additional data.

AVAILABLE INFORMATION

The minimum information we need is the level of the signal and noise at different points. At least, the information on the signal and noise levels at the points where the network is already working can be acquired from the network equipment (for example, the "iwconfig" utility gives us such information). The minimum information on the environment geometry (the disposition of walls, trees, and other objects) is always available from maps, schemes, or immediate visual observation. The RF-absorbing properties of the environment elements are available from information tables [8] and can be defined more exactly if the element is situated between an existing transmitter and receiver or in the Fresnel zone. In complex cases, the absorbing properties can be specified as a solution of the optimization problem below. Also, we have information about the possibility and the approximate cost of allocating a new access point at each concrete place. So, the places where we have electric wiring (at least, the places with existing client wireless hardware) are more suitable for setting new access points than outdoor ones distant from buildings and power supply infrastructure. Moreover, the researcher is able to set some number of possible places for the new access points in accordance with reasons which are evident to him but hard to formalize.
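For instance, the signal and noise levels mentioned above can be scraped from the output of the "iwconfig" utility on the client hosts; the sketch below shows one possible way to do this in Python. The interface name and the exact field layout vary across wireless drivers, so the regular expressions are assumptions that may need adjustment.

```python
import re
import subprocess

def read_levels(interface="wlan0"):
    """Return (signal_dbm, noise_dbm) parsed from iwconfig output, or None for missing fields.

    Typical iwconfig line: "Link Quality=60/70  Signal level=-47 dBm  Noise level=-95 dBm".
    """
    out = subprocess.run(["iwconfig", interface],
                         capture_output=True, text=True).stdout
    signal = re.search(r"Signal level[=:]\s*(-?\d+)\s*dBm", out)
    noise = re.search(r"Noise level[=:]\s*(-?\d+)\s*dBm", out)
    return (int(signal.group(1)) if signal else None,
            int(noise.group(1)) if noise else None)

if __name__ == "__main__":
    print(read_levels())
```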
SUPPOSITIONS, SIMPLIFICATIONS

So, we have information about the approximate geometry of the environment (the disposition of the existing equipment, the disposition of the obstacles to signal propagation, and their presumable RF-absorption properties) and the data on signal and noise levels at some points. To make our calculations simpler, let us divide our area into cells. As a rule, real problems do not allow us to ignore the vertical positional relationship of the infrastructure and environment elements. That is why we should consider our environment as a 3D space. For simplicity of the statement and its representation in the paper, we consider the 2-dimensional case. Let us consider that the equipment is situated in the centers of the cells and that the borders of the environment elements (obstacles such as walls, windows, trees, etc.) coincide with the borders of some set of the cells. So, we draw the scheme of our environment with the existing and new access points, obstacles, and existing and perspective zones of coverage of our network as if we mark the elements in a checked writing-book (Fig. 1). If we have information about the signal level at some place, then we suppose that this level is equal over the whole cell. If our network contains directional antennas, then we suppose that the signal of such an antenna is propagated within an area bounded by an angle (excluding pairs of beam transmission antennas; in this case, we suppose that the area of propagation includes only the 2 cells where the 2 antennas are situated).

PROBLEM STATEMENT

Let there be several areas in our scheme which need to become consistent reception (consistent coverage) areas. Moreover, certain areas are most important, while for other ones it is desirable to supply consistent coverage but not so important. We define a weight coefficient for each of these areas. Besides, we are allowed to demand some minimum bitrate for one or another area. Thus, we have N_c cells where we need to supply coverage, with weight coefficients v_j and minimum bitrates b_minj, 1 ≤ j ≤ N_c. Let there be N_p points (cells) where we are able to place the access points. For each of these cells, we take into consideration N_t types of hardware (different antennas, amplifiers, etc.). For each of these places, we know the approximate cost of the infrastructure equipment (such as power supply wires or a self-contained power supply, antenna holders, etc.) C_i, 1 ≤ i ≤ N_p, and the cost of each kind of hardware C_k is also known, 1 ≤ k ≤ N_t. Let us define a matrix X of Boolean variables x_ik. Setting the value of the variable x_ik to 1 means that we have decided to place access point hardware of the k-th type within the i-th possible cell, and setting its value x_ik = 0 means that we have decided to place another kind of equipment there or not to place any access point within the i-th cell. Our objective is to supply the maximum bitrate at the maximum number of cells in accordance with their weight coefficients, with minimum expense:

maximize  F_1(X) = Σ_{j=1..N_c} v_j · b_j(X),
minimize  F_2(X) = Σ_{i=1..N_p} Σ_{k=1..N_t} x_ik · (C_i + C_k),
subject to  b_j(X) ≥ b_minj for the receiving points with guaranteed bitrates, and Σ_{k=1..N_t} x_ik ≤ 1 for each i.

Here, b_j(X) is the maximum possible bitrate at the j-th receiving point (though all wireless devices are duplex, for simplicity we name the place to be covered by the growing network the 'receiving' point, in contrast to the access point). The access points are set in accordance with the matrix X of Boolean variables, v_j is the weight coefficient of the j-th receiving point, N_p is the total number of possible places of access points in the future system, N_t is the number of types of access point equipment (access points or wireless routers equipped with corresponding types of antennas), and b_minj are the minimum guaranteed bitrates needed for some receiving points.
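The two criteria and the constraints above can be evaluated directly on the cell grid; a minimal Python sketch follows. Here bitrate(i, k, j) stands in for the propagation model developed in the next section, and all dimensions, costs, and names are illustrative assumptions.

```python
import numpy as np

Np, Nt, Nc = 8, 3, 100       # candidate places, hardware types, cells to cover
C_place = np.full(Np, 50.0)  # infrastructure cost C_i of each place
C_hw = np.full(Nt, 30.0)     # hardware cost C_k of each type
v = np.ones(Nc)              # weight coefficients v_j of the covered cells
b_min = np.zeros(Nc)         # minimum guaranteed bitrates b_minj

def criteria(X, bitrate):
    """Evaluate F1 (weighted coverage), F2 (cost) and feasibility of a Boolean Np x Nt matrix X."""
    b = np.zeros(Nc)
    for j in range(Nc):
        rates = [bitrate(i, k, j)
                 for i in range(Np) for k in range(Nt) if X[i, k]]
        b[j] = max(rates, default=0.0)   # the client picks the best link
    f1 = float(v @ b)                                   # to be maximized
    f2 = float((X * (C_place[:, None] + C_hw)).sum())   # to be minimized
    feasible = bool(np.all(b >= b_min)) and bool(np.all(X.sum(axis=1) <= 1))
    return f1, f2, feasible
```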
MATHEMATICAL MODEL

Let us define the RF-absorbing properties of each obstacle shown in our scheme. Let П_l be the absorption (in decibels) of the signal passing through a layer of the l-th obstacle one cell thick. The initial values can be obtained from information tables [8]. For example, for a wall П_l = -7 dB, for a window -2 dB, for the forest -... dB for each meter (the value for a cell depends on its size). Below, we consider a method allowing us to define more exactly the value of П_l in accordance with the experimental data.

Then, the signal level (Fig. 2) of the i-th access point which we can receive at the j-th cell can be calculated as

P_rij(X) = P_ti + G_ti + G_r - L_FS(D_ij) - L_ij(OBST),  L_FS(D_ij) = 20·lg(D_ij) + 20·lg(F) - 27.55.

Here, P_rij(X) is the signal level (dBm) of the i-th access point received at the j-th receiving point, P_ti is the transmit power (dBm) of the i-th access point, L_FS is the free-space path loss (dB, for D_ij in meters and F in MHz), G_ti is the gain of the antenna of the access point (including the loss of all cables), G_r is the gain of the antenna of the receiving point (also including all cables), L_ij(OBST) is the path loss between the i-th access point and the j-th receiving point with the configuration of obstacles (walls, windows, trees, etc.) described with a set OBST, and D_ij is the distance (in meters) between the i-th AP and the j-th receiving point. The bitrate which is supplied for this signal level,

b_ij(X) = B(P_rij(X)),

is the maximum available bitrate of the link between the i-th access point and the j-th receiving point, where B(·) is the stepwise mapping of the received signal level to the link bitrate defined by the hardware specification. Taking into consideration several possible variants of the wireless equipment for each access point, the signal level is

P_rij(X) = Σ_{k=1..N_t} x_ik · P_rijk(X),

where P_rijk is the signal level calculated as above for the k-th type of hardware, and the available bitrate of the j-th receiving point is

b_j(X) = max_{i=1..N_p} b_ij(X).

To calculate the signal level received at some cell from some access point, we should determine which obstacles lie between the access point and the covered cell, taking into consideration the Fresnel zone. To determine the cells which we consider to belong to the direct sight line, we implement the line drawing algorithm (which is implemented in computer graphics software) [11] (Figure 3). The diameter of the first Fresnel zone at a point situated D_i from the access point and D_j from the receiving point is

R_F = 2·√(λ·D_i·D_j / (D_i + D_j)),  λ = c/F.

Here, R_F is the diameter of the Fresnel zone (in cells, R_F / d_cell), d_cell is the size of a cell in the discrete coordinate scheme, D_i is the distance to the i-th AP, D_j is the distance to the j-th receiving point, and F is the frequency (let it be 2.44 GHz). Let us determine (in cells) the thickness of the Fresnel zone at the middle point. If it does not exceed 1.5 cells, then we suppose it to be equal to 1 cell and the wave propagation zone lies within the line shown in Figure 3.

Figure 3. Fresnel zone and its presentation in discrete coordinates

Otherwise, for each cell of the line shown in Figure 3 where the Fresnel zone thickness exceeds 1.5 cells, we 'draw' a line segment normal to the direct sight line (Fig. 4) with a length equal to the diameter of the Fresnel zone. Let us investigate what number of cells of this line segment is occupied by the obstacle:

r_Fmij = N_OBSTm / N_Fm.

Here, r_Fmij is the ratio of the number of cells of the m-th line segment occupied by the obstacle (N_OBSTm) to the total number of cells in the line segment (N_Fm). If the obstacle occupies less than 25% of the line segment cells, then we suppose that the absorption of the obstacle at this segment is proportional to the occupied part of the Fresnel zone; if it exceeds 25%, then we suppose that the path loss caused by the obstacle is equal to the case when the obstacle blocks up the whole cut of the Fresnel zone:

L_mij = r_Fmij · L_OBSTq,  if r_Fmij ≤ 0.25 and OBST_q ∩ SEG_mij ≠ ∅;
L_mij = L_OBSTq,  if r_Fmij > 0.25 and OBST_q ∩ SEG_mij ≠ ∅;
L_mij = 0,  if OBST_q ∩ SEG_mij = ∅.

Here, L_OBSTq is the path loss caused by the q-th obstacle with a size of 1 cell, OBST_q is the configuration of the q-th obstacle (the set of cells which it occupies), and SEG_mij is the set of cells of the m-th line segment normal to the line from the i-th AP to the j-th receiving point.

Figure 4. Fresnel zone and its presentation in discrete coordinates

If the Fresnel zone is occupied by the obstacle by more than 30%, then we suppose that the obstacle absorbs the signal as if it occupied 100% of the Fresnel zone [8]. So, we know the value of the loss caused by the obstacles situated between the access point and the covered cell:

L_ij(OBST) = Σ_m L_mij.

Here, L_ij is the total path loss caused by the obstacles situated between the i-th AP and the j-th receiving point.
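A compact sketch of the geometric part of this model is shown below: the direct-sight cells are produced by Bresenham's line drawing algorithm, the first Fresnel zone diameter follows the reconstructed formula above, and segment_loss applies the 25% occupancy rule. Grid indices and the frequency default are illustrative.

```python
import math

def bresenham(x0, y0, x1, y1):
    """Cells of the discrete line between two cell centers (integer grid coordinates)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def fresnel_diameter(d_i, d_j, freq_hz=2.44e9):
    """Diameter (m) of the first Fresnel zone at a point d_i meters from the AP and d_j from the receiver."""
    lam = 3e8 / freq_hz
    return 2.0 * math.sqrt(lam * d_i * d_j / (d_i + d_j))

def segment_loss(r, loss_full):
    """Per-segment obstacle loss (dB): proportional below 25% occupancy, full above, zero if clear."""
    if r == 0.0:
        return 0.0
    return r * loss_full if r <= 0.25 else loss_full
```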
Thus, the signal level from the i-th access point equipped with the hardware of the k-th type, received within the j-th cell, is

P_rijk(X) = P_tik + G_tik + G_r - L_FS(D_ij) - L_ij(OBST),

where P_tik and G_tik are the transmit power and antenna gain of the k-th type of hardware. The client wireless equipment situated within each of the covered cells is able to establish a connection with any access point, but it selects the access point with the maximum signal level. The signal level of the 'best' access point received within the j-th cell can be calculated as

P_rj(X) = max_{i=1..N_p} Σ_{k=1..N_t} x_ik · P_rijk(X).

Thus, we have a discrete optimization problem with pseudo-Boolean objective functions with constraints, having 2 criteria. To solve problems of that kind, we have large experience with the implementation of the variant probability algorithm (MIVER) proposed by A. Antamoshkin [1]. For the problem of contraction of the 2nd criterion and the analysis of the Pareto-optimal set of solutions, we use the method proposed in [12]. The algorithm and method have been successfully implemented for a large class of analogous 2-criteria optimization problems of telecommunication systems [13], so it is possible to presume their efficiency for our problem. The final step of the algorithm is the comparison of the Pareto-optimal solutions, shown on the scheme with the values of possible bitrates of each cell, for different values of the cost spent to achieve such coverage. The maximum bitrate available at the j-th point is

b_j(X) = B(P_rj(X)).

MODEL PARAMETERS ADJUSTMENT

We can presume the absorption of each obstacle in our scheme. But usually, we do not know the exact structure of the building materials, the exact thickness of the walls, and other parameters, which causes a significant difference (even on a logarithmic scale) between the real values and the ones taken from the information tables. Sometimes, there are obstacles which are not shown in the initial scheme but exert a significant influence upon the signal propagation. Let us solve the problem of accurate definition of the values of absorption for each obstacle and detection of 'invisible' obstacles. Let the real value of the signal level at the j-th cell be P_realj and the calculated value be P_rj(X_real). Here, X_real is a matrix characterizing the configuration of the current placement of access points (x_ik = 1 if equipment of the k-th type is already situated at the i-th place). In this calculation, we take into consideration only the existing access points. Thus, to determine the most adequate values of the coefficients П_l, we should solve the minimization problem

Σ_j (P_realj - P_rj(X_real))² → min over the coefficients П_l.

For simplicity, we assume that these obstacles are situated between the access point and the receiving cell and that their shape is a sphere or circle with its center between the access point and the receiving cell (in the middle, Fig. 5), because we do not know the real shape. We assume that such an invisible obstacle exists when the direct sight line and the Fresnel zone are clear but the real measured signal level differs from the calculated one significantly (10 dB in our example). Here, L_ij_inv is the path loss caused by each cell of this 'invisible' obstacle situated between the i-th AP and the j-th receiving point.
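The adjustment step above is an ordinary nonlinear least-squares fit over the absorption coefficients; one possible sketch with SciPy follows. predicted(P) is assumed to implement the link budget of the previous section for the existing access points over the observed cells, and the bounds are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_absorptions(P0, P_real, predicted):
    """Fit the per-cell absorptions П_l so that predicted levels match measurements.

    P0: initial table values of П_l (dB, non-positive);
    P_real: measured levels (dBm) at the cells with working clients;
    predicted(P): model levels (dBm) at the same cells for absorptions P.
    """
    def residuals(P):
        return predicted(P) - P_real
    fit = least_squares(residuals, P0, bounds=(-30.0, 0.0))
    return fit.x

# Illustrative usage with a one-obstacle toy model: the true absorption is -7 dB
true_loss = lambda P: np.array([-40.0 + P[0], -55.0 + P[0]])
measured = true_loss(np.array([-7.0]))
print(fit_absorptions(np.array([-2.0]), measured, true_loss))  # approx. [-7.0]
```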
RESULTS

We considered the problem of the choice of places for additional access points of the growing network, and of their types, as an optimization problem based on the adapted model of the wave propagation environment. Though our model takes into consideration only the most important properties of the propagated signal and the environment (practically, only the absorption and the noise, without detection of its nature), it allows us to forecast the signal level and the bitrate of each prospective area at least as a rough estimation. The method does not need any field tests, but it takes their results into consideration if they have been performed. Basically, our method coincides with the usual practice of engineering calculations, but it allows us to collect all the information about the environment into a single model and to find one or more Pareto-optimal solutions that take into account both the environment and the cost of placing the new equipment. The results of our method can be simply interpreted by the specialist, because the possible places of the access points are defined at the first step. Practice [14] shows the necessity of checking the real signal level after the theoretical solution of the problem with our method and of re-running the method with the new measured values. For a system similar to the one shown in Fig. 1, the difference between the real and calculated values of the signal reaches 11 dB; the second step gives a maximum error of no more than 6 dB, which is sufficient for engineering calculations.
Tuberculosis, Human Immunodeficiency Virus, and the Association With Transient Hyperglycemia in Periurban South Africa

Abstract

Background. Diabetes mellitus (DM) increases tuberculosis (TB) risk. We assessed the prevalence of hyperglycemia (DM and impaired glucose regulation [IGR]) in persons with TB and the association between hyperglycemia and TB at enrollment and 3 months after TB treatment in the context of human immunodeficiency virus (HIV) infection.

Methods. Adults presenting at a Cape Town TB clinic were enrolled. TB cases were defined by South African guidelines, while non-TB participants were those who presented with respiratory symptoms, negative TB tests, and resolution of symptoms 3 months later without TB treatment. HIV status was ascertained through medical records or HIV testing. All participants were screened for DM using glycated hemoglobin and fasting plasma glucose at TB treatment and after 3 months. The association between TB and DM was assessed.

Results. Overall DM prevalence was 11.9% (95% confidence interval [CI], 9.1%-15.4%) at enrollment and 9.3% (95% CI, 6.4%-13%) at follow-up; IGR prevalence was 46.9% (95% CI, 42.2%-51.8%) and 21.5% (95% CI, 16.9%-26.3%) at enrollment and follow-up. The TB/DM association was significant at enrollment (odds ratio [OR], 2.41 [95% CI, 1.3-4.3]) and follow-up (OR, 3.3 [95% CI, 1.5-7.3]), whereas the TB/IGR association was only positive at enrollment (OR, 2.3 [95% CI, 1.6-3.3]). The TB/DM association was significant at enrollment in both new and preexisting DM, but only persisted at follow-up in preexisting DM in patients with HIV-1 infection.

Conclusions. Our study demonstrated a high prevalence of transient hyperglycemia and a significant TB/DM and TB/IGR association at enrollment in newly diagnosed DM, but persistent hyperglycemia and TB/DM association in patients with HIV-1 infection and preexisting DM, despite TB therapy.

Mortality in South Africa is characterized by concurrent infectious and noncommunicable diseases. The 2015 mortality report confirms this transition, with tuberculosis (TB) and diabetes mellitus (DM) ranked the first and second leading causes of death, while human immunodeficiency virus type 1 (HIV-1) was ranked fifth [1]. Although the role that HIV-1 plays as a significant driver of the TB epidemic is well recognized, the emerging and rapidly growing burden of DM, another TB risk factor, presents another challenge to TB control.

An increasing body of research shows an association between DM and TB [2,3]. This association is becoming more apparent due to the epidemiological transition, as DM is growing rapidly in settings where HIV-1 and TB epidemics persist. Diabetes increases the risk of developing TB and is also associated with adverse treatment outcomes, including death [3-6].

Tuberculosis increases insulin resistance and stress-induced hyperglycemia that may revert to normal during treatment [7,8]. Therefore, testing for DM in persons with recently diagnosed TB may lead to misclassification of transient hyperglycemia as DM, and overestimation of the diabetes/TB association. Testing for DM in TB patients is recommended, with confirmatory tests after 2-3 months of TB treatment initiation [9,10]; however, the optimal time for screening and implications for clinical management are unknown.

The objective of this study was to assess the association between hyperglycemia and TB, at TB diagnosis, and after 3 months of TB treatment.
Study Design and Sampling

This was a 3-month cohort study of consecutive patients with respiratory symptoms presenting to the clinic from July 2013 to August 2015. Patients were eligible if they provided consent, were ≥18 years of age, and had received <48 hours of TB chemotherapy. Those critically ill and in need of emergency clinical care were ineligible due to inability to provide informed consent. Based on the 4.5% and 1.2% prevalence of diabetes in TB patients and non-TB patients, respectively [13], assuming 80% power and a 5% significance level, the required sample size was 798 (n = 399 per group) [18].

Case Definitions

All participants were tested for TB according to South African guidelines [19]. Samples were analyzed in a centralized national health laboratory. TB cases had a positive GeneXpert result. Non-TB participants were those with a negative GeneXpert result who, after examination by a physician, had resolution of respiratory symptoms without TB treatment after 3 months. HIV status and antiretroviral therapy (ART) were ascertained from participants' medical records. For participants with unknown HIV status, voluntary HIV testing was offered, and those found to have HIV infection were provided counseling and ART initiation. All participants were tested for DM using fasting plasma glucose (FPG) and glycated hemoglobin (HbA1c). DM was defined as self-reported DM, FPG ≥7.0 mmol/L, or HbA1c ≥6.5%. Impaired glucose regulation (IGR) was defined as FPG 5.5 to <7.0 mmol/L or HbA1c 5.7% to <6.5% [20].

Measurements

After TB diagnosis, sputum microscopy was repeated at months 3 and 5 in patients with pulmonary smear-positive TB according to South African guidelines [19]. For DM diagnosis, venous blood was drawn after an overnight fast for FPG. At the 3-month follow-up, both DM tests were repeated in all participants. All blood samples were processed on the day of collection at a centralized national health laboratory using standardized operating procedures of the Roche/Hitachi Cobas C311 system analyzer assay [21]. Weight, height, and waist circumference were measured [22]. Body mass index (BMI, kg/m2) was categorized as follows: underweight, <18.5; normal, 18.5-24.9; overweight, 25-29.9; obese, ≥30 [22]. The cutoff point for high waist circumference was ≥94 cm (males) and ≥88 cm (females) [22]. Hypertension was defined as a single measured blood pressure (BP) of systolic BP >140 mm Hg or diastolic BP >90 mm Hg [20], or a preexisting diagnosis.

Questionnaire

Socioeconomic, demographic, and chronic medical and medication history for HIV-1, DM, and hypertension were collected using a researcher-administered questionnaire.
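As a check on the sample-size statement in the study design above, the standard normal-approximation formula for comparing two independent proportions can be evaluated directly. The short Python sketch below is not taken from the paper's protocol; it simply reproduces a figure close to the reported n = 399 per group (the small difference is attributable to rounding or a continuity correction in the original calculation).

```python
from math import ceil, sqrt

from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two proportions
    (two-sided test, normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)          # 1.96 for alpha = 0.05
    z_b = norm.ppf(power)                  # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# 4.5% vs 1.2% DM prevalence, 80% power, 5% significance level
print(n_per_group(0.045, 0.012))   # ~398 per group
```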
Statistical Analysis

Medians and interquartile ranges (IQRs) and proportions summarized continuous and categorical variables. The χ2 and Fisher exact tests assessed associations between categorical variables. The Mann-Whitney test was used to compare medians between 2 groups and the Kruskal-Wallis test for >2 groups. A multivariable logistic regression model for the TB/DM association was built manually, using forward selection, controlling for potential confounding variables. To retain statistical power in the regression analysis, multiple imputation was used to impute HIV-1 serostatus for 50 participants with unknown HIV-1 status. We conducted a sensitivity analysis comparing complete-case and imputed analyses for the multivariate results on the association between TB and IGR/DM (Supplementary Table 1). Statistical significance was set at P < .05. All data were analyzed using Stata version 13.0 software (StataCorp, College Station, Texas).

Ethical Considerations

This study was approved by the University of Cape Town Human Research Ethics Committee (HREC REF: 403/2011).

Study Sample at Enrollment

Nine hundred eighty-six participants were recruited, and 48 participants (4.9%) were excluded as TB could not be confirmed or excluded. A further 88 participants did not complete diabetes screening at enrollment. For the analysis, 850 participants were included: 412 TB cases and 438 non-TB participants (Figure 1).

Study Sample at Follow-up

Of the 850 patients enrolled, 639 returned for the 3-month follow-up, with 211 patients lost to follow-up (108 TB, 103 non-TB) (Figure 1). Data comparing participants lost to follow-up with those followed up are presented in Supplementary Table 2.

Glycemic Levels in TB Patients at Enrollment and Follow-up

Among TB patients with newly diagnosed DM, median HbA1c decreased at follow-up (5.7% vs 5.4%; P < .0001), although FPG in this group slightly increased (4.6 vs 4.7 mmol/L; P < .0064) at follow-up (Table 2). Among those with a preexisting diagnosis of DM, glycemic levels were sustained at high levels, with no significant changes at follow-up (Table 2).

Association Between Hyperglycemia (DM and IGR) and TB at Enrollment and Follow-up

Irrespective of HIV-1 status, the overall TB/DM association was positive and significant at both enrollment (OR, 2.4 [95% CI, 1.3-4.3]) and follow-up (OR, 3.3 [95% CI, 1.5-7.3]), but only when DM was defined by a positive result for either of the diagnostic tests or a previous DM history (Table 3). This significant association was not observed at either time point when the DM diagnostic tests were used in isolation (Table 3). The overall association between TB and IGR (by FPG or HbA1c) was positive at enrollment (OR, 2.3 [95% CI, 1.6-3.3]; Table 3) but not at follow-up (OR, 0.8 [95% CI, .5-1.4]). On further analysis by DM diagnostic test, the overall TB/IGR association at enrollment was significant when using the HbA1c test (OR, 1.6 [95% CI, 1.1-2.3]) but not by FPG (OR, 0.9 [95% CI, .5-1.5]) (Table 3).
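The adjusted odds ratios above come from the multivariable logistic model described under Statistical Analysis. For readers who want to reproduce this type of estimate outside Stata, the following Python sketch fits an analogous model with statsmodels; the data frame and the reduced covariate list are entirely hypothetical stand-ins for the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800

# hypothetical participant-level data (stand-in for the study data set)
df = pd.DataFrame({
    "tb":   rng.binomial(1, 0.5, n),     # outcome: TB case vs non-TB
    "dm":   rng.binomial(1, 0.12, n),    # exposure: diabetes mellitus
    "age":  rng.normal(38, 12, n),
    "male": rng.binomial(1, 0.5, n),
    "hiv":  rng.binomial(1, 0.4, n),     # would be imputed where unknown
})

model = smf.logit("tb ~ dm + age + male + hiv", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)       # adjusted ORs
ci = np.exp(model.conf_int())            # 95% CIs on the OR scale
print(f"adjusted OR for DM: {odds_ratios['dm']:.2f} "
      f"(95% CI {ci.loc['dm', 0]:.2f}-{ci.loc['dm', 1]:.2f})")
```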
DISCUSSION

We previously reported the prevalence of DM in patients with newly diagnosed TB and the cross-sectional TB/DM association at enrollment [12]. In this study, we investigated whether hyperglycemia identified in patients with newly diagnosed TB at enrollment persisted after 3 months of TB treatment. We also assessed the association between TB and hyperglycemia at both time points. This is the first study in South Africa, a setting of high TB and DM burden, to document transient hyperglycemia in TB patients with preexisting DM and newly diagnosed DM. Overall, we report a significant association between TB and DM at both enrollment and follow-up, and a significant association between TB and IGR at enrollment but not at follow-up (all odds ratios were adjusted for sex, age, household size, income, baseline hypertension, previous miner, previous prisoner, marital status, work status, and HIV-1 status; the reference group for the associations was patients with no DM or IGR). However, when these results were further analyzed by DM category (newly diagnosed vs preexisting) and by HIV status within these categories, differing patterns emerged. Given the socioeconomic conditions of our study setting, the patients with newly diagnosed DM in our study may represent the proportion of undiagnosed DM due to factors associated with limited access to DM screening and health facilities overwhelmed by TB and HIV [17].

Newly Diagnosed DM

Hyperglycemia, transient in the majority of participants with newly diagnosed DM and IGR, was predominantly accounted for by the latter at enrollment, and normalized at follow-up. Similar to our findings, other studies have shown frequent hyperglycemia in patients with TB at initiation of TB treatment, followed by normalization during treatment [23]. This may be due to inflammation in response to active TB [25-27], driven by complex interactions between hormones and cytokines [24]. Consistent with the literature, the association between DM/IGR and TB in newly diagnosed DM participants was only significant at enrollment but not at follow-up. With respect to the different diagnostic tests separately, none of the associations were significant.

Preexisting DM

Patients with preexisting DM had poorly controlled disease at both diagnosis of TB and 3 months later, as reflected by both FPG and HbA1c at these timepoints. Unlike in newly diagnosed DM participants, the significant TB/DM association at enrollment (OR, 3.7 [95% CI, 1.5-9.1]) in this group, which persisted at follow-up (OR, 4.0 [95% CI, 1.6-10.1]), reflects poor glycemic control between the study timepoints. Hyperglycemia may have been exacerbated by acute TB. The persistence of raised glycemic levels reflects inadequate management of these patients, possibly due to poor follow-up, and highlights the complexity of clinical care in this subgroup.
A Korean retrospective study showed that TB is usually diagnosed 1 year following the diagnosis of DM [27]. Therefore, the observed relative odds of preexisting DM among patients with TB compared to those without TB in this study (OR, 2.8 [95% CI, 1.5-5.3]) may approximate the relative risk of developing TB among patients with diabetes. Although the temporal relationship between TB and DM remains contentious, irrespective of the causal direction, comorbidity of TB and DM increases the risk of adverse TB treatment outcomes, including treatment failure, mortality, and drug resistance [3,5,13]. The observed poorly managed DM in these individuals with TB also highlights the increased risk of adverse TB treatment outcomes among these patients.

TB, DM, and HIV

When stratified by HIV-1 status, the TB/DM association (when DM is defined by the different diagnostic tests separately) was not significant at either time point. The interaction between HIV, TB, and DM is still not well understood, and different studies report varying results. It is complicated to interpret the interplay between TB, HIV, and DM, as a wide range of factors may influence what is observed. These include the effect of ART, HIV-1 infection as an independent risk factor for both DM and TB, and the choice of diagnostic test. Cotrimoxazole, administered to people living with HIV-1, can lead to hypoglycemia [28]. Conversely, ART, particularly regimens containing protease inhibitors, increases insulin resistance, thus increasing the risk of diabetes [29].

Other studies suggest that HbA1c may underestimate the presence of hyperglycemia in people living with HIV-1 and that this may be due to nucleoside reverse transcriptase inhibitor use [30,31]. In our study, HbA1c detected a higher prevalence of hyperglycemia. In a 2015 review by English et al, it was suggested that an HbA1c test is likely to be affected by iron-deficiency anemia, which may result in spurious increases of the HbA1c level [32]. On the other hand, non-iron-deficiency anemia may lead to a reduction in HbA1c levels [32,33]. A recent study observed lower mean HbA1c levels in severely anemic patients; however, due to a limited sample, no further analysis was performed to explore this relationship [34]. The effect of anemia on the direction of the association is therefore unclear.

We explored the potential effect of unmeasured confounding on our results by performing a sensitivity analysis (Supplementary Table 3) [35]. This showed that in IGR and newly diagnosed DM, the association with TB was rendered nonsignificant at baseline and follow-up. As such, we note that weaker residual confounding may explain away our observed estimates. However, the association between TB and preexisting DM was significant both at enrollment and follow-up, with ORs ranging between 3.7 and 9.3. To explain away these associations, an unmeasured confounder would need to be associated with both TB and preexisting DM with risk ratios ranging between 2.9 and 6.9, but weaker confounding would not.
There were strengths and limitations to this study. A limited number of studies have evaluated hyperglycemia in individuals with TB, particularly in Africa, where there is a high prevalence of comorbidity with other diseases such as HIV-1. TB cases were diagnosed according to South African guidelines [19], with the GeneXpert analyzed in a centralized national health laboratory. For enrollment and follow-up, DM measurements were performed using 2 recommended tests. We relied on medical records to document ART use; as such, we could not reliably ascertain the duration of ART use in our multivariate analysis. Our study follow-up time was limited to 3 months, and we were not able to analyze the effect of hyperglycemia on TB outcomes. Because critically ill patients were excluded from the study, the study population may have been biased toward appearing healthier.

The proportion of participants lost to follow-up in this study, and to the health system, was relatively high (n = 212 [24.9%]) (Supplementary Table 2). Those followed up were older, mostly unemployed, and likely to have known DM and hypertension. Our results at follow-up may thus be slightly biased, as they represent an older population prone to chronic conditions. Reasons for loss to follow-up include migration to other parts of the country and transfer to other health facilities in the province. To reduce bias and loss of statistical power, we imputed HIV status for patients with unknown HIV status. Therefore, loss to follow-up is less likely to have biased the associations observed in the study. The loss to follow-up observed in this study reflects how patients are lost in healthcare systems in this setting. This therefore highlights the importance of improving retention of patients in care for optimal management of all chronic diseases.

CONCLUSIONS

This is the first study in the South African context of high TB/HIV and rapidly increasing DM burdens to describe changes in glucose levels among patients with TB during treatment. This study showed that hyperglycemia was common in TB patients with DM. This confirms the need for confirmatory DM tests in TB patients during and/or after the course of TB treatment. The association between DM and TB persisted at follow-up in participants with preexisting DM, particularly those infected with HIV-1. This highlights an important need for improved co-management of TB, DM, and HIV to limit the risks of adverse outcomes.

Table 2. Glycemic Levels Among Participants With Tuberculosis. (Data are presented as No. (%) unless otherwise indicated; the number of participants is shown at the end of each category; values in bold indicate statistical significance.)

... Institute, which receives funding from Cancer Research UK (grant number FC001010218), Research Councils UK (grant number FC0010218), and the Wellcome Trust (grant number FC0010218). He also receives support from the National Institutes of Health (NIH) (grant number U1 AI115940), NIH (grant number WILK116PTB), and the European and Developing Countries Clinical Trials Partnership (grant number SRIA 2015-1065). M. K. is supported by the South African Centre for Epidemiological Modelling and Analysis, the International Epidemiology Databases to Evaluate AIDS, and the NIH (grant number U01AI069924). Potential conflicts of interest: the authors report no potential conflicts of interest. All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.
Characterisation of (R)-2-(2-Fluorobiphenyl-4-yl)-N-(3-Methylpyridin-2-yl)Propanamide as a Dual Fatty Acid Amide Hydrolase: Cyclooxygenase Inhibitor

Background

Increased endocannabinoid tonus by dual-action fatty acid amide hydrolase (FAAH) and substrate-selective cyclooxygenase (COX-2) inhibitors is a promising approach for pain relief. One such compound with this profile is 2-(2-fluorobiphenyl-4-yl)-N-(3-methylpyridin-2-yl)propanamide (Flu-AM1). These activities are shown by the Flu-AM1 racemate, but it is not known whether its two single enantiomers behave differently, as is the case towards COX-2 for the parent flurbiprofen enantiomers. Further, the effects of the compound upon COX-2-derived lipids in intact cells are not known.

Methodology/Principal Findings

COX inhibition was determined using an oxygraphic method with arachidonic acid and 2-arachidonoylglycerol (2-AG) as substrates. FAAH was assayed in mouse brain homogenates using anandamide (AEA) as substrate. Lipidomic analysis was conducted in unstimulated and lipopolysaccharide + interferon-γ-stimulated RAW 264.7 macrophage cells. Both enantiomers inhibited COX-2 in a substrate-selective and time-dependent manner, with IC50 values in the absence of a preincubation phase of: (R)-Flu-AM1, COX-1 (arachidonic acid) 6 μM; COX-2 (arachidonic acid) 20 μM; COX-2 (2-AG) 1 μM; (S)-Flu-AM1, COX-1 (arachidonic acid) 3 μM; COX-2 (arachidonic acid) 10 μM; COX-2 (2-AG) 0.7 μM. The compounds showed no enantiomeric selectivity in their FAAH inhibitory properties. (R)-Flu-AM1 (10 μM) greatly inhibited the production of prostaglandin D2 and E2 in both unstimulated and lipopolysaccharide + interferon-γ-stimulated RAW 264.7 macrophage cells. Levels of 2-AG were not affected either by (R)-Flu-AM1 or by 10 μM flurbiprofen, either alone or in combination with the FAAH inhibitor URB597 (1 μM).

Conclusions/Significance

Both enantiomers of Flu-AM1 are more potent inhibitors of 2-AG than of arachidonic acid oxygenation by COX-2. Inhibition of COX in lipopolysaccharide + interferon-γ-stimulated RAW 264.7 cells is insufficient to affect 2-AG levels, despite the large induction of COX-2 produced by this treatment.

Introduction

According to the textbooks, non-steroidal anti-inflammatory drugs (NSAIDs) produce their effects upon pain and inflammation as a result of the inhibition of cyclooxygenase (COX)-derived prostaglandin production [1]. There is, however, evidence that NSAIDs also involve the endocannabinoid (eCB) system in their actions. Thus, for example, the effect of spinal administration of the NSAID indomethacin in the formalin test of prolonged pain is blocked by a CB1 receptor antagonist and is not seen in CB1-/- mice [2], and similar effects have been seen with other spinally administered NSAIDs (for review, see [3]). In 2010, Bishay et al. [4] reported that the (R)-enantiomer of flurbiprofen produced CB receptor-mediated effects in a model of neuropathic pain. Given that the COX-inhibitory properties of profens such as flurbiprofen are traditionally considered to reside in the (S)-enantiomer [5], this is an important result and may have bearing upon the analgesic properties of this compound.

The two most well-studied endogenous ligands for the eCB system are anandamide (AEA) and 2-arachidonoylglycerol (2-AG). AEA and 2-AG are effectively metabolised both by hydrolytic and by other pathways [6].
With respect to the former, AEA is hydrolysed by both fatty acid amide hydrolase (FAAH) and N-acylethanolamine acid amidase (NAAA), whilst 2-AG is hydrolysed by monoacylglycerol lipase, α/β-hydrolase domain 6/12 and FAAH to form arachidonic acid [6]. However, AEA and 2-AG are also substrates for COX-2 [7,8]. In 2009, Prusakiewicz et al. [9] reported that ibuprofen and mefenamic acid were more potent inhibitors of the oxygenation of 2-AG by COX-2 than of the oxygenation of arachidonic acid by this enzyme isoform. COX-2 is a homodimeric enzyme, and the authors suggested that the selectivity was due to the fact that 2-AG oxygenation was blocked when the inhibitor had bound to one of the monomers, whereas blockade of arachidonic acid metabolism required binding to both monomers [9]. This type of substrate-selective inhibition may be therapeutically useful: an indomethacin analogue, LM-4131, has also been identified as a substrate-selective inhibitor of COX-2. The compound increases brain 2-AG levels and produces potentially beneficial effects in models of anxiety [10]. This group has also investigated the (R)-enantiomers of the profens and found them to be potent inhibitors of AEA and 2-AG oxygenation without affecting arachidonic acid oxidation [11]. They showed further that in dorsal root ganglion cells cultured under inflammatory conditions, the (R)-enantiomers of ibuprofen, naproxen and flurbiprofen increased AEA and 2-AG levels without affecting arachidonic acid levels [11]. It is not known, however, whether substrate-selective COX-2 inhibitors increase AEA and 2-AG in other cells, such as macrophages, when they are cultured under inflammatory conditions.

An important unwanted property of NSAIDs is their propensity to cause gastric lesions. In a key study, Naidu et al. [12] showed that not only did the FAAH inhibitor URB597 reduce the gastrointestinal damage produced by the NSAID diclofenac, but it also acted synergistically with this compound in a model of visceral pain. This and other findings have led to the suggestion that dual-action FAAH-COX inhibitors may be useful for the treatment of pain [13]. Recently, Sasso et al. [14] reported the synthesis of a compound, ARN2508. The compound inhibited FAAH and both COX isoforms in an irreversible manner and reduced plasma concentrations of the prostaglandin metabolite 6-keto-PGF1α whilst increasing levels of the FAAH substrates PEA and OEA. ARN2508 showed efficacy in a model of inflammatory pain without producing gastric lesions [14]. It is not known, however, whether the compound inhibits COX-2 in a substrate-selective manner.

In 2003, two of us (C.C., V.O.) reported that the amide derivative of ibuprofen with 2-amino-3-methylpyridine was more potent than ibuprofen in a model of visceral pain, but had a considerably lower ulcerogenic potency [15]. This compound was subsequently shown to inhibit FAAH with a potency approximately 2-3 orders of magnitude greater than ibuprofen itself, whilst retaining its COX-inhibitory properties [16,17]. We have also reported that the corresponding amide analogue of flurbiprofen (Flu-AM1) is a potent inhibitor of rat brain FAAH and additionally shows a substrate-selective inhibition of COX-2, whereby lower concentrations were needed to block the oxygenation of 2-AG than of arachidonic acid [18]. As with flurbiprofen, Flu-AM1 has a chiral centre.
In view of the clear enantiomeric difference seen with flurbiprofen towards the inhibition of COX-2 [11], it is important to investigate whether the profile of Flu-AM1 as a dual-action FAAH: substrate-selective COX-2 inhibitor can be further refined by selection of the (R)-enantiomer. Thus, from the above discussion, two distinct questions can be formulated:

a. Do the amide derivatives of flurbiprofen with 2-amino-3-methylpyridine show enantiomeric differences with respect to inhibition of COX-2 and FAAH?

b. Is COX-2 inhibition per se sufficient to affect endocannabinoid levels in macrophage cells cultured under inflammatory conditions?

These questions have been investigated in the present study.

Synthesis of the enantiomers of Flu-AM1

COX-1 and COX-2 inhibition experiments

The assay was performed according to Meade et al. [19] with minor modifications [20]. An oxygen electrode chamber with integral stirring (Oxygraph System, Hansatech Instruments, King's Lynn, U.K.) was calibrated daily to ambient temperature and air pressure. The assay buffer contained 0.1 M Tris-HCl buffer pH 7.4, 1 μM haematin, 2 mM phenol, 5 mM EDTA and 10 μM substrate (arachidonic acid or 2-AG) (final assay volume 2 mL). After addition of (R)- or (S)-Flu-AM1 dissolved in ethanol (final assay concentration 1%), a baseline was established for 5 min before initiation of the reaction by addition of 200 units of ovine COX-1 or human recombinant COX-2. The change in oxygen consumption, as a measure of enzyme activity, was monitored for approximately 5 min.

FAAH assay

Brains from male B6CBAF1/J mice, stored at -80°C, were thawed, weighed and homogenized in cold buffer (20 mM HEPES, 1 mM MgCl2, pH 7.0). Homogenates were centrifuged (35,000 g at 4°C for 20 min) before the pellet was resuspended in cold homogenization buffer. Centrifugation and resuspension were repeated twice. The suspension was incubated at 37°C for 15 min to degrade any endogenous substrate able to interfere with the FAAH assay. After centrifugation (35,000 g), ... (or vehicle) were added, and the cells were incubated for 30 min at 37°C. When indicated, the calcium ionophore ionomycin (5 μM) was added at the same time as the test compounds. The plates were placed on ice and, after removal of the medium, the cells were washed twice with ice-cold PBS (2 x 1 mL). One mL of methanol was added to the wells, the mixture was scraped using a rubber policeman and the extract was pipetted into Falcon tubes. An additional 1 mL of methanol was added to the wells, the wells were scraped and the mixture was pipetted into the same tubes. These were then centrifuged at 2000 x g for 15 min (4°C) to sediment cell debris, and the methanol phase was collected and stored at -80°C until used for analysis of prostaglandins, 2-AG, AEA and related lipids.

[3H]AEA hydrolysis by intact RAW 264.7 cells

For the studies of the [3H]AEA hydrolytic capacity of RAW 264.7 cells, initial experiments indicated that low activities were seen. In consequence, 2.5 x 10⁶ cells/well were seeded and cultured overnight prior to addition of either phosphate-buffered saline or LPS (0.1 μg/mL) + IFN-γ (100 U/mL) and incubation for a further 24 h. [3H]AEA hydrolytic capacity was then measured as described previously [23].
Briefly, the cells were washed twice with 400 μL of prewarmed KRH buffer (120 mM NaCl, 4.7 mM KCl, 2.2 mM CaCl2, 10 mM 4-(2-hydroxyethyl)piperazineethanesulfonic acid (HEPES), 0.12 mM KH2PO4, 0.12 mM MgSO4, pH 7.4) with 1% BSA prior to addition of 340 μL of pre-warmed KRH buffer with 0.1% fatty acid-free BSA and 10 μL of test compound (final concentrations as above) or vehicle (0.05% DMSO + 0.1% ethanol). After preincubation for 10 min at 37°C, [3H]AEA (50 μL, final concentration 0.1 μM, in KRH buffer with 0.1% fatty acid-free BSA) was added and the cells were incubated for a further 60 min at 37°C. Reactions were stopped by addition of 600 μL of activated charcoal buffer (120 μL activated charcoal + 480 μL 0.5 M HCl), and the samples were then worked up as described above for the FAAH assay.

Western blot for COX-2

RAW 264.7 cells (2.5 x 10⁶ cells/well) were seeded into 6-well plates and incubated for 24 h at 37°C prior to treatment with either phosphate-buffered saline or LPS (0.1 μg/mL) + IFN-γ (100 U/mL). Following incubation for 1.5-24 h at 37°C, the medium was aspirated, 400 μL of ice-cold phosphate-buffered saline was added, and the cells were scraped using a rubber policeman. This procedure was repeated, and the samples were centrifuged for 4 min at 1000 r.p.m., 4°C, to sediment the cells. A mixture of 150 mM NaCl, 50 mM Tris, 1% Triton X-100, pH 8.0 + Protease Inhibitor III (1:200 v/v, 500 μL) was added to the cells in Eppendorf tubes, which were then shaken for 30 min at 750 r.p.m., 4°C. Samples were then centrifuged at 14,000 g for 5 min at 4°C, and the supernatants were frozen at -80°C until used. Proteins in the samples (20 μL, containing 3 μg of the sample and 1 x Laemmli buffer) were separated by gel electrophoresis using Mini-Protean TGX Stain-Free gels (BioRad, Hercules, CA, USA, cat # 456-8093; 200 V x 35 min). Human recombinant COX-2 (750 ng) was used as positive control. Proteins were transferred to PVDF mini membranes using a Trans-Blot Turbo Transfer System (BioRad). The membranes were treated with blocking solution (5% dried milk in 1x Tris-buffered saline/Tween-20 [TBST] solution, 1 h at room temperature), after which the primary antibody (COX-2 polyclonal antibody, rabbit anti-mouse, 1:1000, in 5% dried milk/TBST) was added and the membranes were left overnight at 4°C on a rotating table. After five washes with TBST, the membranes were treated with the secondary antibody (HRP-conjugated goat anti-rabbit, 1:2000) for 1 h at room temperature on a rotating table. After five washes, the membranes were treated with Clarity Western ECL substrate, photographed in a Molecular Imager Gel Doc XR system (BioRad) and quantified using ImageLab software 5.1 according to the manufacturer's instructions (http://www.bio-rad.com/webroot/web/pdf/lsr/literature/Bulletin_6434.pdf).

Cell viability experiments

RAW 264.7 cells (2.5 x 10⁶ cells/well) were seeded into 6-well plates and incubated for 24 h at 37°C prior to treatment with either phosphate-buffered saline or LPS (0.1 μg/mL) + IFN-γ (100 U/mL). Following incubation for 1.5-24 h at 37°C, the medium was aspirated, 400 μL of ice-cold phosphate-buffered saline was added, and the cells were scraped using a rubber policeman. Cell viability was assessed using trypan blue and a TC20 automated cell counter (Bio-Rad).

Assay of prostaglandins, 2-AG, AEA and related lipids in extracts from RAW 264.7 cells

Cell extracts were thawed on ice and milliQ water was added to give a final methanol concentration of 5% (v/v).
Samples were spiked with 10 μL of internal standard solutions (800 ng/mL 2-AG-d8 (1:1)) and then applied directly to the solid-phase extraction cartridge. Briefly, compounds were extracted using Waters Oasis HLB cartridges (60 mg of sorbent, 30 μm particle size). Cartridges were washed with 2 mL of ethyl acetate, followed by 2 x 2 mL of MeOH, and then conditioned with 2 x 2 mL of wash solution (95:5 v/v water/methanol with 0.1% acetic acid). After loading the sample containing internal standard and antioxidant solution, the cartridges were washed with 2 x 4 mL of wash solution, dried under high vacuum for about 1 minute, and eluted with 3 mL of acetonitrile, followed by 2 mL of methanol and 1 mL of ethyl acetate, into polypropylene tubes containing 6 μL of a glycerol solution (30% in methanol). Eluates were concentrated with a MiniVac system (Farmingdale, NY, U.S.A.), reconstituted in 100 μL of methanol and vortexed. If necessary, samples were centrifuged to remove any residuals. Solutions were then transferred to LC vials with low-volume inserts, 10 μL of a recovery standard (CUDA, 50 ng/mL) was added to normalise for changes in volume and instrument variability, and UPLC-MS/MS analysis was performed immediately.

Chromatographic separation of the analytes was performed using an Agilent ultra-performance (UP)LC system (Infinity 1290) coupled via an electrospray ionization (ESI) source to an Agilent 6490 Triple Quadrupole system equipped with iFunnel Technology (Agilent Technologies, Santa Clara, CA, USA) [24]. Separate injections for subsequent ionization in positive (for 2-AG, AEA and related N-acylethanolamines) and negative mode (for the prostaglandins and other oxylipins) were undertaken. Analyte separation was performed using a Waters BEH C18 column (2.1 mm x 150 mm, 2.5 μm particle size), and 10 μL injection volumes were employed for each run. The mobile phase consisted of (A) 0.1% acetic acid in MilliQ water and (B) acetonitrile:isopropanol (90:10). The following gradients were employed: 0.0-3.5 min 10-35% B, 3.5-5 ... Precursor ions, [M+H]+ and [M-H]-, product ions, multiple reaction monitoring (MRM) transitions and optimal collision energies were established for each analyte. ESI conditions were: capillary and nozzle voltages at 4000 V and 1500 V, drying gas temperature 230°C with a gas flow of 15 L/min, sheath gas temperature 400°C with a gas flow of 11 L/min, nebulizer gas flow 35 psi, and iFunnel high- and low-pressure RF at 90 and 60 V (negative mode) and 150 and 60 V (positive mode). The dynamic MRM option was used for all compounds with optimized transitions and collision energies. The MassHunter Workstation software was used to integrate all peaks manually. The limits of quantification (LOQ) for compounds in the eCB metabolome were in the range 0.5-1000 fg on column; intraday accuracy and precision ranges (%) were 83-125 and 0.3-17, respectively, and interday accuracy and precision ranges (%) were 80-119 and 1.2-20, respectively, dependent upon the compound and the concentration studied. Corresponding values for the oxylipins were 0.5 fg-4.2 pg on column (LOQ), 85-115% (inter- and intraday accuracy) and <5% (precision) [24]. Internal standard recovery rates were established for RAW 264.7 cell pellet methanolic extracts (5 replicates, test samples) and PBS (100 mM, 5 replicates). Briefly, samples were spiked with 10 μL of internal standard solutions and extracted by SPE as described above.
To calculate recovery rates, internal standard calibration curves obtained at five different concentrations and normalized against CUDA were used, and recovery was expressed as the percentage of the expected value. Matrix-dependent recovery was established by spiking 10 μL of internal standards in a similar manner into human plasma.

Statistical analyses

pI50 and IC50 values were calculated using the log(inhibitor) vs. response with variable slope (four parameters) algorithm in the GraphPad Prism computer program (GraphPad Software Inc., San Diego, CA, USA). The best fit was chosen by Akaike's informative criteria. Ki values were obtained in two ways: 1) using the enzyme kinetics competitive model algorithm available in the GraphPad Prism programme; 2) from the intersection of the lines in a Dixon plot. The regression lines were determined by the robust analysis, rather than the least-squares analysis, available in the GraphPad Prism programme. Oxygen consumption time courses were fitted to the "plateau followed by one phase delay" algorithm available in the GraphPad Prism programme. Kruskal-Wallis tests with Dunn's multiple comparison post-hoc test were undertaken using the same computer programme. The rank-based two-way ANOVAs (two-way robust Wilcoxon analysis [25]) were calculated using the function raov in the Rfit package of the R computer programme [26,27].

Inhibition of COX isoforms in vitro by the enantiomers of Flu-AM1

The inhibition of ovine COX-1 and recombinant human COX-2 by the enantiomers of Flu-AM1 is shown in Fig 1. Both compounds were effective inhibitors of arachidonic acid oxidation by both isoforms, and of 2-AG oxidation by COX-2. The curves in the figure were fitted to the built-in equation "plateau followed by one phase delay" in the GraphPad Prism programme, where the initial y value was set to zero and the x0 value (the length of the initial lag phase) was allowed to be in the range 0-120 s. From the mean values returned by the equation, initial values (at x0 + 1 s) were calculated and these were used to derive approximate IC50 values of: (R)-Flu-AM1, COX-1 (arachidonic acid) 6 μM; COX-2 (arachidonic acid) 20 μM; COX-2 (2-AG) 1 μM; (S)-Flu-AM1, COX-1 (arachidonic acid) 3 μM; COX-2 (arachidonic acid) 10 μM; COX-2 (2-AG) 0.7 μM. Thus, the (S)-enantiomer is roughly twice as potent as the (R)-enantiomer, but both enantiomers show substrate-selective inhibition of COX-2, whereby the oxygenation of 2-AG is inhibited at concentrations an order of magnitude lower than those required for inhibition of the oxygenation of arachidonic acid.

Flurbiprofen inhibits COX in a time-dependent manner [28]. In order to determine whether the two enantiomers of Flu-AM1 also exhibited this property, the effects of sub-maximal concentrations of the compounds were investigated either without preincubation (where the reactions are started by addition of the enzyme) or following a five-minute preincubation with the enzyme prior to starting the reactions by addition of substrate. For both the COX-1-catalysed oxygenation of arachidonic acid and the COX-2-catalysed oxygenation of 2-AG, the inhibition was more prominent following the preincubation period (Fig 2).

Inhibition of mouse brain FAAH by the enantiomers of Flu-AM1

The inhibition of [3H]AEA hydrolysis in mouse brain homogenates by the two enantiomers of Flu-AM1 is shown in Fig 3A. The potencies of the two enantiomers were very similar, with IC50 values of 8.8 and 11 μM for the (R)- and (S)-enantiomers, respectively.
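As an illustration of the concentration-response fitting used for the IC50 estimates above, the sketch below fits a log(inhibitor)-versus-response model with variable slope (the same four-parameter family as the GraphPad algorithm named in the Statistical analyses section) to made-up activity data; the data points and starting values are assumptions, not the study's raw data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, top, bottom, log_ic50, hill):
    """log(inhibitor) vs response, variable slope (four parameters)."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_c - log_ic50) * hill))

# hypothetical remaining activity (% of control) at increasing inhibitor
# concentrations (molar, log10 scale)
log_conc = np.log10(np.array([0.3, 1, 3, 10, 30, 100]) * 1e-6)
activity = np.array([97.0, 88.0, 68.0, 47.0, 22.0, 8.0])

popt, _ = curve_fit(four_pl, log_conc, activity, p0=[100.0, 0.0, -5.0, 1.0])
top, bottom, log_ic50, hill = popt
print(f"IC50 = {10 ** log_ic50 * 1e6:.1f} uM, Hill slope = {hill:.2f}")
```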
In kinetic experiments, (R)-Flu-AM1 behaved as a competitive inhibitor of FAAH, with a Ki value of 20±8 μM (Fig 3B). A Dixon plot of the data gave the same Ki value (19 μM; Fig 3C). Further analysis of the Dixon plot for (R)-Flu-AM1 confirmed the assumption that, under the conditions used, the added AEA concentration is directly proportional to the free AEA concentration available to the enzyme (S1 Fig).

No lipid above the quantification limit was observed in the medium extracts. For medium extracts from the ionomycin-treated cells, small peaks were seen for PEA, SEA, OEA, 13-HODE and 9(S)-HODE, but these were very close to the quantification limit. This would suggest that release of the lipids is limited under the conditions used here.

The combination of 0.1 μg/mL of LPS + 100 U/mL of IFN-γ produced the expected induction of COX-2 and robust increases in the levels of PGD2 and PGE2 following a 24 h incubation (Fig 4). There was a variable effect of the treatment upon cell viability (Fig 4B), so the 24 h time point used is a trade-off between the level of COX-2 induction required for the study and the effects upon cell viability. Increasing the LPS concentration to 1 μg/mL did not further increase the expressed level of COX-2, nor did increasing the incubation time to 48 h (data not shown).

Given that both enantiomers of Flu-AM1 had relatively similar properties towards COX (Figs 1 and 2), we focussed upon the (R)-enantiomer. An almost complete inhibition of both basal and LPS + IFN-γ-stimulated PGD2 and PGE2 production was seen with the highest concentration of (R)-Flu-AM1 tested (10 μM), whereas more variable effects were seen with lower concentrations (Fig 4C and 4D).

The effects of flurbiprofen (10 μM, either per se or together with 1 μM URB597) and (R)-Flu-AM1 (10 μM) upon the levels of prostaglandins, 2-AG and related oxylipins were investigated in a series of experiments. For some of the lipids, not least the prostaglandins, there was a large variation in levels between batches, and so we have normalised the data to the corresponding vehicle controls. In the initial experiments, flurbiprofen and (R)-Flu-AM1 (10 μM) blocked, as expected, both unstimulated and LPS + IFN-γ-induced PGD2 and PGE2 in the cell extracts (Table 1). LPS + IFN-γ treatment also increased 11-HETE and possibly 15-HETE levels in a manner sensitive to inhibition by flurbiprofen and (R)-Flu-AM1 (Table 1). This is consistent with the report that COX-2 in activated macrophages is capable of producing these oxylipins [30]. In contrast to the robust effects of flurbiprofen and (R)-Flu-AM1 upon prostaglandin levels in the cell extracts, the levels of 2-AG were not affected (Table 1). Similar results were seen in a larger series of LPS + IFN-γ-treated cells where the calcium ionophore ionomycin was also added (Fig 5). Linoleic acid-derived oxylipins were also analysed, in order to shed light on possible off-targets for (R)-Flu-AM1. No significant effects of this compound upon the linoleic acid-derived oxylipins were seen (Table 1; S3 Fig).

To determine the ability of LPS + IFN-γ-treated RAW 264.7 cells to hydrolyse exogenous [3H]AEA (100 nM), the cells were incubated with this substrate for 60 min in the absence or presence of the compounds. As expected, 1 μM URB597, either per se or with flurbiprofen, completely blocked [3H]AEA hydrolysis (Fig 6A).
Given that the potencies of flurbiprofen and (R)-Flu-AM1 towards mouse FAAH are modest, clear effects of these compounds at a concentration of 10 μM upon [3H]AEA hydrolysis by the intact mouse RAW 264.7 cells would not be expected, and this was found to be the case (Fig 6A). Surprisingly, however, URB597 only produced modest effects upon the levels of AEA and related N-acylethanolamines in the cells (Table 1, Fig 6B-6H). Thus, at a concentration of URB597 causing complete inhibition of the hydrolysis of exogenous AEA, endogenous levels are only marginally affected. Flurbiprofen and (R)-Flu-AM1 did not affect the levels of these lipids in the ionomycin-treated cells (Fig 6).

Discussion

In the present study, the enantiomers of Flu-AM1 were investigated in order to shed light on two questions that were asked at the end of the introduction. These questions are recapitulated below, to aid the discussion:

Do the amide derivatives of flurbiprofen with 2-amino-3-methylpyridine show enantiomeric differences with respect to inhibition of COX-2 and FAAH?

This question was motivated by previous studies showing that (R)-profens (ibuprofen, flurbiprofen, naproxen) retained the substrate-selective inhibition of COX-2 seen in the enantiomers, but lacked significant effect upon arachidonic acid oxygenation by either COX isoform [11]. Mutagenesis and computer modelling approaches have been very informative in this regard [31,32]. The interaction of profen enantiomers with COX has been studied using both approaches [11,33]: crystallographic studies have suggested that a critical interaction for the (R)-profens is the ability of the carboxyl group to ion pair with the Arg120 residue in COX-2 [11], whilst molecular modelling of the binding of (R)-flurbiprofen suggests that the phenolic group of Tyr355 would interact with the α-methyl group of this inhibitor so as to interfere with the binding of the carboxylate group to Arg120, thereby accounting for the poor potencies of the (R)-enantiomers towards the oxygenation of arachidonic acid [33]. The two enantiomers of Flu-AM1 retain the time-dependency of COX inhibition seen with flurbiprofen [28] but do not show marked differences in their COX-inhibitory properties. This latter finding presumably reflects the fact that they contain an uncharged amide group instead of the negatively charged carboxyl group of flurbiprofen, and suggests an interaction with COX different from ion-pair formation with the Arg120 residue. It would clearly be of interest to investigate, using computational and mutagenesis techniques, the interaction of the compounds with the COX isoforms and, indeed, with FAAH.

The two compounds show a degree of substrate-selectivity towards the inhibition of 2-AG oxygenation by COX-2 vs. arachidonic acid oxygenation by this isoform. For comparative purposes, recalculation (by the method used here) of data for racemic flurbiprofen obtained using the same method [18] gave IC50 values for this NSAID of: COX-1 (arachidonic acid) 4 μM; COX-2 (arachidonic acid) 95 μM; COX-2 (2-AG) 2 μM. Thus, the compounds are approximately equipotent to flurbiprofen as inhibitors of COX-1, but more potent inhibitors of 2-AG oxygenation by COX-2. The enantiomers of Flu-AM1 showed very similar potencies towards inhibition of mouse brain FAAH, a result also seen for the flurbiprofen enantiomers and rat brain FAAH [34]. It was noted that the potencies were lower than previously reported for racemic Flu-AM1 (IC50 value 0.44 μM [18]).
This seems to reflect a species difference, since the racemate was studied in rat brain homogenates, whereas mouse brain homogenates were used here. Indeed, in rat brain homogenates, (R)- and (S)-Flu-AM1 inhibit [3H]AEA hydrolysis with IC50 values of 0.74 and 0.99 μM, respectively (current authors, unpublished data). We have elected to present the mouse data here, since the lipidomic work described below was conducted on RAW 264.7 cells, which are murine in origin.

Is COX-2 inhibition sufficient to affect endocannabinoid levels in macrophage cells cultured under inflammatory conditions?

Duggan et al. [11] reported that in primary cultures of mouse dorsal root ganglia cells stimulated with granulocyte-macrophage colony-stimulating factor followed by LPS, IFN-γ and 15(S)-HETE, resulting in the induction of COX-2, the inhibition of AEA and 2-AG oxygenation by (R)-profens increased the levels of these eCBs in the cell extracts without affecting arachidonic acid levels. This would suggest that in these cells (which lack FAAH [11], in contrast to the situation for the dorsal root ganglia in vivo [35]), COX-2 is an important determinant of eCB metabolism. We found that at concentrations of 10 μM, both flurbiprofen and (R)-Flu-AM1 completely blocked prostaglandin production by both unstimulated and LPS + IFN-γ-treated RAW 264.7 macrophage cells, indicating that under these conditions the compounds block arachidonic acid oxygenation by both COX isoforms. However, this blockade did not affect the observed levels of either 2-AG or AEA. Thus, COX-2 appears to play a minor role in gating the catabolism of these eCBs in the RAW 264.7 cells, in contrast to the stimulated primary cultures of mouse dorsal root ganglia cells [11].

The present study has allowed us to answer an additional question: does FAAH inhibition affect endocannabinoid levels in macrophage cells cultured under inflammatory conditions? We found that URB597 produces significant, but rather small, changes in the levels of AEA and related N-acylethanolamines that are FAAH substrates in the LPS + IFN-γ-treated RAW 264.7 cells, despite the essentially complete inhibition of the hydrolysis of exogenously added [3H]AEA at the concentration of the compound used (1 μM). There are two explanations for this finding. It is possible that in the LPS + IFN-γ-treated RAW 264.7 cells, the turnover of the N-acylethanolamines is so slow that blockade of FAAH produces little effect. This would be the case, for example, if the synthetic pathways were the rate-limiting step in the life cycle of these lipids. There is evidence in the literature that LPS treatment increases the rate of AEA synthesis and concentration in RAW 264.7 cells despite a reduction in the expression at the mRNA level of the N-acylethanolamine synthetic enzyme N-arachidonoyl phosphatidylethanolamine-phospholipase D [36-38]. The primary pathway for AEA synthesis in the cells was instead identified as the production and subsequent dephosphorylation of phospho-AEA [37]. In our hands, we found a modest, albeit significant, increase in AEA levels, but not in those of the other N-acylethanolamines, following LPS + IFN-γ treatment (Table 1). It is possible that under the conditions used here, the phospho-AEA pathway is less active than in the study of Liu et al. [37], and this results in synthesis rather than hydrolysis being rate-limiting, even following ionomycin treatment.
An alternative (or additional) explanation is that the metabolism of endogenous AEA in the cells is less dependent upon FAAH than the hydrolysis of exogenously added AEA, and that other catabolic enzymes are of greater importance. Given that the combination of flurbiprofen + URB597 did not affect the levels of AEA in the RAW 264.7 cells, COX-2 can be ruled out as a candidate. The most likely enzyme is NAAA, given that it is highly expressed in macrophages [39]. NAAA inhibitors are beginning to appear in the literature, and one of these, 1-(2-biphenyl-4-yl)ethyl-carbonyl pyrrolidine, has been reported to restore PEA levels that were decreased in LPS-treated RAW 264.7 cells [40]. Hopefully, more data will emerge on the effects of NAAA inhibitors on AEA as well as PEA levels in RAW 264.7 cells in the future.

Conclusions

There are two main conclusions to the present study. Firstly, we find that, in contrast to the profens, the two enantiomers of Flu-AM1 show little difference with respect to their ability to inhibit the COX isoforms. The compounds also inhibit FAAH with similar potencies. Thus, there is little advantage in using one or other of the enantiomers over using the racemate. Secondly, our data show that in activated RAW 264.7 cells, COX-2 plays a relatively minor role in regulating eCB levels, and that the importance of FAAH for the hydrolysis of endogenous AEA may be less pronounced than for exogenously added AEA.
DEVELOPMENT POLE AND GROWTH POINTS OF INNOVATIVE ECONOMY: KAZAKHSTAN AND FOREIGN EXPERIENCE

The article discusses the poles of innovative development, as well as the monitoring and comprehensive assessment of the innovative activities of the regions. This study indicates that increased innovation activity provides economic growth. "Growth points", which should be understood as companies whose activity produces a stimulating effect, form "development zones" in the region or country. One of the best-known and most widely applied theories in regional economics is the theory of growth poles. According to this concept, a growth pole is characterized by the presence of a propulsive (dynamically developing) industry. Such an industry stimulates the development of the adjacent territory, attracting accompanying and servicing enterprises toward the growth pole, with a higher concentration of activity in the surrounding zone. Growth centres can develop both spontaneously and purposefully, through the optimal placement of the corresponding enterprises and the creation of favourable conditions for their economic activities with the help of various sources of financing (state investments, private capital, subsidies, tax benefits, etc.). Once it has emerged in a region, a growth pole subsequently draws the whole surrounding territory into the development process, largely under the mechanisms of the market economy.

Introduction. In scientific works it has been emphasized that poles can be used as a category that shapes the functions of the state in ensuring the development of regions and the conditions for its achievement. In the early 1950s, the world-famous economist Francois Perroux understood by poles the localized and dynamically developing industries that generate a chain reaction of the emergence and growth of industrial centers. This theory became the basis for regional programs in many countries. The Swedish economist and Nobel Prize winner in economics G. Myrdal confirmed that the basic model of cumulative growth shows how, with the help of specialization and economies of scale, the small advantages of territories can grow and be multiplied over time [1]. The extension of this effect to regions, the so-called "diverging effects", allowed the conclusion that the advantages of certain localities, the growth poles, lead to an acceleration of their development and a large lag of backward regions. Consequently, the growth of the economy is uneven, and the levels of economic development of the territories do not converge [2].

Also interesting is the French experience in the formation of the so-called poles of competitiveness: consortia (clusters) that combine research organizations, educational centers and industrial enterprises. The goal is to form enterprises that are attractive for implanting private initiative in research and development and competitive in terms of the international division of labor, while at the same time providing an effective solution to regional and social problems. It should be noted that in France, research and production complexes that combine high-tech enterprises and research institutes working in various industries are considered poles of competitiveness [3].
It should be noted that in France, research and production complexes that combine high-tech enterprises and research institutes working in various industries are considered poles of competitiveness [3].

Table 1 - Types and functions of growth poles (compiled by the authors based on source [4]):
- Innovative technological growth poles: agglomerations forming a single territorial socio-economic system, with a population in which organizational and managerial "capital" functions are concentrated and a significant innovative and technological reserve in the economy is formed.
- Industrial growth poles: urbanized territories with an industrial type of economy, characterized by high investment activity and the presence of a diversified industry.
- Agro-industrial growth poles: medium and small cities with a developed production base and service sector and an active business environment in the field of agriculture.
- Promising growth poles: the largest settlements, which are the organizing centers of rural areas, with the potential for the formation of agro-industrial growth poles.

The concept of growth poles treats a network of growth poles as an effective tool for raising a region, providing alignment and support for its development. The development of the world community has testified to the increasing influence of innovation on the rate of economic growth. In the global market, the widespread use of innovations indicates that enhanced innovation activity is concentrated in high-tech industries at the regional level. In terms of its scheme of territorial development and deployment of productive forces, Kazakhstan is at the stage of transition to an industrial-innovative form of development. The progress of the ongoing reform in the country shows that interregional differentiation and transport problems continue to adversely affect the growth rate of GRP, the volume of FDI, the export potential of SMEs and private entrepreneurship, and the integrated use of the economic potential of the regions [5]. The identification of "growth poles" is a key model for the development of countries that are distinguished by advanced levels of economic and social development. Foreign experience shows that the innovation policy of a region, especially in the part related to material production, and in particular the production of building materials, should be formed taking into account the general resource capabilities of the region. At the same time, in the development of innovation policy a special place is occupied by the general analysis of the resource potential of the region as a starting point for assessing the material prerequisites for the innovative development of enterprises in the region. An assessment of the region's resource potential is a quantitative characteristic that takes into account the main macroeconomic indicators, the saturation of the territory with production factors (natural resources, energy resources, production and transport infrastructure, labor force, etc.), the innovative infrastructure and its level of development, consumer demand for products inside and outside the region, etc. [6,7].
- the creation of republican and regional innovative companies, with the state and developers capable of acting as general contractors for innovation and investment projects participating in the formation of their authorized capital [11-13].

Foreign experience indicates that a developed entrepreneurial sector is an important factor determining the effectiveness of a national innovation system. However, in Kazakhstan the entrepreneurial sector is characterized by a low level of innovation activity, which in turn leads to a low share of innovative products in Kazakhstan's GDP. After the crisis of 2010, characterized by a fall in the business and investment activity of business entities, there has been a steady upward trend in the share of innovative products in GDP; however, it remains insignificant, at less than 1%. The development of an innovative economy in the Republic of Kazakhstan is largely determined by its financial support and in the initial stage needs substantial state support. The need to develop new technologies and innovations and the increasing demand for innovative products are requirements of the present. Today Kazakhstan has created a number of institutions that coordinate and support innovation. These institutions are involved in financing and managing innovation through various financial instruments. In addition, a number of state concepts for regulating and stimulating innovation have been adopted in the republic. The key elements of the regional mechanism for financing innovative activities are forecasting the innovative development of the region, a multi-channel financing system based on the rational distribution of financial resources from various sources of financing between all stages of the innovation process, and a system for adjusting the financial mechanism taking into account the current situation in the regional innovation sphere [15].
Facility readiness to remove subdermal contraceptive implants in 6 sub-Saharan African countries

OBJECTIVE: This study aimed to estimate the proportion of health facilities that have the capability to insert contraceptive implants but lack the capability to remove them, and to understand facility-level barriers to implant removal across 6 countries in sub-Saharan Africa.

STUDY DESIGN: Using facility data from the Performance Monitoring for Action project in Burkina Faso, the Democratic Republic of Congo, Ethiopia, Kenya, Nigeria, and Uganda from 2020, we examined the extent to which implant-providing facilities (1) lacked the necessary supplies to remove implants, (2) did not have a provider trained to remove implants onsite, (3) could not remove deeply placed implants onsite, and (4) reported any of the above barriers to implant removal. We calculated the proportion of facilities that report each barrier, stratifying by facility type.

RESULTS: Between 31% and 58% of implant-providing facilities reported at least 1 barrier to implant removal in each of the 6 countries. Lack of trained providers was the least common barrier to implant removal (0%-17% of facilities), whereas lack of supplies (17%-44% of facilities) and the inability to remove a deeply placed implant (16%-42%) represented more common obstacles to removal. Blades and forceps were commonly missing supplies across all 6 countries. Barriers to implant removal were less commonly reported at hospitals than at lower-level facilities in all countries except Burkina Faso.

CONCLUSION: This multicountry analysis showed that facility-level barriers to contraceptive implant removal are widespread among facilities that offer implant insertion. By preventing users from being able to discontinue their implants on request, these barriers pose a threat to contraceptive autonomy and reproductive health.

Introduction
Enthusiasm for subdermal contraceptive implants has intensified in recent years among global reproductive health practitioners. As part of a broader movement to promote the use of long-acting reversible contraceptive (LARC) methods, scholars and advocates have been eager to expand access to and use of the contraceptive implant, citing the implant's 5-year duration, high effectiveness, and low levels of user error. 1-4 Recent work on contraceptive implants has described the past few years as a period of "liftoff" and "blossoming" for implants in sub-Saharan Africa in particular, with population-based survey data showing considerable growth in implant use across an array of sociodemographic groups in that region. 3 In Kenya, for example, implant use grew from 1.7% of married women in 2003 to 18.1% in 2016. 3

However, as implant insertion has risen throughout sub-Saharan Africa, concerns about implant removal have risen as well. Many scholars have expressed apprehension that the rapid rise in implant insertions may not be accompanied by a commensurate rise in the skills, supplies, or services to remove implants. 5-7 For example, a 2016 article projected that the need for implant removals would more than double between 2015 and 2018 across the 69 focus countries of the Family Planning 2020 initiative. 6 As the contraceptive implant is a provider-dependent method, a skilled provider and an array of medical supplies are required to safely remove the implant's rods from the arm.
Implant removal is widely considered to be a more difficult procedure than insertion, and the difficulty of removal can be heightened when the implant is deeply placed or when it migrates from its initial place of insertion. 8-10 Family planning advocates have published a series of commentaries to call attention to the underexamined issue of implant removal services, expressing "serious concern about lack of quality removal services." 6,11 In recent years, an emerging body of in-depth qualitative research has shed light on the types of challenges that women can face when accessing removal services in sub-Saharan Africa. Studies from Ethiopia, Kenya, Ghana, and an anonymized setting showed that barriers include providers refusing to remove implants on user request, treating the labeled duration of efficacy as the minimum duration of use, telling users that "early" removal (before the end of the labeled duration of use) costs more, and refusing to remove the implant even when women express a desire to become pregnant. 7,12-15 The only peer-reviewed quantitative study on implant removal from sub-Saharan Africa, conducted in Ghana in 2018 among a sample of approximately 2200 implant users, found that more than one-third of those who sought to discontinue an implant were unable to do so on their first attempt. 7 Although localized studies like this provide important depth and a formative understanding of the challenges to implant removal, there is little systematic evidence of the scope of these issues at national or regional levels in the peer-reviewed literature. Here, we began to fill this gap, using national data from facility-based surveys conducted as part of the Performance Monitoring for Action (PMA) project, to improve the understanding of facility-level barriers to implant removal across sub-Saharan Africa. Our goal was to understand the extent to which there may be asymmetry in the availability of implant provision and removal services. Among health facilities that are equipped to provide clients with contraceptive implants, we examined the proportion that lacked the commensurate ability to remove implants. Moreover, in exploring facility readiness to remove contraceptive implants, we documented the nature of the barriers to removal services by country and facility type.

Methods
Data
We used data collected as part of the PMA project, which uses mobile technology to conduct annual rapid national and regional surveys on a range of reproductive health topics, including family planning services, in sub-Saharan Africa and South Asia. The PMA collects facility-level data via samples of public and private service delivery points (herein referred to as "facilities") that offer primary and/or reproductive health services to a community in each context. Facility types range from small pharmacies or drug shops to tertiary-level hospitals. Trained interviewers conduct the facility survey using a structured questionnaire with the facility or departmental manager to record facility characteristics and the scope of health services provided. For family planning services, an observation-based facility audit is conducted to assess the availability of contraceptive commodities and other key supplies and equipment. We included all countries in sub-Saharan Africa that had data on facility readiness for implant removal in publicly available datasets as of 2020: Burkina Faso, the Democratic Republic of the Congo (DRC), Ethiopia, Kenya, Nigeria, and Uganda.
AJOG Global Reports at a Glance
Why was this study conducted? A growing body of in-depth qualitative literature indicates that women face barriers to the removal of subdermal implants, but there is little evidence of the scope of these barriers at the national or regional levels.
Key findings: Between 31% and 58% of facilities that offer implant insertion reported at least 1 barrier to implant removal.
What does this add to what is known? Facility-level barriers to contraceptive implant removal are widespread in these six African countries.

The PMA selected facilities following a multistage sampling process to generate a sample that is reflective of the health facility environment of women surveyed in the PMA's nationally or regionally representative household sample. In the first stage of sampling, designed to generate a representative population of women of reproductive age, a series of enumeration areas (EAs) were drawn in each country or region; EAs were used as sampling units from which to identify a probability sample of households and female survey participants. The selection of health facilities was generated by identifying the lowest-level public health facility (equivalent to a health post or clinic), the second-lowest public facility (generally a health center), and the third-lowest public facility (generally a primary-level hospital) whose catchment area included the selected EA. All private facilities within the EA that offered generalized or primary health services or specialized obstetrics and gynecology services, or that had the capacity to distribute contraceptives, including pharmacies and drug shops, were listed, and up to 3 facilities were randomly selected. This resulted in a total of 4 to 6 facilities per EA across the service delivery point (SDP) data. Although the women-level data that PMA collects are nationally representative in most settings, the SDP data used in this analysis were not nationally or regionally representative. PMA's sampling strategy has been described in greater detail in the studies of Zimmerman et al. 16,17 Data for this analysis were collected between November 2020 and January 2021. Additional information about this data source can be found at www.pmadata.org.

Variables
We examined 3 primary binary outcomes, reflecting whether each facility (1) lacked any of the necessary supplies to remove implants, (2) did not have a provider trained to remove implants onsite, and (3) could not remove deeply placed implants onsite. A facility was classified as lacking necessary supplies to remove implants if a facility representative reported that one or more of the following supplies was unavailable on the day of the interview: antiseptic, sterile gauze, anesthetic, scalpels, forceps, or clean gloves. In Ethiopia, data on clean gloves were not collected because of a skip pattern error; therefore, clean gloves were not included in the supplies necessary to remove implants in Ethiopia. A facility was classified as lacking a provider trained in implant removal if the representative answered "no" to the question, "On days when you offer family planning services, are there providers trained to remove implants?" A facility was classified as being unable to remove deeply placed implants if the representative responded "no" when asked, "Could implant removal (when deeply inserted) onsite be provided to a woman today?"
Finally, we developed a fourth outcome reflecting whether the facility reported any of the 3 barriers to implant removal (yes or no).

Analytical strategy
We first described facility characteristics, including facility type, management (public or private), and availability of key infrastructure, stratified by country. We calculated for each country the proportion of implant-providing facilities that lack necessary supplies to remove implants, do not have a provider trained in implant removal, or are not able to remove a deeply placed implant onsite, as well as the composite outcome of facilities experiencing any barrier to implant removal. Furthermore, we calculated the proportion of facilities that were missing each of the necessary supplies for implant removal. We stratified by facility type, calculating the proportion of hospitals and of all other facility types (including clinics, health posts, dispensaries, health centers, surgery centers, and pharmacies) in each country that have a barrier to implant removal. Following PMA standard practice for the facility survey, we did not report results for any cells where the disaggregation resulted in <10 facilities.

Ethics approval
Ethical approval for PMA data collection efforts has been granted by the relevant ethics boards in each of the 6 countries presented in this analysis and, in the case of PMA Ethiopia, by the Johns Hopkins University Bloomberg School of Public Health Institutional Review Board (FWA00000287). Facility data used in this analysis were exempted as nonhuman subjects research.

Role of funding source
The funders of the study had no role in the study design, data collection, data analysis, data interpretation, or writing of the report.

Results
Publicly managed facilities constituted most of our samples across contexts because of the sampling approach, although this proportion was considerably lower in the DRC (52%) than in the other countries (89%-94%) (Table 1). However, we noted that the distinction between publicly and privately managed facilities is highly context specific. Furthermore, the proportion of facilities that were hospitals varied substantially by country, from 10% in Burkina Faso to 33% in Nigeria. In all countries, more than half of the facilities had electricity and running water.

The proportion of facilities that reported lacking at least some of the supplies necessary for implant removal ranged from 17% in Burkina Faso to 44% in Ethiopia (Figure 1). The proportion of facilities that reported lacking a provider trained to remove implants ranged from 1% in Kenya to 17% in Ethiopia. Between 16% and 42% of facilities reported that they could not remove a deeply placed implant. The proportion of facilities that reported at least 1 of these barriers to implant removal ranged from 31% in Burkina Faso to 58% in Ethiopia. Lack of at least 1 of the supplies necessary for implant removal was the most common facility-level barrier to removal among implant-providing facilities in Burkina Faso (where 17% lacked supplies and 17% were unable to remove a deeply placed implant), the DRC, and Ethiopia, and was reported by at least 15% of facilities in each of these countries. The inability to remove a deeply placed implant onsite was the leading barrier in Kenya, Nigeria, and Uganda and was reported by at least 16% of facilities in each country.
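Before turning to the supply-specific results below, the tabulation just described can be made concrete. The following is a minimal sketch in Python with pandas under stated assumptions: the column names (lacks_supplies, no_trained_provider, cannot_remove_deep, facility_type) are hypothetical placeholders rather than actual PMA variable names, the rows shown are invented, and no survey weighting is applied.

```python
import pandas as pd

# Hypothetical facility-level data; column names and values are illustrative,
# not the actual PMA variables.
df = pd.DataFrame({
    "country": ["Kenya", "Kenya", "Ethiopia", "Ethiopia"],
    "facility_type": ["hospital", "clinic", "clinic", "health_center"],
    "lacks_supplies": [0, 1, 1, 0],        # missing >=1 removal supply
    "no_trained_provider": [0, 0, 1, 0],   # no provider trained to remove
    "cannot_remove_deep": [1, 0, 1, 0],    # cannot remove a deep implant
})

barriers = ["lacks_supplies", "no_trained_provider", "cannot_remove_deep"]

# Composite outcome: facility reports any of the three barriers.
df["any_barrier"] = df[barriers].any(axis=1).astype(int)

# Proportion of implant-providing facilities reporting each barrier, by country.
by_country = df.groupby("country")[barriers + ["any_barrier"]].mean()

# Stratify by facility type (hospital vs non-hospital), suppressing cells
# with fewer than 10 facilities, following the PMA reporting convention.
df["is_hospital"] = df["facility_type"].eq("hospital")
strat = df.groupby(["country", "is_hospital"])["any_barrier"].agg(["mean", "size"])
strat.loc[strat["size"] < 10, "mean"] = float("nan")  # suppress small cells

print(by_country)
print(strat)
```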
Focusing on the lack of supplies, Figure 2 shows the proportion of implant-providing facilities that lacked each medical supply necessary for the routine removal of contraceptive implants. Forceps were the most commonly missing supply in all countries except Ethiopia, with 8% to 13% of implant-providing facilities reporting they were unavailable. In Ethiopia, blades were most commonly lacking (18% of implant-providing facilities). The proportion of facilities missing antiseptic ranged from 0% in Burkina Faso to 6% in Ethiopia. Missing anesthetic ranged from 2% of facilities in Burkina Faso to 6% of facilities in the DRC. Missing gauze ranged from 1% of facilities in Burkina Faso to 8% of facilities in Ethiopia. The proportion of facilities missing gloves ranged from 0% in Burkina Faso to 5% in the DRC (these data were not available for Ethiopia).

In Table 2, we disaggregated facility-level barriers to implant removal by facility type in our sample, comparing hospitals to nonhospital lower-level facilities (cells for which the number of facilities was less than 10 were left intentionally blank). Although there was variation by barrier and country, we observed a broad association between the level of care provided at the facility and the ability to remove implants, with lower-level clinics in our sample reporting greater barriers to providing removal services than hospitals (Figure 3). For example, in Ethiopia, 53% of nonhospital facilities reported lacking at least 1 supply necessary for implant removal, compared with 21% of hospitals. However, higher-level hospital facilities still reported substantial barriers to implant removal, including 29% of hospitals in the DRC that lacked at least 1 necessary supply and 19% of hospitals in Burkina Faso and Kenya that stated that they could not remove a deeply placed implant onsite. The only setting where lower-level health facilities had fewer barriers to implant removal was Burkina Faso, where 38% of hospitals reported at least 1 barrier, compared with 29% of nonhospitals.

Discussion
This multicountry analysis found that barriers to contraceptive implant removal are widespread among facilities that provide the implant. Among our sample of 2031 health facilities across 6 sub-Saharan African countries, we found that nearly half of the health facilities (45%) had at least 1 barrier to implant removal. More than one-quarter of implant-providing facilities lacked the supplies needed to remove the implant. Blades and forceps were the most commonly missing supplies across all 6 countries. Of note, 5% of facilities lacked a provider trained to remove the implant, and nearly one-third of the facilities could not remove a deeply placed implant onsite. Although the medical imaging equipment needed in select cases to locate migrated implants may be expensive or challenging to operate, the supplies necessary for routine implant removal that we examined (such as gloves and scalpels) are low-tech staples of health service provision. Ensuring that these supplies are universally available in all implant-providing facilities is 1 key way to reduce barriers to removal for women who wish to discontinue use of the implant. Our results suggest that lack of trained providers is a less common impediment to implant removal at health facilities than lack of supplies and the inability to remove implants that are deeply placed.
That lack of trained providers was a relatively infrequent contributor to barriers to implant removal relative to lack of supplies suggests that increased provider training alone may be insufficient to address the totality of facility-level barriers. Across 5 of the 6 countries, barriers to implant removal were more commonly reported at clinics, health centers, and other first-line facilities than at hospitals. These results are in line with the expectation that hospitals, as larger, tertiary-level facilities, would benefit from more highly trained personnel and more advanced equipment (such as imaging devices to locate implants that have migrated) and would suffer from fewer stockouts of basic supplies than their lower-level counterparts. However, this did not seem to be the case in Burkina Faso, where hospitals were more likely than nonhospitals to report that they could not remove a deeply placed implant onsite and that they lacked at least 1 of the necessary supplies. More research is required to understand why this may be the case in that setting. Moreover, across all 6 countries we observed high proportions of barriers to implant removal even at the hospital level. That nearly one-fifth of higher-level hospitals surveyed in Burkina Faso and Kenya reported that they could not remove a deeply placed implant onsite raises concerns about a lack of recourse for women facing difficult removals in those settings.

The limitations of these data and this analysis are important to note. The facility audits captured data from only 1 discrete time point, which may not always be representative of the facility's typical care capabilities at different points throughout the year. Especially when facilities have notification that a data collector is coming, they may make efforts to obtain stock in advance of the visit or otherwise prepare for the evaluation. 18 More generally, SDP data are not nationally or regionally representative. In several cases, reports of facility readiness were verbal attestations by a health worker or facility administrator rather than visual inspections of supplies or provider training certificates. Respondents may have provided a socially desirable response that the facility has the needed supplies when it does not. These methodological limitations may lead to underestimation of the true scope of facility barriers to implant removal in these settings. Furthermore, the presence of additional equipment necessary to remove deeply placed implants (such as medical imaging devices) was not measured. Perhaps more importantly, the exclusive focus of these data on facility readiness leaves us with little understanding of the crucial issues of provider confidence and willingness to provide removal services. Much of the formative research on barriers to implant removal has identified provider reluctance as a key barrier to discontinuation from users' perspectives, and we were unable to measure this here. 13-15 As women may encounter provider hesitancy to remove implants even at a facility that has the right supplies and training, the estimates that we presented are likely an underestimation of the true barriers to removal that implant users face in these contexts. Some previous research, although limited, has identified that high percentages of providers have struggled to remove implants or received removal training that did not include practice on actual patients. 19
Limitations in training and infrequent opportunities to apply skills may contribute to provider hesitancy to remove implants. 20 However, to the best of our knowledge, neither provider confidence nor provider willingness to remove methods on request is currently measured by any publicly available survey. Researchers need data that capture both facility readiness and provider ability or willingness to support contraceptive users in their efforts to discontinue LARC methods when they choose. 21 Accurately measuring provider willingness and ability to remove LARC methods is necessary to develop health system-level indicators of contraceptive autonomy. 22

Conclusion
Respect for persons is a fundamental principle of biomedical ethics, and the ability to choose what contraceptive method to use, for how long, and when to discontinue it is an essential reproductive right. 23,24 Scaling up the availability of contraceptive implants and other provider-dependent methods is important to expand access to a broad contraceptive method mix that meets user preferences. However, a focus on method provision without a commensurate emphasis on removal has resulted in substantial barriers to free contraceptive choice for women who wish to discontinue implant use. The facility-level barriers to implant removal reported in our study can result in "structural contraceptive coercion," in which women have no choice but to keep a method they wish to discontinue, even in the absence of any ill will or intent to coerce on the part of providers, health facilities, or family planning programs. 14 Programs that insert provider-dependent contraceptive methods, such as implants and intrauterine devices, should ensure that method removal is as easily available to clients as insertion and is performed on request. Such a person-centered orientation can safeguard the principles of contraceptive autonomy and reproductive justice as the cornerstone of contraceptive services worldwide.
A thin superficial temporalis artery revealed by total necrosis of an island scalp flap, a case report

Highlights
• Total necrosis of a scalp flap based on the STA is rare.
• Extensive exploration of the vascular supply of the scalp before a pedicled flap is not a common rule.
• The impact of anatomic variations on scalp flap surgery is not well documented.
• Anatomical variation may negatively impact the outcome of single-pedicled scalp flaps.

Introduction
The improved knowledge of vascular anatomy has empowered flap surgery in the scalp region. Five trunks, including the supratrochlear artery, supraorbital artery, retro-auricular artery, superficial temporal artery (STA), and occipital artery, all branches of the internal and external carotid arteries, provide a particularly rich vascular network to supply the scalp. The STA is one of the most crucial vascular pedicles; it is the terminal branch of the external carotid, originating in the parotid gland behind the neck of the mandible. Above the zygomatic process, the STA divides into two main branches, the frontal and the parietal divisions [1-3]. The terminal branches form anastomoses with the auricular and occipital arteries and with the contralateral side. Based on the existence of these anastomoses, it is possible to raise a large scalp flap based on a single STA pedicle [2,4,5]. Anatomical studies have reported some variations in the STA, mainly concerning the point of the division into the frontal and the parietal branches and the diameter of each branch [1,3,6,7]. A truly thin STA (less than 1.5 mm diameter) is rare [3]. Ahmed et al. (2018) found two cases in 28 cadaver dissections but did not specify whether the abnormality was unilateral or bilateral [1]. To the best of our knowledge, there is no report about the clinical implications of these variations. Consequently, extensive preoperative investigations, particularly to assess the diameter of the artery for an easy vascular anastomosis, have been reported only for cases requiring microsurgery [3]. However, the viability of a flap based on the STA is likely to be impacted by some of the anatomical variations, especially the one concerning the diameter at the origin of the artery. We came across a rare complication of total necrosis of an STA-based scalp flap in a patient in whom a post-operative angio-MRI revealed a thin right STA. This publication follows the recent SCARE criteria [8] and reports this rare complication and its clinical implications as a cautionary observation, especially for the community of surgeons involved in scalp flap surgery.

Presentation of case
A 43-year-old woman with a history of hypertension and cerebral stroke presented to our consultation in the department of plastic and reconstructive surgery (Casablanca, Morocco) with a fronto-parietal scalp alopecia that stemmed from a chemical burn two years earlier. In her medical history, she reported hypertension and depression, for which she was taking medication. There was no known family genetic disorder. A previous attempt to treat the alopecia with a follicular hair transplant in another centre had failed. We counselled the patient for tissue expansion surgery and a scalp flap; she gave her written consent. The surgical team included a professor of plastic surgery, a registrar plastic surgeon, and residents in plastic surgery. We placed a 300 mL rectangular expander in the subgaleal plane and filled it progressively with sterile saline over two months, up to 20% of its capacity (Fig. 1).
There was no clinical sign of complication such as skin ulceration or skin necrosis. We planned a scalp flap based on the right STA and its vein. Intraoperatively, the STA was manually palpated; we used no Doppler to localize the artery. A right STA-based goblet island flap was designed encompassing the expanded scalp (Fig. 1). Intraoperatively, we identified the parietal branch of the STA, but no frontal branch. We ligated and divided a communicating branch of the STA with the posterior auricular artery. A safety margin of more than 3 cm of fascia was maintained around the pedicle for venous drainage. We successfully covered the entire area of alopecia, measuring 11 cm in the horizontal plane by 8 cm in the sagittal plane (Fig. 1). On postoperative day 2, the anterior part of the flap became congestive. We applied stab incisions and removed some stitches. Nevertheless, the necrosis progressed to almost all of the flap area within a week (Fig. 1). In the assessment of this failure, the patient underwent an angio-MRI that revealed a thin right-sided STA measuring around 0.11 cm at its origin, and the absence of its frontal branch (Fig. 2). The patient did not consent to contrast opacification; hence, we could not further explore the vascular tree of the STA. We informed the patient about the complication and the findings on the angio-MRI. Then, we performed surgical debridement of all necrosed tissues. We removed almost the entire superficial layer of the flap, leaving a plane of fat and galea (Fig. 3), and covered the raw area with a split-thickness skin graft taken from the thigh. The wound healed completely (Fig. 3). Although the alopecia became more evident, the patient showed a good understanding of the situation and was keen on another surgical stage of scalp expansion in the future. The ethics committee of the Ibn Rochd hospital also gave its approval for the publication of the case. To our knowledge, the present article is the first record of this abnormality with a clinical implication.

Discussion
Among the arteries that supply the scalp, the STA is perceived as one of the most important [3,4,7,10]. Many authors have attested to the STA's versatility [2,10,11]. Among authors who practiced scalp replantation surgery, Nguyen et al. (2012) [11] stressed that one vessel, mainly the STA, was sufficient to keep the entire scalp viable. Therefore, we based our reconstruction plan on this knowledge. In searching for a potential cause of the failure of our procedure, we came across a particular anatomical pattern of a unilateral thin right STA and the absence of its frontal branch. This finding is rare, as confirmed by previous anatomical studies, including the cadaver studies published by Ahmed AG et al. (2018) [1] and Pinar YA et al. (2006) [3], and by Medvedev F et al. (2015) [7], who studied living subjects through angiography. Many authors using STA island flaps trace the vessel with a hand-held Doppler [2,4]. Previous studies have not provided any recommendation for extra radiological vascular studies except when microsurgery is planned, to assess the diameter of the artery for an easy microvascular anastomosis [3,7]. However, to the best of our knowledge, there was no reported case of total necrosis of an island flap based on the STA as we observed in our patient with this anatomical variation. The main complication that surgeons reported was venous congestion [3,10,12]. This complication needs no treatment in some cases; in several cases, suture release or stab incisions were required.
The venous drainage of the flap is thought to depend not only on the superficial temporal vein (STV) but also on the facial network. For this reason, many authors recommend keeping at least 2-2.5 cm of fascia around the pedicle during the elevation of the flap [2,5,10]. In our case, we kept about 3 cm of fascia around the pedicle. Moreover, skin expansion is believed to have a positive impact on vascularity, acting as a delayed flap procedure [13]. We think that the detrimental factor in our case was the STA diameter on the right side, which was too thin to supply this large skin paddle. Unfortunately, this finding was detected post-operatively, resulting in an increased area of alopecia. This case could have ended in legal action by the patient; however, she did not ask for any compensation and showed deep understanding.

Conclusion
This article describes a rare case of total failure of a scalp island flap and the postoperative finding of a unilateral thin STA. We hypothesized that the STA calibre played a critical role in the occurrence of the encountered complication. Based on this hypothesis, we suggest that surgeons be careful when planning an island scalp flap based on a single pedicle; they may ask for radiological exploration in case of doubt. Future research should focus on the clinical impact of STA anatomical variations, especially those concerning the diameter of the artery.

Declaration of Competing Interest
The authors report no declarations of interest.

Sources of funding
No funding was received for this publication.

Ethical approval
The ethics committee of the Ibn Rochd teaching hospital gave their approval for the publication of this case report.

Consent
Patient consent was received.

Provenance and peer review
Not commissioned, externally peer-reviewed.
Magnetic field interaction with guided light for detection of an active gaseous medium within an optical fiber

We report a novel fiber-optic sensing architecture for the detection of paramagnetic gases. By interacting a modulated magnetic field with guided light within a microstructured optical fiber, it is possible to exploit Faraday Rotation Spectroscopy (FRS) within unprecedentedly small sample volumes. This approach, which utilizes magnetic circular birefringence and magnetic circular dichroism effects, is applied to a photonic bandgap fiber to detect molecular oxygen and operates at a wavelength of 762.309 nm. The optical fiber sensor has a 4.2 nL detection volume and a 14.8 cm long sensing region. The observed FRS spectra are compared with a theoretical model that provides a first understanding of guided-mode FRS signals. This FRS guided-wave sensor offers the prospect of new compact sensing schemes.

References and links
1. T. Ritari, J. Tuominen, H. Ludvigsen, J. Petersen, T. Sørensen, T. Hansen, and H. Simonsen, "Gas sensing using air-guiding photonic bandgap fibers," Opt. Express 12(17), 4080-4087 (2004).
2. E. Austin, A. van Brakel, M. N. Petrovich, and D. J. Richardson, "Fibre optical sensor for C2H2 gas using gas-filled photonic bandgap fibre reference cell," Sens. Actuators B Chem. 139(1), 30-34 (2009).
3. F. Benabid, F. Couny, J. C. Knight, T. A. Birks, and P. S. J. Russell, "Compact, stable and efficient all-fibre gas cells using hollow-core photonic crystal fibres," Nature 434(7032), 488-491 (2005).
4. R. S. Brown, I. Kozin, Z. Tong, R. D. Oleschuk, and H.-P. Loock, "Fiber-loop ring-down spectroscopy," J. Chem. Phys. 117(23), 10444-10447 (2002).
5. H. Waechter, K. Bescherer, C. J. Dürr, R. D. Oleschuk, and H. P. Loock, "405 nm absorption detection in nanoliter volumes," Anal. Chem. 81(21), 9048-9054 (2009).
6. L. Sun, S. Jiang, and J. R. Marciante, "All-fiber optical magnetic-field sensor based on Faraday rotation in highly terbium-doped fiber," Opt. Express 18(6), 5407-5412 (2010).
7. H. C. Y. Yu, M. A. van Eijkelenborg, S. G. Leon-Saval, A. Argyros, and G. W. Barton, "Enhanced magneto-optical effect in cobalt nanoparticle-doped optical fiber," Appl. Opt. 47(35), 6497-6501 (2008).
8. M. A. Schmidt, L. Wondraczek, H. W. Lee, N. Granzow, N. Da, and P. St. J. Russell, "Complex Faraday rotation in microstructured magneto-optical fiber waveguides," Adv. Mater. 23(22-23), 2681-2688 (2011).
9. G. Litfin, C. R. Pollock, R. F. Curl, and F. K. Tittel, "Sensitivity enhancement of laser absorption spectroscopy by magnetic rotation effect," J. Chem. Phys. 72(12), 6602-6605 (1980).
10. R. Lewicki, J. H. Doty III, R. F. Curl, F. K. Tittel, and G. Wysocki, "Ultrasensitive detection of nitric oxide at 5.33 μm by using external cavity quantum cascade laser-based Faraday rotation spectroscopy," Proc. Natl. Acad. Sci. U.S.A. 106(31), 12587-12592 (2009).
11. S. So, E. Jeng, and G. Wysocki, "VCSEL based Faraday rotation spectroscopy with a modulated and static magnetic field for trace molecular oxygen detection," Appl. Phys. B 102(2), 279-291 (2011).
12. W. Zhao, G. Wysocki, W. Chen, E. Fertein, D. Le Coq, D. Petitprez, and W. Zhang, "Sensitive and selective detection of OH radicals using Faraday rotation spectroscopy at 2.8 μm," Opt. Express 19(3), 2493-2501 (2011).
13. D. J. Robichaud, J. T. Hodges, P. Maslowski, L. Y. Yeung, M.
Okumura, C. E. Miller, and L. R. Brown, "High-accuracy transition frequencies for the O2 A-band," J. Mol. Spectrosc. 251(1-2), 27-37 (2008).
14. D. Long, D. Havey, M. Okumura, C. Miller, and J. Hodges, "O2 A-band line parameters to support atmospheric remote sensing," J. Quant. Spectrosc. Radiat. Transf. 111(14), 2021-2036 (2010).
15. H. Adams, D. Reinert, P. Kalkert, and W. Urban, "A differential detection scheme for Faraday rotation spectroscopy with a color center laser," Appl. Phys. B 34(4), 179-185 (1984).
16. M. J. Steel, T. P. White, C. Martijn de Sterke, R. C. McPhedran, and L. C. Botten, "Symmetry and degeneracy in microstructured optical fibers," Opt. Lett. 26(8), 488-490 (2001).
17. W. J. Tabor and F. S. Chen, "Electromagnetic propagation through materials possessing both Faraday rotation and birefringence: experiments with ytterbium orthoferrite," J. Appl. Phys. 40(7), 2760-2765 (1969).
18. T. Martynkien, G. Statkiewicz-Barabach, W. Urbanczyk, and J. Wojcik, "Highly birefringent microstructured fibers for sensing applications," Proc. SPIE 7141, 714108 (2008).
19. D. J. Robichaud, J. T. Hodges, L. R. Brown, D. Lisak, P. Maslowski, L. Y. Yeung, M. Okumura, and C. E. Miller, "Experimental intensity and lineshape parameters of the oxygen A-band using frequency-stabilized cavity ring-down spectroscopy," J. Mol. Spectrosc. 248(1), 1-13 (2008).

Introduction
Gas sensors based on microstructured optical fibers (MOFs) offer a number of advantages compared to free-space gas sensing architectures. They can provide long optical path lengths, since the guided light can interact with the gas sample along the fiber length. For example, hollow-core photonic bandgap fibers (HC-PCFs) serve as an efficient platform for interacting light with absorbing gas molecules [1,2], and have also been used to enhance nonlinear-optical effects through the high power densities of the guided light [3]. MOFs enable the use of extremely small (nL) sample volumes, compared to the tens of mL necessary for single- or double-pass absorption cells, and the hundreds of mL for conventional multi-pass cells. Their small dimensions, mechanical flexibility, and ability to be integrated with standard optical fiber components offer strong sensor miniaturization potential for applications in confined spaces and harsh environments. Free-space spectroscopic techniques such as cavity ringdown spectroscopy (CRDS) have already started to successfully transition into optical fiber-based sensing architectures, as demonstrated by fiber-loop ringdown spectroscopy, which likewise utilizes small sample volumes [4,5].
The use of magneto-optical effects in optical fibers is an area of increasing interest. Recent work includes the development of magnetic field sensors [6], enhancement of the Faraday effect in doped fibers [7], and microstructured magneto-optical fibers for the development of fiber-based optical isolators and circulators [8]. Here we report, to the best of our knowledge, the first demonstration of Faraday Rotation Spectroscopy (FRS) within microstructured optical fibers. To date, FRS has only been implemented in free-space, bulk-optic sensor systems. We demonstrate a fiber-optic FRS architecture by employing a HC-PCF as a miniature gas cell filled with a magneto-optically active gaseous medium, specifically molecular oxygen (O2). Oxygen was chosen as it plays an important role in industrial, environmental and atmospheric sensing applications. In this work we investigate the interaction of an externally applied, modulated magnetic field with the guided light, and explore how the fiber waveguide properties influence, and can be exploited to manipulate, the FRS signal behavior. The observed FRS spectra are compared with a developed theoretical model to provide a first understanding of these guided-mode FRS signals.

Faraday rotation spectroscopy
Faraday Rotation Spectroscopy (FRS) is a highly sensitive and selective, background-free spectroscopic technique that exploits the magneto-optic Faraday rotation effect to detect paramagnetic trace gases such as nitric oxide (NO) [9,10] and molecular oxygen (O2) [11], or free radical species such as the hydroxyl radical (OH) [12]. In FRS, background absorption interference from common diamagnetic atmospheric species such as water (H2O), carbon dioxide (CO2) and other non-paramagnetic molecules is eliminated. As described in [9-12], FRS probes the change in the state of polarization of a linearly polarized laser beam as it propagates through a gas cell containing paramagnetic molecules exposed to an external magnetic field. When the laser frequency is in resonance with a Zeeman-split absorption line of the paramagnetic molecular species, magnetic circular birefringence (MC-birefringence) and magnetic circular dichroism (MC-dichroism) are observed. MC-birefringence results from a difference in refractive index for right-handed (RHCP) and left-handed (LHCP) circularly polarized light. The originally linearly polarized laser light, which can be considered as a superposition of RHCP and LHCP, experiences a rotation of its polarization axis when propagating through the gaseous medium inside the longitudinal magnetic field, due to the phase difference experienced between the two circularly polarized components. The MC-birefringence signal is thus the difference between the dispersion profiles, as depicted in Fig. 1(a), and the refractive index difference is proportional to the concentration of the absorbing species. MC-dichroism, as shown in Fig. 1(b), occurs when there is a difference in absorption between the two circularly polarized components in the gaseous medium, which changes linearly polarized light into elliptically polarized light. Every FRS signal comprises both MC-birefringence and MC-dichroism components; their relative strength depends on the individual system and measurement parameters.
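For reference, the two contributions can be written in the standard free-space form; the following is a sketch consistent with the description above (sign conventions vary between treatments and are not fixed by this paper). Over an interaction length $L$ at wavelength $\lambda$:

$$
\theta(\omega) = \frac{\pi L}{\lambda}\,\bigl[n_{-}(\omega) - n_{+}(\omega)\bigr],
\qquad
\eta(\omega) = \frac{\pi L}{\lambda}\,\bigl[\kappa_{-}(\omega) - \kappa_{+}(\omega)\bigr],
$$

where $n_{\pm}$ and $\kappa_{\pm}$ are the real and imaginary parts of the complex refractive index for RHCP and LHCP light: $\theta$ is the polarization rotation produced by MC-birefringence, and $\eta$ is the ellipticity produced by MC-dichroism.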
Experimental setup
In this work we have chosen the PP1(1) transition of 16O2 (A-band) at 762.309 nm (13118.04 cm⁻¹) [13,14] to record FRS signals. This low total angular momentum quantum number transition (J = 1) of molecular oxygen has the advantage of offering relatively strong Faraday rotation signals for low-intensity magnetic fields [11]. Our MOF-based FRS sensor is depicted in Fig. 2(a). A wavelength-tunable, external cavity diode laser (ECDL) from New Focus (Model: TLB-6712-P-D) with ~20 mW continuous-wave output power, a specified <200 kHz linewidth and a stable axis of polarization was used to probe the magneto-optical effects inside the HC-PCF. The linearly polarized laser beam was directed to the MOF gas cell, passing through a Glan-Thompson polarizer (extinction ratio: 10⁻⁶) to ensure the purity of polarization, followed by a half-wave plate to set its polarization axis. To couple the laser beam into the pure-silica HC-PC fiber (NKT Photonics AIR-6-800, core diameter ~6 μm, attenuation <0.4 dB/m at 760-800 nm, NA ~0.17 at 780 nm), an aspheric f = 18.4 mm lens was used. The 24.6 cm long piece of HC-PCF was placed in a glass capillary tube to mechanically isolate the fiber, with both ends vacuum sealed using end caps to form an evacuable MOF gas cell. The detection volume inside the HC-PCF was calculated at 4.2 nL. Note that each of the gas cell end caps had a dead volume of 1.29 mL. The capillary tube assembly was placed inside a 14.8 cm long air-core solenoid for axial magnetic field generation. A photograph of the complete MOF gas cell with the air-core solenoid, including fans for air cooling, is shown in Fig. 2(b). For a better understanding of the gas cell assembly, Fig. 2(c) depicts a concept drawing of the cell end caps with the mounted glass capillary tube but without the air-core solenoid.

For AC operation of the air-core solenoid, a series resistance-inductance-capacitance (RLC) circuit driven by an audio amplifier at a resonant frequency of fm = 1.32 kHz was constructed. When sine-wave modulated at fm, an AC magnetic field of 0.148 Tesla rms with 8.9% homogeneity over its length was measured. After the laser light was coupled out of the fiber, a light transmission of ~76% through the HC-PCF's air core was measured. The output beam then passed through a Rochon polarizer, which splits the laser beam into two perpendicularly polarized beams of equal intensity. Each beam was focused onto a balanced silicon photodetector, enabling differential detection of the FRS signal for improved sensitivity [15]. For lock-in detection, the FRS signal was demodulated at fm with its in-phase component maximized and recorded. All FRS spectra reported in this article were acquired by scanning the ECDL's frequency at 0.59 pm/s across the targeted O2 transition, with a lock-in time constant of 100 ms and a measurement time of ~0.5 s per averaged data point. The HC-PCF used for our fiber-optic FRS system is not specified as a birefringent fiber. An electron microscope image of its structure is shown in Fig. 2(d), and the approximately Gaussian beam profile of the HC-PCF output is depicted in Fig. 2(e).
NKT Photonics reports the fiber to have a 50-nm wide photonic bandgap (PBG) centered at approximately 770 nm, placing the targeted O2 transition close to the PBG center, where both polarization modes of the fiber are degenerate [16]. Thus even small perturbations will cause coupling between the polarization modes and depolarization of the light. However, by mounting the fiber inside the glass capillary tube, which isolates it from stress and vibrations, it is possible to reduce this effect. We observed that the HC-PCF exhibited two orthogonally oriented polarization axes and confirmed that the polarization state of the light was preserved if coupled to one polarization axis (the polarization extinction ratio was measured to be 10⁻²).

Experimental results and theoretical model
A series of FRS experiments were performed to investigate the performance of the fiber-optic based FRS sensor. Figure 3 shows FRS spectra recorded for different light coupling conditions into the HC-PCF, achieved by aligning the laser polarization state to each of the principal fiber axes in turn. By optimizing the coupling to the fiber it was ensured that during measurements most of the optical power was coupled into the fundamental polarization modes of the HC-PCF. Under these conditions the laser beam exiting the fiber core, which has an approximately Gaussian profile (Fig. 2(e)), did not change shape or optical power for different light polarization angles to the HC-PCF.

The FRS spectra depicted in Fig. 3 illustrate the effects taking place within the hollow core of the fiber. Figures 3(a) and 3(e) depict FRS spectra for light coupled to one of the two fiber polarization modes, with signal shapes typical for FRS measurements originating from the MC-birefringence effect. When light is launched equally into both polarization modes (Fig. 3(c)), we observe a 4-fold increase in peak-to-peak signal amplitude, as well as a change in shape compared to exciting only one polarization mode. MC-birefringence alone cannot explain this effect: a signal arising purely from the superposition of the two single-mode responses could not have a larger amplitude than when only one polarization mode is excited. This suggests that the signal originates from the MC-dichroism effect within the fiber, which also explains the change of its shape. For the conditions in Fig. 3(c) we calculated a minimum detectable absorption of 1.9 × 10⁻⁶ cm⁻¹ Hz⁻¹/². Figures 3(b) and 3(d) present FRS spectra for transitional coupling situations, with MC-dichroism evident but not dominating the MC-birefringence. Thus in these cases the signal shapes and amplitudes are influenced by both effects.
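Before turning to the theoretical model, the lock-in detection step described in the experimental setup can be illustrated with a short sketch. This is a minimal digital lock-in demodulation in Python, assuming an invented balanced-detector waveform; the modulation frequency matches the 1.32 kHz quoted above, but every other value is illustrative rather than taken from the experiment.

```python
import numpy as np

fs = 200_000          # sampling rate (Hz), illustrative
fm = 1320.0           # magnetic-field modulation frequency (Hz), as in the text
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s record, matching the per-point time

# Invented balanced-detector output: an FRS component at fm plus white noise.
frs_amplitude = 2.3e-4
signal = frs_amplitude * np.sin(2 * np.pi * fm * t) \
         + 1e-4 * np.random.randn(t.size)

# Digital lock-in: mix with the in-phase reference, then low-pass by averaging
# over the whole record (analogous to the 100 ms time constant in the paper).
reference = np.sin(2 * np.pi * fm * t)
in_phase = 2 * (signal * reference).mean()   # factor 2 recovers the amplitude

print(f"recovered in-phase amplitude: {in_phase:.3e}")
```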
To test the above explanation of the experimental results, a theoretical model has been developed. Analysis of the light propagation in complex structures such as HC-PCFs requires Maxwell's equations to be solved, and this can only be done numerically. However, in our case it is possible to make several approximations that simplify the problem. Under optimal coupling conditions in the experiment most of the laser light was coupled to the fundamental mode, and thus higher-order modes can be excluded from the theoretical analysis. Additionally, the refractive index change introduced by the gas is small (calculated to be of the order of 10⁻⁹) and does not influence light propagation in the fiber. Therefore, the HC-PCF exhibits two orthogonal linear polarization modes, with the gas adding small MC-birefringence and MC-dichroism effects. For simulations of the light polarization in the fiber, the oxygen-filled HC-PCF can be modeled as an elliptically birefringent crystal illuminated by a non-divergent laser beam. Using the Jones matrix formalism, this is equivalent to a medium with the dielectric matrix [17]

$$\hat{\varepsilon} = \begin{pmatrix} \varepsilon + \delta_L & i\,\delta_C \\ -i\,\delta_C & \varepsilon - \delta_L \end{pmatrix}, \qquad (1)$$

where δL and δC are proportional to the linear and circular birefringences, and ε is related to the average effective refractive indices of the fiber modes (for the HC-PCF, ε ≈ 1). Linear and circular dichroism can also be included in this formalism if δL and δC are taken to be complex. Here the HC-PCF is not linearly dichroic, and thus δL is real. We expect residual structural birefringence, emerging from imperfections in the fiber, of the order of 10⁻⁷ to 10⁻⁵ [18]. The circular birefringence in our system is relatively small (calculated to be of the order of 10⁻⁹). For the numerical model, dispersion and absorption near the targeted O2 line at 762.309 nm have been calculated using the Kramers-Kronig relation. All parameters were taken from the HITRAN database [13,14], and a Voigt profile was used to model the line shape. Absorption line shape contributions other than Lorentzian pressure broadening and Doppler broadening, as described in refs. [14,19], are neglected. As a result, the complex refractive index for molecular oxygen (with the magnetic field off) can be expressed as [10]

$$\tilde{n}(\omega) = 1 + \frac{c\,N\,S}{2\sqrt{\pi}\,\omega_0\,\Gamma_D}\; Z\!\left(\frac{\omega - \omega_0 + i\,\Gamma_P}{\Gamma_D}\right), \qquad (2)$$

where c is the speed of light in vacuum, N is the number density of molecules, S is the line strength, ω0 is the central transition frequency, ΓD is the Doppler broadening at half-width at half-maximum (HWHM), ΓP is the pressure broadening at HWHM, and Z is the plasma dispersion function. When the magnetic field is applied, ω0 for the two circular polarizations is shifted by Δω = gμBB/ħ, where B is the magnetic field strength, μB is the Bohr magneton and g is the Landé factor. This gives rise to the MC-birefringence and MC-dichroism, which are included in δC through the difference of the complex refractive indices for the two circular polarizations. By diagonalizing the matrix in Eq. (1), the modes of the filled fiber and their propagation constants can be found. In this way the fiber can be easily included in the Jones matrix formalism, in which a medium with dielectric matrix ε̂ is modeled by the propagation operator exp(ikz√ε̂), with k the vacuum wave number and z the propagation distance (note that calculating this matrix exponential requires diagonalization of ε̂). For the simulations, all optical elements in the system (wave plates, polarizers, balanced detector, HC-PCF) are modeled using matrix operators. Additionally, a lock-in amplifier has been included in the simulations to reflect the experimental configuration. The modeled signal shapes, depicted as solid (red) traces in Fig. 3, show good agreement with our experimentally measured FRS signals.
The theory of our fiber-optic based FRS system is more complex than that of conventional free-space FRS systems due to the presence of the linear birefringence of the fiber. The optical fiber provides additional parameters that control the shapes of the FRS signals (e.g. different coupling conditions). The dielectric matrix shown as Eq. (1) incorporates these effects. This matrix can be divided into two parts (by separating the linear birefringence from the MC-birefringence / MC-dichroism) and expressed in the form

$$\bar{\varepsilon} = \begin{pmatrix} \varepsilon + \delta_L & 0 \\ 0 & \varepsilon - \delta_L \end{pmatrix} + \begin{pmatrix} 0 & -i\delta_C \\ i\delta_C & 0 \end{pmatrix}.$$

If the term δ_L δ_C is negligible, the fiber-optic based FRS system can be described as a free-space FRS system with an additional wave plate. The optical fiber then takes the role of a wave plate, with the phase retardance related to the linear birefringence. Thus, as in typical free-space FRS measurements with additional wave plates before the analyzer, the measured fiber-optic FRS signals are superpositions of MC-birefringence and MC-dichroism effects with different amplitudes. However, care has to be taken when making analogies between fiber-optic and free-space FRS systems. By careful analysis of the dielectric matrix from Eq. (1), differences between these systems may be found. For example, it can be shown that in our case, where |δ_C| << δ_L, the eigenmodes and eigenvalues of the matrix from Eq. (1) differ in their dependence on δ_C from those of the free-space system: the eigenmodes depend on δ_C linearly and the eigenvalues quadratically. This leads to further differences in the signals that are observed. For example, this is shown by the results in Fig. 3, which depict how the evolution of FRS signal shapes and amplitudes depends on the initial light polarization angle φ. This can also be seen in Fig. 4, which shows the evolution of the peak-to-peak FRS signal amplitude with changing coupling conditions (i.e., light polarization angle φ); the sine dependence observed while rotating the polarization of the light is a fiber-system-specific effect (note that the photodetector is balanced at all times).

Figure 5 depicts the measured pressure dependence of the peak-to-peak FRS signal amplitude at an AC magnetic field of 0.148 T rms when coupling equally to both polarization modes of the fiber (Fig. 3(c)). The gas pressure at which the FRS signal amplitude is maximized (here around 300 mbar) depends on the magnetic field strength used; at higher pressures the signal amplitude decreases due to dominating pressure broadening. We experienced filling times of <5 min for reaching pressure equilibrium inside the fiber core.

Fig. 4. Evolution of peak-to-peak FRS signal amplitude depending on coupling condition (i.e., light polarization angle φ) to the HC-PCF. Coupling to a single polarization mode (see Fig. 3(a,e)) produces a minimum; a maximum occurs when both modes are excited equally (see Fig. 3(c)).

Fig. 5. Peak-to-peak FRS signal amplitude dependence at different gas pressures when coupling to both polarization modes of the HC-PCF. The decrease of signal amplitude at oxygen pressures >306.6 mbar is due to dominating pressure broadening.
Summary and conclusion

In summary, we have demonstrated a unique platform for studying the interaction of guided light with an active medium. The unique polarization properties of the interaction platform enable access to a qualitatively new environment for magneto-optical effects. The HC-PCF provides structural linear birefringence, and its hollow core, when filled with an active medium, can be used to further alter its polarization properties, such as via the introduction of strongly wavelength-dependent circular birefringence and dichroism. It can be envisioned that by changing the fiber geometry and the active medium these properties can be independently tuned. Such complex media may prove useful in fundamental studies of guided light and have strong potential for practical applications, e.g., sensors, as demonstrated here in the form of a novel fiber-optic FRS sensing architecture with an ultra-low 4.2 nL detection volume. The experimentally recorded FRS spectra are in good agreement with theoretical modeling and show that in our experimental setup it is possible to measure MC-birefringence and MC-dichroism independently by modifying the coupling conditions. Moreover, in the presented MOF-based FRS architecture it is advantageous to exploit MC-dichroism rather than MC-birefringence, as we observed a more than 4-fold increase of the signal amplitude for MC-dichroism. We anticipate that by applying polarization-maintaining fibers this ratio may be increased further.

Fig. 1. (a) MC-birefringence signal as the difference of refractive indices of RHCP and LHCP light. (b) MC-dichroism signal as the difference of absorption of RHCP and LHCP light.

Fig. 2. (a) Schematic of the MOF-based Faraday rotation spectroscopy setup for detecting O2 at 762.309 nm. Note: GTP - Glan Thompson polarizer, RP - Rochon polarizer, C - fiber collimator, BS - 1% beam sampler, M - mirror, AL - aspheric lens, L - lens, λ/2 - half wave plate. (b) Photograph of complete MOF gas cell with air-core solenoid and fans for air cooling. (c) Concept drawing of cell end caps with mounted glass capillary tube. (d) Electron microscope image of the HC-PCF structure. (e) Beam profile of HC-PCF output.

Fig. 3. Fiber-optic based FRS spectra for the 762.309 nm A-band transition of pure O2 at gas-sample pressure P = 300.5 mbar (T ~307 K) at different coupling conditions (see insets). Dotted (black) traces show measured FRS spectra in good agreement with the solid (red) traces depicting the corresponding modeling results. Signal-to-noise ratios (SNR) are shown. Optical misalignment, which increases the coupling of light to higher-order modes, was demonstrated to degrade the resulting FRS signals (i.e., offset levels increased, signal amplitudes decreased).
2018-04-03T00:36:37.395Z
2013-01-28T00:00:00.000
{ "year": 2013, "sha1": "04fca944490d6b18d0104358fe85193c48acb092", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.21.002491", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "04fca944490d6b18d0104358fe85193c48acb092", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
238229662
pes2o/s2orc
v3-fos-license
ACE2 and TMPRSS2 in human saliva can adsorb to the oral mucosal epithelium Abstract Severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) is primarily transmitted through droplets. All human tissues with the angiotensin‐converting enzyme 2 (ACE2) and transmembrane protease serine 2 (TMPRSS2) are potential targets of SARS‐CoV‐2. The role of saliva in SARS‐CoV‐2 transmission remains obscure. In this study, we attempted to reveal ACE2 and TMPRSS2 protein expression in the human parotid, submandibular, and sublingual glands (the three major salivary glands). Then, the binding function of the spike protein to ACE2 in the three major salivary glands was detected. The expression of ACE2 and TMPRSS2 in human saliva from the parotid glands was also examined. The anchoring and fusing of exogenous recombinant ACE2 and TMPRSS2 to oral mucosal epithelial cells in vitro were also unraveled. ACE2 and TMPRSS2 were found to be expressed mainly in the cytomembrane and cytoplasm of epithelial cells of the serous acinus in the parotid and submandibular glands. Our research also discovered that the spike protein of SARS‐CoV‐2 binds to ACE2 in salivary glands in vitro. Furthermore, exogenous ACE2 and TMPRSS2 can anchor and fuse to the oral mucosa in vitro. Thus, the expression of ACE2 and TMPRSS2 in human saliva might have implications for SARS‐CoV‐2 infection.

| INTRODUCTION

SARS-CoV-2 has emerged as a global pandemic, causing a severe public health problem (Arshad et al., 2020). Like other human respiratory coronaviruses, such as the severe acute respiratory syndrome coronavirus (SARS-CoV) and the middle east respiratory syndrome coronavirus, SARS-CoV-2 also belongs to genus β of the coronaviridae, involving multiple organs and tissues of the body (Cui et al., 2019; Hui et al., 2020). The angiotensin-converting enzyme 2 (ACE2), known as a functional receptor for SARS-CoV, was first reported by Li (Li et al., 2003). Now, studies on SARS-CoV-2 have demonstrated that ACE2 also serves as the adsorption target for the SARS-CoV-2 spike protein 1 (Wan et al., 2020). The SARS-CoV and SARS-CoV-2 spike protein 1 can bind to ACE2 located on the host cytomembrane, facilitating the fusion and entry of the virus. This culminates in the adsorption and infection of the virus in the host cell (Xiao et al., 2003). The binding of the SARS-CoV-2 spike protein 1 to the ACE2 protein of the host cell facilitates the entry of SARS-CoV-2 via TMPRSS2, a cytomembrane protease, and the deletion of TMPRSS2 can inhibit this process (Hoffmann et al., 2020). Thus, all human organs with ACE2 and TMPRSS2 are potential infectious targets of SARS-CoV-2 (Chai et al., 2020; Chen, Zhao, et al., 2020; Chen, Zhou, et al., 2020; Fan et al., 2020; Huang et al., 2020; Liu et al., 2020; Patel et al., 2020; Xu et al., 2020). Respiratory droplets are the most common transmission route of SARS-CoV-2 (Singhal, 2020). It is reported that SARS-CoV-2 and SARS-CoV have been consistently detected in infected patients' saliva (Azzi et al., 2020; Wang et al., 2004). A questionnaire for COVID-19 patients also indicated functional abnormalities of the salivary glands, such as hyposecretion (Chen, Zhao, et al., 2020; Chen, Zhou, et al., 2020). Based on public databases and bioinformatic analysis, some studies have reported the expression of ACE2 and TMPRSS2 in human salivary glands (Pascolo et al., 2020; Song et al., 2020; Shamsoddin, 2020; Wang et al., 2020).
Only a couple of studies have demonstrated the presence of ACE2 in salivary glands, and only in Chinese rhesus macaques and rats by histological methods (Cano et al., 2019; Liu et al., 2011). However, the distribution details of ACE2 and TMPRSS2 in human salivary glands and saliva remain unexplored. In the current study, we have investigated the ACE2 and TMPRSS2 distribution in the major human salivary glands (parotid, submandibular, and sublingual glands) and the binding of ACE2 to the SARS-CoV-2 spike protein. We have also detected the expression of ACE2 and TMPRSS2 in human saliva and the anchoring and fusing function of exogenous ACE2 and TMPRSS2 to the oral mucosal epithelium in vitro. The current study aims to provide experimental evidence to expand the knowledge about the role of saliva in SARS-CoV-2 infection.

| Specimen acquisition

We procured parotid (n = 6), submandibular (n = 6), and sublingual (n = 7) gland samples from adult patients afflicted with benign disorders of the salivary glands. Samples were surgically resected in the Affiliated Stomatological Hospital of Nanjing Medical University (Table 1). Saliva samples were collected from the parotid glands of young healthy volunteers (n = 8) (Table 2).

| ELISA to detect the binding of SARS-CoV-2 spike protein to ACE2 in human salivary glands

Salivary gland samples were homogenized on ice. The homogenate was disrupted by ultrasonication. This was followed by centrifugation of the homogenate at 5000 g for 5 min, and the supernatant was separated for further analysis. Protein estimation was done using a Bicinchoninic Acid Kit (P0012, Beyotime, China). The SARS-CoV-2 spike protein (SARS-CoV-2 Spike S1-His Recombinant Protein, 40591-V08H3, Sino Biological, China) was diluted in PBS, and 1 ng/well was added into 96-well plates (Corning, USA), followed by overnight incubation at room temperature (RT). These plates were washed three times with TBS containing 0.05% Tween 20. Wells were blocked with 1% BSA in PBS for 1 h at RT. The supernatant of the salivary gland homogenate was added to each well in triplicate and incubated overnight at RT. The plate was washed four times with PBS. An ACE2 antibody (ACE2, 500 ng/ml, ab15348, Abcam, USA) was used to detect ACE2 receptor-bound spike proteins, followed by a thorough wash with TBS containing 0.05% Tween 20, three times, and 30 min incubation at RT with HRP-labeled goat anti-rabbit IgG (1:2000, A0277, Beyotime, China). ABTS liquid substrate solution (Sigma, USA) was used for color development as per the manufacturer's instructions. OD (405 nm) was measured using an ELISA plate reader with wavelength correction set to 650 nm. HRP-labeled BSA (SE063, Solarbio, China) and HRP-labeled goat anti-rabbit IgG served as controls. As the control for SARS-CoV-2 spike protein binding to ACE2 in human salivary glands, the homogenate of small intestine and of breast tissue was used as the positive and negative control, respectively.

| IHC for ACE2 and TMPRSS2 detection in salivary glands

The dewaxed tissue sections were incubated with a drop of 3% H2O2 at room temperature for 10 min to block endogenous peroxidase activity and later washed with PBS three times. The primary antibodies ACE2 (2 µg/ml, ab15348, Abcam, USA) and TMPRSS2 (1:2000, ab109131, Abcam, USA) were added to each slide and incubated at RT for 2 h.
After incubation, these slides were washed three times with PBS solution and incubated with the secondary antibody, an HRP-labeled anti-mouse/rabbit polymer (GK800511-B, Genetech, China), at RT for 30 min, then re-washed three times with PBS solution. Subsequently, the diaminobenzidine (DAB) solution (Genetech, China) was added to each slide and incubated for 5 min. The tissue sections were counterstained with hematoxylin, dehydrated with an alcohol gradient, cleared with xylene, and fixed with neutral gum. ACE2 and TMPRSS2 immunoreactivity was graded by an immunoreactive score (IRS), composed of an immunointensity score (IS) and a proportion score (PS) reflecting staining intensity and distribution, respectively. IS was divided into negative (0), weak (1), moderate (2), or strong (3), whereas PS was segmented into negative (<25% of the cells were immunoreactive) and positive (>25% of the cells were immunoreactive). Histological tissues were evaluated independently by two pathologists. The positive and negative control tissue sections were stained using the primary antibodies (ACE2 for small intestine and breast tissue, 2 µg/ml, ab15348, Abcam, USA; TMPRSS2 for prostate adenocarcinoma and adipose tissue, 1:2000, ab109131, Abcam, USA) and the secondary antibody (HRP-labeled anti-mouse/rabbit antibody), respectively. The control without primary antibody was stained using the HRP-labeled anti-mouse/rabbit antibody alone. The isotype control was stained using normal rabbit IgG (700 ng/ml, A7016, Beyotime, China) and the HRP-labeled anti-mouse/rabbit antibody.

| Statistical analysis

All the statistical analysis was done using SPSS version 22.0 (IBM Corp., USA). The difference between the three groups was analyzed by one-way analysis of variance. The SNK-q test was used to make multiple comparisons. p < 0.05 was considered to be statistically significant.

| The expression of ACE2 and TMPRSS2 in salivary glands

Both ACE2 and TMPRSS2 were found to be expressed in the salivary glands; however, their expression levels differed significantly among the three salivary glands. The ACE2 expression in the submandibular and sublingual glands was 1.34 ± 0.05-fold (p = 0.027) and 1.81 ± 0.13-fold (p = 0.003) lower, respectively, than in the parotid glands (Figure 1a). The TMPRSS2 expression in the submandibular and sublingual glands was 1.52 ± 0.03-fold (p = 0.001) and 4.85 ± 0.02-fold (p < 0.001) lower, respectively, than in the parotid glands (Figure 1b). The expression of ACE2 was positive in the small intestine and negative in breast tissue (Figure 1d). The expression of TMPRSS2 was positive in prostate adenocarcinoma and negative in adipose tissue (Figure 1d). This result also confirmed the specificity of the primary antibodies (ACE2 and TMPRSS2) used in this study. Salivary glands stained only with the secondary antibody were negative (Figure 1d).

| The spike protein of SARS-CoV-2 adsorbing to salivary glands

We observed that the spike protein of SARS-CoV-2 could bind to human salivary glands (parotid, submandibular, and sublingual glands). ELISA of the salivary gland homogenate supernatant demonstrated that ACE2 could bind to the spike protein of SARS-CoV-2 (Figure 2a-c). As the control for the spike protein of SARS-CoV-2 binding to ACE2 in human salivary glands, the spike protein of SARS-CoV-2 was positively adsorbed to the homogenate of the small intestine and negatively adsorbed to breast tissue (Figure 2d,e).
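For readers without SPSS, the group comparison described in the Statistical analysis subsection can be sketched as follows. This is a hypothetical illustration: the expression values are placeholders rather than the study's measurements, and since the SNK-q test is not available in the common Python statistics libraries, Tukey's HSD is used here as a substitute post-hoc procedure.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder relative-expression values for the three glands
# (illustrative only; not the measurements reported in the paper).
parotid = np.array([1.00, 1.05, 0.97, 1.02, 0.99, 1.01])
submandibular = np.array([0.75, 0.73, 0.78, 0.74, 0.76, 0.72])
sublingual = np.array([0.55, 0.57, 0.52, 0.56, 0.54, 0.58, 0.53])

# One-way ANOVA across the three groups.
f_stat, p_value = f_oneway(parotid, submandibular, sublingual)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Post-hoc pairwise comparisons (Tukey HSD as a stand-in for SNK-q).
values = np.concatenate([parotid, submandibular, sublingual])
groups = (["parotid"] * len(parotid)
          + ["submandibular"] * len(submandibular)
          + ["sublingual"] * len(sublingual))
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```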
| The location of ACE2 and TMPRSS2 in salivary glands

The brownish-yellow ACE2 (Figure 3a) and TMPRSS2 (Figure 3b) immune complexes were observed in the cytoplasm and cytomembrane of epithelial cells from the serous acinus and the intercalated, secretory, and excretory ducts. The vascular endothelial cells were positively immunostained for the ACE2 immune complex (Figure 3a). ACE2 and TMPRSS2 complex staining was scored as 3 in the parotid glands. The mucinous acinar cells were found to be negatively immunostained.

Intestinal villi were observed in the mucosal layer of the small intestinal epithelium (Figure 4a). The brownish-yellow ACE2 immune complex was observed in the small intestinal epithelium, which was used as the positive control for ACE2 immunostaining (Figure 4b). Normal breast tissue, with luminal cells surrounded by a basal layer of myoepithelial cells, was shown by HE staining (Figure 4c). There was no brownish-yellow ACE2 immune complex observed in breast tissue (Figure 4d). Adenocarcinoma of the prostate (Gleason grade 3 + 4 = score of 7) presented with single, separate, well-formed glands in the prostate (Figure 4e). Prostate adenocarcinoma was positive for the brownish-yellow TMPRSS2 immune complex, serving as the positive control for TMPRSS2 immunostaining (Figure 4f). H&E staining of mature adipocytes is presented in Figure 4g. There was no brownish-yellow TMPRSS2 immune complex observed in adipose tissue (Figure 4h). The parotid, submandibular and sublingual glands, incubated only with the HRP-labeled anti-mouse/rabbit antibody, were found to be negative for immunostaining (Figure 4i). The parotid, submandibular, and sublingual glands, which were incubated with normal IgG and the HRP-labeled anti-mouse/rabbit antibody, were also found to be negative for immunostaining (Figure 4j).

| The expression of ACE2 and TMPRSS2 in saliva from human parotid glands

The outcomes of ELISA showed that the concentrations of ACE2 and TMPRSS2 in saliva were 0.38 ± 0.03 ng/ml and 0.76 ± 0.18 ng/ml, respectively. As the control, the expression of ACE2 was positive in the homogenate of the small intestine (0.91 ± 0.12 ng/ml) and negative in the homogenate of breast tissue (0.11 ± 0.04 ng/ml); the expression of TMPRSS2 was positive in the homogenate of prostate adenocarcinoma (0.87 ± 0.10 ng/ml) and negative in the homogenate of adipose tissue (0.05 ± 0.03 ng/ml).

| The exogenous ACE2 and TMPRSS2 adsorbing to oral mucosa epithelial cells

Exogenous ACE2 and TMPRSS2 were found to exist in HOEC and HOK cells after incubation with recombinant human ACE2 and TMPRSS2 with a His-tag. However, their expression levels were not significantly different between HOEC and HOK cells (Figure 5a,b). This result also confirmed the specificity of the primary antibody (His-tag) used in this study.

| DISCUSSION

A study of SARS-CoV infecting Chinese rhesus macaques has shown that the salivary glands of these animals possess ACE2, the function of which is affected by SARS-CoV infection (Liu et al., 2011). (Chen, Zhao, et al., 2020; Chen, Zhou, et al., 2020). There have been some studies confirming that ACE2 and TMPRSS2 are expressed in salivary glands, while the distribution details of ACE2 and TMPRSS2 in human salivary glands still need to be explored (Pascolo et al., 2020; Song et al., 2020; Shamsoddin, 2020; Wang et al., 2020). This is the first study to demonstrate the expression and distribution details of ACE2 and TMPRSS2 in the acinus and duct cells of the human parotid, submandibular, and sublingual glands.
The spike proteins of SARS-CoV and SARS-CoV-2 targeting ACE2 are highly similar in structure, and the fusion of both SARS-CoV and SARS-CoV-2 depends on TMPRSS2 (Bertram et al., 2012; Kuba et al., 2005; Li et al., 2003; Wan et al., 2020). Thus, saliva may promote the infection of SARS-CoV-2. Our results confirmed that saliva contains ACE2 and TMPRSS2, and that exogenous ACE2 and TMPRSS2 can adsorb to oral mucosa epithelial cells. Based on the results mentioned above, we hypothesized that the ACE2 and TMPRSS2 secreted into saliva from the cytoplasm of serous acinar cells in human salivary glands might anchor and fuse to the oral mucosal epithelium, and perhaps they could contribute to SARS-CoV-2 infection in vivo. It is reported that live SARS-CoV-2 has been detected in saliva. Moreover, saliva contains the three key elements (ACE2, TMPRSS2, and SARS-CoV-2) of SARS-CoV-2 infection. This is the first study demonstrating that ACE2 and TMPRSS2 are expressed in saliva and bind to the oral mucosal epithelium. Therefore, saliva may be a promoter of SARS-CoV-2 infection.

Saliva is mainly composed of water and secreted by serous acinar cells, assisting in maintaining moisture in the mouth, swallowing, and oral self-cleaning. In the resting state of saliva secretion, the parotid glands account for 25% and the submandibular glands for 60% of the total salivary secretion. However, in the highly stimulated state, the parotid glands account for 50% and the submandibular glands for 35% (Jensen et al., 1998). Some SARS-CoV-2 infected patients develop a dry mouth and abnormal taste. Other cases present oral necrotic ulcers and aphthous-like ulcerations, which develop early in the course of the disease after the development of dysgeusia and affect the function of the tongue, lips, palate, and oropharynx (Brandão et al., 2021; Chen, Zhao, et al., 2020; Chen, Zhou, et al., 2020).

DATA AVAILABILITY STATEMENT

The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author upon reasonable request.
2021-10-01T06:16:50.857Z
2021-09-29T00:00:00.000
{ "year": 2021, "sha1": "506e4fd3c1896b1affd955c48cfdf00822b731c3", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/joa.13560", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b0f908b621441c766f5b09f3d66fa047c615f747", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
55549862
pes2o/s2orc
v3-fos-license
"Live at Epidemin" – a cinematic investigation of architectonic space and artistic practice within a public institution of contemporary art

This investigative research, at its very beginning, aims to develop a deeper and wider understanding, from an architectural point of view, of the fuzzy relations between architectonic space, exhibitions and exhibited art. The first part is about my subject, "the spatial practices within the architectures of contemporary art", and dictates a background pointing out current spatial tendencies within the field. I will discuss different modes of utilizing architectonic space within the institutions, with a focus on the appropriation of given spaces and the performance of process- and new-media-oriented art. In part two, I will introduce a set of borrowed questions/concepts, which I hope will serve me as tools during the investigation. The third part will contain arguments for my choice of film as my investigative medium and eventually present a film project in progress. In this part I will also discuss cinematic media as a research tool.

INTRODUCTION

1:1 Architectures of contemporary art

The practices within the public art institutions, as well as their needs, have changed during the last two or three decades. A contemporary art utilizing new media as well as societal processes differs radically in its spatial performance in comparison to "old" media like sculpture or painting. The notion of "interaction" also implies a shift of focus from the actual art object towards its recipient or reader, something that consequently also changes the relation to architectonic space. But the shift is not only interior and related to viscous forms of contemporary art; the institutions themselves have become strategic pieces on the game-board of global enterprises as well as regional economies. Cultural capital in the shape of architecture, art and design easily converts into added value in the branding and identity operations undertaken on different levels within the societal body. Here you can separate between institutions situating themselves within given architectures (Kunstwerke, PS1, Palais de Tokyo, Röda Sten and a wide range of galleries), having a what-can-we-make-out-of-things attitude, and institutions commissioning architecture to suit and manifest their needs (Guggenheim, MOMA etc.), having a what-do-we-want attitude. In both cases architecture and architectonic space become a signifier of art.
1:2 Architecture, exhibitions and art

You can distinguish between three main actors/agents within the public institutions of contemporary art. First of all there is the architecture and its spaces, which can be either commissioned or appropriated. This choice does not necessarily have to do with funds or resources but can just as well have to do with an overall aesthetic or context, as in the case of Tate Modern. On the other hand, if there is a lack of funds, abandoned industrial space offers a lot of space for a fair price. Secondly, there are the curators editing the contents of the spaces utilized. The curator has to mediate and articulate the possibilities and limitations supplied by the architectonic space and the content and form of the art-works situated within it. The exhibition becomes the link between the art-works and the architectonic space, creating a spatial narrative, making it possible for an audience to make their own interpretation. Thirdly, there are the artists and their art, which in the end need some kind of space to enter a dialogue, a space that can be virtual as well as real. The character of this space can be very different depending on the techniques and forms of the established dialogue. For instance, the space required to experience a painting is only to a certain degree similar to the space required for experiencing an art video. The single exhibition is constituted by the interplay between architectonic space, exhibition narrative and artistic form and content. The exhibition becomes a negotiation where the different parameters at best enhance each other, something which unfortunately is not always the case. For instance, the spatial requirements of art within the field of new media or process-oriented art are in many cases opposed to the requirements of traditional art forms. The possibility to shut out light as well as sound is in many cases critical in the context of contemporary art. This is a kind of paradox, since contemporary art otherwise opts for inclusiveness and availability.
1:3 Spatial practices

Today many public art institutions are hybridizations of a set of socio-economic activities like restaurants, cafes, bookshops, giftshops, lecture halls, seminar series, concerts, and clubs, making trade-offs on the cultural capital fostered by the core activities taking place within the gallery space. In fact, even in the smallest gallery you can find similar socio-economic activities, although miniaturized. Here you are offered a possibility to buy objects related to the exhibition, eat or drink something, etc. As a consequence the gallery space, which in these cases can be the only space, becomes a space shared by a range of activities related to art, where the displaying of art is only one. If you look at the gallery space itself, it is not only the spatial platform for artists and curators or a space of experience for visitors. It is also a construction site for those building the exhibition, as well as a space for calibrating and installing technique for those responsible for the performance of different technological systems (computers, projectors etc). Usually the exhibition is monitored (by guards) as well as presented (by guides). On top of this you have basic maintenance performed by janitors and cleaners. In comparison to many other spatial practices, like walking or cleaning, many of the spatial practices within the architectures of contemporary art are highly reflexive and self-conscious. For instance, the curators and artists are well-articulated spatial practitioners with extensive knowledge of the relations between piece, audience and space. This is what they do: situating art, over and over, in order to promote artistic experiences to an audience. The reflexive mode also goes for the visitors, eager to express their likings as well as their dislikings, not only about singular pieces or bodies of work but also whole exhibitions. Ideally the gallery space promotes this reflexive mode at all levels within the ecology of the particular system, mirroring the societal body as a whole.
PERSPECTIVES

2:1 Theory as tools - a beginning

In taking on the investigation of architectonic space and spatial practices within the public institutions of contemporary art as a subject, I have undertaken a series of readings as possible entry points. The readings conducted and presented in this paper do not have the ambition to draw out an extensive and consistent map of the area investigated, but should rather be regarded as generative readings, as the first stepping stones. As such, the readings, at least at this point, do not utilize the full potential of the discourse engaged. Still, I have found scraps and pieces that I at this point have found useful and possible to develop further. At this early stage I regard the readings as tools in a Deleuzian sense. At best these tools will prove consistent with my discourse; at worst (which is not bad at all), they may prove themselves useful only within this initial and temporal context, pointing towards other directions. I will discuss readings conducted of Michel deCerteau's "The Practice of Everyday Life", Hal Foster's "Design and Crime", Lev Manovich's "The Language of New Media" and finally Nicholas Bourriaud's "Postproduction". These writings are well situated within contemporary discourse regarding design in a wider perspective, and as such I hope they make out some kind of position from which I can back-track as well as envision. As writings which have had an impact on contemporary discourse, and as something "in the air", I know their discourse from before, as domesticated in different design magazines and projects, but not in their articulated form. In working with concepts of strategic and tactic practices, the relation between design and the utilization of design, interactivity as well as the notion of postproduction, they give depth to current phenomena. To which degree they articulate lines of thought already suggested but not articulated and inter-related, or actually have some kind of cutting-edge status dictating the general discourse, may be discussed. In the end these readings open up new ways of interpreting space and the use of space in a contemporary context. I will introduce a line of thought, first of all regarding the inter-relations between practices within public architectures of contemporary art (deCerteau). Secondly, I will discuss the relation between utilizer and utilized in a design perspective (Foster). Thirdly, I will bring into focus questions of different modes of interactivity and how we can regard architecture in this sense (Manovich). Finally, I will try to use the notion of postproduction to introduce an alternate reading of the architectures of contemporary art (Bourriaud). Altogether I hope the readings can come together as something in between overlapping wholes and separate trajectories, suggesting issues to be further developed.
2:2 Michel deCerteau and the practices of everyday life

First I would like to discuss Michel deCerteau and his toolbox of theories and concepts that deal with everyday practices such as walking. In his work you find distinctions between strategic practices and tactical practices. In his words a strategy is "the calculus of force-relationships which becomes possible when a subject of will and power…can be isolated from the environment"¹, which he puts in comparison with a tactic, which "constantly manipulate[s] events in order to turn them into 'opportunities'". But where deCerteau discusses the dual relation between the everyday practice of the ordinary man and his environment, I would like to discuss the strategic-tactic relation between uneven agents within public institutions of contemporary art. For instance, in the case of the re-use of a building, you could describe the relation between the building and the institution as a strategic-tactic relation, where the institution has to adapt its practice to the building. This relation you also find between the space/institution/exhibition and the artist adapting his piece, and in the end between the piece and the audience interpreting and interacting with the piece. In this perspective the institution as a whole consists of a range of intertwined (spatial) practices, which, in an uneven way, in strategic-tactic relations, are related to each other. Thus the institutions become game-boards, where a range of pawns have different possibilities as well as responsibilities. These pawns or functions within the institutions are idealized states, where the curator only does the curating and the cleaner only does the cleaning. In reality the curator may very well do some cleaning, although the cleaner may not do some curating (that is, if it is not explicitly stated), all according to the inscribed hierarchy. Thus you find, within and in between the official practices, a range of un-official practices, as a secondary protocol ensuring the performance of the institution.
2:3 Hal Foster, Adolf Loos and the spielraum of culture

Secondly, I will make use of Hal Foster's collection of essays in "Design and Crime". More specifically, I will use his perspective on the work and writings of the Viennese architect Adolf Loos, most noted for his essay "Ornament and Crime" from 1908, and his concept of "Raumplan", or space as stage. Loos was a fierce critic of architects like Josef Hoffmann and Joseph Maria Olbrich, who advocated design as a "gesamtkunstwerk". For Loos, modern life was signified by differences, such as the difference between the private and the public, between exterior expression and inner life. He called for a design and architecture that distinguished and acknowledged these differences. The architect's and designer's role was to contribute with architecture and design that would work as a platform or background for life rather than being its centrepiece. In Loos's mind, architecture and design were not about style; they were about use. Style was a personal issue, or as stated by his fellow critic Karl Kraus: "there is a distinction between an urn and a chamber pot and that distinction above all provides culture with a running-room [Spielraum]". You could describe the concept of spielraum as that (running-room) which is in between the artefact and the utilizer of the artefact, which makes it possible for the utilizer to contextualize him- or herself as well as situate the artefact according to his or her needs. The spielraum is ultimately a void that has to be trespassed in order to become operational, an absence of design, a space which calls for a practice. Here the mediating between the design and the utilizer becomes a creative act and at best an articulated reflexive practice. It is easy to recognize architects like Frank Gehry and Rem Koolhaas as the Hoffmanns and the Olbrichs of contemporary architecture. But where do we find Loos and Kraus? Is it possible that we have to look for the notion of spielraum within the critical dialogue conducted by utilizers of architecture rather than within the practices conducted by architects, as in the examples of PS1 and Kunstwerke?

2:4 Lev Manovich and open and closed interactivity

I would also like to introduce some definitions stated by Lev Manovich in "The Language of New Media" (2002) in order to put yet another perspective on the relation between design and utilizers of designs. In talking about different kinds of media, Manovich separates between open interactivity and closed interactivity. Closed interactivity represents the kind of interactivity we find in a traditional novel, a linear computer game or a high-fashion restaurant design. Here our interaction is severely limited and ultimately pre-programmed. We can read a novel randomly or backwards, but the logic, construct and essence of the novel will be lost if we do. The same goes for the linear computer game or high-fashion restaurant design, which of course can be used in the wrong way, but in doing so will lose what it is all about as computer game or restaurant design. Open interactivity, on the other hand, we find in Linux, Lego as well as empty warehouses. Here the interaction itself is the content provider. There is no pre-programmed outcome, expected result or end; anything can happen, at least within certain limits. At least you can sense, illusionary or not, the freedom of choice.
The notions of open and closed interactivity articulate the differences between the design and the utilizer of the design as negotiable. It states that a book, still being a book, has a multitude of different ways to interact with its reader; the same goes for software and computer games, as well as architecture and space. In this perspective, tailored architectures like Guggenheim Bilbao and the new MOMA, where a strong link between design and utilizer is established, state a closed interactivity. They already have what they want (although you should be careful what you wish for). Architectures like the warehouse appropriated by Kunstwerke or the school appropriated by PS1, where the utilizers supply the architecture with content as well as make it accessible, oppositely state an open interactivity (What can we make out of things?).

Image 3. Installing "Tänk om det verkligen är så…", H Benesch 2005

2:5 Nicholas Bourriaud and the notion of postproduction

A new hero and cultural icon of the new millennium is the Dj. The art of Dj-ing has little to do with acoustic music performances or studio production of music. First of all, a Dj does not produce any music of his own, that is, as a composer or artist, although he or she may very well produce tracks to be used while Dj-ing. Instead the Dj fuses or mixes tracks, produced by other artists, in live performances or sets: a continuous, temporal and dynamic entity constituted by the interplay between the Dj, his choice of records and the audience. In his book "Postproduction", Nicholas Bourriaud discusses notions like sampling, mixing and editing as the common denominators of contemporary cultural production. The notion of postproduction is about how things, new as well as old, are utilized in different ways in accordance with specific contexts, situations and events. It also suggests the act of re-interpreting, re-instating and re-combining as something more than a simple repetition or mechanic procedure, that is, as an act of meaning-creation in its own right. In perspective of the utilization of architecture, the notion of postproduction shifts the balance between the notions of function and use. In the best of worlds, architecture is rendered specific functions which are supposed to correspond to specific uses. But in most cases we are forced to re-utilize architecture according to our needs (when it is already there). Chronologically, architecture is altered through the change of use. Beginning from the date the building is ready to inhabit or occupy, the building is engaged in an on-going process or metamorphosis, a series of smaller or larger alterations, which correspond to the temporal needs of a long line of utilizers. Functions and uses are not rigid states but rather dynamic ambitions that aim to establish a correspondence between the built and the lived in a give-and-take process. Not only may a use alter a building, but a building may also alter a use. In many cases both things are true. The most dramatic examples of this kind of mutation of the function/use issue we find within the architectures of contemporary art. For instance, during the 70s and 80s the artists and exhibitors moved to downtown industrial loft spaces on Manhattan as a first step of cultural reappropriation. During the 90s many larger, public as well as private, art institutions moved out to old industrial areas and into old industrial buildings like warehouses and power plants, as well as old institutional buildings (PS1, Kunstwerke, Tate Modern etc). This shift of location and space, or return to the real to use the words of Hal Foster, has given access to a new range of exhibition spaces (large-scale and dramatic), to which a new range of pieces has been created as a response (for instance Marsyas by Anish Kapoor in Tate Modern). This way left-over areas and buildings have been re-instated in contemporary life and thought, brought back by acts of re-interpretation, re-programming as well as re-utilization.
INVESTIGATIONS

3:1 Background

There are many ways to approach this project. One way could be to study that which has been built and developed within the field. These kinds of typological studies have to some degree already been made (Newhouse 1998; Sachs, ed. 2000). In my perspective this kind of study becomes a meta-narrative, telling the story about how architects and institutions respond to the task of giving shape to the institutions to which artists should contribute with their practice. The strong relation between architects, directors and to some degree curators (Sabbagh, 2000) puts its mark on the institutions, where the artists in many cases are absent at this stage of the process, thus putting an emphasis on architectural branding and identity together with curatorial possibilities. In the best of worlds this would have no negative influence on the actual performance of specific artistic works, but in most cases negotiations and compromises will dictate the conditions, which of course is not always a bad thing. In many cases this kind of "resistance" can operate as a generative force within the process. In this project I will try to move to a position closer to the source, namely the artistic practices that are supposed to perform in these spaces. What kind of spaces do they opt for? What kind of spaces do their practices require? What ideas do they have about the architectures of contemporary art? Is there some kind of generative resistance, and if so, how do we avoid being too smooth? These kinds of projects have been done from an artistic point of view, mainly with a focus on the relation to public space and the role of the art community within contemporary society (Bode, Schmidt 2004), as well as from theoretical points of view. The architectural re-reading that I am proposing suggests an updated reading of the spatial practices of contemporary artists when it comes to the utilization of architectonic space. The artistic spatial practices as defined within the modernistic movement are still very dominant, at least within the architectural community, being valid for many artistic practices, but not all (mainly those working within new media and process-oriented art). This shift that I am proposing, from the architectures of contemporary art to the utilization of the architectures of contemporary art, including those architectures appropriated by artists as well as institutions, calls for a different investigative approach than the typological study described before.
3:2 Strategies

To choose an approach, or more specifically to choose tools and methods, is also by consequence a way of choosing in what way the tools and methods in themselves should guide the process. The use of tools and methods suggests some kind of pre-understanding of what these tools and methods do, both in order to use them properly and in order to get what you desire out of them. To a certain degree they constitute predictable paths and thus operate as a kind of shortcut, that is, if you know what you are searching for. As an architect there is a simple way of learning about the architectures of contemporary art. By means of referential projects and projective drawings and models, an architect can engage artists as well as curators in a discussion about how the architectures of contemporary art ought to be. Here the architectural drawings and models constitute a generative space, a meeting place for the different actors/agents within the project. Drawings and models are developed and articulated through a series of negotiations until a reasonable agreement is reached. Thoughts are expressed in a what-and-how-to-build language and translated into representations of built matter. Even more, since the initial program and the problems stated usually are vague, the solutions developed through drawings and models are tools for creating a better understanding of the actual problems and how to solve them (the project as a process where you learn what you want and can do). In perspective of my research, this approach has a disadvantage, since it in the end has a focus on how the spaces ought to correspond with a fictional use, rather than on how space is utilized in actual practices. In an architectural practice you engage in the process of the actual in a fictional and projective way, with different kinds of representations. The architect's commitment ends where the actual begins (the construction site), although he may have to negotiate continuously between the fictional and the actual on site. In cinema it is the other way around. The crew and directors etc begin in the actual (if you don't count the script) and bring it into the fictional (the film), through a process of editing and postproduction. By working with film within an architectural practice (as a process where the tools operate to give you a better understanding of the actual problem), I will try to reverse the relation between the actual and the fictional. That is, to start out in the actual and to bring it into the fictional. This way I can approach the spatial practices as they are, in context and in dialogue. Furthermore, I will also be able to develop a material to promote discussion, questions and inquiries as the project moves on.
3:4 A cinematic investigation

Since contemporary art exhibitions are limited in time as well as space, they are well suited for investigative cinematic projects. The exhibition does not only offer a physical setting but also provides a cast (artists, visitors, critics etc), a script (a making, an expression and reception as well as an un-making) and a range of exhibition-related themes. In this first cinematic investigation I have chosen to follow an artist and friend while installing his exhibition in a gallery in central Gothenburg. This being my first film project, it felt important to have the possibility to improvise and try things out. Thus, working with someone I knew well felt more experimental in a relaxed sense. A failure wouldn't be a disaster, but something which could be pondered in dialogue with friends and colleagues later on. Furthermore, his artistic practice was very much about process and spatiality, which meant that he would actually work with the gallery space as a tool. Also, he was acquainted with the project and had no problem with me sticking around for four days while he was preparing and installing his exhibition. Moreover, the gallery space itself had specific qualities interesting for my line of research, being an old epidemic hospital. Entering the gallery I became Peter's (the artist's) helping hand as well as a cinematic investigator. Working with both a digital camera and a DV camera, I shot around 300 pictures and 3 hours of film. The shots captured the work performed and the discussions that took place, which were very much a problem-solving kind of dialogue, a how-to-do and what-to-do discussion. Working with the editing of the film, I soon realised that the reflective material acquired wasn't enough. The shots showed the actual work performed very well but had a hard time putting a perspective on the actions performed. As a consequence I met Peter in his office for yet another session (only two hours) where he gave his perspective on the exhibition as well as his thoughts about exhibition spaces in a contemporary perspective.
3:5 Perspectives on a cinematic investigation

The first film I called "Live at Epidemin", with the Swedish subtitle "en liten film om arkitektur", or "a short movie about architecture". The aim of the film is twofold, that is, on a cinematic level. First of all, it shows how space can be utilized, and how the preparation work required for it to be utilized can ask questions about in what way art and architecture, or piece and space, can correspond and enter a dialogue. Here the actual work performed is the key issue, as a mediating force aiming for a qualitative coherence. Secondly, the case also serves as a point of departure for a generic discussion about exhibition spaces. Working with a cinematic investigation is about asking in what way the actual shooting of the film can be generative. How can a material that can contribute to discourse be acquired? First of all, since everything is live, you won't get a second chance. Either you're there or you are not. Shooting thus becomes a gamble, which you have to put your faith into. Even though you can manipulate the shots by pushing the cast or actually perform yourself, in the end it is really hard to tell what will come out of the material. This is also the reason why the editing becomes at least as educating, since it is in the end here that the multiple shots are put into one continuous entity. While editing you one way or another have to go through all of the material. In doing so, the position established and the direction suggested by the material become clear. You can follow your own discourse and, more importantly, your own hidden agenda, which, if nothing else, becomes evident when overlooking the material as a whole. Whether you like it or not, the way you do things and think about things puts a mark on the material. Even if you are not in the film, you are in the film. From a research point of view this could be regarded as problematic, or, oppositely, as an actual resource. Although the cinematic tools are very manipulative, you can, from an outside perspective when editing the film, get a sense of initial intentions, hidden agendas as well as new openings. Working one's way through the material presents new patterns on the subject as well as on your position as a researcher in the project. Editing becomes a reflexive mode as well as an interpretative mode, aiming for a cinematic articulation contributing to discourse like this.
3:6 Backtracking

The material gathered is an open book, which you can read in many ways, but it was gathered with an aim as well as a set of pre-notions. In what way did notions such as postproduction, open interactivity, spielraum as well as spatial practices come into use? First of all, the notions rendered a framing: a what-to-look-for and a where-to-look-for as well as a how-to-look-for. Having the notions in mind when entering the gallery gave directions, being a counterforce to the real, a resistance. After having left the session and the gallery, going through the material, the material became twofold. On one hand, actual chronological events emerging from a situation, of which you somehow feel a responsibility to make proper interpretations. On the other hand, the material also emerges as examples, as individual parts which you can arrange according to an intention, or, as in this case, a cinematic form. As such, there are a multitude of shots from which you could discuss notions of postproduction, interactivity etc. But you could just as well discuss other issues, like spatial proportions or configurations. The correspondence between the notions and the actual events is purely intentional (read: my intention) in the sense that it is there as an interpretation and a reading. The shift which occurs when re-interpreting the events in perspective of notions such as postproduction etc is rhetorical, but it serves a purpose. The new notions, although somewhat alien in relation to architectonic discourse, suggest an alternate reading of architecture as process and media rather than object and manifestation, as becoming rather than being. In this kind of project, putting something called theory in one place and things called practice in another is of little use, since they operate simultaneously on the same level, within a line of action, as a trajectory and an ambition. Planning shots as well as conducting interviews/dialogues or editing forces you to scrutinize the key concepts over and over in a continuous process of articulation. Experience, hunches and readings force you through the actual. Thus the notions or concepts are not used in order to provide answers but rather as guiding lights and referential complexes, as opposed to traditional concepts and notions within (architectural) discourse granting a safe passage within the known. Instead they offer possible detours (hopefully generative) to an area less articulated, stated and closed.
2018-12-05T20:00:29.666Z
2005-05-29T00:00:00.000
{ "year": 2005, "sha1": "7a6289050e038045e3a1aeddfcd5d57b4b962a7a", "oa_license": "CCBYNC", "oa_url": "https://dl.designresearchsociety.org/cgi/viewcontent.cgi?article=1061&context=nordes", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "7a6289050e038045e3a1aeddfcd5d57b4b962a7a", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Art", "Sociology" ] }
7531376
pes2o/s2orc
v3-fos-license
Deriving a Stationary Dynamic Bayesian Network from a Logic Program with Recursive Loops

Recursive loops in a logic program present a challenging problem to the PLP framework. On the one hand, they loop forever so that the PLP backward-chaining inferences would never stop. On the other hand, they generate cyclic influences, which are disallowed in Bayesian networks. Therefore, in existing PLP approaches logic programs with recursive loops are considered to be problematic and thus are excluded. In this paper, we propose an approach that makes use of recursive loops to build a stationary dynamic Bayesian network. Our work stems from an observation that recursive loops in a logic program imply a time sequence and thus can be used to model a stationary dynamic Bayesian network without using explicit time parameters. We introduce a Bayesian knowledge base with logic clauses of the form $A \leftarrow A_1,...,A_l, true, Context, Types$, which naturally represents the knowledge that the $A_i$s have direct influences on $A$ in the context $Context$ under the type constraints $Types$. We then use the well-founded model of a logic program to define the direct influence relation and apply SLG-resolution to compute the space of random variables together with their parental connections. We introduce a novel notion of influence clauses, based on which a declarative semantics for a Bayesian knowledge base is established and algorithms for building a two-slice dynamic Bayesian network from a logic program are developed.

Introduction

Probabilistic logic programming (PLP) is a framework that extends the expressive power of Bayesian networks with first-order logic [20,23]. The core of the PLP framework is a backward-chaining procedure, which generates a Bayesian network graphic structure from a logic program in a way quite like query evaluation in logic programming. Therefore, existing PLP methods use a slightly adapted SLD- or SLDNF-resolution [18] as the backward-chaining procedure. Recursive loops in a logic program are SLD-derivations of the form

A_1 ← A_2 ← ... ← A_i ← A_{i+1} ← ...   (1)

where for any i ≥ 1, A_i is the same as A_{i+1} up to variable renaming.¹ Such loops present a challenging problem to the PLP framework. On the one hand, they loop forever so that the PLP backward-chaining inferences would never stop. On the other hand, they may generate cyclic influences, which are disallowed in Bayesian networks. Two representative approaches have been proposed to avoid recursive loops. The first one is by Ngo and Haddawy [20] and Kersting and De Raedt [17], who restrict to considering only acyclic logic programs [1]. The second approach, proposed by Glesner and Koller [13], uses explicit time parameters to avoid the occurrence of recursive loops. It enforces acyclicity using time parameters in the way that every predicate has a time argument such that the time argument in the clause head is at least one time step later than the time arguments of the predicates in the clause body. In this way, each predicate p(X) is changed to p(X, T) and each clause p(X) ← q(X) is rewritten into p(X, T1) ← T2 = T1 − 1, q(X, T2), where T, T1 and T2 are time parameters. In this paper, we propose a solution to the problem of recursive loops under the PLP framework. Our method is not restricted to acyclic logic programs, nor does it rely on explicit time parameters. Instead, it makes use of recursive loops to derive a stationary dynamic Bayesian network. We will make two novel contributions.
We will make two novel contributions. First, we introduce the well-founded semantics [33] of logic programs to the PLP framework; in particular, we use the well-founded model of a logic program to define the direct influence relation and apply SLG-resolution [6] (or SLTNF-resolution [29]) to make the backward-chaining inferences. As a result, termination of the PLP backward-chaining process is guaranteed. Second, we observe that under the PLP framework recursive loops (cyclic influences) define feedbacks, thus implying a time sequence. For instance, the clause $aids(X) \leftarrow aids(Y), contact(X, Y)$ introduces recursive loops of the form $aids(X) \Leftarrow \cdots \Leftarrow aids(Y) \Leftarrow \cdots \Leftarrow aids(Y_1) \Leftarrow \cdots$. Such cyclic influences represent feedback connections, i.e., whether a person p1 is infected with aids (in the current time slice t) depends on whether p1 was infected with aids earlier (in the last time slice t-1). Therefore, recursive loops of form (1) imply a time sequence of the form $\ldots, A_{t-1}, A_t$, where $A$ is a ground instance of $A_1$. It is this observation that leads us to viewing a logic program with recursive loops as a special temporal model. Such a temporal model corresponds to a stationary dynamic Bayesian network and thus can be compactly represented as a two-slice dynamic Bayesian network. The paper is structured as follows. In Section 2, we review some concepts concerning Bayesian networks and logic programs. In Section 3, we introduce a new PLP formalism, called Bayesian knowledge bases. A Bayesian knowledge base consists mainly of a logic program that defines a direct influence relation over a space of random variables. In Section 4, we establish a declarative semantics for a Bayesian knowledge base based on a key notion of influence clauses. Influence clauses contain only ground atoms from the space of random variables and define the same direct influence relation as the original Bayesian knowledge base does. In Section 5, we present algorithms for building a two-slice dynamic Bayesian network from a Bayesian knowledge base. We describe related work in Section 6 and summarize our work in Section 7. Preliminaries and Notation We assume the reader is familiar with basic ideas of Bayesian networks [21] and logic programming [18]. In particular, we assume the reader is familiar with the well-founded semantics [33] as well as SLG-resolution [5]. Here we review some basic concepts concerning dynamic Bayesian networks (DBNs). DBNs are introduced to model the evolution of the state of the environment over time [16]. Briefly, a DBN is a Bayesian network whose random variables are subscripted with time steps (basic units of time) or time slices (i.e. intervals). In this paper, we use time slices. For instance, $Weather_{t-1}$, $Weather_t$ and $Weather_{t+1}$ are random variables representing the weather situations in time slices t-1, t and t+1, respectively. We can then use a DBN to depict how $Weather_{t-1}$ influences $Weather_t$. A DBN is represented by describing the intra-probabilistic relations between random variables in each individual time slice t (t > 0) and the inter-probabilistic relations between the random variables of each two consecutive time slices t-1 and t. If both the intra- and inter-probabilistic relations are the same for all time slices (in this case, the DBN is a repetition of a Bayesian network over time; see Figure 1), the DBN is called a stationary DBN [24]; otherwise it is called a flexible DBN [13]. As far as we know, most existing DBN systems reported in the literature are stationary DBNs.
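The "repetition over time" reading of a stationary DBN can be illustrated by a small unrolling sketch. This is our own illustration; the edge lists are assumptions loosely inspired by the variables A, B, C, D of Figure 1, not data from the paper.

```python
# A small sketch (our illustration) of a stationary DBN as a repetition of
# one Bayesian network over time: given intra-slice and inter-slice edge
# patterns, unroll them over T time slices.

def unroll(intra, inter, T):
    """Return the edge list of the stationary DBN over slices 0..T-1."""
    edges = []
    for t in range(T):
        edges += [(f"{u}_{t}", f"{v}_{t}") for u, v in intra]
        if t > 0:  # connections from slice t-1 into slice t
            edges += [(f"{u}_{t-1}", f"{v}_{t}") for u, v in inter]
    return edges

intra_edges = [("C", "D")]                           # within one time slice
inter_edges = [("A", "C"), ("C", "B"), ("B", "A")]   # slice t-1 -> slice t
for edge in unroll(intra_edges, inter_edges, 3):
    print(edge)
```

Because the same two edge patterns repeat at every slice, the whole (unbounded) network is fully determined by one slice pair, which is exactly what the two-slice representation below exploits.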
In a stationary DBN as shown in Figure 1, the state evolution is determined by random variables like C, B and A, as they appear periodically and influence one another over time (i.e., they produce cycles of direct influences). Such variables are called state variables. Note that D is not a state variable. Due to the characteristic of stationarity, a stationary DBN is often compactly represented as a two-slice DBN. Definition 2.1 A two-slice DBN for a stationary DBN consists of two consecutive time slices, t-1 and t, which describes (1) the intra-probabilistic relations between the random variables in slice t and (2) the inter-probabilistic relations between the random variables in slice t-1 and the random variables in slice t. A two-slice DBN models a feedback system, where a cycle of direct influences establishes a feedback connection. For convenience, we depict feedback connections with dashed edges. Moreover, we refer to nodes coming from slice t-1 as state input nodes (or state input variables). Example 2.1 The stationary DBN of Figure 1 can be represented by a two-slice DBN as shown in Figure 2, where A, C and B form a cycle of direct influences and thus establish a feedback connection. This stationary DBN can also be represented by a two-slice DBN starting from a different state input node such as $C_{t-1}$ or $B_{t-1}$. These two-slice DBN structures are equivalent in the sense that they model the same cycle of direct influences and can be unrolled into the same stationary DBN (Figure 1). Observe that in a two-slice DBN, all random variables except state input nodes have the same subscript t. In the sequel, the subscript t is omitted for simplification of the structure. For instance, the two-slice DBN of Figure 2 is simplified to that of Figure 3. In the rest of this section, we introduce some necessary notation for logic programs. Variables begin with a capital letter, and predicate, function and constant symbols with a lower-case letter. We use $p(.)$ to refer to any predicate/atom whose predicate symbol is p and use $p(\vec{X})$ to refer to $p(X_1, ..., X_n)$ where all $X_i$s are variables. There is one special predicate, true, which is always logically true. A predicate $p(\vec{X})$ is typed if its arguments $\vec{X}$ are typed so that each argument takes on values in a well-defined finite domain. A (general) logic program P is a finite set of clauses of the form $$A \leftarrow B_1, ..., B_m, \neg C_1, ..., \neg C_n \quad (3)$$ where A, the $B_i$s and $C_j$s are atoms. We use HU(P) and HB(P) to denote the Herbrand universe and Herbrand base of P, respectively, and use $WF(P) = \langle I_t, I_f \rangle$ to denote the well-founded model of P, where $I_t, I_f \subseteq HB(P)$, and every A in $I_t$ is true and every A in $I_f$ is false in WF(P). By a (Herbrand) ground instance of a clause/atom C we refer to a ground instance of C that is obtained by replacing all variables in C with some terms in HU(P). A logic program P is a positive logic program if no negative literal occurs in the body of any clause. P is a Datalog program if no clause in P contains function symbols. P is an acyclic logic program if there is a mapping map from the set of ground instances of atoms in P into the set of natural numbers such that for any ground instance $A \leftarrow B_1, ..., B_k, \neg B_{k+1}, ..., \neg B_n$ of any clause in P, $map(A) > map(B_i)$ ($1 \leq i \leq n$) [1]. P is said to have the bounded-term-size property w.r.t. a set of predicates $\{p_1(.), ..., p_t(.)\}$ if there is a function f(n) such that for any $1 \leq i \leq t$, whenever a top goal $G_0 = \leftarrow p_i(.)$
has no argument whose term size exceeds n, no atoms in any SLDNF- (or SLG-) derivations for $G_0$ have an argument whose term size exceeds f(n) (this definition is adapted from [32]). Definition of a Bayesian Knowledge Base In this section, we introduce a new PLP formalism, called Bayesian knowledge bases. Bayesian knowledge bases accommodate recursive loops and define the direct influence relation in terms of the well-founded semantics. Definition 3.1 A Bayesian knowledge base KB consists of a logic program divided into two parts PB and CB, a set Tx of conditional probability tables (CPTs), and a combination rule CR, where PB is a finite set of clauses of the form $$p(.) \leftarrow p_1(.), ..., p_l(.), true, B_1, ..., B_m, \neg C_1, ..., \neg C_n, member(X_1, DOM_1), ..., member(X_s, DOM_s) \quad (4)$$ where (i) the predicate symbols $p, p_1, ..., p_l$ only occur in PB and (ii) $p(.)$ is typed so that for each variable $X_i$ in it with a finite domain $DOM_i$ (a list of constants) there is an atom $member(X_i, DOM_i)$ in the clause body. A Bayesian knowledge base contains a logic program that can be divided into two parts, PB and CB. PB defines a direct influence relation, each clause (4) saying that the atoms $p_1(.), ..., p_l(.)$ have direct influences on $p(.)$ in the context that $B_1, ..., B_m, \neg C_1, ..., \neg C_n, member(X_1, DOM_1), ..., member(X_s, DOM_s)$ is true in $PB \cup CB$ under the well-founded semantics. Note that the special literal true is used in clause (4) to mark the beginning of the context; it is always true in the well-founded model $WF(PB \cup CB)$. For each variable $X_i$ in the head $p(.)$, $member(X_i, DOM_i)$ is used to enforce the type constraint on $X_i$, i.e. the value of $X_i$ comes from its domain $DOM_i$. CB assists PB in defining the direct influence relation by introducing some auxiliary predicates (such as $member(.)$) to describe contexts. Clauses in CB do not describe direct influences. Recursive loops are allowed in PB and CB. In particular, when some $p_i(.)$ in clause (4) is the same as the head $p(.)$, a cyclic direct influence occurs. Such a cyclic influence models a feedback connection and is interpreted as $p(.)$ at present depending on itself in the past. In this paper, we focus on Datalog programs, although the proposed approach applies to logic programs with the bounded-term-size property (w.r.t. the set of predicates appearing in the heads of clauses in PB) as well. Datalog programs are widely used in database and knowledge base systems [31] and have a polynomial time complexity in computing their well-founded models [33]. In the sequel, we assume that except for the predicate $member(.)$, $PB \cup CB$ is a Datalog program. For each clause (4) in PB, there is a unique CPT, $P(p(.)|p_1(.), ..., p_l(.))$, in Tx specifying the degree of the direct influences. Such a CPT is shared by all instances of clause (4). A Bayesian knowledge base has the following important property: (1) all unit clauses in PB are ground, and (2) every answer of a goal $G_0 = \leftarrow p(\vec{X})$ obtained by applying SLG-resolution to $PB \cup CB \cup \{G_0\}$ is ground. Proof: (1) If the head of a clause in PB contains variables, there must be atoms of the form $member(X_i, DOM_i)$ in its body. This means that clauses whose head contains variables are not unit clauses. Therefore, all unit clauses in PB are ground. (2) Let A be an answer of $G_0$ obtained by applying SLG-resolution to $PB \cup CB \cup \{G_0\}$. Then A must be produced by applying a clause in PB of form (4) with a most general unifier (mgu) $\theta$ such that $A = p(.)\theta$ and the body $(p_1(.), ..., p_l(.), true, B_1, ..., B_m, \neg C_1, ..., \neg C_n, member(X_1, DOM_1), ..., member(X_s, DOM_s))\theta$ is evaluated true in the well-founded model $WF(PB \cup CB)$. Note that the type constraints $(member(X_1, DOM_1), ..., member(X_s, DOM_s))\theta$ being evaluated true by SLG-resolution guarantees that all variables $X_i$s in the head $p(.)$ are instantiated by $\theta$ into constants in their domains $DOM_i$s. This means that A is ground.
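The role of the $member(X_i, DOM_i)$ type constraints in this groundedness property can be illustrated by a short sketch. This is our own illustration: the domains and the clause shown are assumptions, not taken verbatim from the paper.

```python
# A hedged sketch of what the member(X_i, DOM_i) type constraints achieve:
# each head variable ranges over its finite domain, so every derived answer
# is ground, as the property above states.

from itertools import product

def ground_substitutions(variables, domains):
    """Enumerate all substitutions mapping each variable to a domain value."""
    for values in product(*(domains[v] for v in variables)):
        yield dict(zip(variables, values))

domains = {"X": ["p1", "p2", "p3"], "Y": ["p1", "p2", "p3"]}
for theta in ground_substitutions(["X", "Y"], domains):
    head = f"aids({theta['X']})"
    body = f"aids({theta['Y']}), contact({theta['X']}, {theta['Y']})"
    print(f"{head} <- {body}")
```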
For the sake of simplicity, in the sequel for each clause (4) in PB, we omit its type constraints $member(X_i, DOM_i)$ ($1 \leq i \leq s$). Therefore, when we say that the context $B_1, ..., B_m, \neg C_1, ..., \neg C_n$ is true, we assume that the related type constraints are true as well. Example 3.1 We borrow the well-known AIDS program from [13] (a simplified version) as a running example to illustrate our PLP approach. It is formulated by a Bayesian knowledge base $KB_1$ whose logic program $PB_1$ consists of the following clauses (type constraints omitted as stated above): 1. aids(p1). 2. aids(p3). 3. $aids(X) \leftarrow aids(X), true$. 4. $aids(X) \leftarrow aids(Y), contact(X, Y), true$. 5. contact(p1, p2). 6. contact(p2, p1). Note that both the 3rd and the 4th clause produce recursive loops. The 3rd clause also has a cyclic direct influence. Conceptually, the two clauses model the fact that the direct influences on aids(X) come from whether X was infected with aids earlier (the feedback connection induced from the 3rd clause) or whether X has contact with someone Y who is infected with aids (the 4th clause). Declarative Semantics In this section, we formally describe the space of random variables and the direct influence relation defined by a Bayesian knowledge base KB. We then define probability distributions induced by KB. Space of Random Variables and Influence Clauses A Bayesian knowledge base KB defines a direct influence relation over a subset of HB(PB). Recall that any random variable in a Bayesian network is either an input node (with no parent nodes) or a node on which some other nodes (i.e. its parent nodes) in the network have direct influences. Since an input node can be viewed as a node whose direct influences come from an empty set of parent nodes, we can define a space of random variables from a Bayesian knowledge base KB by taking all unit clauses in PB as input nodes and deriving the other nodes iteratively based on the direct influence relation defined by PB. Formally, we have Definition 4.1 The space of random variables of KB, denoted S(KB), is recursively defined as follows: 1. All unit clauses in PB are random variables in S(KB). 2. Let $A \leftarrow A_1, ..., A_l, true, B_1, ..., B_m, \neg C_1, ..., \neg C_n$ be a ground instance of a clause in PB. If the context $B_1, ..., B_m, \neg C_1, ..., \neg C_n$ is true in the well-founded model $WF(PB \cup CB)$ and $\{A_1, ..., A_l\} \subseteq S(KB)$, then A is a random variable in S(KB). In this case, each $A_i$ is said to have a direct influence on A. 3. S(KB) contains only those ground atoms satisfying the above two conditions. Definition 4.2 For any random variables A and B in S(KB), we say A is influenced by B if B has a direct influence on A, or there is some random variable C such that C has a direct influence on A and C is influenced by B. Let $WF(PB \cup CB) = \langle I_t, I_f \rangle$ be the well-founded model of $PB \cup CB$ and let $I_{PB} = \{p(.) \in I_t \mid p$ occurs in the head of some clause in $PB\}$. The following result shows that the space of random variables is uniquely determined by the well-founded model. Theorem 4.1 $S(KB) = I_{PB}$. Proof: First note that all unit clauses in PB are both in S(KB) and in $I_{PB}$. We prove this theorem by induction on the maximum depth $d \geq 0$ of backward derivations of a random variable A. (=⇒) Let $A \in S(KB)$. When d = 0, A is a unit clause in PB, so $A \in I_{PB}$. For the induction step, assume $B \in I_{PB}$ for any $B \in S(KB)$ whose maximum depth d of backward derivations is below k. Let d = k for A. There must be a ground instance $A \leftarrow A_1, ..., A_l, true, ...$ of a clause in PB whose context is true in $WF(PB \cup CB)$ and with $\{A_1, ..., A_l\} \subseteq S(KB)$. Since the head A is derived from the $A_i$s in the body, the maximum depth for each $A_i$ must be below the depth k for the head A. By the induction hypothesis, the $A_i$s are in $I_{PB}$. By definition of the well-founded model, A is true in $WF(PB \cup CB)$ and thus $A \in I_{PB}$. (⇐=) Let $A \in I_{PB}$. When d = 0, A is a unit clause in PB, so $A \in S(KB)$.
For the induction step, assume $B \in S(KB)$ for any $B \in I_{PB}$ whose maximum depth d of backward derivations is below k. Let d = k for A. There must be a ground instance $A \leftarrow A_1, ..., A_l, true, ...$ of a clause in PB such that the body is true in $WF(PB \cup CB)$. Note that the predicate symbol of each $A_i$ occurs in the head of a clause in PB. Since the head A is derived from the literals in the body, the maximum depth of backward derivations for each $A_i$ in the body must be below the depth k for the head A. By the induction hypothesis, the $A_i$s are in S(KB). By Definition 4.1, $A \in S(KB)$. Theorem 4.1 suggests that the space of random variables can be computed by applying an existing procedure for the well-founded model such as SLG-resolution or SLTNF-resolution. Since SLG-resolution has been implemented as the well-known XSB system [25], in this paper we apply it for the PLP backward-chaining inferences. SLG-resolution is a tabling mechanism for top-down computation of the well-founded model. For any atom A, during the process of evaluating a goal ← A, SLG-resolution stores all answers of A in a space called table, denoted $T_A$. Let $\{p_1, ..., p_t\}$ be the set of predicate symbols occurring in the heads of clauses in PB, and let $GS_0 = \{\leftarrow p_1(\vec{X_1}), ..., \leftarrow p_t(\vec{X_t})\}$ be the corresponding set of top goals. Algorithm 1 (computing random variables) applies SLG-resolution to evaluate each goal in $GS_0$ against $PB \cup CB$ and returns the set $S'(KB)$ of all answers collected in the tables. By the soundness and completeness of SLG-resolution, Algorithm 1 will terminate with a finite output $S'(KB)$ that consists of all answers of the goals in $GS_0$; that is (Theorem 4.2), $S'(KB) = S(KB)$. We introduce the following principal concept. Definition 4.3 Let $A \leftarrow A_1, ..., A_l, true, B_1, ..., B_m, \neg C_1, ..., \neg C_n$ be a ground instance of the k-th clause in PB such that the context is true in $WF(PB \cup CB)$ and $\{A, A_1, ..., A_l\} \subseteq S(KB)$. Then $k. A \leftarrow A_1, ..., A_l$ is an influence clause (the prefix "k." would be omitted sometimes for the sake of simplicity). All influence clauses derived from all clauses in PB constitute the set of influence clauses of KB, denoted $I_{clause}(KB)$. The following result is immediate from Definition 4.1 and Theorem 4.1. Theorem 4.3 For every influence clause $k. A \leftarrow A_1, ..., A_l$ in $I_{clause}(KB)$, $\{A, A_1, ..., A_l\} \subseteq S(KB)$. Influence clauses have the following principal property. Theorem 4.4 For any $A, A_i \in S(KB)$, $A_i$ has a direct influence on A if and only if $I_{clause}(KB)$ contains an influence clause of the form $k. A \leftarrow A_1, ..., A_i, ..., A_l$. Proof: (=⇒) Assume $A_i$ has a direct influence on A, which is derived from the k-th clause in PB. By Definition 4.1, the k-th clause has a ground instance of the form $A \leftarrow A_1, ..., A_i, ..., A_l, true, B_1, ..., B_m, \neg C_1, ..., \neg C_n$ whose context is true in $WF(PB \cup CB)$ and with $\{A_1, ..., A_i, ..., A_l\} \subseteq S(KB)$, so $k. A \leftarrow A_1, ..., A_i, ..., A_l$ is in $I_{clause}(KB)$. (⇐=) Assume $I_{clause}(KB)$ contains an influence clause $k. A \leftarrow A_1, ..., A_i, ..., A_l$. Then the k-th clause in PB has a ground instance of the form $A \leftarrow A_1, ..., A_i, ..., A_l, true, B_1, ..., B_m, \neg C_1, ..., \neg C_n$ such that its body is true in $WF(PB \cup CB)$ and (by Theorem 4.3) $\{A_1, ..., A_i, ..., A_l\} \subseteq S(KB)$. By Definition 4.1, $A \in S(KB)$ and $A_i$ has a direct influence on A. The following result is immediate from Theorem 4.4. Corollary 4.5 S(KB) consists of the heads of the influence clauses in $I_{clause}(KB)$. Theorem 4.4 shows the significance of influence clauses: they define the same direct influence relation over the same space of random variables as the original Bayesian knowledge base does. Therefore, a Bayesian network can be built directly from $I_{clause}(KB)$ provided the influence clauses are available. Observe that to compute the space of random variables (see Algorithm 1), SLG-resolution will construct a proof tree rooted at each goal $\leftarrow p_i(\vec{X_i})$ in $GS_0$; for each answer A there must be a success branch (i.e. a branch starting at the root node and ending at a node marked with success) in the tree that generates the answer. Let $p_i(.) \leftarrow A_1, ..., A_l, true, ...$ be the k-th clause in PB that is applied to expand the root goal $\leftarrow p_i(\vec{X_i})$ in the success branch and let $\theta$ be the composition of all mgus along the branch. Then $A = p_i(.)\theta$ and the body $A_1, ..., A_l, true, ...$ is evaluated true, with the mgu $\theta$, in $WF(PB \cup CB)$ by SLG-resolution. This means that for each $1 \leq j \leq l$, $A_j\theta$ is an answer of $A_j$ derived by applying SLG-resolution, so $k. A \leftarrow A_1\theta, ..., A_l\theta$ is an influence clause. Hence we have the following result. Theorem 4.6 Every success branch in a proof tree for a goal in $GS_0$ produces an influence clause.
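Definitions 4.1 and 4.3 can be read operationally as a least-fixpoint computation. The sketch below is our own illustration of that reading over the running example; it assumes the ground instances with true contexts are already given, whereas the paper computes the same sets via SLG-resolution proof trees, as described next.

```python
# A sketch of Definition 4.1 / Definition 4.3 as a least-fixpoint computation
# over already-grounded clauses whose contexts are true (our illustration;
# the paper uses SLG-resolution instead).

ground_clauses = [                      # (k, head, [direct-influence atoms])
    (1, "aids(p1)", []),
    (2, "aids(p3)", []),
    (3, "aids(p1)", ["aids(p1)"]),
    (3, "aids(p2)", ["aids(p2)"]),
    (3, "aids(p3)", ["aids(p3)"]),
    (4, "aids(p1)", ["aids(p2)", "contact(p1,p2)"]),
    (4, "aids(p2)", ["aids(p1)", "contact(p2,p1)"]),
    (5, "contact(p1,p2)", []),
    (6, "contact(p2,p1)", []),
]

def space_and_influence_clauses(clauses):
    S, I = set(), set()
    changed = True
    while changed:                      # iterate until the least fixpoint
        changed = False
        for k, head, body in clauses:
            if all(atom in S for atom in body):
                if head not in S:
                    S.add(head)
                    changed = True
                if (k, head, tuple(body)) not in I:
                    I.add((k, head, tuple(body)))
                    changed = True
    return S, I

S, I = space_and_influence_clauses(ground_clauses)
print(sorted(S))                        # the space of random variables S(KB)
for k, head, body in sorted(I):         # the influence clauses of KB
    print(f"{k}. {head}" + (f" <- {', '.join(body)}" if body else ""))
```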
The set of influence clauses can then be obtained by collecting all influence clauses from all such proof trees in SLG-resolution. This is Algorithm 2 (computing influence clauses): for each goal in $GS_0$, construct its proof tree by SLG-resolution and, for each success branch, collect an influence clause from the branch into $I'_{clause}(KB)$. Algorithm 2 terminates with $I'_{clause}(KB) = I_{clause}(KB)$ finite. Proof: That Algorithm 2 terminates is immediate from Theorem 4.2, as except for collecting influence clauses, Algorithm 2 makes the same derivations as Algorithm 1. The termination of Algorithm 2 then implies $I'_{clause}(KB)$ is finite. By Theorem 4.6, any clause in $I'_{clause}(KB)$ is an influence clause in $I_{clause}(KB)$. We now prove the converse. Let $k. A \leftarrow A_1, ..., A_l$ be an influence clause in $I_{clause}(KB)$. Then the k-th clause in PB, $A' \leftarrow A'_1, ..., A'_l, true, ...$, has a ground instance of the form $A \leftarrow A_1, ..., A_l, true, ...$ whose body is true in $WF(PB \cup CB)$. By the completeness of SLG-resolution, there must be a success branch in the proof tree rooted at a goal $\leftarrow p_i(\vec{X_i})$ in $GS_0$ where (1) the root goal is expanded by the k-th clause, (2) the composition of all mgus along the branch is $\theta$, and (3) $k. A'\theta \leftarrow A'_1\theta, ..., A'_l\theta$ is the same as $k. A \leftarrow A_1, ..., A_l$. This influence clause from the success branch will be collected into $I'_{clause}(KB)$ by Algorithm 2. Thus, any clause in $I_{clause}(KB)$ is in $I'_{clause}(KB)$. In the proof trees, a label $C_i$ on an edge indicates that the i-th clause in PB is applied, and the other labels like X = p1 on an edge show that an answer from a table is applied. Each success branch yields an influence clause. For instance, expanding the root goal ← aids(X) by the 3rd clause produces a child node ← aids(X) (Figure 4). Then applying the answers of aids(X) from the table $T_{aids(X)}$ to the goal of this node leads to three success branches. Applying the mgu $\theta$ on each success branch to the 3rd clause yields three influence clauses of the form 3. aids(pi) ← aids(pi) (i = 1, 2, 3). As a result, we obtain the following set of influence clauses: 1. aids(p1). 2. aids(p3). 3. aids(p1) ← aids(p1). 3. aids(p2) ← aids(p2). 3. aids(p3) ← aids(p3). 4. aids(p1) ← aids(p2), contact(p1, p2). 4. aids(p2) ← aids(p1), contact(p2, p1). 5. contact(p1, p2). 6. contact(p2, p1). For the computational complexity, we observe that the cost of Algorithm 2 is dominated by applying SLG-resolution to evaluate the goals in $GS_0$. It has been shown that for a Datalog program P, the time complexity of computing the well-founded model WF(P) is polynomial [33,34]. More precisely, the time complexity of SLG-resolution is $O(|P| \cdot N^{\Pi_P + 1} \cdot \log N)$, where |P| is the number of clauses in P, $\Pi_P$ is the maximum number of literals in the body of a clause, and N, the number of atoms of predicates in P that are not variants of each other, is a polynomial in the number of ground unit clauses in P [6]. Evaluating a type-constraint atom $member(X_i, DOM_i)$ takes time linear in the size of $DOM_i$. Let $K_1$ be the maximum number of $member(X_i, DOM_i)$ predicates used in a clause in P and $K_2$ be the maximum size of a domain $DOM_i$. Then the time of handling all $member(X_i, DOM_i)$ predicates in a clause is bounded by $K_1 \cdot K_2$. Since each clause in P is applied at most N times in SLG-resolution, the time of handling all $member(X_i, DOM_i)$s in all clauses in P is bounded by $|P| \cdot N \cdot K_1 \cdot K_2$. This is also a polynomial, hence SLG-resolution computes the well-founded model $WF(PB \cup CB)$ in polynomial time. Therefore, we have the following result: Algorithm 2 computes $I_{clause}(KB)$ in time polynomial in the number of ground unit clauses of $PB \cup CB$. Probability Distributions Induced by KB For any random variable A, we use pa(A) to denote the set of random variables that have direct influences on A; namely pa(A) consists of random variables in the body of all influence clauses whose head is A. Assume that the probability distribution P(A|pa(A)) is available (see Section 5.2). Furthermore, we make the following independence assumption.
Assumption 1 For any random variable A, we assume that given pa(A), A is probabilistically independent of all random variables in S(KB) that are not influenced by A. We define probability distributions induced by KB in terms of whether there are cyclic influences. Definition 4.4 When no cyclic influence occurs, the probability distribution induced by KB is P(S(KB)). Theorem 4.9 $P(S(KB)) = \prod_{A_i \in S(KB)} P(A_i \mid pa(A_i))$ under the independence assumption. Proof: When no cyclic influence occurs, the random variables in S(KB) can be arranged in a partial order such that if $A_i$ is influenced by $A_j$ then j > i. By the chain rule and the independence assumption, we have $P(S(KB)) = \prod_{A_i \in S(KB)} P(A_i \mid pa(A_i))$. When there are cyclic influences, we cannot have a partial order on S(KB). By Definition 4.2 and Theorem 4.4, any cyclic influence, say "$A_1$ is influenced by itself," must result from a set of influence clauses in $I_{clause}(KB)$ of the form $$A_1 \leftarrow ..., A_2, ...; \quad A_2 \leftarrow ..., A_3, ...; \quad ...; \quad A_n \leftarrow ..., A_1, ... \quad (6)$$ These influence clauses generate a chain (cycle) of direct influences $$A_1 \leftarrow A_2 \leftarrow \cdots \leftarrow A_n \leftarrow A_1 \quad (7)$$ which defines a feedback connection. Since a feedback system can be modeled by a two-slice DBN (see Section 2), the above influence clauses represent the same knowledge as the following ones do: $$A_1 \leftarrow ..., A_2, ...; \quad A_2 \leftarrow ..., A_3, ...; \quad ...; \quad A_n \leftarrow ..., A_{1_{t-1}}, ... \quad (8)$$ Here the $A_i$s are state variables and $A_{1_{t-1}}$ is a state input variable. As a result, $A_1$ being influenced by itself becomes $A_1$ being influenced by $A_{1_{t-1}}$. By applying this transformation (from influence clauses (6) to (8)), we can get rid of all cyclic influences and obtain a generalized set $I_{clause}(KB)^g$ of influence clauses from $I_{clause}(KB)$. When there is no cyclic influence, KB is a non-temporal model, represented by $I_{clause}(KB)$. When cyclic influences occur, however, KB becomes a temporal model, represented by $I_{clause}(KB)^g$. Let $S(KB)^g$ be S(KB) plus all state input variables introduced in $I_{clause}(KB)^g$. Definition 4.5 When there are cyclic influences, the probability distribution induced by KB is $P(S(KB)^g)$. By extending the independence assumption from S(KB) to $S(KB)^g$, we obtain the following result. Theorem 4.10 $P(S(KB)^g) = \prod_{A_i \in S(KB)^g} P(A_i \mid pa(A_i))$ under the extended independence assumption. Proof: Since $I_{clause}(KB)^g$ produces no cyclic influences, the random variables in $S(KB)^g$ can be arranged in a partial order such that if $A_i$ is influenced by $A_j$ then j > i. The proof then proceeds in the same way as that of Theorem 4.9. Building a Two-Slice DBN Structure From a Bayesian knowledge base KB, we can derive a set of influence clauses $I_{clause}(KB)$, which defines the same direct influence relation over the same space S(KB) of random variables as $PB \cup CB$ does (see Theorem 4.4). Therefore, given a probabilistic query together with some evidences, we can depict a network structure from $I_{clause}(KB)$, which covers the random variables in the query and evidences, by backward-chaining the related random variables via the direct influence relation. Let Q be a probabilistic query and E a set of evidences, where all random variables come from S(KB) (i.e., they are heads of some influence clauses in $I_{clause}(KB)$). Let TOP consist of these random variables. An influence network of Q and E, denoted $I_{net}(KB)_{Q,E}$, is constructed from $I_{clause}(KB)$ by Algorithm 3 (building an influence network): 1. add every random variable in TOP to the network as a node (let V be the node set); 2. for each node $A_i$ in the network and each influence clause $k. A_i \leftarrow A_1, ..., A_l$ in $I_{clause}(KB)$: (a) add each body atom $A_j$ not yet in the network to V and to TOP (any random variable is added to TOP no more than one time); (b) add an edge $A_i \xleftarrow{k} A_j$ for each body atom $A_j$; repeat step 2 until no more nodes or edges can be added. An influence network is a graphical representation for influence clauses. This claim is supported by the following properties of influence networks. Theorem 5.1 Algorithm 3 terminates; moreover, for any nodes $A_i, A_j$ in $I_{net}(KB)_{Q,E}$, $A_j$ is a parent node of $A_i$ connected via an edge $A_i \xleftarrow{k} A_j$ if and only if $I_{clause}(KB)$ contains an influence clause of the form $k. A_i \leftarrow A_1, ..., A_j, ..., A_l$. Proof: First note that termination of Algorithm 3 is guaranteed by the fact that any random variable in S(KB) will be added to TOP no more than one time (line 2a).
Let $A_i, A_j$ be nodes in $I_{net}(KB)_{Q,E}$. If $A_j$ is a parent node of $A_i$, connected via an edge $A_i \xleftarrow{k} A_j$, this edge must be added at line 2b, due to applying an influence clause in $I_{clause}(KB)$ of the form $k. A_i \leftarrow A_1, ..., A_j, ..., A_l$ (line 2). Conversely, if $I_{clause}(KB)$ contains such an influence clause, it must be applied at line 2, with edges of the form $A_i \xleftarrow{k} A_j$ added to the network at line 2b. Theorem 5.2 For any $A_j \in S(KB)$, $A_j$ is a node of $I_{net}(KB)_{Q,E}$ (i.e. $A_j \in V$) if and only if $A_j \in TOP$ or $A_j \in W$, where W is the set of random variables from which some random variable in TOP is reachable through a chain of influence clauses of the form $$A_i \leftarrow ..., B_1, ...; \quad B_1 \leftarrow ..., B_2, ...; \quad ...; \quad B_m \leftarrow ..., A_j, ... \quad (10)$$ Proof: That $I_{net}(KB)_{Q,E}$ covers all random variables in TOP follows from line 1 of Algorithm 3. We first prove that if $A_j \in W$ then $A_j \in V$. Assume $A_j \in W$. There must be a chain of influence clauses of form (10) with $A_i \in TOP$. In this case, $B_1, B_2, ..., B_m, A_j$ will be recursively added to the network (line 2). Thus $A_j \in V$. We then prove that if $A_j \in V$ and $A_j \notin TOP$ then $A_j \in W$. Assume $A_j \in V$ and $A_j \notin TOP$. $A_j$ must not be added to V at line 1. Instead, it is added to V at line 2a. This means that for some node $A_i$ already in the network, an influence clause with head $A_i$ and with $A_j$ in its body is applied at line 2, so $A_j \in W$. Theorem 4.9 shows that the probability distribution induced by KB can be computed over $I_{clause}(KB)$. Let $I_{net}(KB)_{S(KB)}$ denote an influence network that covers all random variables in S(KB). We show that the same distribution can be computed over $I_{net}(KB)_{S(KB)}$. Theorem 5.4 When no cyclic influence occurs, $P(S(KB)) = \prod_{A_i \in S(KB)} P(A_i \mid pa(A_i)) = \prod_{A_i \in S(KB)} P(A_i \mid parents(A_i))$ under the independence assumption, where $parents(A_i)$ is the set of parent nodes of $A_i$ in $I_{net}(KB)_{S(KB)}$. Theorem 5.4 implies that an influence network without loops is a Bayesian network structure. Let us consider influence networks with loops. By Theorem 5.2, loops in an influence network are generated from recursive influence clauses of form (6) and thus they depict feedback connections of form (7). This means that an influence network with loops can be converted into a two-slice DBN, simply by converting each loop of form (7) into $A_1 \leftarrow A_2 \leftarrow \cdots \leftarrow A_n \leftarrow A_{1_{t-1}}$ by introducing a state input node $A_{1_{t-1}}$. As illustrated in Section 2, a two-slice DBN is a snapshot of a stationary DBN across any two time slices, which can be obtained by traversing the stationary DBN from a set of state variables backward to the same set of state variables (i.e., state input nodes). This process corresponds to generating an influence network $I_{net}(KB)_{Q,E}$ from $I_{clause}(KB)$ incrementally (adding nodes and edges one at a time) while wrapping up loop nodes with state input nodes. This leads to Algorithm 4, which builds a two-slice DBN structure, $2S_{net}(KB)_{Q,E}$, directly from $I_{clause}(KB)$; here Q, E and TOP are the same as defined in Algorithm 3, and Algorithm 4 proceeds as Algorithm 3 except that an edge that would close a loop is redirected to a state input node. Example 5.2 (continuing the running example $KB_1$) To build a two-slice DBN structure from $KB_1$ that covers aids(p1), aids(p2) and aids(p3), we apply Algorithm 4 to $I_{clause}(KB_1)$ while letting TOP = {aids(p1), aids(p2), aids(p3)}. It generates $2S_{net}(KB_1)_{Q,E}$ as shown in Figure 7. Note that loops are cut by introducing three state input nodes $aids(p1)_{t-1}$, $aids(p2)_{t-1}$ and $aids(p3)_{t-1}$. The two-slice DBN structure concisely depicts a feedback system where the feedback connections are as shown in Figure 8. Algorithm 4 is Algorithm 3 enhanced with a mechanism for cutting loops (item 2b), i.e. when adding the current edge $A \xleftarrow{k} A_i$ to the network would form a loop, we replace it with an edge $A \xleftarrow{k} A_{i_{t-1}}$, where $A_{i_{t-1}}$ is a state input node. This is a process of transforming influence clauses (6) to (8). Therefore, $2S_{net}(KB)_{Q,E}$ can be viewed as an influence network built from a generalized set $I_{clause}(KB)^g$ of influence clauses. Let $S(KB)^g$ be the set of random variables in $I_{clause}(KB)^g$, as defined in Theorem 4.10. Let $2S_{net}(KB)_{S(KB)}$ denote a two-slice DBN structure (produced by applying Algorithm 4) that covers all random variables in $S(KB)^g$.
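The loop-cutting step of Algorithm 4 can be sketched in code. The sketch below reflects our reading of the algorithm under the stated assumptions (the data encoding is illustrative, and unit clauses are omitted since they contribute no edges); it is not the paper's implementation.

```python
# Our reading of Algorithm 4 as a runnable sketch: walk backward from TOP
# through the influence clauses, adding parent edges; when an edge would
# close a loop, redirect it to a state input node A_{t-1}.

influence_clauses = [
    (3, "aids(p1)", ["aids(p1)"]),
    (3, "aids(p2)", ["aids(p2)"]),
    (4, "aids(p1)", ["aids(p2)", "contact(p1,p2)"]),
    (4, "aids(p2)", ["aids(p1)", "contact(p2,p1)"]),
]

def two_slice_structure(top, clauses):
    edges, agenda, seen = set(), list(top), set(top)

    def reaches(src, dst):
        """True if dst is reachable from src along parent->child edges."""
        stack, visited = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node not in visited:
                visited.add(node)
                stack += [c for p, c, _ in edges if p == node]
        return False

    while agenda:
        child = agenda.pop()
        for k, head, body in clauses:
            if head != child:
                continue
            for parent in body:
                if parent == child or reaches(child, parent):
                    parent += "_{t-1}"          # cut the loop: state input node
                elif parent not in seen:
                    seen.add(parent)
                    agenda.append(parent)
                edges.add((parent, child, k))   # edge: parent -k-> child
    return edges

for p, c, k in sorted(two_slice_structure(["aids(p1)", "aids(p2)"], influence_clauses)):
    print(f"{p} --{k}--> {c}")
```

Starting from a different member of TOP yields an equivalent structure with different state input nodes, mirroring the equivalence noted in Example 2.1.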
We then have the following immediate result from Theorem 5.4. Theorem 5.5 When $I_{clause}(KB)$ produces cyclic influences, the probability distribution induced by KB can be computed over $2S_{net}(KB)_{S(KB)}$. That is, $P(S(KB)^g) = \prod_{A_i \in S(KB)^g} P(A_i \mid pa(A_i)) = \prod_{A_i \in S(KB)^g} P(A_i \mid parents(A_i))$ under the independence assumption. Remark 5.1 Note that Algorithm 4 produces a DBN structure without using any explicit time parameters. It only requires the user to specify, via the query and evidences, what random variables are necessarily included in the network. Algorithm 4 builds a two-slice DBN structure for any given query and evidences whose random variables are heads of some influence clauses in $I_{clause}(KB)$. When no query and evidences are provided, we may apply Algorithm 4 to build a complete two-slice DBN structure, $2S_{net}(KB)_{S(KB)}$, which covers the space S(KB) of random variables, by letting TOP consist of all heads of influence clauses in $I_{clause}(KB)$. This is a very useful feature, as in many situations the user may not be able to present the right queries unless a Bayesian network structure is shown. Also note that when there is no cyclic influence, Algorithm 4 becomes Algorithm 3 and thus it builds a regular Bayesian network structure. Building CPTs After a Bayesian network structure $2S_{net}(KB)_{Q,E}$ has been constructed from a Bayesian knowledge base KB, we associate each (non-state-input) node A in the network with a CPT. There are three cases. (1) If A (as a head) only has unit clauses in $I_{clause}(KB)$, we build from the unit clauses a prior CPT for A as its prior probability distribution. (2) If A only has non-unit clauses in $I_{clause}(KB)$, we build from the clauses a posterior CPT for A as its posterior probability distribution. (3) Otherwise, we prepare for A both a prior CPT (from the unit clauses) and a posterior CPT (from the non-unit clauses). In this case, A is attached with the posterior CPT; the prior CPT for A would be used, if A is a state variable, as the probability distribution of A in time slice 0 (only in the case that a two-slice DBN is unrolled into a stationary DBN starting with time slice 0). Assume that the parent nodes of A are derived from n ($n \geq 1$) different influence clauses in $I_{clause}(KB)$. Suppose these clauses share the following CPTs in Tx: $P(A^1 \mid B^1_1, ..., B^1_{m_1})$, ..., and $P(A^n \mid B^n_1, ..., B^n_{m_n})$. (Recall that an influence clause prefixed with a number k shares the CPT attached to the k-th clause in PB.) Then the CPT for A is computed by combining the n CPTs in terms of the combination rule CR specified in Definition 3.1. Example 5.3 (Example 5.2 continued) Let $CPT_i$ denote the CPT attached to the i-th clause in $PB_1$. Consider the random variables in $2S_{net}(KB_1)_{Q,E}$. Since aids(p1) has three parent nodes, derived from the 3rd and 4th clause in $PB_1$ respectively, the posterior CPT for aids(p1) is computed by combining $CPT_3$ and $CPT_4$. aids(p1) also has a prior CPT, $CPT_1$, derived from the 1st clause in $PB_1$. For the same reason, the posterior CPT for aids(p2) is computed by combining $CPT_3$ and $CPT_4$. The posterior CPT for aids(p3) is $CPT_3$ and its prior CPT is $CPT_2$. contact(p1, p2) and contact(p2, p1) have only prior CPTs, namely $CPT_5$ and $CPT_6$.
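The paper leaves the combination rule CR abstract; noisy-or is one commonly used instance. Below is a hedged sketch of combining $CPT_3$ and $CPT_4$ for aids(p1) under noisy-or over boolean variables; all numeric entries are invented for illustration.

```python
# A hedged sketch of one possible combination rule CR (noisy-or); the paper
# leaves CR abstract. All CPT entries below are invented for illustration.

from itertools import product

def noisy_or(cpts, parent_sets, assignment):
    """P(A = true | parents) under noisy-or: A stays false only if every
    contributing CPT fails to fire given its own parents' values."""
    p_all_fail = 1.0
    for cpt, parents in zip(cpts, parent_sets):
        key = tuple(assignment[p] for p in parents)
        p_all_fail *= 1.0 - cpt[key]
    return 1.0 - p_all_fail

# CPT3: P(aids(p1) | aids(p1)_{t-1}); CPT4: P(aids(p1) | aids(p2), contact(p1,p2))
cpt3 = {(True,): 0.90, (False,): 0.01}
cpt4 = {(True, True): 0.60, (True, False): 0.01,
        (False, True): 0.01, (False, False): 0.01}

parents = ["aids(p1)_{t-1}", "aids(p2)", "contact(p1,p2)"]
for values in product([True, False], repeat=len(parents)):
    assignment = dict(zip(parents, values))
    p = noisy_or([cpt3, cpt4], [parents[:1], parents[1:]], assignment)
    print(assignment, round(p, 4))
```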
Note that state input nodes, $aids(p1)_{t-1}$, $aids(p2)_{t-1}$ and $aids(p3)_{t-1}$, do not need to have a CPT; they will be expanded, during the process of unrolling the two-slice DBN into a stationary DBN, to cover the time slices involved in the given query and evidence nodes. If the resulting stationary DBN starts with time slice 0, the prior CPTs, $CPT_1$ and $CPT_2$, for aids(p1) and aids(p3) are used as the probability distributions of $aids(p1)_0$ and $aids(p3)_0$. Related Work A recent overview of existing representational frameworks that combine probabilistic reasoning with logic (i.e. logic-based approaches) or with relational representations (i.e. non-logic-based approaches) is given by De Raedt and Kersting [8]. Typical non-logic-based approaches include probabilistic relational models (PRM), which are based on the entity-relationship (or object-oriented) model [12,15,22], and relational Markov networks, which combine Markov networks and SQL-like queries [30]. Representative logic-based approaches include frameworks based on the KBMC (Knowledge-Based Model Construction) idea [3,4,10,13,14,17,20,23], stochastic logic programs (SLP) based on stochastic context-free grammars [7,19], parameterized logic programs based on distribution semantics (PRISM) [26], and more. Most recently, a unifying framework, called Markov logic, has been proposed by Domingos and Richardson [9]. Markov logic subsumes first-order logic and Markov networks. Since our work follows the KBMC idea focusing on how to build a Bayesian network directly from a logic program, it is closely related to three representative existing PLP approaches: the context-sensitive PLP developed by Haddawy and Ngo [20], Bayesian logic programming proposed by Kersting and De Raedt [17], and the time parameter-based approach presented by Glesner and Koller [13]. In this section, we make a detailed comparison of our work with the three closely related approaches. Comparison with the Context-Sensitive PLP Approach The core of the context-sensitive PLP is a probabilistic knowledge base (PKB). In order to see the main differences from our Bayesian knowledge base (BKB), we reformulate its definition here. A PKB consists of a set of probabilistic predicates (p-predicates), a set PB of probabilistic rules, a logic program CB defining context predicates, and a combination rule CR, where: • each p-predicate is of the form $q(T_1, ..., T_m, V)$, where all arguments $T_i$s are typed with a finite domain and the last argument V takes on values from a probabilistic domain $DOM_p$; • PB consists of probabilistic rules of the form $$P(A_0 \mid A_1, ..., A_l) = \alpha \leftarrow B_1, ..., B_m, \neg C_1, ..., \neg C_n \quad (11)$$ where $0 \leq \alpha \leq 1$, the $A_i$s are p-predicates, and the $B_j$s and $C_k$s are context predicates (c-predicates) defined in CB; • CB is a logic program, and both PB and CB are acyclic; • CR is a combination rule. In a probabilistic rule (11), each p-predicate $A_i$ is of the form $q(t_1, ..., t_m, v)$, which simulates an equation $q(t_1, ..., t_m) = v$ with v being a value from the probabilistic domain of $q(t_1, ..., t_m)$. For instance, let $D_{color} = \{red, green, blue\}$ be the probabilistic domain of color(X); then the p-predicate color(X, red) simulates color(X) = red, meaning that the color of X is red. The left-hand side $P(A_0 \mid A_1, ..., A_l) = \alpha$ expresses that the probability of $A_0$ conditioned on $A_1, ..., A_l$ is $\alpha$. The right-hand side $B_1, ..., B_m, \neg C_1, ..., \neg C_n$ is the context of the rule, where the $B_j$s and $C_k$s are c-predicates. Note that the sets of p-predicate and c-predicate symbols are disjoint. A separate logic program CB is used to evaluate the context of a probabilistic rule. As a whole, the above probabilistic rule states that for each of its (Herbrand) ground instances whose context $B'_1, ..., B'_m, \neg C'_1, ..., \neg C'_n$ is true in CB under the program completion semantics, the probability of $A'_0$ conditioned on $A'_1, ..., A'_l$ is $\alpha$. PKB and BKB have the following important differences.
First, probabilistic rules of form (11) in PKB contain both logic representation (right-hand side) and probabilistic representation (left-hand side) and thus are not logic clauses. The logic part and the probabilistic part of a rule are separately computed against CB and PB, respectively. In contrast, BKB uses logic clauses of form (4), which naturally integrate the direct influence information, the context and the type constraints. These logic clauses are evaluated against a single logic program $PB \cup CB$, while the probabilistic information is collected separately in Tx. Second, logic reasoning in PKB relies on the program completion semantics and is carried out by applying SLDNF-resolution. But in BKB, logic inferences are based on the well-founded semantics and are performed by applying SLG-resolution. The well-founded semantics resolves the problem of inconsistency with the program completion semantics, while SLG-resolution eliminates the problem of infinite loops with SLDNF-resolution. Note that the key significance of BKB using the well-founded semantics lies in the fact that a unique set of influence clauses can be derived, which lays a basis on which both the declarative and procedural semantics for BKB are developed. Third, most importantly, PKB has no mechanism for handling cyclic influences. In PKB, cyclic influences are defined to be inconsistent (see Definition 9 of the paper [20]) and thus are excluded (PKB excludes cyclic influences by requiring its programs be acyclic). In BKB, however, cyclic influences are interpreted as feedbacks, thus implying a time sequence. This allows us to derive a stationary DBN from a logic program with recursive loops. Recently, Fierens, Blockeel, Ramon and Bruynooghe [11] introduced logical Bayesian networks (LBN). LBN is similar to PKB except that it separates logical and probabilistic information. That is, LBN converts rules of form (11) into the form $$A_0 \mid A_1, ..., A_l \leftarrow B_1, ..., B_m, \neg C_1, ..., \neg C_n$$ where the $A_i$s are p-predicates with the last argument V removed, and the $B_j$s and $C_k$s are c-predicates defined in CB. This is not a standard clause of form (3) as defined in logic programming [18]. Like PKB, LBN differs from BKB in the following: (1) it has no mechanism for handling cyclic influences (see Section 3.2 of the paper [11]), and (2) although the well-founded semantics is also used for the logic contexts, neither declarative nor procedural semantics for LBN has been formally developed. Comparison with Bayesian Logic Programming Building on Ngo and Haddawy's work, Kersting and De Raedt [17] introduce the framework of Bayesian logic programs. A Bayesian logic program (BLP) is a triple $\langle P, Tx, CR \rangle$ where P is a well-defined logic program, Tx consists of CPTs associated with each clause in P, and CR is a combination rule. A distinct feature of BLP over PKB is its separation of probabilistic information (Tx) from logic clauses (P). According to [17], we understand that a well-defined logic program is an acyclic positive logic program satisfying the range restriction. For instance, a logic program containing clauses like r(X) ← r(X) (cyclic) or r(X) ← s(Y) (not range-restricted) is not well-defined. BLP relies on the least Herbrand model semantics and applies SLD-resolution to make backward-chaining inferences. BLP has two important differences from BKB. First, it applies only to positive logic programs. Due to this, it cannot handle contexts with negated atoms. (In fact, no contexts are considered in BLP.) Second, it does not allow cyclic influences.
BKB can be viewed as an extension of BLP with mechanisms for handling contexts and cyclic influences in terms of the well-founded semantics. Such an extension is clearly nontrivial. Comparison with the Time Parameter-Based Approach The time parameter-based framework (TPF) proposed by Glesner and Koller [13] is also a triple $\langle P, Tx, CR \rangle$, where CR is a combination rule, Tx is a set of CPTs that are represented as decision trees, and P is a logic program with the property that each predicate contains a time parameter and that in each clause the time argument in the head is at least one time step later than the time arguments in the body. This framework is implemented in Prolog, i.e. clauses are represented as Prolog rules and goals are evaluated applying SLDNF-resolution. Glesner and Koller [13] state: "... In principle, this free variable Y can be instantiated with every domain element. (This is the approach taken in our implementation.)" By this we understand that they consider typed logic programs with finite domains. We observe the following major differences between TPF and BKB. First, TPF is a temporal model and its logic programs contain a time argument for every predicate. It always builds a DBN from a logic program even if there is no cyclic influence. In contrast, logic programs in BKB contain no time parameters. When there is no cyclic influence, BKB builds a regular Bayesian network from a logic program (in this case, BKB serves as a non-temporal model); when cyclic influences occur, it builds a stationary DBN, represented by a two-slice DBN (in this case, BKB serves as a special temporal model). Second, TPF uses time steps to describe direct influences (in the way that for any A and B such that B has a direct influence on A, the time argument in B is at least one time step earlier than that in A), while BKB uses time slices (implied by recursive loops of form (1)) to model cycles of direct influences (feedbacks). Time-steps based frameworks like TPF are suitable to model flexible DBNs, whereas time-slices based approaches like BKB apply to stationary DBNs. Third, most importantly, TPF avoids recursive loops by introducing time parameters to enforce acyclicity of a logic program. A serious problem with this method is that it may lose and/or produce wrong answers to some queries. To explain this, let P be a logic program and $P^t$ be P with additional time arguments added to each predicate (as in TPF). If the transformation from P to $P^t$ is correct, it must hold that for any query p(.) over P, an appropriate time argument N = 0, 1, 2, ... can be determined such that the query p(., N) over $P^t$ has the same set of answers as p(.) over P when the time arguments in the answers are ignored. It turns out, however, that this condition does not hold in general cases. Note that finding an appropriate N for a query p(.) such that evaluating p(., N) over $P^t$ (applying SLDNF-resolution) yields the same set of answers as evaluating p(.) over P corresponds to finding an appropriate depth-bound M such that cutting all SLDNF-derivations for the query p(.) at depth M does not lose any answers to p(.). The latter is the well-known loop problem in logic programming [2]. Since the loop problem is undecidable in general, there is no algorithm for automatically determining such a depth-bound M (resp. a time argument N) for an arbitrary query p(.) [2,27,28]. We further illustrate this claim using the following example. Example 6.1 The following logic program defines a path relation; i.e.
there is a path from X to Y if either there is an edge from X to Y or for some Z, there is a path from X to Z and an edge from Z to Y. P: 1. e(s, b1). 2. e(b1, b2). ... 100. e(b99, g). path(X, Y) ← e(X, Y). path(X, Y) ← path(X, Z), e(Z, Y). To avoid recursive loops, TPF may transform P into the following program. $P^t$: 1. e(s, b1, 0). 2. e(b1, b2, 0). ... 100. e(b99, g, 0). together with the path clauses rewritten so that every predicate carries a time argument whose value in the head is one time step later than in the body, e.g. $path(X, Y, T_1) \leftarrow T_2 = T_1 - 1, path(X, Z, T_2), e(Z, Y, T_2)$. Let us see how to check if there is a path from s to g. In the original program P, we simply pose a query ?- path(s, g). In the transformed program $P^t$, however, we have to determine a specific time parameter N and then pose a query ?- path(s, g, N), such that evaluating path(s, g) over P yields the same answer as evaluating path(s, g, N) over $P^t$. Interested readers can practice this query evaluation using different values for N. The answer to path(s, g) over P is yes. However, we would get an answer no to the query path(s, g, N) over $P^t$ if we choose any N < 100. Conclusions and Discussion We have developed a novel theoretical framework for deriving a stationary DBN from a logic program with recursive loops. We observed that recursive loops in a logic program imply a time sequence and thus can be used to model a stationary DBN without using explicit time parameters. We introduced a Bayesian knowledge base with logic clauses of form (4). These logic clauses naturally integrate the direct influence information, the context and the type constraints, and are evaluated under the well-founded semantics. We established a declarative semantics for a Bayesian knowledge base and developed algorithms that build a two-slice DBN from a Bayesian knowledge base. We emphasize the following three points. 1. Recursive loops (cyclic influences) and recursion through negation are unavoidable in modeling real-world domains, thus the well-founded semantics together with its top-down inference procedures is well suited for the PLP application. 2. Recursive loops define feedbacks, thus implying a time sequence. This allows us to derive a two-slice DBN from a logic program containing no time parameters. We point out, however, that the user is never required to provide any time parameters during the process of constructing such a two-slice DBN. A Bayesian knowledge base defines a unique space of random variables and a unique set of influence clauses, whether it contains recursive loops or not. From the viewpoint of logic, these random variables are ground atoms in the Herbrand base; their truth values are determined by the well-founded model and will never change over time. Therefore, a Bayesian network is built over these random variables, independently of any time factors (if any). Once a two-slice DBN has been built, the time intervals over it would become clearly specified, thus the user can present queries and evidences over the DBN using time parameters at his/her convenience. 3. Enforcing acyclicity of a logic program by introducing time parameters is not an effective way to handle recursive loops. Firstly, such a method transforms the original non-temporal logic program into a more complicated temporal program and builds a dynamic Bayesian network from the transformed program even if there exist no cyclic influences (in this case, there is no state variable and the original program defines a regular Bayesian network). Secondly, it relies on time steps to define (individual) direct influences, but recursive loops need time slices (intervals) to model cycles of direct influences (feedbacks). Finally, to pose a query over the transformed program, an appropriate time parameter must be specified.
As illustrated in Example 6.1, there is no algorithm for automatically determining such a time parameter for an arbitrary query. Promising future work includes (1) developing algorithms for learning BKB clauses together with their CPTs from data and (2) applying BKB to model large real-world problems. We intend to build a large Bayesian knowledge base for traditional Chinese medicine, where we already have both a large volume of collected diagnostic rules and a massive repository of diagnostic cases.
Prevalence and risk factors associated with Campylobacter among layer farms Campylobacter jejuni is an important food-borne pathogen. The main source of this pathogen is poultry and poultry products. Poultry farms of low biosecurity level play a major role in disseminating this pathogen. The objectives of this study were to investigate the occurrence of Campylobacter and identify potential risk factors associated with their presence in layer farms in Northern Jordan. A total of 2524 samples from chickens, litter, water and feed were collected from 35 layer farms. Samples underwent conventional and enrichment isolation methods for Campylobacter. Confirmation was done morphologically, biochemically and by PCR typing. The flock-level prevalence of C. jejuni was 40%, 37% and 20% in chicken cloacae, drinking water and litter respectively. C. jejuni was the only confirmed isolated species. None of the feed samples revealed presence of Campylobacter. The concentration of free residual chlorine was below the recommended standard levels. The risk factors were identified using a modified semi-structured questionnaire. There was no significant association between the evaluated risk factors and isolation status, potentially reflecting the small number of study farms. The prevalence rate for C. jejuni is within the commonly reported range. High stocking density, short distance between farms, improper hygienic practice and low water chlorine level seem to increase the occurrence rate of Campylobacter in layer farms. Educational biosecurity programs regarding C. jejuni transmission and its public health importance need to be established. Introduction Campylobacter organisms are gram-negative spiral-shaped bacteria which inhabit the intestine of many species and can cause clinical manifestations of variable severity.1 Campylobacter is considered of great public health significance worldwide because it is the most reported gastrointestinal bacterial foodborne pathogen.2 Most commonly reported human illnesses are caused by Campylobacter jejuni.2 In the United States, there have been reports of more than 2 million new Campylobacter cases per year.3 In the European Union, nearly 200,000 cases were reported in 2009.4 The Campylobacter organism seems to be adapted to birds, which carry it without showing signs of illness. It has been reported that most people becoming ill with Campylobacter manifest a variety of clinical signs including potentially bloody diarrhea, vomiting, nausea, abdominal pain, and fever within a few days after exposure to the organism, and typically the signs last for a few days.5 A small percentage of infected people do not express clinical signs; however, Campylobacter can potentially cause more serious and life-threatening illness in people with a compromised immune system.6 It has been reported that 50-70% of human Campylobacter cases are attributed to consuming contaminated poultry products. Lowering Campylobacter contamination of poultry is therefore considered an essential step in lowering the incidence rate of human infection. In many countries, culled layer chicken is consumed as part of the human diet.7
In Jordan, layer chickens are sold as spent hens for human consumption at the end of their laying period. A considerable body of research has documented and characterized the prevalence and risk factors associated with Campylobacter in the broiler chicken industry. A local study in Jordan conducted on samples obtained from broiler birds at the slaughterhouse indicated a 40% prevalence rate of Campylobacter.8 However, there have been limited research efforts to estimate the occurrence rate and identify risk factors associated with Campylobacter in the layer industry. Therefore, the broad goal of the research project reported here is to determine the prevalence of Campylobacter, the source of infection and the risk factors associated with Campylobacter infection in layer flocks in Northern Jordan. Sample collection and analysis Cloaca, litter, water, and feed samples were collected from 35 operating layer farms represented by 43 flocks and a total of 478,600 birds located in Northern Jordan.9 Each farm/house was visited once, where five cloacal swabs per 1000 birds were collected (giving a total of 2395 cloacal samples). Sterile swabs were inserted into the cloaca and rotated gently before being pulled out. A total of 500 g of litter or feed was randomly collected from five different locations in each house/farm, and ten water sources (drinkers) at different locations were randomly selected to collect a pooled total of 2 litres of water from each house/farm. All cloacal, litter, water and feed samples were harvested utilizing sterile swabs, spoons, containers and gloves, then transported to the laboratory in an ice box and analyzed within a few hours of collection. Water samples collected from the main water tanks and water drinkers of each farm/house were also analysed to determine the level of total chlorine, free residual chlorine and water acidity (water pH). Total and free residual chlorine were measured using the DPD (N,N-diethyl-p-phenylenediamine) method and results were expressed in the range of 0 to 3.5 parts per million (ppm), equivalent to 0-3.5 mg/L. Water acidity was measured electrometrically using a pH meter (inoLab, Germany). Isolation, identification and confirmation of Campylobacter The reference bacterial strains used in this study as positive controls were C. jejuni ATCC 33291 and C. coli ATCC 43478, obtained from the Jordanian Food and Drug Administration. The method described by ISO 10272-1:2006(E) was followed for isolation of Campylobacter.10
Samples collected from cloaca, litter, feed or filtered water (0.45 µm membrane filter) were separately diluted with 225 mL of enrichment Bolton broth medium (Oxoid, UK) and homogenized using a stomacher (Seaward, UK) for 2 minutes at 2400 rpm. The obtained homogenates were incubated at 37°C for 4-6 hours and then at 41.5°C for 44 h ± 4 h under microaerobic conditions using a gas-generating kit, CampyGen sachets (Oxoid, UK). After the enrichment step, inoculums from each source were inoculated onto modified charcoal cefoperazone deoxycholate agar (mCCD agar) (Oxoid, UK), incubated at 41.5°C under a micro-aerobic atmosphere and inspected after 44 h ± 4 h. Two colonies presumed to be Campylobacter were sub-cultured on a non-selective Columbia blood agar (Oxoid, UK) for purification. Confirmation was done by microscopic examination for morphology and motility followed by oxidase, catalase, DRYSPOT latex agglutination, and hippurate hydrolysis tests.11 Oxidase-negative colonies did not require further confirmatory tests. The DRYSPOT latex agglutination test (Oxoid, UK) was performed according to the manufacturer's instructions, where the test was considered positive when agglutination was noticed within 3 minutes. The hippurate hydrolysis test was done according to the standard protocol of the manufacturer's instructions and was considered positive if a dark violet color formed in the testing tubes. Campylobacter jejuni hydrolyzes hippurate and gives a positive result. PCR molecular typing DNA extraction and the PCR technique were performed as described by Nayak et al. (2005) for the amplification of a 160 bp DNA fragment of the oxidoreductase subunit in the Campylobacter genome.12 The pair of primers used for C. jejuni was F 5'-CAA ATA AAG TTA GAG GTA GAA TGT-3' and R 5'-GGA TAA GCA CTA GCT AGC TGA T-3', and for C. coli F 5'-ATG AAA AAA TAT TTA GTT TTT GCA-3' and R 5'-ATT TTA TTA TTT GTA GCA GCG-3' (Alpha DNA, Montreal, Canada), to amplify the DNA fragment that corresponds to the region of the oxidoreductase subunit. Study area and data collection A questionnaire was purposely designed using closed-ended questions. The questionnaires were filled in by state veterinarians during a field visit. The questions gathered information reflecting the farms' current environmental situation and the preventive practices in use that might increase or decrease the risk of infection. Statistical analysis All statistical analyses were performed using SPSS software (SPSS, version 19.0, SPSS Inc., Chicago, IL, USA). Associations between isolation of Campylobacter and potential risk factors were initially screened in a univariable analysis using the Chi-square test. Only variables with no collinearity (r < 0.60) were considered for the univariable analysis. Collinearity was evaluated using the non-parametric Spearman rank correlation test. Only variables with a significant association with Campylobacter were considered for the final multivariable logistic regression model. Variables were forced into the multiple regression model using the Enter method. The Hosmer-Lemeshow test was used to evaluate the goodness-of-fit of the developed logistic regression model. The independent Student t-test was used to test for differences between negative and positive farms in regard to the quantitative variables listed in Table 1. Statistical significance was set at P≤0.05.
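The screening-then-modelling pipeline just described can be sketched as follows. This is our illustration in Python rather than SPSS; the tiny dataset and column names are invented, and with these data no variable passes the univariable screen, so the multivariable step is skipped, mirroring the study's own finding of no significant associations.

```python
# A hedged sketch (in Python rather than SPSS) of the described pipeline:
# chi-square screening of each candidate risk factor against isolation
# status, then a multivariable logistic model on any significant variables.

import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

df = pd.DataFrame({                     # one row per farm; data are invented
    "positive":     [1, 1, 0, 0, 1, 0, 1, 0],
    "high_density": [1, 1, 0, 0, 1, 0, 1, 1],
    "hand_washing": [0, 0, 1, 1, 0, 1, 1, 1],
})

kept = []
for var in ["high_density", "hand_washing"]:       # univariable screening
    chi2, p, dof, _ = chi2_contingency(pd.crosstab(df[var], df["positive"]))
    print(f"{var}: chi2={chi2:.2f}, p={p:.3f}")
    if p <= 0.05:
        kept.append(var)

if kept:                                # multivariable model (Enter method)
    model = sm.Logit(df["positive"], sm.add_constant(df[kept])).fit(disp=False)
    print(model.summary())
else:
    print("No variable passed the univariable screen; no multivariable model.")
```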
Results A total of 535, 13, 7 and 0 isolates were recovered from cloacal swab, water, litter and feed samples respectively. The number of samples that revealed presumptively identified Campylobacter species using mCCD agar was 978 out of the 2524 tested samples. All of the 555 agglutination-positive isolates were positive for both the hippurate hydrolysis test and the catalase test and were also confirmed molecularly as C. jejuni using the PCR technique. Univariate binary regression analysis showed no significant association between the evaluated risk factors and the isolation of Campylobacter. Also, multiple regression analysis showed no significant association between the isolation of Campylobacter and the evaluated risk factors collectively, probably reflecting the small sample size of the study farms. Data are presented in Table 4. The independent Student t-test showed significant differences between negative and positive cloacal isolates only in regard to stocking density, height of the fence, distance between farms and water pH. A summary of the data is presented in Tables 1 and 4. The level of total chlorine in the main water tanks and water troughs of the studied farms ranged from 0.2-0.4 ppm and 0.0-0.2 ppm, while the free residual chlorine ranges were 0-0.2 ppm and 0-0.09 ppm respectively (Table 1). The 160 bp product amplified by the primer sets targeting the gene segment on the oxidoreductase-encoding gene was detected in all the 555 identified C. jejuni isolates by conventional PCR. Our questionnaire revealed 19 Campylobacter-positive farms out of the 30 that implemented no hygienic preventive measures; these 30 farms are designated as farms with a low level of biosecurity. Two Campylobacter-positive farms were among the 5 that implemented some kind of preventive measures, designated as farms with a medium level of biosecurity. None of the studied farms had a high level of biosecurity. Discussion and Conclusions The prevalence rate of Campylobacter jejuni isolated from the tested farm samples in cloaca, water, litter and feed was 14%, 13%, 7% and 0% respectively. The 40% flock-level prevalence of C. jejuni (cloacal swabs) in the tested layer chicken farms is within the suggested prevalence range of 2-100% in both developed and developing countries.13 It is no different from other rates detected in broiler chicken from France (42.7%), Denmark (42.5%), Germany (41%), Japan (45%), Italy (80%) and Jordan (40%).8,13 In this study C. jejuni was isolated either from one or from several different sources within the same farm. The most highly contaminated source of C. jejuni was drinking water. Newell et al. (2011) suggested the horizontal introduction of Campylobacter from the environment into a laying hen flock, which may suggest a link between chicken colonization and water or litter contamination.14 In another study of C. jejuni routes of transmission and possible sources of infection to broilers, it was concluded that the water supply was the predominant source of C. jejuni infection on the farm.15 Positive litter samples were always associated with positive birds.16-18 Epidemiological studies characterizing occurrence and risk factors associated with broiler flocks have been widely conducted in many locations.20-25 However, epidemiological studies characterizing occurrence and risk factors associated with Campylobacter in layer farms have not been fully investigated in many parts of the world. Findings presented here highlight a set of substantial risk factors that could potentially be associated with occurrence of Campylobacter in layer farms.
Univariable and multivariable regression analyses of the evaluated risk factors indicated no statistically significant association between the evaluated risk factors and the isolation of Campylobacter, which could be mostly attributed to the relatively small number of farms tested. However, analysis of the frequency information and data presented in Tables 1 and 4 indicates some risk factors that are associated with a substantially higher prevalence of Campylobacter infection in layer farms. These include bird stocking density (>6 birds/m²), short distance between farms (<1000 m), lack of hand washing before entering the farms, presence of piled litter inside the farm, and presence of rats, mice, pigeons, and sparrows. Furthermore, the mean bird stocking density in positive farms was significantly higher (10.33 birds/m²) than in negative farms (6.04 birds/m²). Also, the mean distance between positive farms was significantly shorter (380 m) when compared with negative farms (2200 m). Here, distance was associated with a decreased risk of Campylobacter infection, which might highlight the role of long distances in limiting transmission by rodents and resident birds. In this study, 60% of the positive farms had a higher bird stocking density, and 60% of the positive farms were within a short distance of each other. Also, Campylobacter infection was higher on farms that had multiple houses. [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30] Here, 5% of the positive farms had rats or mice, while 55% of the negative farms had neither rats nor mice. Also, 40% of the negative farms did not show any pigeons or sparrows. It has been reported that wild birds are potential contaminants of farms or the surrounding soil. [31] It appears that good hygiene practices by farmers, such as changing the disinfectant at the gate or house door, decontamination of vehicles entering farms, presence of a high fence (1.8 m) and a secure gate, restricted access of non-essential visitors, and wearing protective boots, were associated significantly with a lower prevalence rate of Campylobacter infection. These findings are in agreement with other previous studies. [14,30] As expected, Campylobacter was isolated from the litter samples, which might also act as a potential source of contamination of chicken houses. [28,29] The potential source of litter contamination is mainly the intestinal contents, and the organism is expected to survive under wet litter conditions.

The value of a clean water source is paramount, and water quality should be monitored in the chicken industry. Chlorine is the most commonly used disinfectant for water treatment in the broiler and layer chicken industries. Chlorinating drinking water is helpful in reducing the risk of Campylobacter colonization. [26] The recommended level of residual chlorine concentration in poultry farms is 2-5 ppm within a pH range of 6-8. [27] The residual chlorine values recorded in the farms under study were less than 2 ppm, which is considered not inhibitory for the growth of Campylobacter in chicken drinking water. [27] Chlorine acts predominantly as a sanitizer when the pH of the water is neutral or acidic. Here, the mean pH values of the drinkers in positive and negative farms were 7.38 and 7.51, respectively. Data are summarized in Table 1.

One limitation of this study is the limited number of farms studied. The study farms were the only available farms in the region during the study period. In Jordan, there are no integrated layer companies, and production mostly depends on individual layer farmers.
The significance of the study reported here lies in its being a new effort, conducted in layer farms, to determine the prevalence rate, characterize the isolated Campylobacter, and highlight important risk factors for their occurrence in layer farms. The authors believe that the findings presented here remain relevant despite the lack of a significant association between the evaluated risk factors and the isolation of Campylobacter in layer farms. It is recommended to improve hygienic practices at layer farms and to establish national guidelines and biosecurity standards to decrease the prevalence rate of Campylobacter in layer farms, which would positively impact the poultry industry and lower the rate of human Campylobacter infection.

Table 1. Quantitative statistics for the evaluated Campylobacter risk factors (columns: Variable; Isolation status; Min; Mean; 25th, 50th, and 75th percentiles; Max).
Regulation of intercellular adhesion strength in fibroblasts.

The regulation of adherens junction formation in cells of mesenchymal lineage is of critical importance in tumorigenesis but is poorly characterized. As actin filaments are crucial components of adherens junction assembly, we studied the role of gelsolin, a calcium-dependent, actin-severing protein, in the formation of N-cadherin-mediated intercellular adhesions. With a homotypic, donor-acceptor cell model and plates or beads coated with recombinant N-cadherin-Fc chimeric protein, we found that gelsolin spatially co-localizes to, and is transiently associated with, cadherin adhesion complexes. Fibroblasts from gelsolin-null mice exhibited marked reductions in the kinetics and strengthening of N-cadherin-dependent junctions when compared with wild-type cells. Experiments with lanthanum chloride (250 µM) showed that adhesion strength was dependent on the entry of calcium ions subsequent to N-cadherin ligation. Cadherin-associated gelsolin severing activity was required for localized actin assembly, as determined by rhodamine actin monomer incorporation onto actin barbed ends at intercellular adhesion sites. Scanning electron microscopy showed that gelsolin was an important determinant of the actin filament architecture of adherens junctions at nascent N-cadherin-mediated contacts. These data indicate that increased actin barbed end generation by the severing activity of gelsolin associated with N-cadherin regulates intercellular adhesion strength.

Cadherin-mediated adherens junctions are critically involved in tumorigenesis and metastasis as well as in the maintenance of mature tissue architecture, formation of distinct tissue boundaries, tissue differentiation, and cell sorting events such as epithelial-mesenchymal transitions during embryogenesis (1)(2)(3)(4)(5). The classical cadherins are a group of single-pass, transmembrane, calcium-dependent glycoproteins that mediate homotypic intercellular adhesions termed adherens junctions (5). Adherens junctions are linked to the actin cytoskeleton, and their formation or dissolution is tightly regulated. N-cadherin (adherens junction-specific cell adhesion molecule) is a member of the classical cadherin family, and its expression and function are necessary for maintenance of embryonic vitality (6) and for establishment of embryonic asymmetry (7). Further, N-cadherin is the predominantly expressed cadherin of mesenchymal tissues and is known to play an important role in mediating tissue organization and cell differentiation in muscle (8-10), cartilage (11), bone (12,13), and neural tissues (14,15). However, the regulation of N-cadherin-mediated intercellular adhesions, particularly in connective tissue fibroblasts, is poorly characterized. N-cadherins are tethered to the cortical actin cytoskeleton by α-catenin, a member of the armadillo family of proteins, which binds indirectly to the cytoplasmic tail of cadherins via β-catenin (16,17). Tethering of cadherins to cortical actin filaments is required for cadherin-mediated adhesion and adhesion strengthening (18-21). Recently, cadherins have been shown to function as adhesion-activated cell surface receptors (reviewed in Ref. 22). Ligation of cadherins on opposing cell surfaces generates signals that induce recruitment to the adherens junctions of several actin-binding proteins that mediate localized remodeling of the actin cytoskeleton (reviewed in Ref. 23).
Notably, vasodilator-stimulated phosphoprotein, zyxin, mena, Arp2/3, and cortactin are localized to nascent cadherin-mediated intercellular adhesions, further underlining the importance of actin filament networks in cadherin function (24). After initial intercellular contact and cadherin ligation, the dramatic reorganization of the actin cytoskeleton at adherens junctions is likely mediated by actin-severing proteins, proteins that can generate new barbed ends and promote rapid assembly of new actin filaments (25,26). However, this has not been shown. Notably, actin-severing proteins such as gelsolin are activated by localized increases of calcium (27,28), which suggests a possible regulatory mechanism for intercellular adhesion. The precise mechanism and regulatory molecules involved in cadherin-mediated adhesions are currently unclear, but gelsolin appears to be an important candidate. Notably, gelsolin is expressed at high levels in connective tissue fibroblasts (29) and is the only known calcium-dependent severing protein with a Ca2+ Kd in the micromolar range (30). Intracellular calcium transients localize to areas of nascent cadherin-mediated contacts and can regulate remodeling of cortical actin filaments (24,31). We used the donor-acceptor intercellular adhesion model (32), Ncad-Fc recombinant chimeric protein, and gelsolin-null fibroblasts to study the role of gelsolin in mediating the formation of intercellular contacts. Our results indicate that gelsolin transiently associates with nascent N-cadherin-mediated adherens junctions and there, in a calcium-dependent manner, remodels cortical actin filaments. This novel association may serve to explain the observations related to the importance of calcium in cadherin-mediated intercellular adhesions noted previously (31).

Donor-Acceptor Model and Wash-off Assay-The donor-acceptor model was used to analyze nascent cadherin-mediated intercellular adhesions in fibroblasts as described previously (31,32). Briefly, for quantification, donor and acceptor cells were incubated overnight in growth medium containing spectrally discrete dextran-conjugated fluorochromes (1 mg/ml; Sigma). Following the designated treatments, donor cell suspensions were prepared with 0.01% trypsin supplemented with CaCl2 (2 mM) and seeded onto acceptor monolayers at ratios of 1:1 unless otherwise indicated. Cells were incubated for the indicated times before jet-washing in a logarithmic series to estimate the strength of intercellular adhesions (34). Attached donor cells were fixed with paraformaldehyde and quantified in three randomly chosen ×40 fields with an inverted fluorescence microscope.

Immunofluorescence, Confocal, and Video Microscopy-Acceptor cells as non-confluent monolayers were overlaid with donor cells to establish nascent adherens junctions and incubated for discrete time periods. Non-confluent acceptor monolayers were used in these analyses to permit contrasts between nascent donor-acceptor intercellular adhesions and donor-substratum adhesions. Cells were fixed for 10 min with 2% paraformaldehyde, 5% sucrose solution in phosphate-buffered saline, permeabilized for 5 min in 0.02% Triton X-100 solution in phosphate-buffered saline, and stained with monoclonal N-cadherin (GC-4, Sigma), TRITC-conjugated β-catenin (14; BD Transduction Laboratories), fluorescein isothiocyanate-conjugated pan-cadherin (CH-19, Sigma), or rabbit polyclonal gelsolin antibodies (kind gift of D. J.
Kwiatkowski), followed by Cy3- or fluorescein isothiocyanate-tagged secondary antibodies (Sigma). Immunofluorescence was visualized by confocal microscopy (Leica, Heidelberg, Germany; ×40 oil immersion lens) using 1-µm transverse optical sections. For fluorescein isothiocyanate labeling, excitation was set at 488 nm, and emission was collected with a 530/20-nm barrier filter. For TRITC or Cy3, excitation was set at 530 nm, and emission was collected at 620/40 nm.

Transfection-Transient transfections were performed with FuGENE 6 transfection reagent (Roche Applied Science). Cells at 50% confluence were incubated with 3 µl of FuGENE 6 reagent and 1 µg of pEGFP-gelsolin in 100 µl of serum-free medium at 24°C for 30 min and assayed 48 h after transfection.

Flow Cytometry-EGFP-gelsolin-transfected Gsn−/− cells were harvested with 0.01% trypsin supplemented with 2 mM CaCl2. Transfected cells were sorted from untransfected cells (FACSTAR Plus; BD Biosciences) with excitation at 488 nm. Sorted cells were washed three times with phosphate-buffered saline and electronically counted prior to kinetic and strengthening experiments. For characterization of cadherin expression, whole cell lysates were prepared with 2% SDS Laemmli sample buffer. Protein concentrations of samples were standardized using the RC DC protein assay (Bio-Rad Laboratories), and equivalent amounts of protein were analyzed by Western blotting. Membranes were probed with N-cadherin antibody (GC-4; Sigma), anti-P-cadherin antibody (BD Transduction Laboratories), or anti-E-cadherin antibody (G-10; Santa Cruz Biotechnology, Santa Cruz, CA). β-Actin antibody (AC-15; Sigma) was used to co-blot.

Preparation of N-cadherin-Fc Dishes-The Ncad-Fc protein (chicken N-cadherin ectodomain fused to the Fc fragment of mouse IgG2b) was expressed in HEK-293 cells and collected as described (35). Microbiological plastic dishes were coated with protein G-purified Ncad-Fc protein reconstituted in sodium bicarbonate buffer at yields of at least 100 µg/ml; the protein was adsorbed onto the plates during overnight incubation at 4°C and the dishes were used immediately. Protein adsorption was quantified by dot blot and was estimated at 1.25 µg/cm² based on densitometric comparison with purified mouse IgG Fc fragment controls (Jackson Laboratories, West Grove, PA).

Severing Assay-The ability of gelsolin to sever actin filaments was measured as described (36). Briefly, rhodamine phalloidin (1 µM; Molecular Probes, Eugene, OR) was added to actin filaments (0.4 µM), and the rate of fluorescence loss at 570 nm was measured by fluorimetry. The reduction of fluorescence is caused by gelsolin severing of actin and displacement of phalloidin after adding CaCl2 (1 mM). Affinity-purified rabbit muscle actin (1.0 mg/ml; Cytoskeleton, Denver, CO) was resuspended in polymerization buffer (50 mM KCl, 2 mM MgCl2, 0.5 mM ATP, 2 mM Tris, pH 8.0) and sedimented with an Airfuge (Beckman; 30,000 rpm for 20 min) to remove unpolymerized actin. Cell lysates from wild-type and Gsn−/− cells attached onto non-tissue culture plates coated with Ncad-Fc protein were prepared with detergent plus protease inhibitors in buffer containing 50 mM KCl, 2 mM MgCl2, 0.5 mM ATP, 2 mM Tris, pH 8.0, 1 mM EGTA, and 1% Triton X-100. The lysates were dialyzed against several changes of buffer containing 2 mM MgCl2, 50 mM KCl, 2 mM Tris-HCl, 1 mM EGTA, and 0.5 mM β-mercaptoethanol. The volume of the dialyzed cell lysate was adjusted to 400 µl in dialysis buffer. Labeled F-actin in polymerizing buffer (200 µl; 50 mM KCl, 2 mM MgCl2) was added to a final concentration of 400 nM. Severing assays were performed in the presence of calcium (2 mM CaCl2, 1 mM EGTA).
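As a rough illustration of how the severing-assay readout can be reduced to a single activity value, the sketch below fits a line to a fluorescence-versus-time trace and reports its slope. The time points and fluorescence values are invented placeholders; the slopes actually reported under "Results" come from the authors' own traces, not from this code.

# Hedged sketch: severing activity estimated as the slope of the
# rhodamine-phalloidin fluorescence decay after CaCl2 addition.
# All numeric values below are illustrative placeholders.
import numpy as np
from scipy.stats import linregress

time_s = np.arange(0, 300, 30)                     # seconds after CaCl2
fluorescence = np.array([1000, 905, 818, 742, 660,
                         588, 511, 447, 370, 302])  # arbitrary units

fit = linregress(time_s, fluorescence)
print(f"severing activity (slope) = {fit.slope:.2f} AU/s, "
      f"R^2 = {fit.rvalue**2:.3f}")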
Actin Assembly-Gelsolin wild-type or gelsolin-null cells were allowed to attach to Ncad-Fc-coated non-tissue culture plates for specific incubation times. In permeabilized cells incubated with rhodamine actin monomers, increases of rhodamine fluorescence due to incorporation into nascent actin filaments were measured (37)(38)(39). Cells were permeabilized for 20 s using 0.1 volume of octyl glucoside buffer (PHEM buffer (60 mM PIPES, 24 mM HEPES, 5 mM EGTA, 1 mM MgSO4, pH 6.9) containing 2% octyl glucoside and 2 µM phalloidin). Permeabilization was stopped by diluting the detergent with buffer. Immediately thereafter, freshly sedimented rhodamine actin monomer (0.23 µM) in buffer containing 120 mM KCl, 2 mM MgCl2, 3 mM EGTA, 10 mM PIPES, and 0.1 mM ATP was added to the samples for 10 s, followed by fixation with 3.7% formaldehyde. The samples were observed with a Nikon TE 300 microscope, and rhodamine fluorescence in single cells was quantified using the PCI imaging program. For background correction, detergent treatments were omitted, fluorescence was quantified, and the background signal was subtracted from the experimental samples.

Ca2+ Fluxes-Donor cells were loaded with fluo-4/acetoxymethyl ester (3 µM) according to the manufacturer's instructions (Molecular Probes) and plated on acceptor cells or onto Ncad-Fc-coated microbiological plastic dishes. Peripheral membrane Ca2+ influx was measured in donor cells immediately after attachment and visualized by z-axis optical sectioning by confocal microscopy. For wash-off assays, embryonic fibroblasts were preincubated with lanthanum chloride (250 µM), and donor-acceptor cultures were established for 15 min in growth medium containing the same concentration of the calcium channel blocker. For experiments to detect near-plasma membrane calcium transients, the lipophilic calcium ion indicator fura-C18 was loaded into substratum-bound cells according to the manufacturer's instructions, and Ncad-Fc-coated or bare beads were incubated on the cells. Plasma membrane calcium ion measurements were conducted as described previously (27).

Magnetic Bead Pull-off Assays-Proteins enriched at sites of N-cadherin ligation were prepared through recombinant protein-coated, bead-associated adhesion complexes. Briefly, after the designated incubation times, cells and attached N-cadherin-coated magnetic beads (Spherotech, Libertyville, IL) were collected by scraping into ice-cold cytoskeleton extraction buffer (CSKB). Beads were pelleted using a side-pull magnetic isolation apparatus (Dynal, Lake Placid, NY), and supernatants were collected. Isolated beads were resuspended, sonicated, homogenized, and washed three times in CSKB prior to gel fractionation and Western blot analysis.
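The background correction in the actin assembly assay above is a simple subtraction of the no-detergent control signal; a minimal sketch with invented per-cell intensities follows.

# Minimal sketch of the background-corrected fluorescence quantification
# described in the actin assembly assay: the mean signal from cells whose
# detergent treatment was omitted is subtracted from experimental values.
# Array contents are illustrative placeholders.
import numpy as np

experimental = np.array([152.0, 171.5, 148.2, 160.9])  # per-cell mean AU
no_detergent_controls = np.array([41.3, 39.8, 44.1])   # background AU

corrected = experimental - no_detergent_controls.mean()
sem = corrected.std(ddof=1) / np.sqrt(len(corrected))
print(f"corrected incorporation: {corrected.mean():.1f} "
      f"+/- {sem:.1f} AU (mean +/- S.E.)")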
Electron Microscopy of Cytoskeletons and Quantification of Images-Gelsolin-null or gelsolin wild-type cells were allowed to attach onto Ncad-Fc-coated glass coverslips for 3 min prior to detergent extraction, fixation, and processing as described elsewhere (40). Briefly, cells were extracted for 5 min with 1% Triton X-100, 4% polyethylene glycol in PEM buffer (100 mM PIPES, pH 6.9; 1 mM MgCl2) with 1 mM EGTA supplemented with 10 µM phalloidin, washed three times in PEM buffer, and fixed in 2% glutaraldehyde (electron microscopy grade) in 0.1 M sodium cacodylate, pH 7.3, for 20 min at room temperature and overnight at 4°C. Samples were subsequently fixed in 0.1% aqueous tannic acid and uranyl acetate solutions, respectively, for 20 min prior to dehydration and critical point drying. Samples were gold-coated using a Polaron sputter coater with a rotary planetary stage. Samples were visualized and digital images were acquired with a Hitachi S-570 scanning electron microscope. Filament lengths and branching frequency were quantified using inclusion criteria as described elsewhere (41). Briefly, filament lengths were quantified in a 1.76 by 1.76 µm image using the Simple PCI software (Compix Inc., Imaging Systems, Cranberry Township, PA). Filaments longer than the field of view were excluded. Included filaments were traced from the cell edge until they emerged from a mother filament or were lost in the network. The branching frequency of stereotypical 70° Y-branches was measured from an 880 by 880 nm image at the cell edge. Normalized measurements of the number of branches/µm of actin filament length were obtained to correct for differences in filament density; the number of branches per image was divided by the total length of all visible filaments.

Statistical Analyses-For continuous variables, means and S.E. were computed. Comparisons between two groups were evaluated by Student's t test, and for multiple samples, analysis of variance was used, with statistical significance set at p < 0.05.
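The normalized branching measurement and the two-group comparison just described can be expressed compactly. The sketch below uses invented per-image branch counts and filament lengths, and applies Student's t-test as in the statistical analyses; it is an illustration, not the authors' quantification script.

# Hedged sketch: branches per micrometer of filament (branch count divided
# by total visible filament length per image), compared between wild-type
# and gelsolin-null images with Student's t-test.  Values are placeholders.
import numpy as np
from scipy.stats import ttest_ind

def branches_per_um(n_branches, total_filament_length_um):
    return n_branches / total_filament_length_um

wild_type = np.array([branches_per_um(b, l) for b, l in
                      [(14, 9.2), (11, 8.1), (16, 10.4), (12, 8.8)]])
gsn_null = np.array([branches_per_um(b, l) for b, l in
                     [(5, 11.6), (4, 12.3), (6, 13.1), (5, 12.0)]])

t, p = ttest_ind(wild_type, gsn_null)
print(f"WT {wild_type.mean():.2f} vs null {gsn_null.mean():.2f} "
      f"branches/um; t = {t:.2f}, p = {p:.4f}")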
Cadherin Expression and Importance of Polymerized Actin in Intercellular Adhesions-We assessed the expression of classic cadherin family members in embryonic mouse fibroblasts and Rat-2 fibroblasts by immunoblotting whole cell lysates with antibodies specific for extracellular epitopes of P-, E-, and N-cadherins. Only N-cadherin was detected in these cells (Fig. 1A, wild-type embryonic fibroblasts shown). Control cell lysates were used to verify the specificity of the antibodies used for cadherin expression profiling (results not shown). Immunoprecipitations for N-cadherin in wild-type and null fibroblasts showed that α-catenin and β-catenin association with the cytoplasmic domain of N-cadherin was unaffected by the presence or absence of gelsolin (Fig. 1B).

FIG. 2. Gelsolin co-localizes with N-cadherin adhesion complex proteins and actin filaments at nascent contacts. Optical sections at the donor-acceptor interface of stained donor Rat-2 cells incubated onto a non-confluent acceptor monolayer for 15 min; images were acquired by confocal immunofluorescence. A, gelsolin is enriched at nascent donor-acceptor interfaces and co-localizes with β-catenin (yellow in merged overlay). Donor-substratum interfaces show weaker gelsolin staining and a lack of β-catenin staining (green in merged overlay). The outline of the underlying acceptor cell is indicated in gray. A differential interference contrast (DIC) image is provided to reveal the location of the acceptor cell. B, gelsolin co-localizes with distinct F-actin staining at sites of nascent donor-acceptor interfaces (open arrow; yellow in merged overlay). The lower optical section of F-actin staining is provided to reveal the location of the acceptor cell and the spatial interaction between donor and acceptor cells. The bars indicate 20 µm.

An intact cortical actin cytoskeleton is known to be important for the formation and maintenance of cadherin-mediated intercellular adhesions (18-21). We verified the functional importance of actin filament assembly and organization in cadherin-mediated adhesion strengthening in mouse embryonic fibroblasts using donor-acceptor model shear wash-off assays with latrunculin B (1 µM). As expected, we found that in a logarithmic series of jet washes, latrunculin B-treated samples exhibited significantly reduced levels of donor cell adhesive strength when compared with vehicle control samples throughout the wash-off series (p < 0.01; results not shown).

Gelsolin Co-localizes with N-cadherin Adhesion Complexes-As remodeling of cortical actin filaments by actin-binding proteins may be important for the formation of adherens junctions, we determined whether gelsolin was localized to intercellular adhesions. Immunofluorescence analysis of nascent, N-cadherin-dependent intercellular adhesions of Rat-2 fibroblasts was conducted using the donor-acceptor model, a system that generates large numbers of synchronized N-cadherin-dependent intercellular adhesions (31,32). There was significant enrichment of gelsolin at donor-acceptor interfaces that co-localized with β-catenin (Fig. 2A, merged images). Incompletely established acceptor monolayers were used to contrast donor-acceptor and donor-substratum interfaces, as demonstrated in the differential interference contrast image. Although β-catenin staining was limited to donor-acceptor interfaces, gelsolin was also found peripherally at donor-substratum interfaces, albeit with greatly reduced staining intensity (Fig. 2A, merged image). Indeed, when donor cells were incubated on completely confluent monolayers of acceptor cells, there was a peripheral ring of gelsolin staining corresponding to sites of donor-acceptor cell interfaces (Fig. 3B). Gelsolin also co-localized with cortical actin filaments at donor-acceptor interfaces and at nascent intercellular contacts (Fig. 2B).

Gelsolin Transiently Associates with Nascent Intercellular Contacts-Immunoprecipitations were conducted using antibodies directed against a cytoplasmic cadherin epitope to evaluate whether gelsolin is recruited to, and physically associates with, N-cadherin-mediated intercellular adhesions. Donor-acceptor samples containing equivalent amounts of protein showed that gelsolin was recruited to, and transiently associated with, the N-cadherin adhesion complex of proteins (Fig. 3A). The cadherin-gelsolin physical interaction was detected most prominently during the first 30 min of intercellular adhesion and subsequently decreased to baseline levels (Fig. 3A; time 0 is the acceptor monolayer without donor cells). To evaluate the gelsolin-cadherin interaction spatially over time, donor cells were incubated on completely confluent acceptor monolayers and subsequently fixed and stained at distinct time points. Merged images demonstrate that the greatest degree of co-localization between cadherin and gelsolin staining occurred at 15 and 30 min, with a subsequent decrease as the timeline progressed (Fig. 3B). Collectively, these data suggest that gelsolin is actively recruited to sites of early intercellular contact and progressively disassociates as junctions mature.
Gelsolin Regulates N-cadherin Adhesion Kinetics and Strengthening-Donor-acceptor cultures were established using wild-type fibroblasts, gelsolin-null fibroblasts, or gelsolin-null fibroblasts reconstituted with EGFP-gelsolin by transient transfection (gelsolin rescue) to study the effects of gelsolin on the kinetics of N-cadherin-mediated intercellular adhesion. Gelsolin-null cells showed markedly reduced N-cadherin-dependent intercellular adhesion kinetics when compared with gelsolin wild-type cells (Fig. 4A). The greatest kinetic differences between the wild-type and null cell types were detected between 5 and 30 min, with very minor increases in the total numbers of intercellular adhesions subsequent to that, suggesting an important role of gelsolin in the initial formation of intercellular adhesions. After transient transfection of null donor cells with a full-length EGFP-gelsolin construct, the adhesion kinetics were restored to those of wild-type cells (Fig. 4A, rescued). We next investigated the strength of intercellular binding using a logarithmic wash-off assay performed on donor-acceptor cultures incubated for 15 min to evaluate the possible role of gelsolin in nascent intercellular adhesion strengthening. The estimated strength of attachment of gelsolin-null fibroblasts was considerably lower than that of wild-type cells (Fig. 4B). Similar to the kinetics assay, gelsolin-null fibroblasts that were transiently transfected with gelsolin exhibited restoration of intercellular binding strength to that of wild-type cells. To evaluate the potential role of gelsolin in the maintenance of established intercellular adhesions, the strength of intercellular binding of donor-acceptor cultures incubated for 180 min was investigated using the shear wash-off assay. No differences in intercellular adhesion strength were detected between wild-type and null cell types throughout the wash-off series (Fig. 4C). These data suggest that the role of gelsolin is limited to the strengthening of nascent intercellular contacts, with no effect on the maintenance of mature contacts.

FIG. 4. Gelsolin severing activity is required for N-cadherin adhesion and strengthening. A, temporal increase of intercellular adhesion between murine wild-type (+/+), gelsolin-null (−/−), and gelsolin-null fibroblasts reconstituted with gelsolin (gelsolin rescue) in the donor-acceptor model. By 30 min, gelsolin-null cells exhibit 2-fold less intercellular attachment (p < 0.01). This effect was reversed after gelsolin reconstitution. B, a wash-off assay was conducted to evaluate the strength of nascent intercellular adhesions in the donor-acceptor model. Cells were incubated for 15 min and were jet-washed in a logarithmic series. There is ~50% reduced binding in gelsolin-null versus wild-type fibroblasts at all time points (p < 0.01). After gelsolin reconstitution in gelsolin-null fibroblasts, the level of intercellular adhesion was equivalent to that of wild-type cells. The means ± S.E. are shown for three separate samples. C, gelsolin has no effect on N-cadherin-dependent strengthening by the wash-off assay in cells incubated for a 180 min time period.

External Calcium Influx and Gelsolin Activation-Gelsolin severing and capping activity are critically dependent on calcium ion concentration (42)(43)(44). We examined calcium transients subsequent to N-cadherin ligation using the donor-acceptor model (31).
Donor gelsolin wild-type cells were loaded with fluo-4/AM (3 µM), harvested, and imaged in real time by confocal microscopy as the donor cells attached to underlying acceptor cells. There was a localized increase of calcium at sites of donor-acceptor adhesion directly apposed to the donor cell plasma membrane (Fig. 5A, i). We verified that this calcium response was due specifically to N-cadherin ligation, as opposed to other intercellular adhesion-associated, surface-expressed proteins. Donor cells were loaded with fluo-4/AM and allowed to attach onto non-tissue culture plates coated with recombinant Ncad-Fc chimeric protein. Similar to the results found with the donor-acceptor model, distinct submembranous calcium signals were found after fibroblast attachment (Fig. 5A, iii and v) when compared with unattached cells (Fig. 5A, ii and iv). Cells did not attach to control plates coated with mouse IgG-Fc fragments. To further validate the submembranous, or near-plasma membrane, localization of calcium transients at sites of N-cadherin ligation, fura-C18 dye-loaded, substratum-bound cells were observed following binding of N-cadherin-Fc-coated and control bare beads. N-cadherin-Fc-coated beads elicited a sharp rise in near-plasma membrane [Ca2+] at sites of bead-to-cell binding when compared with bare controls (Fig. 5B). As the apparent submembranous localization of the calcium signal suggested an influx of calcium through plasma membrane channels as described previously (45), we evaluated the effect of lanthanum chloride (a blocker of external calcium channels) on nascent intercellular adhesion strengthening in gelsolin wild-type and gelsolin-null fibroblasts using the donor-acceptor model (15-min incubation). Lanthanum chloride-treated wild-type donor-acceptor samples demonstrated significantly reduced adhesion strength when compared with vehicle controls and with lanthanum-treated and control gelsolin-null cells. Notably, there was no significant difference in adhesion between lanthanum-treated and untreated gelsolin-null cells (Fig. 5C).

Gelsolin Regulates Cadherin-localized Actin Filament Assembly-To determine whether cadherin-mediated adhesion kinetics and strengthening were related to the severing activity of gelsolin, we conducted actin severing assays (36). Gelsolin wild-type and null fibroblasts were plated on Ncad-Fc-coated, non-tissue culture dishes for 30 min, as our immunoprecipitation data showed that this time period was coincident with the largest amount of cadherin associated with gelsolin. Cell lysates were prepared, and the severing activity of the wild-type fibroblasts (slope = −35.4; R² = 0.817; n = 3; p < 0.01) was significantly higher than that of the gelsolin-null cells (slope = −11.5; R² = 0.734; n = 3; Fig. 6A). As anticipated, this finding indicated that more barbed ends would be generated during intercellular adhesion in the wild-type cells than in the gelsolin-null cells. We determined whether greater numbers of barbed ends would be generated at sites of N-cadherin ligation as a result of cadherin-associated gelsolin activity in the wild-type cells. Accordingly, actin monomer addition assays were conducted (37)(38)(39) in wild-type and gelsolin-null fibroblasts plated on Ncad-Fc-coated non-tissue culture plates. The rate of monomer incorporation was quantified by fluorescence image analysis (Fig. 6B).
We found significantly faster incorporation of actin monomers in wild-type cells than in gelsolin-null cell samples during the first 60 min of adhesion, with a subsequent lack of activity after 60 min (p < 0.05). By 180 min, gelsolin-null samples had incorporated ~2.5-fold less actin monomer at sites of N-cadherin ligation (Fig. 6B; p < 0.01).

Effect of Gelsolin on N-cadherin-associated Cytoskeletal Architecture-We determined whether gelsolin expression affects the relative proportion of N-cadherin adhesions formed between cells. Ncad-Fc-conjugated, protein A-coated magnetic beads were allowed to bind to monolayers of wild-type or gelsolin-null fibroblasts for 5 min. Saturation of available bead binding sites was verified by separate immunoblotting experiments that demonstrated minimal amounts of chimeric protein remaining in the supernatant (data not shown). Proteins eluted from the beads were co-immunoblotted for β-catenin and β-actin. The immunoblots showed that approximately equivalent amounts of β-catenin were recruited to N-cadherin-mediated adhesions (Fig. 7A). As there is equimolar stoichiometry of β-catenin and α-catenin in the cadherin complex (17,46), we considered that there were equivalent numbers of actin binding sites and N-cadherin adhesions formed between the two cell types. As actin filaments bind to N-cadherin through associated α-catenin, we co-immunoblotted for β-actin. There were dramatically reduced amounts of β-actin in the bead preparations from the gelsolin-null cells, despite equivalent amounts of β-actin in the whole cell lysates (Fig. 7A, lysates not shown). Densitometry revealed that there was 45% less β-actin associated with the cadherin adhesions of null cells when compared with wild-type cells (Fig. 7A, right panel), as determined when the densities of the β-actin bands were standardized against those of the β-catenin bands. Bare beads were included to show the absence of associated adherens junction proteins (Fig. 7A). Scanning electron microscopy was used to visualize the cytoskeletal architecture of wild-type and gelsolin-null fibroblasts that adhered to N-cadherin-Fc-coated plates for 5 min. The gelsolin-null fibroblasts (Fig. 7, B-D) showed longer actin filaments, reduced cross-linking, reduced branching, and a generally lower level of actin filaments than wild-type cells (Fig. 7, E-G). Quantification of the structural features seen in these images verified that filament length and 70° branching frequency at nascent N-cadherin adhesions indeed differed significantly between null and wild-type cells (Fig. 7, H and I). These data suggest that despite an equivalent amount of β-actin expression in both cell types, and a tendency to form approximately the same number of junctions based on the bead pull-off assay, the underlying cytoskeletal architecture of N-cadherin adhesions is critically dependent on gelsolin expression.

DISCUSSION

Down-regulation of gelsolin expression is an important prognostic marker for progression of malignancy and tumor invasion (47), indicating that gelsolin may play a crucial role in regulating physiological intercellular adhesion. Our findings provide a mechanistic basis by which gelsolin contributes to the maturation of cadherin-mediated adhesions and, consequently, is an important regulator of tissue integrity. We have used the donor-acceptor model and a recombinant Ncad-Fc chimeric protein to study early events involved in the organization of the actin filament network to which N-cadherin molecules are tethered.
We demonstrated that in vitro, gelsolin spatially co-localizes to, and transiently associates with, N-cadherin adhesions. Gelsolin regulates the kinetics and strength of early N-cadherin adhesions through a mechanism that involves actin severing, barbed end generation, and subsequent filament remodeling that is important for the maturation of intercellular adhesions (Fig. 8). To our knowledge, this is the first report that demonstrates a local role for gelsolin in the regulation of nascent, N-cadherin-mediated adhesions. These findings support the developing notion that cadherin ligation marks intercellular junctions as sites of dynamic actin reorganization through the recruitment of various components of the actin assembly machinery (22,24,48). Our data also underline the importance of localized intracellular calcium transients in regulating the formation of intercellular junctions. Thus, although extracellular calcium is crucial in the structural requirements for maintenance of the extracellular domains of cadherins (49,50), intracellular calcium concentration is also likely important for gelsolin severing (28) and the subsequent adhesion strengthening that is effected through remodeling of the actin cytoskeleton.

Calcium and N-cadherin Adhesion Strengthening-Millimolar levels of extracellular calcium are required for maintenance of the structural integrity and the association of lateral dimers of cadherin extracellular domains (49,51). In contrast, micromolar intracellular calcium transients are generated after N-cadherin ligation in fibroblasts (31). These transient increases of intracellular calcium are required for actin polymerization and regulate cadherin-mediated intercellular adhesion through an unidentified mechanism (24). Our results demonstrated an influx of calcium ions through the plasma membrane that was localized to sites of N-cadherin ligation and that appears to be important for activating cadherin-associated gelsolin severing activity. Previous reports have shown that in fibroblasts undergoing cadherin ligation, calcium influx occurs through plasma membrane channels located at nascent, but not mature, N-cadherin-mediated intercellular junctions (45). We found that blockade of this calcium entry resulted in marked reductions of nascent adhesion strength in gelsolin wild-type cells but had no effect on gelsolin-null cells. The absence of any effect in gelsolin-null cells is likely due to the adaptation of these cells to gelsolin deficiency, possibly through compensatory increases of other actin-severing proteins, such as actin-depolymerizing factor/cofilin, that are not calcium-dependent. Indeed, the gelsolin-null cells exhibited measurable severing activity and actin assembly when plated on N-cadherin substrates, albeit at greatly reduced rates when compared with wild-type cells. We suggest that the intracellular calcium transients following N-cadherin ligation (45) are important in intercellular adhesion because of their role in locally activating gelsolin severing activity.

N-cadherin Adhesion and Gelsolin Activation-Cadherin association with the actin cytoskeleton is thought to be the rate-limiting step in epithelial intercellular adhesion (23,24). Inhibition of this association significantly reduces cadherin adhesive function (Refs. 52 and 53; reviewed in Ref. 54).
An expanding list of regulators of actin assembly has been localized to developing cadherin-mediated junctions, including the Arp 2/3 complex, cortactin, and the vasodilator-stimulated phosphoprotein/Ena and vinculin/zyxin family members. These findings further support the importance of local actin polymerization in the regulation of cadherin adhesion. Indeed, our data show that the severing activity of gelsolin is an important component of actin filament dynamics at the earliest stages of N-cadherin-dependent adhesions. We found that gelsolin-null mouse fibroblasts exhibited significantly reduced adhesion kinetics and strengthening when compared with wild-type controls. Gelsolin transiently associated with nascent N-cadherin adhesions, suggesting that gelsolin plays an important local role in breaking down existing actin filament networks at nascent contacts. This localized remodeling facilitates the locally required configurations of adherens junctions to promote intercellular adhesion. In addition to increasing the pool of actin monomers, the severing activity of gelsolin, followed by uncapping, generates a large number of free barbed ends that are necessary for actin assembly (25). Thus, the association of gelsolin with cadherins and its local activation may be required for efficient actin assembly by actin nucleators such as the Arp 2/3 complex, as has been shown in platelets and fibroblasts (26). Indeed, we found that gelsolin wild-type cells incorporated significantly larger amounts of actin monomer and subsequently exhibited considerably more polymerized actin at sites of early N-cadherin ligation than gelsolin-null cells. The greatest amount of actin monomer addition occurred during the first 60 min of N-cadherin ligation, with very little addition occurring subsequent to that. This profile corresponds quite closely with the transient association of gelsolin with the cadherin adhesion complex noted in the donor-acceptor model. Further, our electron microscopy showed that the actin network of gelsolin wild-type cells was significantly more cross-linked, with a shorter average filament length, than that of gelsolin-null cells. The contrast in the microfilament architecture between gelsolin wild-type and null cells underlines the important functional differences in cadherin adhesion noted above and highlights the importance of gelsolin as a regulator of intercellular adhesions.

FIG. 8. Proposed mechanism of the role of gelsolin in N-cadherin-mediated intercellular fibroblast adhesion. Intercellular contact via surface-expressed N-cadherin molecules induces adhesion complex formation and tethering to the actin cytoskeleton. Gelsolin is recruited to nascent contacts and is activated by a locally induced influx of calcium. Gelsolin severs existing actin filaments and generates barbed ends. Actin nucleation sites for polymerization and actin reorganization are necessary for adhesive strength.

Overexpression of gelsolin in Madin-Darby canine kidney cells disrupts intercellular contacts by an unknown mechanism that maintains the composition of the E-cadherin-catenin complex (55). We found that the level of gelsolin did not influence β-catenin association with N-cadherin; however, our functional assessments are inconsistent with these findings. This discrepancy may be attributable to differences in cellular background and the type of cadherin that was expressed (56), in addition to variations in gelsolin levels between studies.
Although structurally similar (5), different classical cadherin family members mediate functionally distinct adhesions in different tissue types (57). Further, overexpression of gelsolin may produce different effects when compared with cells lacking gelsolin altogether. Gelsolin-null cells reconstituted with gelsolin showed the importance of actin severing in the formation of nascent intercellular contacts and in intercellular adhesion strengthening. Loss of cadherin-mediated intercellular adhesion has been implicated in malignant transformation (58). The reduction in cadherin-mediated adhesion strength, which compromises the integrity of intercellular contacts prior to metastasis, may result from reduced gelsolin expression or function. Indeed, down-regulation of gelsolin expression coincides with tumor invasiveness and has been implicated as a prognostic indicator for therapeutic interventions in cancer (59). We found that gelsolin-null cells demonstrated a defect in the maturation of cadherin-mediated adhesions, which resulted in reductions in the strength of intercellular contacts. The rescue of this deficiency by reconstituting gelsolin in these cells underscores the importance of gelsolin as a critical regulator of adhesion strength at nascent contacts. This is particularly relevant for N-cadherin-expressing mesenchymal cells, as they exhibit rapid rates of lamellipodial extension and high turnover of cadherin-mediated intercellular adhesions (60,61). Cadherin ligation rapidly increases GTP-bound rac without affecting other Rho family GTPases such as Rho or cdc42 (62,63). As active rac is required for actin filament assembly and insertion into N-cadherin adhesions (16,64), and as gelsolin is an important rac-dependent effector of actin assembly (39), we suggest that N-cadherin ligation may mediate rac-dependent activation of gelsolin that is required for adherens junction formation. Collectively, our data demonstrate the importance of gelsolin and actin remodeling in mediating intercellular adhesion following cadherin ligation.
Analysis of Chlorophylls/Chlorophyllins in Food Products Using HPLC and HPLC-MS Methods

Of the different quality parameters of any food commodity or beverage, color is the most important, attractive, and choice-affecting sensory factor for consumers and customers. Nowadays, food industries are interested in making the appearance of their food products attractive and interesting in order to appeal to consumers. Natural green colorants have been accepted universally due to their natural appeal as well as their nontoxic nature. In addition, several food safety issues mean that natural green colorants are preferable to synthetic food colorants, which are mostly unsafe for consumers even though they are less costly, more stable, and create more attractive color hues in food processing. Natural colorants are prone to degradation into numerous fragments during food processing and, thereafter, in storage. Although different hyphenated techniques (especially high-performance liquid chromatography (HPLC), LC-MS/HRMS, and LC-MS/MS) are extensively used to characterize all these degradants and fragments, some of them are not responsive to any of these techniques, and some substituents in the tetrapyrrole skeleton are insensitive to these characterization tools. Such circumstances warrant an alternative tool to characterize them accurately for risk assessment and legislation purposes. This review summarizes the different degradants of chlorophylls and chlorophyllins under different conditions, their separation and identification using various hyphenated techniques, national legislation regarding them, and the challenges involved in their analysis. Finally, this review proposes that a non-targeted analysis method that combines HPLC and HR-MS, assisted by powerful software tools and a large database, could be an effective tool for analyzing all possible chlorophyll- and chlorophyllin-based colorants and degradants in food products in the future.

Introduction

The scientific community is highly interested in green chemistry, green technology, and evergreen processes in the application of cutting-edge technologies. Vendors as well as customers are fond of natural colorants. Essentially, food industries use natural food colorants to cultivate a sense of nature in the customer's mindset, because natural colorants are non-toxic and healthy. Among the different green colorants (Green S (str-1), Fast Green FCF (str-2), Malachite Green (str-3), Tartrazine (str-4), Brilliant Blue (str-5), Chlorophyll a (str-6i), Chlorophyll b (str-6ii), Chlorophyll c (str-6iii), Chlorophyll d (str-6iv), Bacteriochlorophyll (str-6v), Protochlorophyll (str-6vi), and combinations of Tartrazine, Brilliant Blue, and Fast Green, or of Tartrazine and Brilliant Blue), natural chlorophylls are used extensively by the food and processing industries to impart a sense of nature and organicity in the customer's mind (Figure 1) [1]. One of the main challenges facing the food processing and beverage industries is that of finding the right concentration of a food additive/colorant to be used for a certain purpose, considering both the adverse effects for consumers and the quality of foodstuffs and beverages with respect to their texture, color and appearance, taste and related health issues, and the legislation surrounding food additives and colorants [2] on the basis of their Acceptable Daily Intake (ADI) [3].
Among the different classes of food additives (preservatives, nutritional additives, coloring agents, flavoring agents, texturizing agents, and miscellaneous agents), food colorants are responsible for some health disorders in consumers, such as allergies and hyperactivity [4,5]. Additionally, there is a lack of coordination and harmony among the legislation on food additives/colorants issued by different countries, which creates an obstacle to the maintenance of a uniform food safety protocol in international trade [6,7]. It has been observed that the colorants FD&C Green No. 3 (Fast Green (E143)) and Citrus Red No. 2 (E121) are allowed in the USA but are banned in the European Union (EU). Similarly, the colorants carmoisine (E122), amaranth (E123), and patent blue (E131) are not allowed in the USA but are permitted in the EU [3,7]. Among food colorants, natural colorants such as flavonoids, isoprenoids, and nitrogen-heterocyclic and pyrrole derivatives are commonly found in different foodstuffs and beverages [8]. Among natural colorants, chlorophylls are highly abundant in nature and are used extensively by green leaves for the conversion of solar energy to chemical energy; nowadays, they are also being researched by many groups for use as food colorants instead of artificial and synthetic colorants, which have adverse health effects on consumers. Additionally, chlorophylls and chlorophyllins have bioactive properties that can deliver beneficial health effects, such as antidiabetic, anticancer, and cardioprotective effects, as well as benefits against neurological disorders [9][10][11]. Another interesting research finding on chlorophylls and chlorophyllins suggests that no absorption of chlorophylls and chlorophyllins occurs in the body after consumption via different foodstuffs and beverages; they are instead excreted in the feces [12]. There are several challenges in utilizing chlorophyll as a natural green colorant, alongside its several beneficial effects for consumers. Chlorophyll is not soluble in water, but it can be extracted from green leaves and plants using organic solvents. The next challenge is its stability under normal conditions, which is exacerbated during food processing in different food industries. Chlorophylls degrade into several degradants under the conditions applied during food processing, which creates a very complicated scenario during the separation and identification of these degradants [13,14]. Different cutting-edge analytical techniques make possible their correct speciation and characterization [1,4,12,13,15]. It is therefore of the utmost importance to gain thorough knowledge of these degradants and the plausible routes of degradation under various conditions before the required research can be carried out on the analytical techniques used to separate and identify these degradants of chlorophylls and chlorophyllins [16][17][18][19][20][21][22][23][24]. Herein, we first introduce their chemistry and stability, and the active legislation and regulations concerning them in different countries. Separation and identification methods based on HPLC, HPLC/MS, and HPLC/MS-MS for green chlorophylls and chlorophyllins are then discussed.
Finally, we propose a non-targeted analysis method that combines high-performance liquid chromatography and high-resolution mass spectrometry, assisted by powerful software tools and a large database, which could be a future tool for analyzing all the possible chlorophyll- and chlorophyllin-based colorants and degradants in food products.

Chemistry and Stability of Chlorophylls

Many researchers are actively trying to improve the stability of chlorophylls using different cutting-edge technologies, in order to fulfill the market demand for natural green hues and more natural formulations that meet customers' acceptance and satisfaction. Normally, chlorophylls containing Mg2+ ions are green in color, but Mg2+-free derivatives (mainly pheophytins and pheophorbides) are brown. Researchers have adopted different strategies for restoring or sustaining the green coloration of chlorophylls, such as (i) introduction of other metals to replace the Mg2+ ion, (ii) encapsulation of metallochlorophyllins with starch-containing gum arabic, octenyl succinic anhydride, and maltodextrin [25], or whey proteins [26], and (iii) microencapsulation [27]. Still, the stability of chlorophylls and their derivatives is a big challenge facing the food processing industries. Due to the differing solubilities of chlorophylls and chlorophyllins, lipid-soluble chlorophylls are recognized as E140i and water-soluble chlorophyllins are denoted E140ii. Similarly, lipid-soluble Cu-chlorophylls are considered E141i, and water-soluble Cu-chlorophyllins are considered E141ii. Considering the broad uses of chlorophylls and chlorophyllins, food industries follow some typical industrial processes to prepare them. In general practice, normal solvent extraction methods are followed to prepare E140i; solvent extraction followed by saponification is used to prepare E140ii; solvent extraction followed by copper salt treatment is used to prepare E141i; and solvent extraction followed by saponification and copper salt treatment is carried out to prepare E141ii [33].

Chlorophylls and chlorophyllins are used mostly in the food processing and beverage industries. During food processing, these chlorophylls and chlorophyllins undergo several unit processes under different mild and drastic conditions, forming different degradants. Chlorophylls (str-6i & 6ii, E140i) form pheophytin a (R=CH3) (str-19i) and pheophytin b (R=CHO) (str-19ii) via demetallation due to mild heat and acid treatment, but prolonged heating leads to the loss of the methoxycarbonyl group and to the formation of pyropheophytin a (str-20i) and pyropheophytin b (str-20ii). The loss of the phytyl group takes place when chlorophylls (str-6i & 6ii) are exposed to enzymatic alkaline hydrolysis, forming chlorophyllide (R=CH3) (str-21); breakage of the phytyl ester bond thus takes place with the formation of more polar products. Chlorophyllide (str-21) forms pheophorbide (R=CH3) (str-22) upon mild heat and acid treatment, but prolonged heating leads to pyropheophorbide (R=CH3) (str-23) with the loss of the methoxycarbonyl group; saponification of pheophorbide (str-22) generates chlorophyllin (R=CH3) (str-24, E140ii) by breaking the isocyclic ring (ring E) (Figure 3). Similarly, Cu-chlorophyll a (str-25i, R=CH3) and Cu-chlorophyll b (str-25ii, R=CHO) form Cu-pyropheophytin a (str-26i, R=CH3) and Cu-pyropheophytin b (str-26ii, R=CHO) through the loss of the methoxycarbonyl group, while loss of the phytyl group generates Cu-chlorophyllin a (str-27i, E141i, R=CH3) and Cu-chlorophyllin b (str-27ii, E141i, R=CHO). Cu-chlorophyllin forms Na-Cu-chlorophyllin a (str-28i, E141ii, R=CH3) and Na-Cu-chlorophyllin b (str-28ii, E141ii, R=CHO) upon saponification, i.e., NaOH treatment (Figure 4).
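Since later sections turn to MS-based identification of these degradants, it is worth noting that the transformations in this scheme leave characteristic exact-mass shifts. The following is a hedged sketch that computes monoisotopic masses from the standard molecular formulas of chlorophyll a and two of its degradants; the formulas are supplied here for illustration and are not taken from this text.

# Hedged sketch: monoisotopic mass shifts that help assign chlorophyll
# degradants in LC-HRMS data.  Formulas follow the degradation scheme
# above (demetallation to pheophytin; loss of the methoxycarbonyl group
# to the pyro- derivative); masses are computed, not quoted.
MONO = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915,
        "Mg": 23.985042}

def mass(formula: dict) -> float:
    return sum(MONO[el] * n for el, n in formula.items())

chlorophyll_a    = {"C": 55, "H": 72, "Mg": 1, "N": 4, "O": 5}
pheophytin_a     = {"C": 55, "H": 74, "N": 4, "O": 5}   # Mg replaced by 2H
pyropheophytin_a = {"C": 53, "H": 72, "N": 4, "O": 3}   # - CO2CH3 + H

for name, f in [("chlorophyll a", chlorophyll_a),
                ("pheophytin a", pheophytin_a),
                ("pyropheophytin a", pyropheophytin_a)]:
    print(f"{name:18s} {mass(f):9.4f} Da")
print(f"demetallation shift: {mass(pheophytin_a) - mass(chlorophyll_a):+.4f} Da")
print(f"pyro- shift:         {mass(pyropheophytin_a) - mass(pheophytin_a):+.4f} Da")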
Legislations and Regulations

It has been observed that different food processing industries use a combination of Tartrazine, Brilliant Blue, and Fast Green, or a combination of Tartrazine and Brilliant Blue, for green color hues instead of costly natural chlorophyll- and chlorophyllin-based pigments in their foodstuffs and beverages [1,22,34]. Mostly, the food and beverage industries favor the use of the colorant Na-Cu-chlorophyllin (str-28, E141) due to its better stability under food processing conditions and during the storage period of foodstuffs and beverages [35]. Different countries have separate legislation and regulations on the usage and safety of food colorants and additives in different foodstuffs and beverages (Table 1). The Joint Food and Agriculture Organization of the United Nations/World Health Organization (FAO/WHO) Expert Committee on Food Additives (JECFA) published regulations on the safety of food additives at its 41st meeting in 2018 [36,37]. Previously, in the USA, only copper chlorophyllins were authorized, with a maximum content of 2% in citrus-based beverages [38]. In the USA, the FDA now allows the use of Na-Cu-chlorophyllins, with an ADI of 7.5 mg/kg bw/day [39]. Recently, in the USA, chlorophylls, chlorophyllins (INS 140), Cu-chlorophyll (INS 141i), and Na-Cu-chlorophyllins (INS 141ii) have been approved for use in foodstuffs and beverages [14]. Almost all food and beverage industries follow European legislation on the safety of food colorants/additives (EU Regulation No. 1333/2008, amended by Regulation (EC) No. 1129/2011) [40]. As per EU regulations, chlorophyll (E140i), chlorophyllin (E140ii), Cu-chlorophyll (E141i), and Na-Cu-chlorophyllin (E141ii) are natural green pigments permitted for usage in foodstuffs and beverages at the specified levels. The Sanitation Law in Japan permits three food colors (F0177, Food Red No. 3 Aluminum Lake; F0178, Food Yellow No. 4 Aluminum Lake; F0179, Food Blue No. 1 Aluminum Lake). Food Blue No. 1 Aluminum Lake is permitted for use in the food, confectionery, and toy industries. Food Blue No. 1 Aluminum Lake is mostly used for poisonous medicines, Food Red No. 3 Aluminum Lake for drastic medicines, and Food Yellow No. 4 Aluminum Lake for general medicines. Moreover, Japanese legislation on food additives of natural origin, published by the Ministry of Health and Welfare, allows the use of three synthetic colors: Cu-chlorophyll (Jn-242), Na-Cu-chlorophyllin (Jn-241), and Na-Fe-chlorophyllin (Jn-333) (str-50) [41].
Extraction of Chlorophylls and Chlorophyllins from Food Products

Considering the regulations on the usage of food colorants and additives, as well as the limited stability and the numerous degradants of natural chlorophylls and chlorophyllins formed under the different conditions used in the food processing industries during the manufacturing of foodstuffs and beverages, the extraction of natural colorants/pigments from these complex food matrices, followed by their separation and analysis, is an extremely challenging task for analysts. Different strategies have been adopted for the extraction of food colorants from foodstuffs and commodities [1].

Extraction of Colorants from Fatty Food Products

Mathiyalagan et al. (2019) classified the collected food products into two categories: fatty food products (chocolates, sweets, chips) and non-fatty food products (hard candy). The authors also collected different types of products, such as soft candy, hard candy, and jelly beans, for the analysis of green-color pigments [1]. Initially, 2.0 g of ground chocolate, sweet, and chip samples were mixed with 1.0 mL of butylated hydroxytoluene (BHT) solution (0.1%) and 10 mL of ethanol:water:ammonia solution (10:3:0.5, v/v/v). The fats were removed from the solution with 50 mL of hexane [48-50]. The sample was then transferred to a 100 mL separating funnel after sonication at 40 °C for 10 min, and the resulting dispersion was transferred to a centrifuge tube after acidification with 10 mL of 5% acetic acid and shaking for 1 min. Finally, the solution was centrifuged at 3000× g for 5 min, and the hexane layer was decanted. The extraction step was repeated until a colorless extract was obtained, followed by vacuum aspiration to remove the solvents. The colored sample was then ready for analysis. In the case of white jelly or candy samples, the samples were dissolved completely in warmed water and the above process was then followed. In the case of green-colored fatty foodstuffs labeled with Na-Cu-chlorophyllin, the above extraction procedure was modified due to the fat-soluble nature of Na-Cu-chlorophyllin (E141ii). The food sample was dissolved in 50 mL of hexane along with a large volume of ethyl acetate (~20-30 mL) and sonicated for 10 min. The colored ethyl acetate layer was collected and dried under a stream of N2 gas. Finally, the residue was dissolved in 2 mL of methanol with sonication, filtered through a 0.2 µm nylon syringe filter, and stored for HPLC analysis.

Extraction of Colorants from Non-Fatty Food Products

The food colorants of non-fatty hard candy samples (~2.0 g) were dissolved in 10 mL of water after adjusting the pH to 2.5 with hydrochloric acid; they were then extracted into 3.0 mL of ethyl acetate, followed by sonication for 10 min. The organic layer was collected, centrifuged, and dried by N2 gas purging. Finally, the above procedure was repeated [51]. After extraction, Mathiyalagan et al. (2019) determined Tartrazine (E-102) and Brilliant Blue (E-133) in candy, sweet, jelly, powder, and chip products using an RP-HPLC method equipped with a UV-Vis detector and gradient elution. All the samples were separated through a Luna C18 column (5 µm size × 25 cm length × 4.6 mm ID) fitted with a guard column (5 µm size × 1 cm length × 4.6 mm ID), using mobile phase A, composed of methanol:acetonitrile (1:1, v/v), and mobile phase B, consisting of 40 mM ammonium acetate aqueous solution. The pH of the mobile phase was adjusted to 7.4 with dilute acetic acid.
The authors detected Tartrazine and Brilliant Blue as blended colorants used to achieve green hues in candy, mouth fresheners, chips, antacid drink powder, sweets, and cream biscuits, while both hard candy and soft candy contained Na-Cu-chlorophyllin. Only one candy sample contained Tartrazine, Brilliant Blue, and Fast Green blended to obtain green hues. The authors reported 4.745 to 140.284 mg/kg of Tartrazine, 0.952 to 36.835 mg/kg of Brilliant Blue, and 3.334 to 4.489 mg/kg of Na-Cu-chlorophyllin in the studied foodstuffs collected from local markets in Vellore, India [1]. Inoue et al. (1994) used RP-HPLC fitted with a UV-Vis detector at a wavelength of 407 or 423 nm for the separation of Na-Cu-chlorophyllin and its different degradants in prepared standards as well as foodstuffs [52]. This method used an Inertsil ODS-2 column (5 µm size × 25 cm length × 4.6 mm ID) for the separation of the different colorants, with a mobile phase of methanol:water (97:3, v/v) containing 1% acetic acid. This method separated Na-Cu-chlorophyllin, Cu-pheophorbide a, Cu-chlorin e4, Cu-rhodin g7, and Cu-chlorin e6 from their mixture, and allowed the analysis of Na-Cu-chlorophyllin in food products within a linearity range of 0-30 mg/L. Chernomorsky et al. (1997) collected commercial food products for the examination of Na-Cu-chlorophyllin using RP-HPLC equipped with a PDA detector; it was separated through a C18 column after elution with methanol:1 M ammonium acetate (80:20, v/v) and methanol:acetone (80:20, v/v) mobile phases within a run time of 15 min [53]. This method identified different chlorophyll derivatives, such as porphyrin, Cu-pheophorbide a, and Cu-chlorin e6, as well as Cu-isochlorin e4, in the analyzed commercial food products; Cu-isochlorin e4 was identified as an impurity in the collected foodstuffs. Almela et al. (2000) collected different ripened fruits for the analysis of chlorophyll derivatives by RP-HPLC, using PDA and fluorescence detection at 660 nm, after separation on an Inertsil ODS-2 column (5 µm size × 25 cm length × 4.6 mm ID) [54]. The authors used a mobile phase with a high concentration of ammonium acetate buffer (pH 7.0). The developed method was able to separate highly polar food colorants, i.e., pheophorbides and inorganic chlorophyllides, in the collected fruit samples.

Separation and Identification of Chlorophylls and Chlorophyllins in Food Products

Food products of different foodstuffs, food commodities, and beverages are available on the market. Chlorophyll derivatives, chlorophyllins, and their degradants cannot be extracted from all green-colored foodstuffs and commodities using the same extraction procedure, because some foodstuffs are fatty while others are non-fatty, and sometimes a mixture of both may be present in a food commodity. Hence, the extraction procedure varies from one food type to another. It is important to first check the nature of the ingredients present in a food commodity before selecting an extraction procedure for separation and identification accordingly.
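Methods such as that of Inoue et al., which report a linearity range (0-30 mg/L for Na-Cu-chlorophyllin), imply quantification against an external calibration curve. A minimal sketch of that calculation is shown below; the peak areas are hypothetical illustrative values, not data from any of the cited studies:

```python
import numpy as np

# Hypothetical calibration standards (mg/L) and peak areas; the 0-30 mg/L
# range follows the linearity range reported by Inoue et al. (1994).
conc = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
area = np.array([0.0, 1250.0, 2480.0, 5010.0, 7450.0])  # illustrative values

slope, intercept = np.polyfit(conc, area, 1)  # least-squares calibration line
r = np.corrcoef(conc, area)[0, 1]

def quantify(sample_area: float) -> float:
    """Back-calculate a sample concentration (mg/L) from its peak area."""
    return (sample_area - intercept) / slope

print(f"slope={slope:.1f}, intercept={intercept:.1f}, r^2={r**2:.4f}")
print(f"sample with area 3300 -> {quantify(3300.0):.2f} mg/L")
```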
Separation and Identification of Chlorophylls and Chlorophyllins in Food Products Using HPLC Methods

Cano (1991) developed an HPLC-PDA method for the determination of colorants in four kiwi fruit (Actinidia chinensis Planch.) cultivars (Hayward, Abbot, Bruno, and Monty), separating them on a Hypersil ODS stainless steel column (5 µm size × 10 cm length × 4.6 mm ID) with mobile phases of (A) methanol/water (75:25, v/v) and (B) ethyl acetate under gradient elution. The author detected chlorophylls a and b, and pheophytin a [55]. Yasuda et al. (1995) developed an RP-HPLC-PDA method for the analysis of chlorophylls and their derivatives in collected foodstuffs (boiled bracken, agar-agar, and chewing gum) after separation on a C18 RP-HPLC column, using a mobile phase of methanol:water (97:3, v/v) containing 1% acetic acid at a flow rate of 1 mL/min and a detection wavelength of 405 nm. The green colorants of the homogenised foodstuffs were extracted into diethyl ether at a pH of 3-4, adjusted with 0.1 N hydrochloric acid, and the organic solvent was evaporated; the residue was dissolved in methanol and used for HPLC analysis. The authors detected Cu-chlorin e6 and Cu-chlorin e4 in the Na-Cu-chlorophyllin-containing foodstuffs. Their results suggest that Cu-chlorin e6 is not stable under the heat and pH of the food manufacturing process, and hence the authors suggested the analysis of Cu-chlorin e4 as an indicator of the presence of Na-Cu-chlorophyllin in food commodities (boiled bracken, agar-agar and chewing gum) [56,57]. Nonomura et al. (1996) extracted chlorophyll a from spinach and used it as the standard material for the preparation of Fe-chlorophyllins under inert and dark conditions to avoid molecular degradation. They then separated the components of Fe3+-chlorophyllin on an Inertsil ODS column, with a mobile phase of acetonitrile-phosphate buffer (pH 2) (60:40, v/v) containing tetramethylammonium chloride (0.01 M), and analyzed them by RP-HPLC. They detected three major derivatives, Fe3+-pheophorbide a, Fe3+-chlorin e6, and Fe3+-chlorin e4, and confirmed the presence of all three species by FAB-MS analysis [58]. Egner et al. (2000) analyzed chlorophyllin derivatives using HPLC, ESI/MS, and MS/MS techniques in human serum samples after oral consumption of Na-Cu-chlorophyllin in Qidong, Jiangsu Province, People's Republic of China. The authors found some green-colored serum and detected previously unreported Cu-chlorin e4 ethyl ester as well as Cu-chlorin e4. This finding suggested that chlorophyllin derivatives are bioavailable and absorbed into the bloodstream, creating the possibility of their chemopreventive activity [59]. Wang et al. (2004) initiated their study to monitor the green color of green tea infusions, as cold tea beverages in clear bottles are popular in different countries. They found chlorophylls to be the main component of the greenness of these tea infusions. In addition to chlorophylls, they detected flavonoids, catechins, and flavonols in the green tea infusions, with quercetin being the main phenolic compound contributing to the greenness [60]. Bohn et al. (2004) analyzed chlorophylls and their derivatives using HPLC equipped with a fluorescence detector. All the colorants were separated on an RP-C18 column (4 µm size × 25 cm length × 2 mm ID) with methanol as the mobile phase.
They identified chlorophylls a and a′, chlorophylls b and b′, and the corresponding pheophytins [61]. Scotter et al. (2005) developed HPLC-PDA and HPLC-fluorescence methods for determining the food color additives Cu-chlorophylls and Cu-chlorophyllins in foods and beverages. The authors found large amounts of native chlorophylls in mint sauce samples. Food commodities containing significant amounts of emulsifiers (i.e., ice cream), gelatin, or fats were problematic during extraction; hence, further development of extraction regimes is desirable for such products. All of the analyzed samples with added E141 had estimated total copper chlorophyllin contents below 15 mg/kg (range 0.7-13.0) [62] (Table 2). Roca et al. (2010) developed an HPLC-PDA method to monitor the adulteration of olive oils with colorants used to enhance their green coloration. The separation was carried out using a stainless steel C18 column (3 µm size × 20 cm length × 4.6 mm ID) with mobile phases of (A) water/ion-pair reagent/methanol (1/1/8, v/v/v) and (B) methanol/acetone (1:1, v/v). A mixture of 0.05 M tetrabutylammonium and 1.0 M ammonium acetate in water was used as the ion-pair reagent. They detected pheophytins (a and b) in the collected samples adulterated with E141ii, but did not find them in the samples that contained the colorant E141i, indicating the capability of this method to monitor the adulteration of vegetable oils with E141ii. The authors suggested selecting a λmax of 654 nm for Cu-pyropheophytin a and of 633 nm for Cu-pyropheophytin b during the screening of the studied adulterated olive oil samples [63]. Loranty et al. (2010) studied the fate of chlorophylls and carotenoids in commercial dry herbal and fruit teas, as well as in infusions made from these teas, developing an HPLC-PDA method for this purpose. The colorants were separated using a Phenomenex Luna C18 column (5 µm size × 25 cm length × 4.6 mm ID), with mobile phases of (A) acetonitrile:water (90:10, v/v) and (B) ethyl acetate, under gradient elution at a flow rate of 1 mL/min. The authors detected complex chlorophyll and related pigment profiles in all of the evaluated commercial dry teas, whereas lutein was the main component in the infusions [64]. Baskan et al. analyzed chlorophyll derivatives within a wavelength range (λ-range) of 350 to 800 nm. They found different chlorophyll derivatives, such as chlorins, rhodins, pheophorbides, chlorophylls, pheophytins, 13²-OH-pheophorbides, 13²-OH-chlorophylls, 13²-OH-pheophytins, 15¹-OH-lactone-pheophorbides, 15¹-OH-lactone-pheophytins, and pyropheophytins [69]. Laddha et al. (2020) monitored the fate of chlorophyllins after intake by rats [46]. For this study, the authors collected rat plasma and analyzed it by HPLC-PDA after separation on a Luna® C18 RP-HPLC column (100 Å, 4.5 µm size × 25 cm length × 4.6 mm ID), using a mobile phase of MeOH:10 mM ammonium acetate (90:10, v/v) at a flow rate of 1 mL/min. The injection volume was 20 µL, and the run time was 20 min. They detected Na-Cu-chlorophyllin in the rat plasma [70]. Mendes-Pinto et al. (2005) analyzed carotenoids and chlorophyll-derived compounds in grapes and Port wines using HPLC-DAD and HPLC-DAD-MS (ESP+) analysis. They detected 13 carotenoid and chlorophyll-derived compounds in grapes, among them pheophytins a and b, and found 19 compounds with carotenoid- or chlorophyll-like structures in Port wines. Their observation was that the chlorophyll derivatives degraded faster than carotene and lutein [74].
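The λmax-based screening suggested by Roca et al. can be reduced to a simple check of detected absorbance maxima against the two marker wavelengths quoted above. The sketch below assumes a list of detected peak wavelengths and a ±2 nm matching window, both of which are illustrative choices rather than values from the cited study:

```python
# Sketch of a wavelength-based adulteration screen: flag a sample when an
# absorbance maximum falls near a marker pigment's λmax (values from the text).
TARGETS = {"Cu-pyropheophytin a": 654.0, "Cu-pyropheophytin b": 633.0}

def flag_adulteration(peaks_nm: list[float], window: float = 2.0) -> list[str]:
    """Return the marker pigments whose λmax matches a detected peak."""
    return [name for name, lam in TARGETS.items()
            if any(abs(p - lam) <= window for p in peaks_nm)]

print(flag_adulteration([654.3, 410.0]))  # -> ['Cu-pyropheophytin a']
```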
Mortensen and Geppel developed an HPLC-PDA method for the detection of Na-Cu-chlorophyllin and its derivatives in five collected commercial Na-Cu-chlorophyllin samples and one green food colorant, additionally using an MS detector for the authentication of the separated colorants. Based on their absorption spectra and mass data, three of the collected standards contained Cu-chlorin e6, Cu-chlorin p6, and Cu-isochlorin e4; the other two samples contained a low amount of Cu-chlorin e6, but Cu-chlorin p6 was absent. The majority of samples contained porphyrins, but no samples contained chlorins derived from chlorophyll b [51]. Gandul-Rojas et al. (2012) studied the pattern of color adulteration in table olives with the non-permitted semi-synthetic green colorant Na-Cu-chlorophyllin (E141ii), using an HPLC-DAD method [35]. For the HPLC analysis, the colorants were extracted following the method of Mínguez-Mosquera and Garrido-Fernández (1989) [75]. The colorants in the extract were analyzed by HPLC-PDA after separation on a C18 stainless steel column (3 µm size × 20 cm length × 0.46 cm ID) with mobile phases consisting of (A) water/ion-pair reagent/methanol (1/1/8, v/v/v) and (B) methanol/acetone (1/1, v/v); a mixture of tetrabutylammonium (0.05 M) and ammonium acetate (1.0 M) in water was used as the ion-pair reagent. Cu-chlorophyllin complexes were found in the extract. The results of this study pointed to fraudulent practices of vendors in achieving a green color in the served table olives [35]. Yoshioka and Ichihashi (2008) developed a chromatographic technique using RP-HPLC equipped with a PDA detector for the analysis of 40 synthetic food colors in drinks and candies collected from Japanese local markets. The authors separated the colorants using a ZORBAX Eclipse XDB-C18 Rapid Resolution HT column (1.8 µm size × 5 cm length × 4.6 mm ID) with gradient elution, using mobile phase solvent A (0.1 mol/L ammonium acetate aqueous solution, pH 6.7) and solvent B (1:1 methanol-acetonitrile, v/v) at a flow rate of 1.5 mL/min [76]. Huang et al. (2008) developed an HPLC-APCI-MS method to monitor chlorophylls and their derivatives in the traditional Chinese herb Gynostemma pentaphyllum Makino. They used a HyPURITY C18 column for the separation of the chlorophyll-based colorants in the sample, with a quaternary solvent system of hexane-acetone-ethanol-toluene (10:7:6:7, v/v/v/v) under gradient elution. They quantified chlorophylls a and a′, chlorophylls b and b′, pheophytins a and a′, pheophytins b and b′, hydroxypheophytins a and a′, pyropheophytin a, hydroxychlorophylls a and b, and hydroxypheophytins b and b′ [77]. Aparicio-Ruiz et al. (2010) examined the degradation kinetics of chlorophyll a-series pigments at varying temperatures in three collected virgin olive oils. They found that alteration of the isocyclic ring formed pheophytin, pyropheophytin, 13²-OH-pheophytin, and 15¹-OH-lactone-pheophytin, whereas alteration of the porphyrin ring resulted in colorless compounds. In addition, the authors did not find any matrix effect on the 15¹-OH-lactone-pheophytin conversion, but the 13²-OH-pheophytin conversion was affected by the oil matrices [78]. Kao et al. (2011) developed an HPLC-DAD-APCI-MS method to determine chlorophyll and its derivatives in hot-air-dried and freeze-dried samples of the Chinese herb Rhinacanthus nasutus (L.) Kurz.
The authors separated the different colorants using an Agilent Eclipse XDB-C18 column, with a mobile phase of (A) methanol/N,N-dimethylformamide (97:3, v/v) and (B) acetonitrile under gradient elution. They identified chlorophylls a and a′, hydroxychlorophylls a and b, 15-OH-lactone chlorophyll a, chlorophylls b and b′, pheophytins a and a′, hydroxypheophytins a and a′, and pheophytin b in the hot-air-dried Rhinacanthus nasutus, whereas the freeze-dried Rhinacanthus nasutus contained only chlorophylls a and a′, chlorophyll b, and pheophytin a. Zinc-phthalocyanine was found to be an appropriate internal standard for quantifying all the chlorophyll compounds. The results suggested that chlorophyll a and pheophytin a were the most abundant in the hot-air-dried samples, while chlorophylls a and b were the main colorants in the freeze-dried samples [79] (Table 3). Another study analyzed lutein and chlorophyll-based pigments (chlorophylls a and b, and chlorophyll a and b derivatives) and carried out quantification of seven targeted compounds; the limit of detection for lutein was 0.01 ng/mL, and that of chlorophyll a was 0.24 ng/mL [80]. Isakau et al. (2007) analyzed the tetrapyrrolic compound chlorin e6 and its degradants, after its use as a Photolon formulation for the photodynamic therapy of various diseases. The authors developed an HPLC-PDA-MS-based chromatographic method for this study and identified several degradants, such as chlorin e6 17⁴-ethyl ester, chlorin e4, 15-hydroxyphyllochlorin, rhodochlorin, 15¹-hydroxymethylrhodochlorin δ-lactone, rhodochlorin-15-oxymethyl δ-lactone, rhodochlorin-15-oxymethyl δ-lactone 17⁴-ethyl ester, 15¹-hydroxymethylrhodoporphyrin δ-lactone, rhodoporphyrin-15-oxymethyl δ-lactone, and purpurin 18. They used an analytical HPLC column (3.5 µm size × 15 cm length × 4.6 mm ID) and a semi-preparative column (5 µm size × 15 cm length × 10 mm ID), both packed with XTerra RP-18, using mobile phase A (0.1% TFA in water) and B (acetonitrile) under gradient elution [81]. Loh et al. (2012) analyzed the Chinese herb Taraxacum formosanum, considering its various medicinal values, as an essential component of different drug formulations. Chlorophylls were extracted into 30 mL of hexane/ethanol/acetone/toluene (10:6:7:7, v/v/v/v); the upper layer was collected and evaporated to dryness, and the residue was dissolved in 5 mL of acetone, filtered, and stored for HPLC analysis. For the chlorophyll derivatives, the authors used column chromatography for separation, after dissolving 10 g of the herb sample in 80 mL of hexane/ethanol/acetone/toluene (10:6:7:7, v/v/v/v) for 1 h at room temperature; the supernatants were then evaporated to dryness, and the residue was dissolved in 5 mL of acetone, filtered, and stored for analysis. A HyPURITY C18 column (5 µm size × 15 cm length × 4.6 mm ID) was used for the separation of chlorophyll and its derivatives, with a quaternary mobile phase of (A) water, (B) methanol, (C) acetonitrile, and (D) acetone, under gradient elution. They determined chlorophylls a and a′, chlorophylls b and b′, pheophytins a and a′, hydroxychlorophyll b, hydroxychlorophylls a and a′, and chlorophyllides a and a′ in the herb extract. The authors also found chlorophyllide b, pyropheophorbide b, hydroxypheophytin a, and hydroxypheophytin a′ in the extract collected from the column, which accounted for 63% of the total content, suggesting that more investigation is needed before the use of this herb in any drug formulation [82].
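Column dimensions (particle size, length, internal diameter) recur throughout these method summaries; a quick way to turn them into a practical number is the common void-volume approximation V0 ≈ εT·π·r²·L with a total porosity of about 0.65. The porosity value is a typical textbook assumption, not one drawn from the cited studies:

```python
import math

# Rough void-volume and dead-time estimate for a packed HPLC column,
# using V0 ≈ porosity * pi * r^2 * L (porosity ~0.65 is a typical
# assumption for packed columns, not a value from the cited methods).
def void_volume_ml(length_cm: float, id_mm: float, porosity: float = 0.65) -> float:
    radius_cm = (id_mm / 10.0) / 2.0
    return porosity * math.pi * radius_cm ** 2 * length_cm  # 1 cm^3 = 1 mL

v0 = void_volume_ml(25.0, 4.6)  # e.g., a 25 cm x 4.6 mm ID column
print(f"V0 ≈ {v0:.2f} mL, t0 ≈ {v0 / 1.0:.2f} min at 1 mL/min")
```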
Lafeuille et al. (2014) studied the effect of five different drying treatments on the green colorants of 50 collected samples of culinary aromatic herbs in Turkey and Egypt. Different drying methods, namely sun-drying, freeze-drying, oven-drying, DP1 (a modified traditional sun-drying process), and DP2 (a drying process specially designed to preserve the green colorants of aromatic herbs), were applied. They used a standard extraction procedure for the green colorants: briefly, 1 g of the fresh or dry herb was mixed with 100 mL of an 80:20 acetone:sodium citrate solution (0.1 M), and the solution was filtered and stored for analysis. For this study, they developed an HPLC-PDA-MS method, separating the pigments on a Kinetex stainless-steel HPLC C18 column (6 µm size × 10 cm length × 4.6 mm ID) with a mobile phase of acetone:methanol (80:20, v/v) containing 0.5 M NH4OAc. They detected 24 pigments (the 2 original chlorophylls a and b, and 22 different degradants); among the degradants, chlorophyllide, pyrochlorophyll, pheophytin, pyropheophytin, and pheophorbide derivatives were identified [83]. Based on the literature survey and the findings of different researchers, it is evident that various degradants of the natural green chlorophylls are formed under different food processing conditions. On this basis, we can generalize the structures of chlorophylls, chlorophyllins and their degradants: three structures are based on chlorin-skeleton-related colorants (str-73-75), and another three on porphyrin-skeleton-related colorants (str-83-88) (Figure 13). Depending on M (any metal cation), R (H, CH3), R1 (phytyl group, H), R2 (H, OH, COOCH3) and R3 (H, OH, COOCH3), with or without an intact isocyclic ring, we can obtain the different chlorophylls, chlorophyllins and their derivatives. Although various chlorin-skeleton-based colorants have been detected by different researchers, porphyrin-skeleton-based colorants could be reported in the near future.

Non-Targeted Analysis of Chlorophyll and Chlorophyllin-Related Compounds Using HPLC/MS-MS and HPLC/ICP-IDMS Methods

An in-house mass database, created ex professo, was developed to complement the databases used in HR-MS software for structural elucidation from mass spectrometric data [15,18,23,24,83]. This in-house database contained the monoisotopic masses, elemental compositions and, optionally, retention times and characteristic product ions in positive mode (where known) for all chlorophyll (Chl) derivatives of the Chl-a and Chl-b series (str-73 model-1 to str-78 model-6). Bruker Daltonics DataAnalysis 4.1 was used to evaluate the data, a Compass isotope pattern calculator (Bruker, Bremen, Germany) was used to calculate theoretical isotopic distributions, and Bruker Daltonics TargetAnalysis™ applied the filtering rules within the data workflow. The characterization process is executed by three filtering rules. The first filtering rule performs screening based on the significantly different isotope clusters of copper and non-copper chlorophylls, which is the key founding principle of this methodology. For this screening, the two stable isotopes of copper, namely 63Cu and 65Cu, are considered, with relative abundances of 100 and 44.61, respectively, while the three stable isotopes of magnesium, namely 24Mg, 25Mg, and 26Mg, are considered, with relative abundances of 100, 12.66 and 13.93, respectively.
Candidate ions are filtered according to threshold values for mass accuracy (mass error below 5 ppm) and isotopic pattern fit (mSigma value below 50) to obtain the list of filtered hits; ultimately, only one candidate should fit the elemental composition expected for the [M + H]+ ion while satisfying both thresholds. The second filtering rule imposes additional constraints to avoid false-positive ions, which are generated by copper-containing compounds that are not chlorophyll derivatives. Because chlorophylls contain four nitrogen atoms, the limits for the mass error (below 5 ppm) and the mSigma value (below 50), calculated with respect to this nitrogen content, are the main criteria for screening the second list of candidates. The third filtering rule is based on the typical UV-Vis spectrum of chlorophyll pigments: chlorophyll compounds show two absorption bands, i.e., the Soret band (S-band, in the blue region) and the Q-band (in the red region), at approximately 430 nm and 660 nm, respectively [84]. These two absorption bands are considered in the screening of the final set of chlorophyll compounds, in order to elucidate correct and authenticated new chlorophyll compounds or degradants. The same methodology may also be applied to the detection of zinc chlorophylls in a food matrix, considering the five stable isotopes of zinc, namely 64Zn, 66Zn, 67Zn, 68Zn, and 70Zn, with relative abundances of 100, 57.958, 8.498, 39.413 and 1.307, respectively [85]. The most practical and striking advantage of this methodology is the ability to determine the accurate structures of copper-based chlorophyll degradants in complex matrices without tedious and time-consuming instrument-based structural elucidation. Traditional chemical analysis is based on target analysis, which refers to the use of various techniques and instruments to detect and quantify the amount of the target compound(s) in a sample; the results can provide valuable information for more informed decisions in regulatory compliance, quality control, and research. The newly introduced non-target analysis (NTA) method refers to the use of advanced instrumental techniques, such as high-performance liquid chromatography and high-resolution mass spectrometry, assisted by powerful software tools and large databases, to identify both known and unknown compounds. The results are useful for various purposes, such as detecting adulteration in food and medicine products, identifying potential contamination sources in environmental samples, failure analysis in industrial processes, and transformation studies over product shelf-life [86].
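The three filtering rules can be summarized in a few lines of code. The sketch below is a schematic re-implementation, not the Bruker workflow itself: the candidate record layout and the 20% tolerance on the 65Cu/63Cu abundance ratio are assumptions, while the <5 ppm mass error, <50 mSigma, four-nitrogen, and Soret/Q-band criteria follow the text:

```python
# Schematic sketch of the three filtering rules described above.
CU_RATIO = 44.61 / 100.0  # expected 65Cu/63Cu relative abundance (see text)

def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Mass accuracy in parts per million."""
    return abs(measured_mz - theoretical_mz) / theoretical_mz * 1e6

def rule1_cu_isotope(c: dict) -> bool:
    """Rule 1: Cu-like isotope cluster plus mass accuracy and isotope fit."""
    ratio_ok = abs(c["a2_ratio"] - CU_RATIO) / CU_RATIO < 0.20  # assumed tolerance
    return ratio_ok and ppm_error(c["mz"], c["theo_mz"]) < 5.0 and c["msigma"] < 50

def rule2_nitrogen(c: dict) -> bool:
    """Rule 2: elemental composition must contain the four chlorophyll nitrogens."""
    return c["n_atoms"] == 4

def rule3_uv_bands(c: dict) -> bool:
    """Rule 3: UV-Vis spectrum must show Soret (~430 nm) and Q (~660 nm) bands."""
    return c["soret_band"] and c["q_band"]

# Hypothetical candidate ion from peak picking (all values illustrative).
candidates = [{"mz": 655.1529, "theo_mz": 655.1531, "a2_ratio": 0.45,
               "msigma": 22, "n_atoms": 4, "soret_band": True, "q_band": True}]

hits = [c for c in candidates
        if rule1_cu_isotope(c) and rule2_nitrogen(c) and rule3_uv_bands(c)]
print(f"{len(hits)} candidate(s) pass all three filters")
```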
One study monitored dephytylated chlorophyll standard derivatives using HPLC/UHPLC-APCI-hrTOF-MSMS. The authors used a C18 Spherisorb ODS-2 LC stainless steel column (3 µm size × 20 cm length × 0.46 cm ID), and separation was carried out using mobile phases of (A) H2O/ion-pair reagent/MeOH (1:1:8, v/v/v) and (B) MeOH/acetone (1:1, v/v), with an ion-pair reagent of 0.05 M tetrabutylammonium and 1 M ammonium acetate in water, at a flow rate of 1 mL/min, a scan m/z range of 50-1500, and a mass resolving power of over 18,000 (m/Δm). The authors developed a new high-throughput methodology able to determine the fragmentation pathways of 16 dephytylated chlorophyll derivatives, elucidating the structures of the new product ions and new fragmentation mechanisms without the need for known standards. ESI in positive ionization mode was used for the more polar compounds, whereas APCI in positive ionization mode was used for the apolar compounds. The authors reported different colorants, such as chlorophyllide a/b, 13²-OH-chlorophyllide a/b, 15¹-OH-lactone-chlorophyllide a/b, pyrochlorophyllide a/b, pheophorbide a/b, 13²-OH-pheophorbide a/b, 15¹-OH-lactone-pheophorbide a/b, and pyropheophorbide b. This new methodology combines hrTOF-MSMS and powerful post-processing software for the first time, which will pave the way for the non-targeted analysis and study of chlorophyll- and chlorophyllin-related compounds in the relevant fields [87] (Table 4). Pérez-Gálvez (2015) developed an HPLC/APCI-TOF-MS method for the determination of Cu-pyropheophytin a in a marketed Cu-chlorophyll mixture, which is permitted for use in citrus foodstuffs. The samples were separated using a C18 stainless steel column (3 µm size × 20 cm length × 0.46 cm ID), with a mobile phase consisting of (A) water/ion-pair reagent/methanol (1/1/8, v/v/v) and (B) methanol/acetone (1/1, v/v), under gradient elution. They identified Cu-pyropheophytin a in all of the marketed colorant samples, and suggested that this method could be used to monitor adulteration with the colorant E141ii in table olives [88]. In another study, the injection volume was 5 µL, and the column was kept at 45 °C throughout the analysis; both ESI and APCI ionization sources were used for the identification of chlorophylls and their derivatives. The authors identified 48 different chlorophyll-based colorants/derivatives using this analytical technique in foodstuffs and beverages collected from different supermarkets in Spain and Italy. In this study, the authors used 2 mL of 80% cold aqueous acetone to extract the pigments from tea samples (~10 mg) and directly injected the filtered extract into the chromatographic system for analysis. For the other samples, the authors used the extraction method of Scotter et al. [38]: in brief, about 1-4 g of sample was mixed with 4 mL of acetone in a centrifuge tube and shaken at 5000 rpm for 10 min in a dark place. The mixture was then mixed with 6 mL of ethyl acetate, and the previous step was repeated. Finally, the solution was mixed with 2.5 mL of NaCl solution (10%, w/v) and cooled to 4 °C after shaking for 15 min. The organic layer of the final mixture was collected in a glass tube after centrifugation at 2700× g for 3 min at 4 °C and dried under N2 gas flow. The residue was kept at −80 °C in an inert environment and dissolved in the mobile phase immediately before injection for analysis. The authors detected pheophytins, pheophorbides, and pyro-derivatives mainly in the processed green vegetable and fruit products, while several other foodstuffs contained chlorophyll-derived food colorants such as Cu-chlorophyllins, Cu-pheophytins, Cu-pyropheophytins, Cu-pheophorbides, and Cu-pyropheophorbides [13].
Chong et al. (2019) developed a chromatographic technique for the simultaneous analysis of Na-Fe-chlorophyllin and Na-Cu-chlorophyllin in fortified candy samples, using HPLC/UPLC equipped with a PDA detector at 395 nm, after separation on an Inertsil ODS-2 column with a mobile phase of methanol:water (97:3 and 80:20, v/v) containing 1% acetic acid. The authors also identified the main components of Na-Fe-chlorophyllin and Na-Cu-chlorophyllin using HPLC-tandem MS. The identified green colorants were Fe-isochlorin e4 (LOD = 1.4 mg/kg, LOQ = 4.1 mg/kg) and Cu-isochlorin e4 (LOD = 1.4 mg/kg, LOQ = 4.8 mg/kg). The colorants were extracted from the fortified food samples as follows: about 5-10 g of the finely crushed fortified candies was mixed with 5 mL of 0.1 N HCl, and the sample mixture was ultrasonicated at 50 °C for 10 min and diluted to 20 mL with methanol. The diluted sample was centrifuged at 10,000 rpm for 10 min, and the upper layer was filtered through a 0.2 µm membrane filter before injection into the HPLC system [90]. Harp et al. (2020) developed a novel UHPLC method combined with ICP isotope dilution MS (UHPLC-ICP-IDMS), using post-column isotopic dilution with 65Cu, for the analysis of Cu-chlorophylls and their degradation products in collected green-colored table olives. During the industrial processing and storage of table olive-based foodstuffs, their green color changes to brown or pale yellow, which prompted the authors to carry out this study. The authors found Cu-isochlorin e4 and Cu-15²-Me-chlorin e6 in the analyzed table olives, with higher contents of Cu-isochlorin e4 than of Cu-15²-Me-chlorin e6, suggesting the addition of Na-Cu-chlorophyllin to the table olives to achieve their green color [91]. Pérez-Gálvez et al. (2020) developed an HPLC-ESI/APCI-HRMS method, assisted by powerful post-processing software, to identify chlorophylls and chlorophyllins in the green-colored food matrices of fortified olive oil and processed vegetable samples. The chromatographic separation of the colorants was carried out on a C18 Spherisorb ODS-2 HPLC column (3 µm size × 20 cm length × 0.46 cm ID) under gradient elution, using mobile phases of (A) water/ammonium acetate (1 M)/methanol (1/1/8, v/v/v) and (B) methanol/acetone (1/1, v/v), at a flow rate of 1 mL/min. In this method, the authors used the characteristic isotopic pattern of the copper chlorophyll derivatives as the first filtering rule in detecting the coloring products in foods, the elemental composition of chlorophylls containing four nitrogen atoms as the second, and the UV-Vis spectra as the third. Interestingly, no standards or reference materials were used in this method, and it could be applied to detect the presence of other metallo-chlorophyll complexes introduced to improve the green coloration of food products [83]. Pérez-Gálvez et al. (2020) also studied the fate of the green colorant E141i in high-fat-containing foodstuffs after consumption by mice, developing an HPLC-ESI(+)/APCI(+)-hrTOF-MS² method for the analysis of Cu-chlorophyll-related metabolites in serum and feces. The results showed that Cu-pheophytins of the a series were detected in feces after ingestion of Cu-chlorophylls, and that the serum did not contain Cu-chlorophyll derivatives. Only Cu-pyroporphyrin a was present in the livers, suggesting no absorption of the Cu-chlorophyll compounds through the gastrointestinal (GI) tract [12].
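Chong et al. report explicit LOD and LOQ values; the text does not state how these were derived, but a common convention (e.g., the ICH approach) estimates LOD = 3.3σ/S and LOQ = 10σ/S from a calibration line of slope S and residual standard deviation σ. A generic sketch with hypothetical numbers:

```python
import numpy as np

# Standard ICH-style estimate: LOD = 3.3*sigma/S, LOQ = 10*sigma/S, where
# sigma is the residual standard deviation of the calibration line and S
# its slope. The calibration points below are illustrative, not from the
# cited study.
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])          # mg/kg, hypothetical
resp = np.array([102.0, 205.0, 498.0, 1010.0, 1985.0])  # detector response

S, b = np.polyfit(conc, resp, 1)
residuals = resp - (S * conc + b)
sigma = residuals.std(ddof=2)  # two fitted parameters (slope, intercept)

print(f"LOD = {3.3 * sigma / S:.2f} mg/kg, LOQ = {10 * sigma / S:.2f} mg/kg")
```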
Herrera et al. (2022) aimed to determine the cultivation and processing variables behind the qualities of six different green tea varieties, and to determine their influence on the chlorophyll profile, in order to establish a characteristic profile for specific green teas. They used the HPLC-ESI(+)/APCI(+)-hrTOF-MS² method for the analysis of the chlorophyll-related compounds in the teas. They identified, for the first time, 13²-hydroxy-chlorophylls, 13²-hydroxy-pheophytins, and 15¹-hydroxy-lactone-pheophytins in green teas. A higher proportion of chlorophylls a and b was found in Matcha tea, justifying its higher quality and price. The authors also found chlorophyll metabolites (pheophytins, pyropheophytins, and oxidized chlorophylls) to be indicative of the various processing and storage conditions [92].

Conclusions

Although many research groups have developed different analytical methods based on chromatography and mass spectrometry for the separation and identification of chlorophyll- and chlorophyllin-based colorants, the challenges are not over. More cutting-edge analytical methods are urgently needed to extract the different chlorophyll- and chlorophyllin-based colorants without deformation under the extraction conditions. As many degradants or derivatives are possible, more sophisticated hyphenated techniques are required to analyze all these colorants accurately and reproducibly. In addition, reference standards are not available for the authentication of all the identified unknown degradants of the green pigments. Nowadays, the NTA method has begun to be used to identify different unknown and unreported native compounds and degradants of chlorophyll- and chlorophyllin-based colorants. This more sophisticated and information-rich NTA method could be a future tool to analyze all possible chlorophyll- and chlorophyllin-based colorants and degradants in foodstuffs and beverages, both for effective utilization in consumer products and for the regulatory authorities. Owing to the different legislation and definitions published by different countries, and the incomplete and unclear characterization of the authorized natural green colorants, the food and cosmetic industries may face severe challenges in using these natural green colorants in their foodstuffs and beverages. Researchers may also find new-generation semi-synthetic green colorants, which would be stable, low-cost, and easily dispersible, for use as additives of natural origin in foodstuffs and beverages without compromising their color hue and safety; using the NTA method to study such colorants is desirable. The trend of using preservatives and colorants might pose a critical challenge to human health if their toxicity and in vivo behavior are not properly evaluated in detail.
β-arrestin2 deficiency protects against hepatic fibrosis in mice and prevents synthesis of extracellular matrix

Hepatic fibrosis is a disease of the wound-healing response following chronic liver injury, and activated hepatic stellate cells (HSCs) play a crucial role in the progression of hepatic fibrosis. β-arrestin2 functions as a multiprotein scaffold that coordinates complex signal transduction networks. Although β-arrestin2 transduces diverse signals in cells, little is known about its involvement in the regulation of liver fibrosis. Our current study utilized a porcine serum-induced liver fibrosis model and found increased expression of β-arrestin2 in hepatic tissues with the progression of hepatic fibrosis, which was positively correlated with collagen levels. Furthermore, corresponding changes in human fibrotic samples were also observed. We next used β-arrestin2−/− mice to demonstrate that β-arrestin2 deficiency ameliorates CCl4-induced liver fibrosis and decreases collagen deposition. The in vitro depletion and overexpression experiments showed that decreased β-arrestin2 inhibited HSC collagen production and elevated TβRIII expression, thus downregulating the TGF-β1 pathway components Smad2, Smad3 and Akt. These findings suggest that β-arrestin2 deficiency ameliorates liver fibrosis in mice, and that β-arrestin2 may be a potential treatment target in hepatic fibrosis.

Introduction

Hepatic fibrosis is a common final pathway of a variety of chronic liver diseases and is often associated with severe morbidity and mortality. The pathogenesis of hepatic fibrosis is characterized by the excessive accumulation of extracellular matrix (ECM) and the formation of fibrous scars, which lead to destruction of the normal liver parenchyma [1]. The activation of hepatic stellate cells (HSCs) is reported to play a crucial role in the formation of liver fibrosis, and HSCs are a major cellular source of matrix proteins [2]. At the cellular level, transforming growth factor-β1 (TGF-β1) is critical in the progression of liver fibrosis due to its role in regulating ECM synthesis, HSC proliferation, and apoptosis. Following liver injury, HSCs are activated and secrete latent TGF-β, which forms an autocrine positive feedback loop to induce fibrogenesis through Smad2/3 [3]. TGF-β1 functions by binding to three receptors: the type I (TβRI), type II (TβRII) and type III (TβRIII) TGF-β1 receptors. Both Smad-dependent pathways (such as Smad2 and Smad3) and several Smad-independent pathways, such as mitogen-activated protein kinases (MAPKs) and phosphatidylinositol 3-kinase (PI3K)/Akt, are critical for TGF-β1-mediated signalling [4]. Currently, clinical reports suggest that advanced liver fibrosis is potentially reversible. Therefore, it is crucial to develop effective antifibrotic strategies. Although β-arrestin2 transduces multiple signals in cells, its role in the modulation of liver fibrosis is unclear. We previously reported that β-arrestin2 depletion diminishes HSC mitogenic signalling and proliferation in vitro [10]. However, the potential role of β-arrestin2 in the development of liver fibrosis in vivo and in ECM synthesis has not been investigated. To that end, the present study utilized a porcine serum (PS)-induced liver fibrosis model and found increased expression of β-arrestin2 in hepatic tissues with the progression of hepatic fibrosis, which was positively correlated with collagen levels. Furthermore, corresponding changes in human fibrotic samples were also observed.
We next used β-arrestin2−/− mice to further demonstrate that β-arrestin2 deficiency ameliorates carbon tetrachloride (CCl4)-induced liver fibrosis and decreases ECM deposition. In vitro depletion and overexpression experiments showed that decreased β-arrestin2 inhibited collagen production by HSCs and elevated TβRIII expression, thus downregulating the TGF-β1 pathway components Smad2, Smad3 and Akt. Taken together, these findings suggest that β-arrestin2 is a potential treatment target in hepatic fibrosis.

Results

β-arrestin2 expression correlated with collagen production during fibrosis development

To study the dynamic expression of β-arrestin2 in vivo, we established a PS-induced liver fibrosis model to investigate the changes during fibrosis progression. Hematoxylin-eosin (HE) staining showed that at 6 weeks after PS injection, the liver displayed disordered hepatic cords, massive infiltration of inflammatory cells and considerable collagen deposition. Between 9 and 16 weeks, apparent fibrous septa showed radial extension, resulting in the formation of pseudolobules (Fig. 1a). Western blot analysis showed that β-arrestin2 protein levels in the liver tissues of fibrotic rats increased with the progression of fibrosis. Furthermore, the deposition of collagen I and collagen III in the rat liver correspondingly increased (Fig. 1b). Considering the close association of TGF-β1 with collagen production and hepatic fibrosis, we investigated the expression of TGF-β1 and its receptors TβRII and TβRIII in liver tissues. Western blot analysis showed that TGF-β1 expression was upregulated compared with that in the normal control group with increasing severity of hepatic fibrosis. However, the expression of TβRIII in PS-injected rats was significantly lower than that in the normal control group, while TβRII expression was not significantly changed with the progression of fibrosis (Fig. 1c). Correlation analysis revealed that β-arrestin2 expression in fibrotic livers was positively associated with collagen I and collagen III levels but negatively associated with TβRIII expression (Fig. 1d). Collectively, these data revealed the correlation between β-arrestin2 expression and collagen production in fibrosis development.

β-arrestin2 was frequently upregulated in patients with liver fibrotic diseases

To further verify the role of β-arrestin2 in liver fibrosis, we examined β-arrestin2 expression profiles in human samples by immunohistochemistry. β-arrestin2 staining intensity showed a clearly increasing trend in mild fibrosis and significantly enhanced levels in severely fibrotic patients compared with samples from control livers. However, TβRIII was significantly downregulated in fibrotic livers (Fig. 1e, f).

β-arrestin2 deficiency ameliorated liver fibrosis in mice

To determine the contribution of β-arrestin2 to hepatic fibrosis, we used β-arrestin2−/− mice in the CCl4 mouse model of liver fibrosis. Injection of CCl4 resulted in serious hepatic steatosis, necrosis, severe architectural changes and excessive collagen accumulation in WT mice. However, the livers of β-arrestin2−/− mice showed minimal collagen accumulation (Fig. 2a, b).

Fig. 1 β-arrestin2 expression is associated with collagen production in liver fibrosis development. a Representative photographs of HE staining from control rats and 3, 6, 9, 12, and 16 weeks after PS injection (n = 8 in each group, scale bar = 100 μm).
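The correlation analysis in Fig. 1d can be reproduced with standard tools once densitometry values are tabulated. The sketch below uses hypothetical fold-change values (the paper's underlying data points are not given in the text) to show the calculation of a Pearson coefficient:

```python
import numpy as np

# Hypothetical densitometry values (fold change vs. control) illustrating
# the kind of correlation analysis reported for Fig. 1d; the actual data
# points are not provided in the text.
beta_arr2 = np.array([1.0, 1.4, 2.1, 2.9, 3.6, 4.2])  # β-arrestin2 levels
collagen1 = np.array([1.0, 1.5, 2.3, 3.1, 3.9, 4.6])  # collagen I levels

r = np.corrcoef(beta_arr2, collagen1)[0, 1]
print(f"Pearson r = {r:.3f}")  # a positive r, consistent with Fig. 1d
```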
b Time course analysis of β-arrestin2, collagen I and collagen III expression by Western blot in PS-induced liver fibrosis rats. c TGF-β1, TβRII and TβRIII protein expression in the liver tissues of fibrotic rats at different time points; the protein levels were normalized to β-actin for Western blots from the same lysate. Densitometry values in the histograms are expressed as fold change relative to the control, which was assigned a value of 1. The data from at least four independent experiments are shown as mean ± SD. *P < 0.05, **P < 0.01 vs. control group. d Correlation analysis of β-arrestin2 with collagen I, collagen III and TβRIII expression in liver fibrosis. e Immunohistochemical analysis of β-arrestin2 and TβRIII expression in human samples from normal liver and different histological grades of liver fibrosis (scale bar = 200 μm for ×100 and 50 μm for ×400). The top panel is HE staining of a serial section (scale bar = 200 μm). f Bar graph of the relative positive optical density values of β-arrestin2 and TβRIII in the hepatic tissues (n = 8 in the normal group, n = 10 in the mild group, and n = 9 in the moderate and severe groups). ##P < 0.01 vs. normal group.

Consistent with the liver histology results, β-arrestin2−/− mice showed obviously reduced hydroxyproline in liver homogenates compared with WT mice upon CCl4 treatment (Fig. 2d). Increased levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) are conventional indicators of liver injury. An obvious increase in the activity of these two enzymes was detected in WT model mice, while a significant reduction was observed in β-arrestin2−/− mice (Fig. 2c). Since lipid peroxidation is considered an important factor in CCl4-induced hepatotoxicity, we examined malondialdehyde (MDA), superoxide dismutase (SOD) and glutathione (GSH) levels in liver homogenates. WT mice had a significant increase in MDA levels and a decrease in SOD and GSH levels. MDA levels were significantly decreased, and SOD and GSH levels were increased, in the livers of β-arrestin2−/− mice compared with those of WT mice (Fig. 2d). These data indicate that β-arrestin2-deficient mice are protected from the redundant collagen deposition and architectural changes in the liver that commonly occur upon CCl4 treatment.

β-arrestin2 deficiency inhibited the activation of T cells

Activated T cells also play a crucial role in the pathogenesis of hepatic fibrosis [11]. To determine whether the loss of β-arrestin2 alters T cell subsets, the subsets of naïve T cells (CD4+CD62L+), activated T cells (CD4+CD69+), Th17 cells (CD4+IL-17+) and regulatory T (Treg) cells (CD4+CD25+Foxp3+) were examined. The results showed that in the spleens of WT model mice, activated T cells and the Th17/Treg ratio significantly increased, while naïve T cells decreased. β-arrestin2 deficiency led to a decrease in activated T cell populations and the Th17/Treg ratio, and an increase in naïve T cells, in the spleens of fibrotic mice (Fig. 3a-d). These data suggest that the inflammatory response after CCl4 treatment is attenuated in β-arrestin2−/− mice.

β-arrestin2 deficiency reduced collagen production and TGF-β1 signalling in fibrotic mice

To ascertain whether the alleviation of fibrosis in β-arrestin2−/− mice is due to the loss of TGF-β1 responsiveness, the expression of TGF-β1 downstream signalling molecules was detected. Importantly, the β-arrestin2 level was also elevated in the WT CCl4-induced fibrosis model (Fig. 4a).
These observations were consistent with the model of PS-induced fibrosis mentioned previously. As shown in Fig. 4a-e, the expression of collagen I, TGF-β1, p-Smad2, p-Smad3, and p-Akt was decreased, whereas TβRIII expression was increased, in β-arrestin2−/− mice compared with WT mice. These data suggest that the prevention of liver fibrosis in the absence of β-arrestin2 may be due to downregulation of TGF-β1 signalling.

Fig. 2 legend (parts c and d): c Serum ALT and AST activities of β-arrestin2−/− and WT mice 6 weeks after intraperitoneal injection of CCl4 5 mL/kg (n = 8 in each group). d Effect of β-arrestin2 deficiency on the hydroxyproline content, the lipid peroxidation product MDA, and SOD and GSH levels in liver homogenates of CCl4-induced liver fibrotic mice (n = 8 in each group). #P < 0.05, ##P < 0.01 vs. normal group; **P < 0.01 vs. WT model group.

Gene silencing of TβRIII enhanced TGF-β1-induced collagen production

Considering that TGF-β1 plays a pivotal role in hepatic fibrosis [3], TGF-β1-stimulated HSCs were used in this in vitro study. The time-course expression of β-arrestin2, TβRIII and its downstream signalling components in HSC-T6 cells stimulated with TGF-β1 was detected. As shown in Fig. 5a, β-arrestin2 expression was progressively increased in HSCs after stimulation with TGF-β1 for 0.5 h and peaked at 4 h, whereas TβRIII expression was decreased under TGF-β1 stimulation. As expected, TGF-β1 treatment induced increases in collagen I and collagen III levels (Fig. 5b), accompanied by upregulation of p-Smad2, p-Smad3 and p-Akt (Fig. 5c-e). To further investigate the role of TβRIII in β-arrestin2 deficiency-mediated collagen suppression, we next used siRNA targeting TβRIII to block the expression of TβRIII in HSCs stimulated with TGF-β1. Western blot results revealed that siRNA against TβRIII decreased TβRIII protein expression in HSC-T6 cells (Fig. 6a). Further experiments showed that when the expression of TβRIII was reduced by TβRIII siRNA, the collagen I and III levels in HSCs were increased compared with those in TGF-β1-treated cells that were not transfected (Fig. 6b). We also examined the TGF-β1 downstream signalling proteins that participate in TβRIII-mediated HSC collagen production. Forty-eight hours after transfection with TβRIII siRNA, HSCs were stimulated with TGF-β1 for 0.25 or 4 h, and the levels of p-Smad2, p-Smad3 and p-Akt were increased in cells with low TβRIII expression compared with those of cells transfected with the scrambled siRNA (Fig. 6c-e). These results indicate that downregulation of TβRIII expression affects collagen production in HSCs through upregulation of TGF-β1 signalling.

The β-arrestin2/TβRIII interaction regulates collagen production in vitro

Although TGF-β1 signalling was decreased during ECM production in β-arrestin2−/− mice, the role of β-arrestin2 in this process remained poorly understood. Thus, we focused our study on the possible role of β-arrestin2 in HSC collagen production upon TGF-β1 stimulation in vitro. For this purpose, we first transfected β-arrestin2 siRNA into HSC-T6 cells to determine the effect of endogenous β-arrestin2 on collagen production. β-arrestin2 protein expression was significantly reduced, as determined by western blotting. Further experiments showed that when the expression of β-arrestin2 was reduced by β-arrestin2 siRNA, the collagen I and III levels in HSC-T6 cells were significantly decreased, which correlated with a decrease in the phosphorylation of Smad2, Smad3 and Akt (Fig. 7a, b).

Fig. 4 legend: β-arrestin2 deficiency reduced collagen production and TGF-β1 signalling in fibrotic mice.
Western blot analysis of β-arrestin2 and collagen I (a), TGF-β1 and its co-receptor TβRIII (b), p-Smad2 (c), p-Smad3 (d), and p-Akt (e) from liver tissue protein extracts of WT and β-arrestin2−/− mice (n = 8 per group). Densitometry values in the histograms are expressed as fold change relative to the WT normal group, which was assigned a value of 1. &&P < 0.01 vs. WT normal group; #P < 0.05, ##P < 0.01 vs. normal group; *P < 0.05, **P < 0.01 vs. WT model group.

These findings indicate that the above changes in collagen production in TGF-β1-stimulated HSCs were inhibited when β-arrestin2 was knocked down. These in vitro data support the in vivo studies, in which deficiency of β-arrestin2 correlated with reduced collagen production. Because we observed that the loss of β-arrestin2 was closely associated with decreased collagen levels in HSCs, we hypothesized that overexpression of β-arrestin2 in HSCs promotes collagen production. Thus, we transfected a plasmid encoding pEGFP-C2-β-arrestin2 into LX-2 cells, a human immortalized HSC line. Successful overexpression of β-arrestin2 was confirmed by Western blotting (Fig. 7c). β-arrestin2 overexpression increased collagen I levels. Furthermore, the expression of phosphorylated Smad2, Smad3 and Akt was significantly increased when β-arrestin2 was overexpressed (Fig. 7d). Our results suggest that β-arrestin2 promotes HSC collagen production by positively regulating the TGF-β1 pathway. Finally, we decided to study the putative role of the β-arrestin2/TβRIII interaction in TGF-β1-induced HSCs. The expression and subcellular localization of β-arrestin2 and TβRIII were examined by immunofluorescence confocal microscopy (Fig. 8a). Immunofluorescent analysis showed that β-arrestin2 protein was diffusely expressed, predominantly in the cytoplasm of untreated HSCs, and was co-expressed with TβRIII. However, β-arrestin2 was distributed in the cytoplasmic membrane of HSCs after TGF-β1 stimulation for up to 4 h. Moreover, the immunofluorescent staining intensity of β-arrestin2 was increased upon TGF-β1 treatment, consistent with the Western blot analysis. These observations suggest that co-expression of β-arrestin2 and TβRIII results in their co-localization in the cytoplasm of HSCs. To investigate whether β-arrestin2 has a role in regulating TβRIII, we again used siRNA targeting β-arrestin2 or expression vectors carrying β-arrestin2 in HSCs. A decrease in TβRIII was observed in β-arrestin2-overexpressing cells compared with cells transfected with the empty vector, while TβRIII expression was increased in cells treated with β-arrestin2 siRNA and TGF-β1 (Fig. 8b, c). We next examined whether β-arrestin2 interacts with TβRIII by co-immunoprecipitation. Initially, β-arrestin2 and TβRIII were shown to be co-expressed in HSCs, as determined by co-immunoprecipitation: immunoprecipitation with a TβRIII antibody resulted in the co-precipitation of β-arrestin2. Enhanced co-immunoprecipitation of β-arrestin2 and TβRIII was observed in HSCs upon TGF-β1 treatment (Fig. 8d), while the β-arrestin2/TβRIII interaction did not significantly change after siRNA-mediated silencing of β-arrestin2 in TGF-β1-stimulated HSCs (Fig. 8e). These results suggest that decreased β-arrestin2 expression in HSCs may act through increased TβRIII expression and downregulation of the β-arrestin2/TβRIII interaction, thus inhibiting TGF-β1 signalling and collagen production (Fig. 8f).
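The densitometry normalization described in the figure legends (each target band divided by β-actin from the same lysate, then expressed as fold change relative to the control group, which is assigned a value of 1) amounts to a two-step ratio. A minimal sketch with hypothetical band intensities:

```python
import numpy as np

# Sketch of the fold-change normalization described in the figure legends;
# the band intensities below are hypothetical illustrative values.
target = np.array([880.0, 910.0, 1750.0, 1820.0])  # e.g., collagen I bands
actin = np.array([1000.0, 980.0, 1010.0, 995.0])   # β-actin loading control
group = np.array(["ctrl", "ctrl", "model", "model"])

ratio = target / actin                       # normalize to loading control
fold = ratio / ratio[group == "ctrl"].mean()  # express relative to control
print(np.round(fold, 2))  # control ≈ 1; model shows the relative increase
```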
Discussion

Increasing evidence suggests that β-arrestins trigger signalling cascades independent of G-protein activation and mediate many intracellular signalling networks, such as the Notch, Wnt and TGF-β pathways, and downstream kinases including MAPK and PI3K [6,12]. These signalling pathways have been shown to be involved in the formation of fibrosis. Our previous studies found that, with the exacerbation of liver fibrosis, the expression of β-arrestin2, but not β-arrestin1, in liver tissues is increased [10]. However, its clinical relevance with regard to the progression of hepatic fibrosis and collagen production had never been clarified. Therefore, our present study was designed to investigate the role of β-arrestin2 deficiency in hepatic fibrosis. The results further demonstrate that β-arrestin2 plays a crucial role in collagen production and that β-arrestin2 deficiency ameliorates liver fibrosis. Different models of hepatic fibrosis have been used to study the molecular pathogenesis of this disease. The PS-induced liver fibrosis model in rats manifests changes similar to those found in liver diseases in humans [13]. The current study demonstrated that β-arrestin2, TGF-β1 and collagen were all increased during fibrosis development in this model. Furthermore, β-arrestin2 expression in the fibrotic liver was positively associated with collagen I and collagen III levels. These results were further validated in the CCl4-induced chemical liver fibrosis model. Importantly, β-arrestin2 levels were also elevated in human fibrosis samples. The results demonstrate that induction of β-arrestin2 occurs in two different animal models of hepatic fibrosis and in human fibrosis samples, and is associated with increased collagen levels.

Fig. 6 legend (continued): Increased collagen I and collagen III levels were observed in TβRIII siRNA-transfected HSCs. c-e Effects of transfecting TβRIII siRNA on activation of the Smad2, Smad3 and Akt pathways in TGF-β1-stimulated HSCs. Downregulation of TβRIII resulted in increased activation of Smad2, Smad3 and Akt. The changes in p-Smad2, p-Smad3 and p-Akt are expressed as the ratio of phosphorylated/unphosphorylated forms and are shown as a bar diagram. #P < 0.05, ##P < 0.01 vs. control group; *P < 0.05, **P < 0.01 vs. scrambled siRNA group.

Fig. 7 legend: β-arrestin2 regulates collagen production via the TGF-β1 downstream pathway. a Expression of β-arrestin2, collagen I and collagen III in TGF-β1-stimulated HSCs after transfection with β-arrestin2 siRNA; the protein levels were normalized to β-actin for Western blots from the same lysate. b Effects of transfecting β-arrestin2 siRNA on activation of the Smad2, Smad3 and Akt pathways in TGF-β1-stimulated HSCs. #P < 0.05, ##P < 0.01 vs. control group; *P < 0.05, **P < 0.01 vs. scrambled siRNA group. c β-arrestin2 was overexpressed in LX-2 cells, a human immortalized HSC line, by transfecting the plasmid encoding pEGFP-C2-β-arrestin2. d Overexpression of β-arrestin2 significantly promoted collagen I production and activation of Smad2, Smad3 and Akt in LX-2 cells. The changes in p-Smad2, p-Smad3 and p-Akt are expressed as the ratio of phosphorylated/unphosphorylated forms. #P < 0.05, ##P < 0.01 vs. empty vector group; *P < 0.05, **P < 0.01 vs. TGF-β1 group.

Fig. 8 legend: β-arrestin2 regulates collagen production via TβRIII.
a The localization of β-arrestin2 and TβRIII in HSCs examined by immunofluorescent confocal microscopy. Cells were stained with Alexa Fluor 488 donkey anti-mouse antibody (to detect β-arrestin2, in green), Alexa Fluor 555 donkey anti-rabbit antibody (to detect TβRIII, in red), and DAPI (to detect DNA, in blue). Scale bar = 25 μm. b Overexpression of β-arrestin2 significantly inhibited TβRIII expression in LX-2 cells upon TGF-β1 treatment. #P < 0.05, ##P < 0.01 vs. empty vector group; **P < 0.01 vs. TGF-β1 group. c Increased expression of TβRIII in TGF-β1-stimulated HSCs was observed after transfecting β-arrestin2 siRNA. ##P < 0.01 vs. control group; *P < 0.05, **P < 0.01 vs. scrambled siRNA group. d Co-immunoprecipitation experiments of β-arrestin2 and TβRIII in HSCs. **P < 0.01 vs. the group without TGF-β1 treatment. e After knockdown of β-arrestin2, the cell lysates were subjected to co-immunoprecipitation and Western blot analysis; ns indicates P > 0.05 vs. the group transfected with β-arrestin2 siRNA without TGF-β1 treatment. f Model depicting the role of β-arrestin2 in the regulation of TGF-β1 signalling and collagen production.

Some studies have shown aberrant protein expression of β-arrestin2 associated with fibrosis-associated diseases 14. For instance, Fan et al. 15 demonstrated that in rats with experimental ulcerative colitis (intestinal fibrosis), the expression of β-arrestin2 was obviously decreased in the colonic mucosa compared with that of the normal control group. In contrast, an increasing number of studies have indicated that β-arrestin2 expression is increased in some fibrotic diseases. β-arrestin2−/− mice are protected from excessive collagen deposition in a bleomycin-induced lung fibrosis model 8. Moreover, increased expression of β-arrestin2 protein was observed in cystic fibrosis cells 16. However, to date, the role of β-arrestin2 deficiency in liver fibrosis has not been investigated. Our current studies demonstrate that β-arrestin2−/− mice showed minimal collagen deposition and less hydroxyproline in the liver than WT mice. β-arrestin2 deficiency also reduced serum transaminase activities, which indicates that β-arrestin2−/− mice are protected from CCl4-induced liver injury. Since oxidative stress and subsequent lipid peroxidation participate in CCl4-induced hepatic fibrosis 17, we examined oxidative stress parameters. Increased liver MDA levels and decreased SOD and GSH were detected in the WT model mice. β-arrestin2−/− mice had significantly elevated GSH levels and decreased MDA levels compared with those of WT mice. These data demonstrate that β-arrestin2 depletion inhibits lipid peroxidation and restores the antioxidative defence system in hepatic fibrosis. Increasing evidence suggests that inappropriate inflammation drives the progression of fibrosis, and some studies have concentrated on the imbalance between Treg cells and other effector T cells as a reason for this inappropriate inflammation 18,19. It has been reported that the Treg/Th17 balance might influence fibrosis progression in hepatitis B virus-related liver fibrosis via an increase in liver injury and promotion of HSC activation 20. Intriguingly, β-arrestin2 also plays a role in inflammation and the immune response. β-arrestin2 induced the production of IL-17 and CD4+ T lymphocyte expression in a mouse asthma model 21.
In the OVA-induced murine model of allergic asthma, pulmonary eosinophil and CD4 T cell infiltration, as well as IL-4, IL-6, IL-13 and TNF-α levels, were all enhanced in WT but not in β-arrestin2−/− mice 22. To gain in vivo evidence of the effect of β-arrestin2 on T cell activation during liver fibrosis, we determined the frequencies of T cell subsets in the splenic lymphocytes of mice. The frequencies of activated T cells and the Th17/Treg ratio were significantly increased in WT mice after CCl4 treatment, while naïve T cells were decreased. β-arrestin2 deficiency contributed to the reduction in activated T cells and the Th17/Treg ratio, and the enhancement of naïve T cells in the spleens of fibrotic mice. These results indicate that the inflammatory response after CCl4 treatment in β-arrestin2−/− mice was attenuated. Our data suggest a previously unrecognized important role for β-arrestin2 deficiency in ameliorating liver fibrosis and regulating collagen formation in vivo. However, we cannot exclude other underlying mechanisms of β-arrestin2-mediated signalling pathways in the progression of hepatic fibrosis because of the complexity of β-arrestin2 signalling cascades. TGF-β1 is the most potent liver pro-fibrotic cytokine 23. Although Smad-mediated signalling is well described as a significant mechanism of TGF-β1 signalling, Smad-independent pathways, such as the MAPK, Akt and NF-κB pathways, also participate in TGF-β1 signalling 3,24. We examined whether β-arrestin2-regulated collagen formation was associated with TGF-β1 signalling. Our results indicate that both TGF-β1 and its downstream signalling molecules p-Smad2, p-Smad3 and p-Akt were decreased in β-arrestin2−/− mice compared with those of WT mice. In vitro, TGF-β1-stimulated HSCs were used to further explore the role of β-arrestin2 in collagen synthesis. As expected, the collagen levels and activation of Smad2, Smad3 and Akt were increased upon TGF-β1 stimulation. Moreover, β-arrestin2 expression gradually increased. To further investigate the role of β-arrestin2 in collagen production, we utilized siRNA targeting β-arrestin2 or transfected plasmids encoding β-arrestin2 in HSCs. The results showed that the collagen level and p-Smad2, p-Smad3 and p-Akt were significantly increased when β-arrestin2 was overexpressed, while collagen production and Smad2, Smad3 and Akt activation induced by TGF-β1 were inhibited when β-arrestin2 was knocked down in HSCs. These data together suggest that β-arrestin2 promotes HSC collagen production by positively regulating the TGF-β1 pathway in HSCs. TGF-β regulates diverse cellular processes through a heteromeric complex of TβRI, TβRII and TβRIII. TGF-β ligands bind to constitutively active TβRII on the cell surface, activating TβRI and then forming heteromeric complexes to induce downstream signalling 3. TβRIII, which lacks intrinsic enzymatic activity, is the most abundantly expressed TGF-β superfamily receptor. TβRIII has primarily been considered to function as a co-receptor. However, recent studies have identified that TβRIII has a complex and context-dependent role in regulating TGF-β superfamily signalling and disease development. TβRIII functions as a potent inhibitor of TGF-β signalling by preventing type I-type II receptor complex formation 3. In NIH/3T3 fibroblasts that stably expressed TβRIII, Smad2/3, Akt and ERK phosphorylation and procollagen type I expression were inhibited 25.
In several cancers, including breast cancer, prostate cancer and lung cancer, TβRIII expression is reduced or even lost 26. Our previous studies found that TβRIII expression was decreased in hepatocellular carcinoma (HCC) patient tissues, and knockdown of TβRIII promoted the migration and invasion of HCC cells 27. Our present results show that TβRIII expression in PS-injected rats significantly decreased with the progression of fibrosis, which correlated with an increase in β-arrestin2 levels. In addition, TβRIII was downregulated in human fibrotic samples. Corroborating the results of the in vitro experiments, HSCs that were transfected with TβRIII siRNA showed high collagen I and collagen III expression and an increase in Smad2, Smad3 and Akt phosphorylation. These results indicate that downregulation of TβRIII expression affects collagen production in HSCs through upregulation of TGF-β1 signalling. β-arrestin2 functions as a multiprotein scaffold to coordinate complex signal transduction networks. Recent studies have indicated that β-arrestin2 binds TβRIII and is involved in its clathrin-independent/lipid raft pathway-dependent internalization 28,29. TβRIII, through its interaction with β-arrestin2, activates Cdc42 and inhibits epithelial and cancer cell migration 30. TβRIII expression inhibits TGF-β-mediated Smad2/3 nuclear translocation and transcriptional activation in MDA-MB-231 cell lines 31. How might β-arrestin2 deficiency negatively regulate TGF-β1 signalling through its interaction with TβRIII? Our current results showed that TβRIII expression was increased in β-arrestin2−/− mice compared with that of WT mice with CCl4-induced liver fibrosis. In addition, a decrease of TβRIII in β-arrestin2-overexpressing HSCs was observed. Conversely, TβRIII expression was increased in HSCs that were treated with β-arrestin2 siRNA and TGF-β1. Both co-immunoprecipitation and fluorescence confocal studies demonstrated the interaction between β-arrestin2 and TβRIII in HSCs. Previous studies have proposed a model in which the binding of TβRII and TβRI to TβRIII competes with the construction of the TβRII/TβRI complex, thus suppressing signalling to the Smad pathway 31. The findings of our studies raise the possibility that β-arrestin2 deficiency enhances TβRIII expression and suppresses TGF-β1 signalling, thereby reducing collagen production and ameliorating liver fibrosis. In summary, we provide evidence that β-arrestin2 deficiency ameliorated liver fibrosis in mice. Interfering with the expression of β-arrestin2 in HSCs inhibited collagen deposition through negative regulation of the TGF-β1 downstream pathway. Taken together, these findings suggest that locally delivered β-arrestin2 inhibitors may be a potential strategy for treating liver fibrosis.

Animals

All animal experiments were conducted according to the guidelines of the Animal Care and Use Committee of Anhui Medical University, and the experiments were authorized by the Ethics Review Committee for Animal Experimentation of the Institute of Clinical Pharmacology, Anhui Medical University. Male Wistar rats weighing 120 ± 10 g were obtained from the Shanghai BK Experimental Animal Centre (Grade II, Certificate No. D-65). β-arrestin2−/− mice (C57BL/6 background) were purchased from Jackson Laboratory (Maine, USA). Male and female mice were evaluated, and the control mice were age- and sex-matched littermates. Each mouse was genotyped at 21 days after birth as previously described 32.
The animals were housed in a pathogen-free room with a constant temperature of 23 ± 3 °C, humidity of 50 ± 20%, and a 12 h/12 h light/dark cycle. All animals were allowed free access to standard chow and tap water ad libitum throughout the experiment.

Animal models of liver fibrosis

Two animal experimental models of liver fibrosis were used for this study: PS administration and CCl4 administration. The rats were randomly allocated into the normal control group and the PS model group. Rats in the PS model group were intraperitoneally injected with PS at a dose of 0.5 mL/rat twice a week for a total of 16 weeks 33. Rats in the normal control group were injected with the same amount of saline solution. After 3, 6, 9, 12 and 16 weeks of injections, eight rats in the PS group were sacrificed under anaesthesia. The liver samples were collected for histopathological staining and Western blot analysis. For the CCl4 experiments, CCl4 (Shanghai Lingfeng Chemical Factory, Shanghai, China) was diluted in olive oil (Sigma, MO, USA) at a ratio of 1:9 and intraperitoneally injected (5 mL/kg body weight) into 6- to 8-week-old β-arrestin2−/− and WT C57BL/6 mice (n = 8 per group). This administration was conducted twice a week for up to 6 weeks to establish liver fibrosis 34. Age- and sex-matched control mice were treated twice weekly with similar volumes of olive oil injected i.p. The mice were sacrificed 6 weeks after initiation of the experiment.

Human samples

Specimens from 28 cirrhosis patients with different degrees of fibrosis were extracted during surgeries and collected at the Affiliated Hospital of Anhui Medical University (Hefei, China). The control group comprised eight patients with intrahepatic biliary lithiasis. Liver histological examination revealed normal histology or minimal changes. All experimental procedures were approved by the research ethics committee of Anhui Medical University (No. 20131323). All patients participated after providing written informed consent. This study was conducted according to the guidelines formulated by the Science Council of China.

Cell culture conditions and cell models

A rat HSC cell line (HSC-T6) and a human immortalized HSC cell line (LX-2) were selected for the in vitro studies; the cell lines were acquired from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). The cells were cultured in DMEM (Life Technologies Inc., CA, USA) containing 10% foetal calf serum (HyClone, UT, USA) in a humidified atmosphere with 5% CO2 at 37 °C. The HSC cell lines possessed a fibroblast-like morphology and specific expression of α-smooth muscle actin 35. Treatment of these HSCs with TGF-β1 triggered the morphological transition, which was concomitant with increased ECM synthesis 36.

Histology and immunohistochemical staining

HE staining of the liver tissues was conducted according to standard protocols. Immunohistochemical staining was performed as previously described 10. Liver sections were dewaxed, rehydrated and subjected to antigen retrieval. Then, the sections were placed in 3% H2O2 in methanol for 10 min. After blocking, the sections were incubated with primary antibodies against β-arrestin2 and TβRIII (Santa Cruz Biotechnology, CA, USA) at specific dilutions for 1 h at 37 °C. Then, the immunoreactivity of the antibodies was detected with the streptavidin/peroxidase (SP) method (Zhongshan Goldenbridge Biotechnology Co., Ltd, Beijing, China), and diaminobenzidine (DAB) was used to visualize the reaction.
After counterstaining with haematoxylin solution, the sections were viewed under an Olympus BX53 microscope (Olympus Optical Co., Ltd., Tokyo, Japan). A negative control was performed using the same staining procedure but omitting the primary antibodies. Semiquantitative analysis was performed using Image-Pro Plus software (Media Cybernetics, USA). For each slide, five random fields were analysed.

Analysis of serum transaminase activities

Serum samples were collected from all mice, and the transaminase activities of ALT and AST were evaluated by commercial kits (Jiancheng Biologic Co., Nanjing, China) according to the instructions.

Determination of hydroxyproline levels in the liver

Approximately 100 mg of liver tissue was collected to determine the hydroxyproline level as described 37. The level of hepatic hydroxyproline is an indirect indicator of tissue collagen levels and is presented as mg/g wet tissue.

Analysis of antioxidase and lipid peroxidation

Hepatic tissues were rinsed with cold saline solution and subsequently homogenized on ice. After centrifugation at 4 °C and 1000×g for 15 min, the supernatants were collected. The activities of SOD and GSH were measured to evaluate the antioxidases, and the results are presented as units of SOD per milligram of hepatic tissue or GSH μmol/g protein. The lipid peroxidation state of the liver was detected by determining the MDA level, which is presented as nmol/mg protein. The procedures were conducted according to the kit instructions (Jiancheng Biologic Co., Nanjing, China).

Preparation of splenic lymphocytes and T cell subset analysis

After the mice were anaesthetized and sacrificed, single-cell spleen suspensions were harvested by mechanical separation of spleen tissue through nylon mesh. Lymphocytes were obtained from the gradient interphase. Then, the cells were rinsed with PBS three times and stained with specific fluorescent antibodies, including anti-CD4-FITC, anti-CD25-APC (eBioscience, CA, USA), anti-CD62L-PE, and anti-CD69-PE (Miltenyi Biotec, Bergisch Gladbach, Germany), in the dark at 4 °C for 20 min. For analysis of the Treg and Th17 subsets, the cells were fixed and permeabilized, followed by incubation with anti-Foxp3-PE and anti-IL-17-PE antibodies (eBioscience, CA, USA). Afterwards, the cells were washed and resuspended in PBS, and the prepared samples were analysed on a BD FACSVerse flow cytometer (BD Biosciences, NJ, USA).

siRNA transfection and DNA transfection

For β-arrestin2 or TβRIII knockdown, HSC-T6 cells were seeded in 6-well plates and transfected with specific siRNA duplexes purchased from GenePharma Company (Shanghai, China) targeting β-arrestin2 and TβRIII RNA. A scrambled RNA duplex served as a negative control. The HSCs were incubated for 48 h after transfection and then harvested for Western blot analysis. For overexpression of β-arrestin2, a pcDNA3 expression plasmid encoding pEGFP-C2-β-arrestin2 was used in this study, which was kindly provided by Dr. Yang K. Xiang of the University of California, Davis. LX-2 cells were grown in 6-well plates and transiently transfected with the β-arrestin2 overexpression vector using Lipofectamine 3000 (Invitrogen Life Technologies, CA, USA) according to the manufacturer's protocols. Each well contained 5 μg of DNA.

Immunofluorescence double-labelling assay

Cells were seeded in a six-well dish with poly-D-lysine-coated coverslips. After incubation overnight, the cells were starved and stimulated with TGF-β1 at 5 ng/mL (PeproTech, NJ, USA) for the indicated time.
The cells were then fixed with 4% paraformaldehyde for 20 min, washed three times with PBS and permeabilized with 0.1% Triton X-100 for 5 min. After that, the cells were incubated with 1% bovine serum albumin, followed by primary antibodies against β-arrestin2 and TβRIII overnight at 4 °C. The samples were subsequently incubated with a mixture of Alexa Fluor 555-conjugated anti-rabbit and Alexa Fluor 488-conjugated anti-mouse secondary antibodies (Life Technologies Inc., CA, USA) for 2 h in the dark. The samples were then mounted with a sealer containing DAPI, and the images were captured with a Leica SP8 laser scanning confocal microscope (Leica Biosystems, Wetzlar, Germany). β-arrestin2-positive expression is presented as green fluorescent foci, TβRIII-positive expression is presented as red fluorescent foci, and colocalization of these two proteins is presented as yellow fluorescent foci.

Co-immunoprecipitation assay

Cells were collected in RIPA lysis buffer (Beyotime Biotechnology, Shanghai, China) supplemented with a mammalian protease inhibitor mixture (Biocolors, Shanghai, China). The cell lysate was immunoprecipitated (IP) with anti-TβRIII antibody, subsequently separated by SDS-PAGE and subjected to Western blotting analysis with anti-β-arrestin2 antibody (Santa Cruz Biotechnology, CA, USA). The assay was performed in accordance with standard procedures.

Statistical analysis

Statistical analysis was carried out using SPSS software version 15.0 (SPSS Inc., Chicago, IL, USA). The data were collected from eight animals per group for the in vivo studies and from at least four independent experiments for the in vitro studies, and are presented as means with standard deviations unless otherwise indicated. Analysis of variance (ANOVA) and Student's t-tests were used to identify significant differences between groups. The correlation between β-arrestin2 expression and collagen expression in liver tissues was assessed by Pearson's correlation analysis. Values of P < 0.05 were considered statistically significant.
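To make this statistical workflow concrete, the following is a minimal Python sketch of the analyses described above (one-way ANOVA, pairwise t-tests, and Pearson's correlation), using SciPy rather than SPSS. The group names, sample values, and random seed are illustrative assumptions, not data from the study.

```python
# Minimal sketch of the statistical analyses described above, on
# hypothetical densitometry values (not the study's actual data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wt_normal = rng.normal(1.0, 0.1, 8)   # n = 8 animals per group
wt_model = rng.normal(2.5, 0.3, 8)
ko_model = rng.normal(1.6, 0.2, 8)

# One-way ANOVA across the three groups, then a pairwise t-test.
f_stat, p_anova = stats.f_oneway(wt_normal, wt_model, ko_model)
t_stat, p_ttest = stats.ttest_ind(wt_model, ko_model)

# Pearson correlation between beta-arrestin2 and collagen expression.
arrb2 = np.concatenate([wt_normal, wt_model])
collagen = 0.8 * arrb2 + rng.normal(0, 0.1, arrb2.size)
r, p_corr = stats.pearsonr(arrb2, collagen)

print(f"ANOVA p = {p_anova:.3g}; t-test p = {p_ttest:.3g}; "
      f"Pearson r = {r:.2f} (p = {p_corr:.3g})")
```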
Investigation of Inclusion Complex of Patchouli Alcohol with β-Cyclodextrin

The objective of this study was to improve the stability and water-solubility of patchouli alcohol by complexing it with β-cyclodextrin (β-CD). The interactions between patchouli alcohol and β-CD were characterized by differential scanning calorimetry (DSC), Fourier transform infrared (FT-IR) spectroscopy, powder X-ray diffraction (PXRD), and scanning electron microscopy (SEM), respectively. According to the molecular modeling method, the formation enthalpy of the host-guest complex illustrated the predominant configuration, and the lowest ΔbG° value was -10.8174 ± 1.9235 kcal/mol, suggesting that the complex could reduce the energy of the system. The characterization analysis confirmed the formation of the PA-CD inclusion complex, and the results indicated the advantage of the inclusion complex in stability and dissolution rates. These results identified the PA-CD inclusion complex as an effective way for the storage of PA, although better inclusion methods still need to be studied.

Introduction

Patchouli alcohol (PA, Fig 1) is a sesquiterpene with a tricyclic structure and has been extracted from the whole plant of the traditional Chinese medicine Guang-huo-xiang, which is also called Pogostemon cablin (Blanco) Benth. Patchouli alcohol is the characteristic ingredient responsible for the typical aromatic odor and is also used as the chemical reference for the quality control of P. cablin in the Chinese Pharmacopoeia [1][2]. In traditional Chinese medicine, P. cablin is described as pungent and slightly warm, attributed to the spleen, stomach and lung meridians, and is usually used to treat colds, nausea and diarrhea [1]. PA has exhibited various pharmacological activities, such as protecting against the neurotoxicity of β-amyloid peptide fragment 25-35 (Aβ25-35) [3], enhancing cognition in memory-impaired mice induced by scopolamine [4], anti-inflammatory activities in RAW 264.7 cells and rat models [5][6], and anti-influenza virus activities in vitro and in vivo [7][8]. However, patchouli alcohol evaporates easily even at room temperature due to its volatile nature, which can cause its bioactivity to decrease during processing and storage. In addition, the development of patchouli alcohol as a medicine is greatly limited by its low water solubility and poor bioavailability [9]. Therefore, searching for a safe and effective method to enhance the stability and solubility of patchouli alcohol became important.

Cyclodextrins (CDs) are a group of cyclic oligosaccharides consisting of 6-8 1,4-linked glucose units. These macromolecules form truncated-cone-shaped cavities with different diameters [10]. The inner cavity is hydrophobic, while the outer side is hydrophilic [11], so CDs perform as good hosts for both water-soluble and fat-soluble compounds. The applications of CDs have been extensively investigated to improve the stability and solubility of poorly water-soluble compounds by the formation of inclusion complexes [12][13][14]. Several works focusing on the reaction between cyclodextrins and volatile oils have been carried out [15][16][17]. The water-solubility of garlic oil was increased by forming an inclusion complex with HP-β-CD [18]. The stability of a patchouli oil/β-CD complex was found to be higher than that of the uncomplexed oil [19].
The dissolution rate and oral bioavailability of a PA solid dispersion with Eudragit have been improved by inhibiting reprecipitation in supersaturated solution [20]. In this research, patchouli alcohol and β-CD were prepared to form an inclusion complex by a saturated aqueous solution method, which was designed to improve the solubility and stability of PA [21]. Solubility phase analysis was performed to forecast the enhanced solubility of PA by β-CD. PA/CD inclusion was confirmed by differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FT-IR), powder X-ray diffraction (PXRD), and scanning electron microscopy (SEM). Molecular modeling studies were carried out to obtain a three-dimensional image of the most likely structure of the inclusion complex. The thermal stability, humidity stability, and photo-stability of the PA-CD inclusion complex were compared with those of free PA to demonstrate the advantage of the PA/CD inclusion complex.

Materials

Patchouli alcohol (PA, purity ≥ 99%) was obtained from Professor Su Ziren's group (Guangzhou University of Chinese Medicine), and β-cyclodextrin (β-CD) was purchased from Boao Bio-Technology Co., Ltd. (Shanghai, China). Other reagents used in this study were of analytical grade.

Preparation

The inclusion complex was prepared by the saturated aqueous solution method [20]. PA (100 mg) was dispersed in water (100 mL) containing β-CD (1.0 g) and stirred for 4.0 h at 50 °C. The resultant solution was filtered through a 0.45 μm syringe filter and then lyophilized (ALPHA 1-2 LD plus, CHRIST, Germany) for 36 h. The PA-CD inclusion complex was stored in a desiccator at 4 °C until further use.

Solubility phase analysis

2.3.1. Solubility phase method. Solubility phase analysis was performed in a Thermo Max Q 4000 shaker (Thermo Scientific, USA) by the method reported by Higuchi and Connors [22]. An excess amount of PA (10 mg) was added to volumetric flasks containing 5 mL of β-CD solutions of various concentrations (0, 0.9, 1.8, 3.6, 5.4, 7.2, and 9.1 mM), and the solutions were then ultrasonicated for 5 min. The flasks were then shaken continuously at 100 rpm at different temperatures (25 and 35 °C) for 72 h, and the suspensions were filtered through a 0.45 μm syringe filter. The amount of PA was analyzed by GC-MS spectrometry (Agilent 7890A/5975C, USA). Each experiment was performed in triplicate.

2.3.2. GC-MS analysis. GC-MS analysis was carried out on an Agilent 7890A-5975C GC-MS system (Agilent, USA). The GC separation was conducted on an HP-5MS capillary column (30 m × 0.25 mm, 0.25 μm). Splitless injection (0.5 μL) was conducted, and helium was used as the carrier gas at a rate of 1.5 mL/min. The initial oven temperature was set at 90 °C and then programmed to heat to 250 °C at a gradient of 10 °C/min (held for 2 min). The inlet temperature was 230 °C. The mass spectrometer conditions were: EI mode; ionization energy, 70 eV; ionization source temperature, 250 °C; scan range, 150-300 amu; scan rate, 0.25 s per scan.

Solubility test

Excess quantities of PA and its β-CD inclusion complex were dispersed in 25 mL of distilled water in sealed bottles to obtain a super-saturated solution. The bottles were shaken continuously for 24 h at ambient temperature until equilibrium was attained.
The super-saturated solution was filtered through a 0.45 μm syringe filter and further diluted with methanol. The amount of PA was analyzed by the GC-MS method described in 2.3.2.

Characterization of the inclusion complexes

2.5.1. Differential scanning calorimetry (DSC). The DSC method was used to check the formation of the inclusion complex on a Netzsch STA449C thermal analyzer (Netzsch Corporation, Germany). Accurately weighed powdered samples of PA, β-CD, PA/CD IC (inclusion complex), and PA/CD PM (physical mixture) were laid in aluminum pans and heated from 20 °C to 300 °C at a scanning rate of 10 °C min−1 under nitrogen gas flow (25 mL min−1).

2.5.2. Fourier transform infrared spectroscopy (FT-IR). The FT-IR spectra of PA, β-CD, PA/CD IC, and PA/CD PM were measured on a spectrometer (Perkin Elmer Spectrum 400, USA). The typical bands were recorded in the range of 4000 to 800 cm−1.

2.5.3. Powder X-ray diffraction (PXRD). The X-ray powder diffraction patterns of the powdered samples of PA, β-CD, PA/CD IC, and PA/CD PM were collected with copper radiation (40 kV, 20 mA) on an Ultima diffractometer (Empyrean, Netherlands) in the range of 2° < 2θ < 60°. The step size was 0.02° and the counting time was 2 s per step.

2.5.4. Scanning electron microscopy (SEM). A scanning electron microscope (JEOL JSM-5900) was used to observe the surface morphologies of PA, β-CD, PA/CD IC, and PA/CD PM. Powdered samples were fixed on a brass stub with double-sided tape and vacuum-coated with gold.

Molecular modeling

In order to investigate and confirm the inclusion behavior of the guest patchouli alcohol (PA) in the host (β-CD), AutoDock 4.2.3 was used to simulate the supermolecular structure of the inclusion complex with a genetic algorithm method [23]. The docked conformation with the lowest binding energy was selected to analyze the mode of binding. The host-guest interactions were visualized with the PyMOL molecular viewer [24]. The Amber 11 Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) program was used to calculate the binding Gibbs free energy. The docking result was also optimized by molecular simulation using the Amber 11 program [25]. The optimization ran for 2 ns. The binding energy (ΔbG°) was calculated according to Eq (1):

ΔbG° = Ecomplex − (Ehost + Eguest)  (1)

where Ehost, Eguest, and Ecomplex (kcal/mol) are the calculated energies of β-CD, PA, and the inclusion complex, respectively.

Stability studies

The effects of inclusion complexation on the thermal stability, humidity stability, and photostability of free PA and PA complexed with β-CD were examined in a thermostatically controlled stability chamber (SHH-SDT, Yongsheng Gallenkamp, China) under the following stress conditions. The thermal stability of the PA powder samples was tested at 40 °C, the humidity stability was examined at 25 °C / 70% relative humidity (RH), and the photostability tests were conducted under 4500 lx at 25 °C [14]. During the testing period of 10 days, all experiments were carried out in triplicate. The samples were dissolved and diluted to appropriate concentrations with methanol, and the content of PA was examined by the GC-MS method described above in 2.3.2.

Solubility phase study

Solubility phase analysis was performed to detect the solubilizing ability of β-CD towards PA and the inclusion stability constant of the PA/CD inclusion complex. Fig 2 shows the solubility phase diagrams of PA-β-CD inclusion at two different temperatures (25 and 35 °C).
Both diagrams can be classified as AL-type, indicating that the solubility of PA increased linearly with the β-CD concentration and that the stoichiometric ratio was 1:1. The inclusion stability constant K1:1 was defined by the Higuchi equation (2):

K1:1 = slope / [S0 (1 − slope)]  (2)

where S0 represents the equilibrium solubility of PA in water and the slope is obtained from the solubility phase diagram [22,26]. The K1:1 at 25 and 35 °C was calculated as 932.0 and 997.6 M−1, respectively.

Characterization

When guest molecules are included into the cyclodextrin cavities, their physical characteristics, such as melting, boiling and sublimation points, will change [27].

DSC analysis. The DSC thermograms of PA, β-CD, PA/CD IC, and the physical mixture PA/CD PM are shown in Fig 3A. The thermogram of PA showed a narrow sharp peak at 58.4 °C, while that of β-CD exhibited a broad blunt peak ranging from 60 to 100 °C with a maximum at 83.8 °C, indicating the melting points of PA and β-CD. For the PA/CD PM, the endothermic peaks of both PA and β-CD were observed, indicating that the physical properties of the mixture sample were similar to those of the two components and that the PM sample was still a simple mixture of PA and β-CD. As for the inclusion complex, the complete disappearance of the sharp PA peak at 58.4 °C and the broad β-CD peak at 83.8 °C, and the shifted peak at 65.0 °C, indicated that interactions were established between PA and β-CD. In short, the DSC results suggested that the PA-β-CD inclusion complex was successfully formed.

FT-IR analysis. The FT-IR spectra of β-CD, PA, PA/CD IC, and the physical mixture PA/CD PM are illustrated in Fig 3B. The spectrum of PA showed a distinct peak at 3500 cm−1, indicating the presence of an OH group. The bands in the 3000 to 2850 cm−1 region were assigned to saturated C-H stretching vibrations. The bands at 1460, 1380, and 1470 cm−1 represented the bending vibrations (CH3, CH2) of the PA ring. In the spectrum of β-CD, the broad peak at 3271 cm−1 could be assigned to multiple hydrogen bonds. The spectrum of the physical mixture was a simple combination of PA and β-CD: many characteristic peaks of PA at 3500, 1460, 1380, and 1470 cm−1 and the classic broad β-CD peak at 3270 cm−1 were easily found, suggesting that PA and β-CD existed independently without any interaction between them. For the inclusion complex, the characteristic IR peaks of PA almost completely vanished, indicating that a new structure was formed and that the guest molecule PA was entirely embedded in the internal cavity of β-CD.

PXRD analysis. Fig 3C shows the PXRD patterns of β-CD, PA, PA/CD IC, and PA/CD PM. The intense and sharp peaks in the PXRD patterns of PA (2θ = 10.8, 11.7, and 14.7°) and β-CD (2θ = 9.0, 12.5, 22.8 and 27.2°) indicated their crystalline forms. In the physical mixture, the classic peaks of PA and β-CD were observed, indicating that the crystalline structures of PA and β-CD were kept without any change. The pattern of the PA/CD inclusion complex showed distinctive broad bands, indicating that a new inclusion complex was successfully formed.

SEM analysis. The differences between particles observed by electron microscopy are shown in Fig 3D. β-CD particles presented a rod-shaped form, while PA presented a cubic shape. For the PM, a simple mixture of the raw materials, the particles' shapes and sizes did not change after mixing. The PA/CD inclusion complex particles presented a cube-shaped form whose size was smaller than and different from the raw materials' morphology.
These observations confirm that a different entity (the inclusion complex) was successfully formed.

Molecular modeling studies

The molecular docking study was carried out to further illuminate the geometric configuration of the PA-CD inclusion complex. The preferred relative orientation for the complex is shown in Fig 4A, and the calculated binding energy (ΔbG°) is listed in Table 1. Moreover, the binding mode was also optimized (Fig 4B). The binding energy (ΔbG°) of the complex relative to the isolated molecules (PA and β-CD) indicates the stability of the complex: lower complexation energies correspond to more stable complexes. ΔbG° of the complex was calculated for the minimum energy mode according to Eq (1), giving −10.8174 ± 1.9235 kcal/mol. From the 3D structure of the minimum energy mode presented in Fig 4, the free guest molecule PA was found to be completely entrapped in the β-CD cavity, as the length of the β-CD cavity was greater than that of PA, forming a cylindrical structure. In the molecular dynamics simulation, one water molecule was added to the inclusion system using the TIP3P model. Hydrogen bonds were formed between the hydroxyl group of the PA molecule and the nearby water molecule, and between the water molecule and the cyclodextrin molecule. Therefore, a water-bridged hydrogen bond was formed, with the water molecule acting as a mediator; the measured hydrogen bond distances were 2.0 and 1.8 Å, respectively.

Stability

The change in the amounts of PA and PA/β-CD was tracked by GC-MS to evaluate stability. Fig 5 illustrates the trends of change in the relative amount m/m0 of PA and PA/β-CD, respectively. The relative amount of free PA degraded quickly at 40 °C, at 70% relative humidity, and under 4500 lx (Fig 5b, 5d and 5f), indicating the poor stability of PA. However, when PA was included with β-CD, the degradation rate of PA was much slower over the whole process, as shown in Fig 5a, 5c and 5e. This indicates that the stability of PA at high temperature, at high humidity, and under strong light was improved through complexation with β-CD.

Solubilization test

After complexation with β-CD, the water solubility of PA was slightly enhanced, increasing from 12.66 μg/mL (RSD = 5.28%) to 17.49 μg/mL (RSD = 1.39%), which indicated the formation of an inclusion complex with relatively better water solubility.

Conclusion

In this study, the inclusion complexes formed between PA and β-CD were studied by phase-solubility analysis, FT-IR, DSC, PXRD, and SEM technologies. The molecular modeling suggested that the 1:1 host-guest complex had the lowest ΔbG° value and that the PA molecule was totally entrapped in the β-CD cavity. The stability of PA was significantly enhanced by inclusion. The water solubility of PA was slightly enhanced by inclusion with β-CD, and better inclusion methods still need to be studied. Given the limited applications of free PA, the convenient preparation, and the advantages of the PA/CD complex, this inclusion method should be deemed a prospective strategy for the further utilization of PA.
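To illustrate the phase-solubility analysis used in this study, the following is a minimal Python sketch of the Higuchi-Connors calculation (Eq. 2 above): a linear fit of PA solubility against β-CD concentration gives the slope and intercept (S0), from which K1:1 follows. The β-CD concentrations match the protocol, but the PA solubility values are hypothetical placeholders chosen only so that the fitted constant lands near the reported magnitude.

```python
# Minimal sketch of the Higuchi-Connors phase-solubility analysis (Eq. 2),
# using hypothetical PA solubility data for an AL-type (linear) diagram.
import numpy as np

cd = np.array([0.0, 0.9, 1.8, 3.6, 5.4, 7.2, 9.1]) * 1e-3   # beta-CD, mol/L
pa = np.array([0.057, 0.10, 0.15, 0.24, 0.33, 0.41, 0.51]) * 1e-3  # PA, mol/L (hypothetical)

slope, intercept = np.polyfit(cd, pa, 1)  # linear fit of the phase diagram
s0 = intercept                            # equilibrium solubility of PA in water
k11 = slope / (s0 * (1.0 - slope))        # Eq. (2)

print(f"slope = {slope:.3f}, S0 = {s0:.2e} M, K(1:1) = {k11:.0f} M^-1")
```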
A hierarchical cellular structural model to unravel the universal power-law rheological behavior of living cells

Living cells are a complex soft material with fascinating mechanical properties. A striking feature is that, regardless of their types or states, cells exhibit a universal power-law rheological behavior which to this date still has not been captured by a single theoretical model. Here, we propose a cellular structural model that accounts for the essential mechanical responses of the cell membrane, cytoplasm and cytoskeleton. We demonstrate that this model can naturally reproduce the universal power-law characteristics of cell rheology, as well as how its power-law exponent is related to cellular stiffness. More importantly, the power-law exponent can be quantitatively tuned in the range of 0.1 ~ 0.5, as found in most types of cells, by varying the stiffness or architecture of the cytoskeleton. Based on the structural characteristics, we further develop a self-similar hierarchical model that can spontaneously capture the power-law characteristics of creep compliance over time and complex modulus over frequency. The present model suggests that mechanical responses of cells may depend primarily on their generic architectural mechanism, rather than specific molecular properties.

Reviewer #2 (Remarks to the Author):

In this manuscript Hang et al. introduce a theoretical model which accounts for the essential mechanical properties of cell membrane, cytoplasm and cytoskeleton. They demonstrate that their model can correctly reproduce not only the universal power-law characteristics of cell rheology but also the relationship between the power-law exponent and cell stiffness. In fact, the power-law exponent is found in this study to be in the range between 0.1 and 0.5, as observed in many experiments, and decreases linearly with the logarithm of cell stiffness. To the best of my knowledge, these results together cannot be captured by any existing model of cell mechanics. The Authors also develop a self-similar hierarchical model that can reproduce the power-law dependences of the creep compliance with time and of the storage and loss moduli with frequency. Taken together, these method developments are noteworthy and will contribute significantly to the progress in cell mechanics. The conclusions of this work are clearly supported by the presented results. I do not find any flaws in the data analysis, interpretation or conclusions. The methodology is sound. The only weakness of this manuscript is that there may be not enough detail provided in the methods for the work to be reproduced. Since the Authors use commercial finite-element software (Abaqus), they could simply deposit exemplary input/output files alongside with Supplementary Notes. It also would be a fine service to the community if the Authors could deposit the Python scripts used in this work to generate the cellular configurations and evaluate the finite-element model solutions.

Suggestion: The predictions of the new model seem to be in quantitative agreement with a vast variety of experiments involving different cell types and states. I am wondering if it would be useful to present (in a new figure 1) the universal master curve of reference 14 (Annu. Rev. Mater. Res. 41, 75-97, 2011) and contrast it with the simulation results reported here.

Reviewer #3 (Remarks to the Author):

This manuscript described a theoretical framework to quantitatively understand the widely observed power-law rheology behavior of living cells.
To my knowledge, this has not been achieved previously in a way as quantitative as in this manuscript. This model thus would be very valuable in understanding cells and their mechanical behaviors under different conditions. Thus I think this manuscript meets the standard of Nature Communications. Nevertheless, I have some comments that should be addressed prior to publication.

1. The power-law exponent is adjusted by the amount of MT, as MT is an elastic component here. This suggests that different exponents in experiments come from different MT contents, which is not necessarily the case. I do understand the difficulty of the modeling, in that it is unnecessary to include more diverse components. But the authors should discuss this result, especially what it indicates, to avoid confusion or misinterpretation.

2. Is η on page 8 the same viscosity as the one used in Table 1, as they have the same symbol? If this is the background fluid, this typically is thought to be a few times more viscous than water according to the diffusion test, as it is an aqueous solution with proteins in it. The viscosity in Table 1 should be more like an effective viscosity of the entire cytoplasm. Otherwise, a timescale of 300 s won't make sense. Or is this viscosity here not the viscosity of the background fluid but rather an effective viscosity of the entire cytoplasm? This should be clarified.

3. Comparing Fig 4a (untreated) and 4d (cytoD treated), why do E2 and E3 change upon cytochalasin D treatment? CytoD dissolves F-actin but E2 is the stiffness of microtubules. Discussion regarding this would be helpful.

4. While this model successfully describes the power-law rheology measured on the level of the whole cell and the cortex, the model itself doesn't separate the thick F-actin-rich cortex and the more diluted internal cytoplasm. Nevertheless, the cytoplasm also shows the classical power-law rheology as shown by many groups (for example https://doi.org/10.1016/j.cell.2014.06.051), with a power ranging from 0.1 to 0.5 as well. It seems this model can also do a great job in describing the power-law rheology observed in the cytoplasm alone. As the cytoplasm is known to be much softer than the whole cell or the cell cortex, this distinction would be important to acknowledge. It would be great if some discussion can be added.

5. On page 7, it is stated that MTs and microfilaments cross and connect with each other. They indeed interpenetrate with each other but not necessarily connect to each other.

6. It is known that the cell cytoskeleton is composed of F-actin, MTs and also intermediate filaments. Recently

Response to Reviewers

The authors wish to thank the reviewers for their very helpful comments and suggestions. The paper has been revised carefully, and the main changes are marked in blue. Below is an itemized response to each reviewer's comments.

Reviewer #2

In this manuscript Hang et al. introduce a theoretical model which accounts for the essential mechanical properties of cell membrane, cytoplasm and cytoskeleton. They demonstrate that their model can correctly reproduce not only the universal power-law characteristics of cell rheology but also the relationship between the power-law exponent and cell stiffness. In fact, the power-law exponent is found in this study to be in the range between 0.1 and 0.5, as observed in many experiments, and decreases linearly with the logarithm of cell stiffness. To the best of my knowledge, these results together cannot be captured by any existing model of cell mechanics.
The Authors also develop a self-similar hierarchical model that can reproduce the power-law dependences of the creep compliance with time and of the storage and loss moduli with frequency. Taken together, these method developments are noteworthy and will contribute significantly to the progress in cell mechanics. The conclusions of this work are clearly supported by the presented results. I do not find any flaws in the data analysis, interpretation or conclusions. The methodology is sound. The only weakness of this manuscript is that there may be not enough detail provided in the methods for the work to be reproduced. Since the Authors use commercial finite-element software (Abaqus), they could simply deposit exemplary input/output files alongside with Supplementary Notes. It also would be a fine service to the community if the Authors could deposit the Python scripts used in this work to generate the cellular configurations and evaluate the finite-element model solutions.

Answer: We thank the reviewer for his/her positive recommendation of our paper. In response to this comment, we have added more details related to our modeling in the "Methods" section, and provided Python scripts along with the paper to ensure that the results can be reproduced by interested readers. We have added the following statements in the "Methods" section (Page 17) and provided the Python scripts (see Supplementary Materials).

"For both step and cyclic loads, we simulated the viscoelastic cytoplasm and viscoelastic membrane by the Kelvin-Voigt model, with the constitutive relation σ = Eε + η(dε/dt) and EMF = 2400 MPa. A detailed description of the Kelvin-Voigt model can be found in Supplementary Note 1. The detailed geometric parameters of the cell structure can be seen in the "Model" section. All simulations were carried out by using the commercial finite element software Abaqus 6.13-1 and can be set up automatically by running a Python script (see Supplementary Materials) in Abaqus 6.13-1."

Suggestion: The predictions of the new model seem to be in quantitative agreement with a vast variety of experiments involving different cell types and states. I am wondering if it would be useful to present (in a new figure) the universal master curve of reference 14 (Annu. Rev. Mater. Res. 41, 75-97, 2011) and contrast it with the simulation results reported here.

Answer: We thank the reviewer for this suggestion. As presented in Fig. 6 of Ref. 14, the power-law exponent can be collapsed into a universal master curve which decreases linearly with the cell stiffness in a semi-logarithmic coordinate. As predicted by Equation (7) in our manuscript, we show that when the cellular stiffness is not high, the creep compliance curves can intersect at a point (τ0, j0), and the power-law exponent decreases linearly with the cell stiffness in a semi-logarithmic coordinate. This prediction is consistent with the universal master curve of Ref. 14. Moreover, when the cellular stiffness is high, the power-law exponent tends to a constant, as shown in Fig. 5b of our manuscript and reported in the experiments of Ref. 5. We have summarized many experimental data related to different cell types and cell states [Refs. 5, 11, 45, and 46] and found that our predictions agree well with the experimental results (see Fig. R1). In response to this comment, we have added the following statements on the relationship between the power-law exponent and cell stiffness (Page 14), a new Figure 6 (Page 15), and relevant references to experimental results.
"Here, we summarize existing experimental results 5,11,45,46 for different cell types and states, and plot the power-law exponent with respect to the cellular stiffness, as shown in Fig. 6. It is clearly seen that our predictions agree well with the experimental results and the cells become more solid-like as their stiffness increases. These results confirm our predictions that for moderate cellular stiffness, the power-law exponent decrease linearly with the cell stiffness in a semi-logarithmic plot. Moreover, the power-law exponent of cells gradually converges to a certain threshold with increasing stiffness, which was not discussed in previous literature 14 . These broad agreements between experimental findings and our predictions show the robustness of our selfsimilar hierarchical model in describing cell rheology." Reviewer #3 This manuscript described a theoretical framework to quantitatively understand the widely observe powerlaw rheology behavior of living cells. Power-law exponent α Normalized stiffness (K/K n ) Answer: We agree with the reviewer that some factors, such as the amount of MT and the viscoelasticity of cytoplasm, can result in different power-law exponents of cell rheology. As reported in many experiments (e.g., Refs. 10-14), the power-law exponents of cell rheology vary in the range of 0.1~0.5, depending on the cell types or cell states. By varying the amount of MT, we showed that the power-law exponent can be quantitatively tuned, which may explain why the power-law exponent differs for different cell types or states. In fact, other factors, such as the viscoelasticity of cytoplasm, the amount of microfilaments and intermediate filaments can also quantitatively regulate the power-law exponent. In response to this comment, we have clarified some statements in the revised manuscript and added more discussion on the changes of the power-law exponent (Page 7). "It can be seen that the increase in the amount of MTs can reduce the power-law exponent from 0.564 to 0.189." "In fact, changes in MT number and stiffness are among a number of factors that can alter the power-law exponent of cells. Similarly, changes in mechanical properties of other components of the cytoskeleton (MFs 5, 6 and intermediate filaments 36,37 ) or the cytoplasm 38, 39 can also regulate the power-law exponent of cells. Therefore, it is possible that through re-configuring the network of the cytoskeleton or changing the mechanical properties of the cytoplasm, the power-law exponent can be quantitatively tuned in the range of 0.1 ~ 0.5, which may explain why the power-law exponent differs for different cell types or states (e.g., drug-induced) 5, 7, 8 ." Table 1, as they have the same symbol? If this is the background fluid, this typically is thought to be a few times more viscous than water according to the diffusion test, as it is an aqueous solution with proteins in it. The viscosity in Table 1 should be more like an effective viscosity of the entire cytoplasm. Otherwise, a timescale of 300 s won't make sense. Or this viscosity here is not the viscosity of the background fluid but rather an effective viscosity of the entire cytoplasm? This should be clarified. Answer: The symbol η in the Kelvin-Voigt model (Table 1) and the self-similar hierarchical model (Figure 3 on Page 8 in the previous version; Page 9 in this revised version) has the same physical meaning and represents the effective viscosity of the entire cytoplasm. The cytoplasm is a crowded aqueous solution filled with ions and proteins. 
Hence, different cells exhibit different viscosities, affected by the volume fraction of each component in the cytoplasm as well as the interaction between the cytoplasm and the cytoskeleton. In this work, the effective viscosity of the cytoplasm is represented by η throughout the whole manuscript. To avoid confusion or misinterpretation, we have clarified this point in the revised manuscript. In response to this comment, we have added the following statements in the revised manuscript (Page 4).

"The cytoplasm is a crowded aqueous solution filled with ions and proteins. Hence, different cells exhibit different viscosities, depending on the volume fraction of each component in the cytoplasm, as well as the interaction between the cytoplasm and the cytoskeleton. In this sense, the viscous coefficient η represents the effective viscosity of the entire cytoplasm."

3. Comparing Fig 4a (untreated) and 4d (cytoD treated), why do E2 and E3 change upon cytochalasin D treatment? CytoD dissolves F-actin but E2 is the stiffness of microtubules. Discussion regarding this would be helpful.

Answer: We thank the reviewer for pointing out this important issue. From a macroscopic perspective, the cell is treated as a 3-level self-similar hierarchical structure, with E1, E2, and E3 representing, respectively, the effective stiffness of the cytoplasm, of the MTs in the load direction, and of the transverse expansion of the cytoskeleton and the cytoplasm, and with η representing the effective viscosity of the entire cytoplasm. Since the drug cytochalasin D can dissolve actin filaments, there will be a significant reduction in the effective stiffness of the cytoskeletal network in both the loading and transverse directions, i.e., a reduction in both E2 and E3 (Fig. 4d). In addition, the results of Fig. 4b (Histamine treated) and Fig. 4c (DBcAMP treated) are also discussed in the revised manuscript. In response to this comment, we have added some statements (Page 9) and more discussion on the effects of different drugs on the changes of mechanical properties of cells (Page 11) in the revised manuscript.

"In this way, from a macroscopic perspective, the cell is treated as a 3-level self-similar hierarchical structure, with E1, E2, and E3 representing, respectively, the effective stiffness of the cytoplasm, of the MTs in the load direction, and of the transverse expansion of the cytoskeleton and the cytoplasm, and with η representing the effective viscosity of the entire cytoplasm."

"The drug Histamine 43 can enhance the permeability of cells, which reduces the cytoplasmic stiffness E1 (see Fig. 4(b)). This drug also promotes cell contraction, which can increase the stiffness of the cytoskeletal network (E2 and E3). For cells treated with DBcAMP, the contraction of cells is inhibited 8,44, which reduces the stiffness of the cytoskeletal network (E2 and E3), as shown in Fig. 4(c). When the cells are treated with cytochalasin D 8, the cytoskeleton is dissolved, resulting in a reduction in the stiffness (E2 and E3) of the cytoskeletal network (see Fig. 4(d))."

4. While this model successfully describes the power-law rheology measured on the level of the whole cell and the cortex, the model itself doesn't separate the thick F-actin-rich cortex and the more diluted internal cytoplasm. Nevertheless, the cytoplasm also shows the classical power-law rheology as shown by many groups (for example https://doi.org/10.1016/j.cell.2014.06.051), with a power ranging from 0.1 to 0.5 as well.
It seems this model can also do a great job in describing the power-law rheology observed in the cytoplasm alone. As the cytoplasm is known to be much softer than the whole cell or the cell cortex, this distinction would be important to acknowledge. It would be great if some discussion can be added.

Answer: We thank the reviewer for pointing out this extension of our model. Experiments showed that the elastic modulus of the cytoplasm also follows a power-law dependence on loading frequency, G′ ∝ ω^β, with exponent β = 0.15. In our model, the structural details of the cytoplasm are ignored, since it is much softer than the cytoskeleton. Thus, the whole cytoplasm is considered as a viscoelastic medium, i.e., the 1st level hierarchy of the model. When one studies the rheological response of a local region of the cytoplasm, the structural details of the cytoplasm must be considered, and the self-similar hierarchical model can then be used to study the rheology of the cytoplasm. The interstitial fluid inside the cytoplasm (containing water, ions and small proteins) can be considered as the 1st level hierarchy, the large proteins in the cytoplasm as the 2nd level hierarchy, and the interaction between the large proteins as the 3rd level hierarchy. In this sense, the present model can be extended to investigate the dynamical mechanical response of the cytoplasm. In response to this comment, we have added some discussion on the extension of our model to the cytoplasm (Pages 11 and 12) and also added relevant references.

"In addition, the self-similar hierarchical model can also be used to study the power-law rheology observed in the cytoplasm, whose storage modulus follows a similar power-law form G′ ∝ ω^β with β = 0.15 38. When using this model to investigate the rheological response of the cytoplasm, the structural details of the cytoplasm should be considered. The interstitial fluid inside the cytoplasm (containing water, ions and small proteins) can be treated as the 1st level hierarchy, the large-scale proteins in the cytoplasm as the 2nd level hierarchy, and the interactions between the proteins as the 3rd level hierarchy. In this sense, the present model can be extended to investigate the dynamical mechanical response of the cytoplasm."

"With the self-similar hierarchical model, one can describe, explain, and predict the rheological behavior of living cells with different types or states, as well as the viscoelastic cytoplasm."

5. On page 7, it is stated that MTs and microfilaments cross and connect with each other. They indeed interpenetrate with each other but not necessarily connect to each other.

Answer: In response to this comment, we have revised the relevant descriptions in the manuscript (Page 8).

"In cells, abundant MTs and microfilaments interpenetrate with each other to form a three-dimensional cytoskeleton network bathed in the cytoplasm 40-42 composed of water, solutes, and small molecules."

6. It is known that the cell cytoskeleton is composed of F-actin, MTs and also intermediate filaments.

Answer: We thank the reviewer for pointing out the role of intermediate filaments (IFs) in cell mechanics. Indeed, as with microtubules (MTs) and microfilaments (MFs), IFs also play an important role in cell mechanics, as reported in recent experiments (Ref. 36). When studying the cell's creep response under small deformations, the effect of IFs can be ignored, since they contribute little to the cortical stiffness in this case (Ref. 47). As pointed out by the reviewer, vimentin IFs make a critical mechanical contribution to the overall cell mechanics, especially at large deformations, substantially enhancing the strength, stretchability, resilience, and toughness of cells (Ref. 36). We showed that by treating IFs and MFs as strings in a prismatic tensegrity structure (Supplementary Note 4), the cells can exhibit significant strain-stiffening behavior, as found in many experiments
As pointed out by the reviewer, vimentin IFs make a critical mechanical contribution to the overall cell mechanics, especially at large deformations, substantially enhancing the strength, stretchability, resilience, and toughness of cells (Ref. 36). We showed that by treating IFs and MFs as strings in a prismatic tensegrity structure (Supplementary Note 4), the cells can exhibit the significant strain-stiffening behavior found in many experiments (Refs. 15-17 and 36). In response to this comment, we have added some discussion on the effect of IFs (Pages 15 and 16) and Supplementary Note 4 in the revised manuscript.

"When studying the creep response of cells under small deformations, we have ignored the effect of intermediate filaments (IFs), since they contribute little to the cortical stiffness in this case [47]. Very recently, Hu et al. studied the effect of IFs on the mechanical properties of cells, and showed that under large deformations the IF network behaves as a strain-stiffening hyperelastic network that substantially enhances the strength, stretchability, resilience, and toughness of cells [36]. Supplementary Note 4 shows that by treating IFs and MFs as strings in a prismatic tensegrity structure, the cells can exhibit the remarkable strain-stiffening behavior found in experiments [15-17,36], while retaining the rheological characteristics. In addition, IFs play an important role in the mechanics of epithelial monolayers [37,48], which can also be studied by our model. This suggests a strong potential of self-similar hierarchical models for investigating the mechanics of natural biological materials."
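The power-law fits quoted in this exchange (G′ ∝ ω^β with β ≈ 0.15 for the cytoplasm, and 0.1-0.5 more generally) reduce to a linear regression in log-log space. The following is a minimal sketch of such a fit, assuming NumPy is available; the synthetic data and the 50 Pa prefactor are purely illustrative and not taken from the manuscript.

```python
import numpy as np

def fit_power_law(omega, g_storage):
    """Fit G'(omega) = A * omega**beta by linear regression in log-log space."""
    log_w, log_g = np.log(omega), np.log(g_storage)
    beta, log_a = np.polyfit(log_w, log_g, 1)  # slope = beta, intercept = ln(A)
    return np.exp(log_a), beta

# Illustrative synthetic data with beta = 0.15 (the cytoplasm exponent cited above)
rng = np.random.default_rng(0)
omega = np.logspace(-1, 2, 30)                                    # loading frequency, rad/s
g_prime = 50.0 * omega**0.15 * rng.normal(1.0, 0.02, omega.size)  # storage modulus, Pa

a_fit, beta_fit = fit_power_law(omega, g_prime)
print(f"A = {a_fit:.1f} Pa, beta = {beta_fit:.3f}")  # beta is recovered near 0.15
```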
Effects of Processing on Starch Structure, Textural, and Digestive Property of "Horisenbada", a Traditional Mongolian Food

Horisenbada, prepared by the soaking, steaming, and baking of millets, is a traditional Mongolian food and is characterized by its long shelf life, convenience, and nutrition. In this study, the effects of processing on the starch structure and the textural and digestive properties of millets were investigated. Compared to the soaking treatment, steaming and baking significantly reduced the molecular size and crystallinity of the millet starch, while baking increased the proportion of long amylose chains, partially destroyed starch granules, and formed a closely packed granular structure. Soaking and steaming significantly reduced the hardness of the millets, while the hardness of baked millets was comparable to that of raw millet grains. By fitting digestion curves with a first-order model and logarithm-of-the-slope (LOS) plots, it was shown that the baking treatment significantly reduced the digestibility of millets and the steaming treatment increased it, while soaked millets displayed digestive properties similar to those of raw millets in terms of both digestion rate and digestion degree. This study could improve the understanding of the effects of processing on the palatability and health benefits of Horisenbada.

Introduction

Horisenbada, prepared by the soaking, short-time steaming, and high-temperature baking of millets, is a traditional Mongolian food. Horisenbada is usually consumed as porridge with tea or yogurt, having good palatability, strong satiety, and a unique flavor. It is an important daily staple food for Mongolian steppe herdsmen due to its long shelf life, convenience, and nutrition. Millets are one of the major cereal grains, mainly distributed in arid and semi-arid areas of Africa and Asia (China and India) [1]. In general, millets are enriched in essential nutrients, such as protein, fat, carbohydrates, minerals, vitamins, and bioactive compounds [2]. Phenolic acids, flavonoids, and other bioactive compounds in millets exhibit multiple health benefits, including antioxidant and antimicrobial activities [3]. Besides, millets can be processed in various ways, including cooking, fermentation, toasting, puffing, etc. [4], which could contribute to the alteration of millet structure, texture, and digestion properties [5,6]. During the manufacture of Horisenbada, the soaking, steaming, and baking processes are of prime importance. Soaking is a basic pretreatment widely used in cereal processing. For instance, presoaking before rice cooking allowed water molecules to diffuse into the inside of the rice kernel [7]. Presoaking made grains easier to gelatinize in later processing [8]. In addition, alkali soaking significantly increased the amylopectin amount in the leachate, which increased the stickiness of cooked rice [9], whereas soaking in diverse solutions, such as water, sodium chloride (1%, w/v), or sodium bicarbonate (0.75%, w/v), did not affect the in vitro starch digestion [10]. Steaming is one of the most common grain-processing methods. A series of previous studies proved that the cooking method [11], steaming time [12], and steaming temperature [13] can influence the texture of cooked grains, which is essentially related to the gelatinization and molecular structure of the grains [14].
Our previous study found that parboiling of rice grains might lead to a less sticky texture, because starch gelatinization in the surface layer of cooked rice blocks starch leaching [15]. Furthermore, the stickiness between cooked rice grains was strongly affected by the molecular structure of the leached starch [16]. One study also reported that the morphological structure and in vitro digestion rate of cooked rice varied with the cooking method [17]. Baking is a dry-heating process widely used in the manufacture of grain foods. It has been proven that baking enhanced the flavor by browning the surface of millet grains and increasing the variety of aroma compounds, giving millets a specific odor [18]. In addition to changing the flavor properties, dry heating also changed the molecular size of the grain starch and reduced the long amylose chains with degrees of polymerization (DP) of ~5000-20,000 [19]. It was also found that the baking process markedly decreased the digestibility of grain starch, which was associated with the aggregation of compact starch granules [20]. Considering that the production of Horisenbada mainly involves soaking, steaming, and baking processes, it is reasonable to propose that the effects of processing on the textural and digestive properties of Horisenbada might be strongly associated with structural changes of the millet starch. So far, the effects of soaking, steaming, and baking on the starch structure, texture, and digestion of millet grains are not known. Therefore, the associations between millet starch structure (molecular structure, crystal structure, and granule structure) and functional properties (hardness, digestion) were explored in this study.

Preparation of Horisenbada

Ten grams of dehulled millets were soaked in 50 mL of distilled water at 35 °C for 15 min, and then the water was removed. The millets were then steam-cooked for 4 min without extra water, laid on a baking tray, and rapidly baked in an oven at 180 °C for 13 min. For comparison, the millet sample soaked in 50 mL of distilled water at 35 °C for 15 min and then dried at 60 °C for 12 h was designated the "Soaked" sample; the one soaked in 50 mL of distilled water at 35 °C for 15 min, steam-cooked for 4 min, and then dried at 60 °C for 12 h was designated the "Steamed" sample; the counterpart processed with the full Horisenbada procedure was designated the "Baked" sample.

Molecular Size Distribution

The determination of the molecular size distribution was conducted following the method described elsewhere [21]. Eight milligrams of millet powder were treated with protease and sodium bisulfite and then precipitated by adding 10 mL of ethanol. The sample was dissolved in a DMSO solution with 0.5% (w/w) LiBr (DMSO/LiBr). The size-exclusion chromatography system consisted of GRAM 30 and GRAM 3000 columns (PSS, Mainz, Germany) and an RID-10A refractive index detector.

Chain Length Distribution

The deproteinization treatment was consistent with Section 2.3. The sample was then dissolved in 0.9 mL of hot deionized water. After cooling to room temperature, 0.1 mL of acetate buffer at pH 3.7 and 6.25 µL of isoamylase were added for starch debranching. The mixed solution was incubated at 37 °C for 3 h and then heated at 80 °C for 2 h. The debranched starch sample was freeze-dried and ultimately dissolved in the DMSO solution with 0.5% (w/w) LiBr (DMSO/LiBr) for size-exclusion chromatography (SEC) separation with GRAM 100 and GRAM 1000 columns [22].
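For bookkeeping in analysis scripts, the three treatment histories described under "Preparation of Horisenbada" can be captured in a small configuration structure. The sketch below is purely illustrative: the key names and the `describe` helper are hypothetical and not part of the study, while the numeric conditions are the ones stated above.

```python
# Hypothetical encoding of the treatment conditions stated above; key names
# and the describe() helper are illustrative only.
TREATMENTS = {
    "Soaked":  [("soak",  {"water_mL": 50, "temp_C": 35,  "time_min": 15}),
                ("dry",   {"temp_C": 60,  "time_h": 12})],
    "Steamed": [("soak",  {"water_mL": 50, "temp_C": 35,  "time_min": 15}),
                ("steam", {"time_min": 4}),
                ("dry",   {"temp_C": 60,  "time_h": 12})],
    "Baked":   [("soak",  {"water_mL": 50, "temp_C": 35,  "time_min": 15}),
                ("steam", {"time_min": 4}),
                ("bake",  {"temp_C": 180, "time_min": 13})],
}

def describe(sample: str) -> str:
    """Render a treatment history as a one-line summary."""
    steps = " -> ".join(f"{step}{params}" for step, params in TREATMENTS[sample])
    return f"{sample}: {steps}"

print(describe("Baked"))
```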
X-ray Diffraction (XRD)

An X-ray diffractometer (D2 PHASER, Bruker AXS GmbH, Karlsruhe, Germany) with Cu Kα radiation (λ = 0.154 nm) was used to record the XRD patterns. The scanning range was 5°-40° at a rate of 2°/min with a scanning step of 0.02°. Before the measurement, the moisture content of the samples was equilibrated to about 10% by standing at room temperature overnight. The crystallinities of the samples were determined with HighScore Plus 5.1 software (Malvern Panalytical Ltd., Malvern, UK).

Scanning Electron Microscopy (SEM)

The morphology of the millet particles was examined using a JSM-7610FPlus SEM (JEOL Ltd., Tokyo, Japan). Samples were first fixed on an aluminum stub before gold sputtering. The millet was then imaged by the scanning electron microscope at an accelerating voltage of 15 kV.

Textural Profile Analysis (TPA)

The determination was carried out according to the method of Li et al. [15]. After cooling the millets to room temperature, a single whole millet grain was placed on the base plate. The target-distance test was used for measurements with a texture analyzer (Brookfield Engineering Laboratories, Middleboro, MA, USA) fitted with a TA5 probe. The compression settings were as follows: target value, 0.9 mm; trigger load, 1 g; test speed, 0.70 mm/s. After the compression test, the hardness value was recorded by the TexturePro CT software. Each measurement was performed 20 times per sample.

Millet Digestion

Millet grains were ground and sifted through an 80-mesh sieve. Millet powder samples containing 90 mg of starch were first cooked in a 50 mL centrifuge tube with 6.0 mL of deionized water at 100 °C for 30 min and then cooled to 37 °C in a water bath. A 5.0 mL pepsin solution (1 mg per 1 mL of 0.02 mol/L HCl) was added to the samples; meanwhile, 5.0 mL of 0.02 mol/L HCl was added to the controls. After incubation at 37 °C for 30 min, all sample solutions were neutralized with 5 mL of 0.02 mol/L NaOH. In quick succession, 5.0 mL of a porcine α-amylase/amyloglucosidase enzyme mixture (135 U porcine α-amylase and 1 U amyloglucosidase in 5 mL of 0.2 mol/L sodium acetate buffer at pH 6) was added to the samples, which were incubated in a water bath at 37 °C and stirred with a magnetic stirrer bar at 300 rpm. Afterwards, 100 µL aliquots were transferred and dispersed into 900 µL of absolute ethanol to terminate the reaction at a series of time points. Then, 100 µL of the digestion solution was added to 3.0 mL of GOPOD reagent (glucose oxidase/peroxidase determination reagent). All samples were incubated at 50 °C for 20 min. A 200 µL aliquot was then transferred to 96-well plates and the absorbance measured at 510 nm with a Synergy H1 microplate reader (BioTek Inc., Winooski, VT, USA). The digestibility was calculated according to reference [23] using the following Equation (1):

C_t (%) = [ΔA_(Sample) / ΔA_(D-Glucose Standard)] × 10 × 210 × (162/180) / m_starch × 100    (1)

where ΔA_(Sample) is the absorbance at each time point, ΔA_(D-Glucose Standard) is the absorbance from the standard D-glucose solution, and m_starch is the starch weight (90 mg) in each sample. The values 10 × 210 and 162/180 are, respectively, the computational multiple from the 100 µL aliquots to the 21.0 mL reaction solution and the transformation coefficient from starch to glucose in weight.

Fitting to First-Order Kinetics

Starch digestion data can be fitted to the first-order Equation (2):

C_t = C_∞ (1 − e^(−kt))    (2)

Equation (2) can then be transformed into the LOS form, in which there is a linear relationship between ln(dC_t/dt) and t, as given by Equation (3):

ln(dC_t/dt) = ln(C_∞ k) − kt    (3)

The values of k and C_∞ are calculated from the slope and the intercept, which represent −k and ln(C_∞ k), respectively. In this study, the slope dC_t/dt was estimated with the second-order finite-difference formula (C_(i+1) − C_(i−1))/(t_(i+1) − t_(i−1)), and ln(dC_t/dt) was plotted as a function of (t_(i+1) + t_(i−1))/2 for all points except the first and the last [24].
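Equations (2) and (3) lend themselves to a compact implementation. The following is a minimal sketch of the LOS fitting procedure under the central-difference scheme just described, assuming NumPy; the synthetic k and C_∞ values are illustrative only and are not the fitted values reported in this study.

```python
import numpy as np

def fit_los(t, c):
    """Estimate k and C_inf from digestion data via the LOS method (Eqs. 2-3).

    Central differences approximate dC/dt at interior midpoints; a straight
    line fitted to ln(dC/dt) vs t has slope -k and intercept ln(C_inf * k).
    """
    t, c = np.asarray(t, float), np.asarray(c, float)
    dcdt = (c[2:] - c[:-2]) / (t[2:] - t[:-2])   # second-order finite differences
    t_mid = (t[2:] + t[:-2]) / 2.0               # midpoints, as in [24]
    keep = dcdt > 0                              # ln() requires positive slopes
    slope, intercept = np.polyfit(t_mid[keep], np.log(dcdt[keep]), 1)
    k = -slope
    c_inf = np.exp(intercept) / k
    return k, c_inf

# Synthetic curve with k = 0.030 min^-1 and C_inf = 85 % (illustrative values)
t = np.arange(0.0, 181.0, 10.0)
c = 85.0 * (1.0 - np.exp(-0.030 * t))
print(fit_los(t, c))   # should recover roughly (0.030, 85.0)
```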
Statistical Analysis

All data were analyzed by analysis of variance (ANOVA) with Tukey's pairwise comparisons for statistical significance, and all values are expressed as mean ± standard deviation. SPSS 22.0 software (SPSS Inc., Chicago, IL, USA) was used for the data analysis.

Molecular Size Distribution of Branched Starch

Typical SEC weight distributions of fully branched millet starch are presented in Figure 1. The average hydrodynamic radius, denoted R̄h, was calculated and is shown in Table 1. The molecular size of branched starch was mainly distributed at Rh ≈ 1-1000 nm [25]. Two populations of branched starch with different molecular sizes could be found: amylose (AM, 10 < Rh ≤ 100 nm) and amylopectin (AP, Rh > 100 nm) [26]. As shown in Figure 1, the proportion of starch molecules with Rh > 100 nm was decreased for steamed and baked millet starch. As shown in Table 1, the R̄h of steamed and baked millet starch was significantly reduced, with the R̄h of baked millet starch in particular decreasing to less than 10 nm. The three millet cultivars followed the same trend across treatments, showing that the steaming and baking treatments significantly reduced the molecular size of the millet starch, especially the baking process. This also indicates that the steaming and baking processes might cause the degradation of starch molecules, especially amylopectin. A study on maize starch also supported that high temperature can cause the degradation of long starch branches [19].

Figure 2 presents typical CLDs of debranched starch. The starch CLDs, w(log X), obtained from the DRI signal were plotted against DP X. As described elsewhere [15], the populations of chains with X ≤ 100 and 100 < X ≤ 10,000 were defined as AP chains and AM chains, respectively. The amylopectin CLDs showed the usual features of two large amylopectin peaks, while for the amylose CLDs the millet starch also displayed two peaks, one at X ≈ 200-300 and the other at X ≈ 700 [27]. Clearly, raw and soaked starch had similar CLD profiles, while steamed and baked starch had a higher peak in the amylose region; baked starch in particular displayed an extremely high peak. As shown in Table 1, the amylose region could be further divided into three groups, 100 < X ≤ 1000, 1000 < X ≤ 5000, and 5000 < X ≤ 20,000, defined as short, medium, and long amylose chains, respectively [28]. Interestingly, compared to raw and soaked starch, the proportion of short amylose chains with X ≈ 100-1000 was significantly increased for baked and steamed starch, especially for baked starch, where it almost doubled. Compared with raw and the other processed millets, the chain length distribution of baked millet starch had a higher proportion of chains with X ≈ 100-1000 and X ≈ 1000-5000. This might be caused by the degradation of long amylose chains and/or the repolymerization of short starch chains [19]. It was reported that a part of the medium and long amylose chains were depolymerized under high temperature, thereby generating short chains with X ≈ 100-1000 and X ≤ 100.
On the other hand, as the heating temperature continuously increased, short amylopectin chains were also repolymerized by forming new glycosidic bonds and presented a nonlinear structure with a higher hydrodynamic radius [29], contributing to the apparent increase of starch chains with X ≈ 100-1000.

The Crystalline Structure of Different Processed Millets

The XRD patterns and relative crystallinities of starches from raw and differently processed millets are illustrated in Figure 3. The crystallinity of the raw millet starch differed among the three millet cultivars, which might be related to their different amylose contents. All raw and processed millet starches displayed the typical A-type diffraction pattern, which includes a doublet peak around 2θ ≈ 17° and 18° and two single peaks at 2θ ≈ 15° and 30° [30]. Obviously, for all three millet cultivars, the soaking treatment did not change the relative crystallinity of the millet starch, while the steaming and baking processes significantly reduced the peak intensities, with the baked millet starch displaying the lowest relative crystallinity. This indicates that the steaming and baking processes gelatinized the millet starch to some extent.

The Granule Structure of Different Processed Millets

The morphologies of raw and differently processed millets were analyzed using SEM. As shown in Figure 4, the raw, soaked, and steamed millet starch of M1 displayed a spherical shape, whereas the granular structure of the raw, soaked, and steamed millet starch of M2 and M3 was polygonal. Although steaming gelatinized the millet starch, the granular morphology was still observed by SEM, which may be associated with the short steaming time. For all three millet cultivars, the baked millet starch granules were partially destroyed and aggregated into irregular lumps with a closely packed granule structure. This might be attributed to starch gelatinization, moisture loss, and inter-granule connections under the high temperature [31].

Textural Property of Different Processed Millets

As shown in Figure 5, the three millet cultivars showed significantly different hardness. This might be due to variations in amylose content and in the proportion of amylose branches [32]. Regarding the effect of processing on millet texture, all cultivars presented a similar trend: raw millet was generally harder than soaked and steamed millet, while the hardness of baked millet was close to that of raw millet.
It was readily understood that the soaking process could soften the millet grains by increasing the moisture content, and steaming could decrease the hardness of the millet grains by gelatinizing the millet starch, whereas the baking treatment further dried the millet grains and accordingly increased their firmness. This result was consistent with the SEM observations, indicating that millet starch aggregated into lumps under high temperature can increase the hardness of the millet particles [20,31]. Compared with millet samples M1 and M2, M3 showed smaller differences between treatments, which might be caused by the lower amylose content of M3 [21].

Millet Starch Digestion

In vitro digestion curves of raw and differently processed millets are shown in Figure 6. All curves present a similar trend: the digestion rate increased rapidly at first and then slowed down until reaching equilibrium. Generally, raw and soaked millets displayed overlapping digestion curves. Compared to the other processed millets, steamed millet presented the highest digestibility, due to the gelatinization of the starch granules [33]. Interestingly, although the baking process also gelatinizes millet starch, the digestion curve of baked millet exhibited the lowest digestibility.
Based on the SEM results, the baking process made the starch granules pack tightly; in addition, TPA showed that the baked millet possessed higher mechanical hardness, which could block the enzymes' access and hydrolysis pathways. Further, the generation of nonlinear structures, indicated by the starch CLD, could also reduce starch digestibility. In addition, the effect of processing on digestion remained consistent among the three cultivars (Supplementary Materials, Figure S1).

Further parameterizing the digestion of the millet samples, LOS plots and model-fitting curves are shown in Figure 7 and Table 1 (those for M2 and M3 are displayed in Figure S2). The LOS plots proved that there was a linear correlation between digestion time and the logarithmic form of the digestion data. Meanwhile, first-order kinetics were used to fit the digestion curves and predict the reaction end-point of starch digestion. Two relevant parameters, k and C∞, were calculated from the LOS plots: k is the starch digestion rate coefficient, and C∞ is the estimated percentage of starch digested at the end-point of the reaction. As shown in Table 1, steamed millet and baked millet presented the highest and lowest starch digestion rates, respectively. Meanwhile, the amount of digestible starch after the steaming process was the highest, while the quantity of digestible starch after the baking treatment was reduced to the minimum in terms of the C∞ values. In addition, the k and C∞ values of raw and soaked millet showed no significant differences, consistent with the observations in Figure 6.
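As a usage note for the `fit_los` sketch given earlier, the snippet below regenerates idealized first-order curves and recovers their parameters. The k and C∞ values are invented for illustration, chosen only to mimic the qualitative ordering reported in Table 1 (steamed fastest and most complete, baked slowest and least complete); they are not the measured values.

```python
import numpy as np

# Illustrative parameters only -- not the measured values from Table 1.
t = np.arange(0.0, 181.0, 10.0)  # digestion time, min
examples = {"Steamed": (0.045, 92.0), "Raw": (0.030, 80.0),
            "Soaked": (0.029, 79.0), "Baked": (0.018, 62.0)}

for name, (k, c_inf) in examples.items():
    c = c_inf * (1.0 - np.exp(-k * t))   # Equation (2)
    k_hat, c_hat = fit_los(t, c)         # fit_los defined in the sketch above
    print(f"{name:8s} k = {k_hat:.3f} min^-1, C_inf = {c_hat:.1f} %")
```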
The soaking, steaming, and baking processes significantly changed the digestive properties of the millet grains, and the variation trends remained consistent across all three cultivars. In detail, the steamed millet showed rapid digestion, consistent with its highest k and C∞ values. The increased digestibility may be caused by the gelatinization of the millet starch during the steaming process. The soaking process did not significantly change the molecular, crystalline, or granular structure of the millets, which explains the similar digestion properties of the raw and soaked millets. Interestingly, the baked millet displayed the most desirable digestive properties in terms of the lowest k and C∞ values, even though the molecular size and relative crystallinity of the baked millet starch were the lowest, indicating that the degree of gelatinization is not the only factor affecting cereal digestion. On the one hand, it has been reported that the baking process makes starch granules pack tightly and blocks the enzymes' access and hydrolysis pathways [20], which was also supported by the results on granular structure and hardness in this study. On the other hand, the glycosidic bonds newly formed inside the starch branches during the baking process might also contribute to the slow digestion of baked millets [19].

Conclusions

This study investigated the effects of various processes on the starch structure, texture, and digestibility of millet during the preparation of "Horisenbada". Among the different processing methods, the baking treatment affected the molecular structure, crystalline structure, and granular structure most significantly, which explains the markedly increased hardness and retarded digestibility. Possible links between the starch structure and the textural and digestive properties of millets as affected by the different treatments were also proposed, providing fundamental evidence for Horisenbada as a nutritional food with low digestibility. The study could be beneficial for understanding the structural reasons for the palatability and health benefits of Horisenbada, the traditional Mongolian food.
Antimicrobial Resistance Profile of E. coli Isolated from Raw Cow Milk and Fresh Fruit Juice in Mekelle, Tigray, Ethiopia

Aim. Foodborne illnesses represent a public health problem in developed and developing countries. They cause great suffering, are transmitted directly or indirectly between animals and humans, and circulate in the global environment. E. coli is among the causative agents and constitutes a major public health problem. The aim of this study was therefore to study the antimicrobial resistance profile of E. coli from raw cow milk and fruit juice.

Materials and Methods. A cross-sectional study was conducted from October 2016 to June 2017 on 258 samples collected from milk shops (n = 86), dairy farms (n = 86), and fruit juice houses (n = 86) in different subcities of Mekelle. Bacteriological procedures were used for the isolation of E. coli from the collected samples and for identification of the antimicrobial resistance profile.

Result. The overall mean viable bacterial counts and standard deviations of samples from milk shops, fruit juice, and dairy milk were found to be 8.86 × 10⁷, 7.2 × 10⁷, and 8.65 × 10⁷ CFU/ml and 33.87 × 10⁶, 6.68 × 10⁶, and 22.0 × 10⁶, respectively. Of the samples tested, 39 from milk shops (45.35%), 20 from fruit juice (23.26%), and 24 from dairy farms (27.91%) were found to be positive for E. coli. The isolated E. coli were highly resistant to ampicillin (70%), sulfamethoxazole-trimethoprim (60%), clindamycin (80%), erythromycin (60%), chloramphenicol (50%), and kanamycin (50%) and were found to be susceptible to some antibiotics like gentamicin (100%), norfloxacin (100%), tetracycline (60%), polymyxin B (90%), and ciprofloxacin (90%).

Conclusion. The current study supports the finding that raw milk and fruit juice can be regarded as critical sources of pathogenic E. coli. This supports the need for strict monitoring and the implementation of effective hygienic and biosecurity measures along the whole food chain of these products, as well as prudent use of antimicrobials.

Background

Foodborne illnesses are an important challenge to public health and cause significant economic problems in many countries [1]. The crucial goal of all food safety programs is to prevent food products contaminated by potential pathogens from reaching the consumer. Milk is an excellent medium for bacterial growth, which not only spoils the milk and associated products but can also cause infections in consumers [2]. Because of the nature of its production, it is not possible to fully avoid contamination of milk with microorganisms; therefore, the microbial contamination of milk is an important tool in determining its quality [3,4]. Huge numbers of microbes can gain access to milk and various milk products, including E. coli, which is an indicator of milk and fruit juice contamination and constitutes a public health hazard [5]. E. coli infection is a disease that can be transmitted directly or indirectly between animals and humans [6]. It is common in developing countries such as Ethiopia because of the prevailing poor food handling and sanitation practices, inadequate food safety laws, weak regulatory systems, lack of financial resources to invest in safer equipment, and lack of education for food handlers [7]. In countries where foodborne illnesses have been investigated and documented, pathogens like S. aureus, Campylobacter, E. coli, and Salmonella species were recorded as major causes [1,8].
These organisms are known to cause acute gastroenteritis and may cause more serious septicemic disease, usually in the very young, the elderly, or immunocompromised subjects [9,10]. The ability of these microorganisms to survive under adverse conditions and to grow in the presence of low levels of nutrients and at suboptimal temperatures and pH values presents a formidable challenge to the agricultural and food-processing industries. The continued prominence of raw meats, eggs, dairy products, vegetable sprouts, fresh fruits, and fruit juices as the principal vehicles of human foodborne diseases poses a major challenge to coordinated sectoral control efforts within each industry [11]. Such juices have been found to be potential sources of bacterial pathogens, notably Escherichia coli, Salmonella spp., Shigella, and Staphylococcus aureus [12]. Currently, the other major concern for human health is the issue of antimicrobial resistance due to the use of antibiotics in livestock production as well as in human disease conditions in developing countries. In Ethiopia, the major antibiotics used for the treatment of animal and human diseases include penicillin, streptomycin, gentamicin, and oxytetracycline. Although antibiotic use in Ethiopia needs to be better understood, this resistance variation might be due to the indiscriminate use of antimicrobials without prescription in animal production and in the animal and human health sectors, which might favor selection pressure that increases the advantage of maintaining resistance genes in bacteria [13]. So far, there have been no studies on the burden and drug sensitivity profile of E. coli in Mekelle city, Northern Ethiopia. In this study, we isolated E. coli and determined its drug resistance profile.

Study Area. The study was conducted from October 2016 to June 2017 in Mekelle city. Mekelle is the capital city of Tigray Regional State, located about 783 km north of Addis Ababa, the capital city of Ethiopia, at geographical coordinates of 39°28′ east longitude and 13°32′ north latitude. The average altitude of the city is 2300 m.a.s.l., with a mean annual rainfall and average annual temperature of 629 mm and 22 °C, respectively [15]. The population of the city is 406,338 (195,605 males and 210,733 females) [15]. The city has seven subcities and 33 Kebeles, hosting over 139 juice houses, 48 dairy farms, and 123 milk shops (street vendor or retailer shops). Besides, the city possesses an extensive public transport network and an active urban-rural exchange of goods, with about 30,000 micro and small enterprises.

Study Design. A cross-sectional survey was conducted from October 2016 to June 2017 on raw cow milk and fresh fruit juice samples collected from different sources: raw milk shops, dairy milk supply centers, and juice houses in Mekelle. A purposive sampling technique was employed.

Research Methodology

Sampling Technique and Collection. A total of 258 food samples were collected, of which 172 were milk samples (86 from milk shops and 86 from dairy farms) and the remaining 86 were fresh juice samples (from 86 juice houses) in Mekelle city. After aseptic collection, samples were labeled, packed in sterile bottles, and transported in an ice box to the Microbiology and Public Health Laboratories, College of Veterinary Medicine, Mekelle University, for bacterial isolation.
Samples were processed immediately for bacterial identification to species level using culture media, and the isolates were then kept in a refrigerator at 4 °C with regular subculturing until microbial characterization [16].

Enumeration of Total Viable Count. One milliliter or one gram of the raw milk and fruit juice samples, respectively, was homogenized into 9 ml of peptone water/NSS, and 10 g/1 g of each food item was weighed out and homogenized into 90 ml/9 ml of sterile distilled deionized water. Serial dilutions were then prepared. From the 10-fold dilutions of the homogenates, 1 ml of the 10⁻⁶, 10⁻⁷, and 10⁻⁸ dilutions was cultured in replicate on standard plate count agar (HiMedia, India) using the pour-plate method. The plates were then incubated at 37 °C for 24 to 48 hrs. At the end of the incubation period, colonies were counted using an illuminated colony counter. The counts for each plate were expressed as colony-forming units of the suspension (CFU/g) [17].

Isolation and Characterization of Organism. One milliliter or one gram of the thoroughly mixed raw milk and fruit juice samples, respectively, was aseptically added to 9 ml of sterile nutrient broth and incubated overnight at 37 °C for 24 hours. The mixture of nutrient broth and raw milk or fruit juice sample was subcultured on sterile nutrient agar plates under aseptic conditions and incubated at 37 °C for 18-24 hours. Gram staining and further biochemical tests (catalase, carbohydrate utilization, indole production, citrate utilization, and methyl red tests) were carried out to identify the organisms isolated from the samples, according to the standard procedures described in [17,18].

Antimicrobial Susceptibility Test. The antimicrobial susceptibility test, by the Kirby-Bauer disk diffusion method, was performed for all E. coli isolates following the protocol in [19]. At least 4-5 well-isolated colonies of the same morphological type were selected from a nonselective agar plate (nutrient agar); just the top of each colony was touched and the growth transferred to a tube containing 4-5 ml of NSS or an equivalent medium such as peptone water broth. The inoculated broth was incubated at 35-37 °C until a slight visible turbidity appeared, usually within 2-8 hrs. The turbidity of the preincubated broth and the bacterial suspension was adjusted by comparison with a 0.5 McFarland turbidity standard. The standard and the test suspension were placed in similar 4-6 ml thin glass tubes or vials. The turbidity of the test suspension was adjusted with broth or saline and compared with the turbidity standard against a white background with contrasting black lines, until the turbidity of the test suspension equaled that of the standard [19]. The bacterial suspension was inoculated onto Mueller-Hinton agar (Oxoid, UK) with a sterile swab to cover the whole surface of the agar. The inoculated plates were left at room temperature to dry. The antimicrobial disks were kept at room temperature for one hour before use and then dispensed onto the surface of the media. The total viable counts for the milk shop, fruit juice, and dairy milk samples are indicated in Table 3. A statistically significant difference (χ² = 20.4580; p value = 0.000) was recorded among samples from the three sites (Table 3 and Figure 1).
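The reported χ² = 20.4580 can be reproduced from the per-source isolation counts given later in the Discussion (55 of 86 milk shop, 27 of 86 fruit juice, and 33 of 86 dairy farm samples positive). Below is a minimal sketch with SciPy, assuming those counts form a 3 × 2 contingency table of positives versus negatives:

```python
from scipy.stats import chi2_contingency

# E. coli positives / negatives per source (n = 86 each), from the Discussion
table = [[55, 86 - 55],   # milk shops
         [27, 86 - 27],   # fruit juice
         [33, 86 - 33]]   # dairy farms

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.4f}, dof = {dof}, p = {p:.2e}")  # chi2 ~ 20.46, p < 0.001
```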
Antimicrobial Susceptibility Profile of E. coli. The antimicrobial resistance profiles of the bacterial isolates from the raw cow milk and fruit juice samples are summarized in Table 4. E. coli showed resistance to antibiotics like ampicillin (70%), sulfamethoxazole-trimethoprim (60%), clindamycin (80%), erythromycin (60%), chloramphenicol (50%), and kanamycin (50%). The isolates were susceptible to some antibiotics like gentamicin (100%), norfloxacin (100%), tetracycline (60%), polymyxin B (90%), and ciprofloxacin (90%). The multidrug resistance profile of the E. coli isolates is also presented; the mean antibiotic sensitivity of E. coli from raw milk shop, fruit juice, and dairy milk samples was found to be 16.16, 21.44, and 28.24, respectively (Table 5). In general, the antimicrobial susceptibility test revealed that gentamicin, norfloxacin, polymyxin B, and ciprofloxacin were the antimicrobials active against the E. coli isolated in this study. A total of 13 multiple drug resistance patterns were observed. The highest MDR noted was AMP and STR (100%, 1/1). The maximum multiple drug resistance registered was resistance to one and three antibiotics, with the combinations AMP + STR and AMP + STR + ERY (Table 6).

Discussion

The current findings indicated that samples from milk shops, fruit juice, and dairy milk carried viable bacterial count loads of 8.86 × 10⁷, 7.2 × 10⁷, and 8.65 × 10⁷ CFU/ml, respectively, with an overall mean viable bacterial count of 8.24 × 10⁷ CFU/ml. The highest mean microbial load (8.86 × 10⁷ CFU/ml) was found in the milk shop samples. The current study showed a higher viable bacterial count than previous reports, such as the viable bacterial counts from fresh fruit juice samples in Ethiopia [21] and from raw milks (96.8%, ~10² CFU/ml) and raw milk cheeses (98.6%, ~10⁴ CFU/g) for which counts were available [22]. This variation could be due to differences in hygiene, personal awareness, and proper handling of containers and the food itself. Furthermore, viable bacterial counts of 3.93 ± 0.01 CFU/ml in milk samples from dairy farms in Khartoum State (Sudan) [23] and 3.64 ± 0.776 CFU/ml from raw milk samples in Ethiopia [24] have been reported.

In the present study, 115 out of 258 (44.57%) samples were found to be positive for E. coli, of which 55 (63.95%) were from milk shops, 27 (31.40%) from fruit juice, and 33 (38.37%) from dairy farms. The result showed a high contamination rate, which might be attributed to poor hygienic sanitation. A statistically significant difference (p < 0.05) among the sample types in the prevalence of E. coli was recorded. A similar report was also made by previous researchers in Ethiopia. Other researchers reported higher E. coli isolation rates in the raw milk value chain, from farmers (89.74%) and shops (90.0%), in Arusha, Tanzania [25]. The isolation rate of E. coli in the present study (44.57%) was lower compared to other reports, such as those in Tanga, Tanzania (100%) [26], in Arusha, Tanzania (90.67%) [25], in Dar es Salaam, Tanzania (83%, raw milk along the chain) [27], in Tando Jam, Pakistan (51.66%, from milk vending shops) [28], and from raw cow's milk in Ethiopia (58%) [29], whereas it was higher than another report from Ethiopia (26.6%, from milk samples from a cafeteria) [30]. The variation could be because, even when drawn under aseptic conditions, milk always contains microorganisms derived from the milk ducts in the udder.
In addition, contaminants from milking utensils, human handlers, unclean environmental conditions, and poor udder preparation might expose raw milk to bacterial contamination. Antimicrobial resistance emerges from the use of antimicrobials in animals and humans and the subsequent transfer of resistance genes and bacteria among animals, humans, animal products, and the environment. In Ethiopia, there have been reports on the drug resistance of E. coli isolates from animal-derived food products [31,32]. The high drug resistance recorded in the current study might be due to heavy antimicrobial use in dairy farms and on individual cows to treat the various diseases affecting the dairy sector. Similarly, several studies have indicated that isolated E. coli showed high resistance to erythromycin (100%), streptomycin (50%), tetracycline (75%), and ampicillin (50%) and high sensitivity to penicillin (100%), gentamicin (75%), chloramphenicol (75%), and amoxicillin (50%), as reported in Ethiopia by [21]. Similar studies conducted in Ethiopia by [38] and in Nigeria by [39] have reported comparable susceptibility rates. In this study, gentamicin, norfloxacin, tetracycline, polymyxin B, and ciprofloxacin were found to be the most effective antimicrobials against the E. coli isolates. Furthermore, a high rate of multiple antimicrobial resistance (100%) was recorded, which is consistent with reports of studies done elsewhere by other scholars [40,41]. Increases in the rates of resistance to different antimicrobials have been reported in previous studies conducted in different parts of the world [40,41]. The remarkable degree of resistance to many drugs represents a public health hazard, because foodborne outbreaks would be difficult to treat, and this pool of MDR E. coli in the food supply represents a reservoir of communicable resistance genes. Hence, given the relatively limited access to, and high price of, the newly developed cephalosporin and quinolone drugs, reports of the prevalence of antimicrobial-resistant E. coli against relatively low-priced and regularly available antibiotics are alarming for the low-income societies living in most developing countries, like Ethiopia.

Conclusion

The current study gives insights into the magnitude and incidence of E. coli in raw cow milk and fresh fruit juice samples. The study revealed that the development of antibiotic resistance in E. coli could pose a serious threat to consumers in the study area. Hence, attention should be given to proper handling of these food items and to using recent antibiotics in the treatment of diseases in both humans and animals.
Conjugated polymers containing diketopyrrolopyrrole units in the main chain

Research activities in the field of diketopyrrolopyrrole (DPP)-based polymers are reviewed. Synthetic pathways to monomers and polymers, and the characteristic properties of the polymers, are described. Potential applications in the field of organic electronic materials, such as light-emitting diodes, organic solar cells, and organic field-effect transistors, are discussed.

Introduction

A useful strategy in the design of new polymers for electronic applications is to incorporate chromophores that absorb and emit strongly in the visible and near-infrared region into π-conjugated polymer chains. Potentially useful chromophores for electronic applications can be found among the various organic colourants, especially in the field of so-called "high-performance pigments" developed in the last two or three decades [1]. Among these pigments are 2,5-diketopyrrolo[3,4-c]pyrrole (DPP) derivatives, which were commercialized in the 1980s [2,3]. DPPs are the subject of many patents, despite the fact that for a considerable time there were only a few publications that dealt with these compounds. In recent years, a growing number of polymer chemists and physicists have become interested in DPPs since it was shown that DPP-containing polymers exhibit light-emitting and photovoltaic properties. The purpose of the present article is to review recent activities regarding these deeply coloured and, in many cases, fluorescent polymers. Synthetic pathways, characteristic properties, and possible applications are described.

Review

DPP-based monomers

After the 3,6-diphenyl-substituted DPP (diphenylDPP) (Figure 1) was first synthesized in low yield by Farnum et al. in 1974 [4], Iqbal, Cassar, and Rochat reported an elegant synthetic pathway to DPP derivatives in 1983 [5,6]. It was discovered that DPP derivatives could be prepared in a single reaction step in high yield by the reaction of benzonitrile (or other aromatic nitriles) with succinic acid diesters. Numerous DPP derivatives have since been synthesized, their colours ranging from orange-yellow via red to purple. Many DPP derivatives exhibit high photostability in the solid state, weather fastness, deep colour, luminescence with large Stokes-shifts, and a brilliant red colour, enabling technical applications in the colouring of fibers, plastics, and surface coatings such as prints or inks. The electron-withdrawing effect of the lactam units gives the chromophore a high electron affinity. Strong hydrogen bonding between the lactam units favors the formation of physically cross-linked chain structures by the chromophores in the solid state, which is the origin of their poor solubility [7,8]. Short distances between the chromophore planes (0.336 nm) and the phenyl ring planes (0.354 nm) enable π-π interactions via molecular orbital overlap and exciton coupling effects [7-9], and these electronic interactions and strong intermolecular forces lead to a high thermal stability of up to 500 °C. For chemical incorporation into conjugated polymers, the solubility of the DPP compound needs to be increased, and the chromophore has to be functionalized with polymerizable groups. The solubility can be increased by N-alkylation [10], arylation [11], or acylation [12] of the lactam units, thus preventing hydrogen bond formation between the chromophores.
Polymerizable groups can be attached to the aryl units in the 3- and 6-positions of the central DPP chromophore [13], or to the lactam substituent groups [14,15]. Suitable polymerizable groups are halogen atoms (especially bromine and iodine), hydroxyl, trifluoromethylsulfonate, or aldehyde groups. Recently described synthetic strategies are outlined in Scheme 1. For the preparation of brominated diphenylDPPs it is necessary to start from bromobenzonitrile and a succinic acid ester and to first prepare the dibromophenyl-DPP pigment, which is subsequently N-alkylated to yield the soluble dibromodialkyl-DPP monomer M-1. While the N-alkylation of DPP proceeds directly in good yield, the introduction of aryl units in most cases requires a specific synthetic pathway. First, the corresponding diketofurofuran (lactone) compound has to be synthesized [11]. The lactone is subsequently converted into the N-aryl lactam M-2 by reaction with an arylamine. The bromination of the aryl units is important for the subsequent palladium-catalyzed coupling reaction. If the aryl unit is thiophene, direct bromination with N-bromosuccinimide is possible, yielding monomer M-3 [16]. For the preparation of conjugated DPP-based polymers, palladium-catalyzed polycondensation reactions such as Suzuki [17], Stille [18], and Heck [19] coupling are especially useful. Other suitable reactions are Ni-mediated Yamamoto coupling [20], Sonogashira coupling [21], and electrochemical polymerization [22]. In the following, a brief review of recently prepared DPP-based polymers is presented.

DPP-based polymers

The first DPP-based polymer was described by Yu et al. in 1993 [13]. Conjugated block copolymers containing phenylene, thienylene, and N-alkyl-substituted diphenylDPP units in the main chain were synthesized by Stille coupling. Photorefractive polymers were prepared containing a conjugated main chain and nonlinear optically active (nlo) chromophores in the side chain; DPP was incorporated in the polymers as a sensitizer for charge carrier generation. Some years later, Eldin and coworkers described DPP-containing polymers obtained by radical polymerization of bis-acryloyl-substituted DPP derivatives [14,15]. Polymer networks containing non-conjugated, copolymerized DPP units were prepared, whilst linear DPP-containing polyesters and polyurethanes were first described by Lange and Tieke in 1999 [23]. The polymers were soluble and could be cast into orange films that exhibited strong fluorescence with a maximum at 520 nm and a large Stokes-shift of 50 nm. However, due to the aliphatic structure of the main chain, the thermal stability was rather poor. Photoluminescent polyelectrolyte-surfactant complexes were obtained from an amphiphilic, unsymmetrically substituted DPP derivative upon complex formation with polyallylamine hydrochloride or polyethyleneimine [24]. The complexes exhibit a mesomorphous structure, with glass transition temperatures dependent on the structure of the polyelectrolyte. The first synthesis of conjugated DPP polymers and copolymers via Pd-catalyzed Suzuki coupling was reported by Tieke and Beyerlein in 2000 [25]. The polymers contained N-hexyl-substituted diphenylDPP units and hexyl-substituted 1,4-phenylene units in the main chain, and molecular weights of up to 21 kDa were determined. Compared with the monomer, the optical absorption of the polymer in solution was bathochromically shifted by 12 nm, with the maximum at 488 nm. The polymer also showed a bright red fluorescence with the maximum at 544 nm.
In addition to the alternating copolymer, copolymers with lower DPP content were also prepared. All copolymers showed the DPP absorption at 488 nm, the ε-value being a linear function of the DPP content. Upon UV irradiation the copolymers gradually decomposed. The rate of photodecomposition was found to increase with decreasing DPP/phenylene comonomer ratio (Scheme 1: Synthesis of DPP monomers). Two different photoprocesses were recognized: a slow process originating from the absorption of visible light by the DPP chromophore, and a rapid one arising from additional absorption of UV light by the phenylene comonomer unit, followed by energy transfer to the DPP chromophore. The actual mechanism of photodecomposition remains unclear. Comparative studies indicated that conjugated DPP-containing polymers are considerably more stable than the DPP monomers or non-conjugated DPP polymers. Dehaen et al. used a stepwise sequence of Suzuki couplings to prepare rod-like DPP-phenylene oligomers with well-defined lengths [26]. The resulting oligomers contained three, five, and seven DPP units, respectively. Unfortunately, the effect of the chain length on the absorption and emission behaviour was not reported. A study on thermomesogenic polysiloxanes containing DPP units in the main chain was published in 2002 [27]: investigations of the thermotropic phase behaviour using polarizing microscopy revealed nematic and smectic enantiotropic phases. In the same year, the first study on the electroluminescent (EL) properties of a DPP-containing conjugated polymer was reported by Beyerlein et al. [28], who studied a DPP-dialkoxyphenylene copolymer in a multilayer device of ITO/DPP-polymer/OXD7/Ca/Mg:Al:Zn and observed a red emission with a maximum at about 640 nm. A relevant plot of current density and light intensity vs. voltage was reproduced in that work. A further study concerned dendrimer molecules with a DPP core [29]. Embedded in a spin-coated polystyrene film, single dendrimer molecules could be imaged via a confocal microscope by utilizing the strong fluorescence of the DPP core. It could be shown that the orientation of the absorption transition dipole of single dendrimer molecules in the film changed within a time window of seconds.

Recent work on diphenylDPP-based polymers

In recent years a number of studies have been reported on the synthesis and the optical, electrochemical, and electroluminescent properties of conjugated DPP polymers. The polymers were prepared by Suzuki, Heck, and Stille coupling and other catalytic polycondensation reactions. Typical examples are shown in Scheme 2. Rabindranath et al. [30] synthesized a new DPP polymer consisting entirely of aryl-aryl-coupled diphenylDPP units (poly-DPP, P-1, see Table 1). The polymer was prepared by three different routes: Pd-catalyzed and Ni-mediated one-pot coupling reactions starting from the dibrominated DPP M-1 as the sole monomer, as well as conventional Pd-catalyzed coupling of M-1 and the 3,6-diphenyl(4,4´-bis(pinacolato)boron ester) derivative of DPP. The polymer exhibits a bordeaux-red colour in solution with an absorption maximum at about 525 nm, and a purple luminescence with a maximum around 630 nm, corresponding to a Stokes-shift of about 105 nm. Cyclovoltammetric studies indicated quasi-reversible oxidation and reduction behaviour, the band gap being about 2 eV. Characteristic properties of P-1 are listed in Table 1. In a comprehensive study, Zhu et al. prepared a number of highly luminescent DPP-based conjugated polymers [31].
The polymers consisted of dialkylated DPP units and carbazole, triphenylamine, benzo[2,1,3]thiadiazole, anthracene, or fluorene units in alternating fashion. They were prepared via Suzuki coupling from the DPP monomer M-1 or the 3,6-diphenyl(4,4´-bis(pinacolato)boron ester) derivative of DPP. A number of readily soluble polymers, P-2 to P-8, exhibiting yellow to red absorption and emission colours and fluorescence quantum yields of up to 86% were obtained; characteristic properties are compiled in Table 1. Compared with the DPP monomers, the absorption of most of the polymers was bathochromically shifted by 24 to 39 nm. The small shift of P-2 was ascribed to a large tilt angle between the π-planes of DPP and the adjacent comonomer units, in this case the anthracene units, which strongly reduces the conjugation length [32]. EL devices prepared with P-4 exhibited an external quantum efficiency (EQE) of 0.5% and a brightness at 20 V of 50 cd m⁻² without much optimization. The maximum emission was at 600 nm, and the turn-on voltage was 3.5 V. Cao et al. [33] prepared DPP-fluorene copolymers with DPP contents between 0.1 and 50%. It was found that the absorption and emission spectra, both in solution and in thin films, varied regularly with the DPP content of the copolymers; on increasing the DPP content, the absorption shifted by only a few nanometers. DPP-fluorene alternating copolymers with the fluorene unit attached at the m-position of the phenyl groups in DPP (in contrast to the usual p-position) were also studied [34]. While the optical properties were quite similar, the EL properties were inferior, which was ascribed to a reduced conjugation length in these polymers. Novel vinyl ether-functionalized polyfluorenes for active incorporation into common photoresist materials were described by Kühne et al. [35]. Among the polymers investigated was a diphenylDPP-fluorene copolymer with the fluorene units carrying ethyl vinyl ether groups in the 9,9´-positions. The vinyl ether functionality allowed active incorporation of the light-emitting polymers into standard vinyl ether or glycidyl ether photoresist materials, the polymers retaining their solution fluorescence characteristics. This enabled the photopatterning of light-emitting structures for applications in UV down-conversion, waveguiding, and laser media. Using Stille coupling, Zhu et al. [36] first succeeded in the synthesis of copolymers P-9 to P-11 containing diphenylDPP and thiophene, bisthiophene, or 3,4-ethylenedioxythiophene (EDOT) units in alternating fashion (Table 2). Because of the strong donor-acceptor interaction between the thiophene and the DPP units, the absorption and emission maxima were shifted to longer wavelengths: a solution of the EDOT-DPP copolymer P-11 exhibited a maximum absorption at 560 nm, and a solution-cast film of the same polymer had a λmax value of 581 nm. The band gaps were between 1.5 and 1.7 eV, i.e., considerably smaller than for the previously reported DPP-based polymers. The fluorescence quantum yields Φ of the copolymers were rather low (Φ ≈ 15-35%), with the emission maximum at about 700-720 nm in the solid state. By Heck coupling it was possible to synthesize a polyarylenevinylene-type polymer, P-13, with the arylene units alternating between phenylenevinylene and diphenylDPP (Table 2) [36]. The polymer was obtained by Pd-catalyzed reaction of dibromo-DPP derivatives such as M-1 with divinylbenzene.
The resulting polymer had a molecular weight of about 30 kDa, was readily soluble in common organic solvents, and its solutions exhibited a bright red colour with red light emission. A further study [37] focused on the incorporation of arylamine units in the main chain. Due to the presence of electron-rich nitrogen atoms it was hoped that donor-acceptor interactions along the main chain would be enhanced and lead to a red shift of the absorption and emission. Furthermore, the presence of easily oxidizable nitrogen in the main chain should give rise to a lower oxidation potential of the polymer. The relevant polymers P-16 to P-20 (Table 3) were synthesized using Pd-catalyzed aryl amination reactions as reported by Hartwig [38,39], Buchwald [40-42], and Kanbara [43-47]. As shown in Scheme 2, DPP monomers such as M-1 were copolymerized with primary or secondary arylamines to yield DPP-containing polyiminoarylenes.
Figure 4: Optical properties of copolymers P-21 and P-22 based on two isomeric diphenylDPP monomer units (from [48]).
Solutions of the polymers in chloroform exhibited a purple-red colour with absorption maxima between 530 and 550 nm, and emission maxima from 610 to 630 nm. Fluorescence quantum yields were moderate (20 to 60%) (see also Table 3). The nitrogen atoms in the backbone lower the band gap of the polymers to approximately 1.9 eV. The band gaps are lower than for the conjugated DPP-arene copolymers prepared upon Suzuki coupling [31] but higher than for the DPP-thiophene copolymers made by Stille coupling [36]. Except for P-16 and P-18, the polymers exhibit quasi-reversible oxidation behaviour. A spectroelectrochemical study revealed that some of the polymers showed a reversible colour change between purple in the neutral state and a transparent greenish grey in the oxidized state; the electrochromism was very pronounced for P-19 and P-20. Typical absorption and emission colours of several DPP-containing conjugated polymers are shown in Figure 3. Direct N-arylation of the lactam group of DPP is only possible for activated arene units containing trifluoromethyl or nitro substituent groups. The common synthetic pathway first requires the synthesis of a diphenyldiketofurofuran derivative, which subsequently is reacted with an arylamine to yield the desired tetraarylated DPP derivative [11]. Using this approach, Zhang and Tieke [48] were able to prepare the two isomeric monomers M-2 and M-4 and the corresponding copolymers P-21 and P-22 (Figure 4). Stille coupling of M-1 and 2-(tributylstannyl)-3,4-ethylenedioxythiophene gave the corresponding bis(thienyl)-substituted monomer [49]. Due to the presence of the EDOT units, the monomer exhibited a rather low oxidation potential and could easily be electropolymerized by anodic oxidation. An insoluble, non-luminescent polymer film formed at the electrode that exhibited reversible electrochromic properties (Table 4). The film could be switched from blue in the neutral state via transparent grey to purple-red in the oxidized state. The stability of the film was good; the switching could be repeated many times, retaining 96% of the original absorption intensity after 100 cycles, without any protection against air or moisture. K. Zhang et al. [52] continued these studies and converted the isomeric monomers M-2 and M-4 into the corresponding bis-EDOT-substituted monomers. Both monomers could be electropolymerized, but the optical and electronic properties differed greatly between the two polymers.
The polymer with EDOT-phenyl groups in the 3- and 6-positions (structure I in Table 4) represents a conjugated polymer with a low oxidation potential and reversible electrochromic properties, whereas the polymer with EDOT-phenyl groups in the 2- and 5-positions (structure II in Table 4) is non-conjugated, possesses a high oxidation potential, and is not electrochromic (Figure 5). Our activities have stimulated several other groups to synthesize diphenylDPP-containing conjugated polymers and to investigate their potential use in optoelectronic devices. Kanimozhi et al. [51] prepared alternating copolymers of diphenylDPP and 4,8-dihexylbenzo[1,2-b;3,4-b]dithiophene (P-12, Table 2) by Stille coupling and studied their optical and photovoltaic properties. Polymer-sensitized solar cells were fabricated with P-12 as the active layer; a power conversion efficiency of 1.43% was reached. G. Zhang et al. [52] synthesized diphenylDPP-containing polyphenylene-vinylene (PPV)- and polyphenylene-ethynylene (PPE)-type conjugated polymers via Heck and Sonogashira coupling, respectively. PPV-type polymers such as P-14 (Table 2) exhibit good solubility in common organic solvents, high thermal stability, and a broad UV/visible absorption between 300 and 600 nm in films. Bulk heterojunction solar cells were fabricated and showed a power conversion efficiency of 0.01%. A PPE-type polymer, P-15 (Table 2), was obtained via the Sonogashira route.

ThiophenylDPP-based copolymers

The replacement of the phenyl groups in 3,6-diphenyl-substituted DPP derivatives by thiophenyl groups results in 3,6-(2-thiophenyl)-substituted DPP derivatives (thiophenylDPPs) with absorption maxima at about 530 nm, i.e., more than 50 nm bathochromically shifted compared to diphenylDPP. Corresponding comonomer and polymer structures are listed in Scheme 3 (Scheme 3: Thiophenyl-DPP-based polymers). Conjugated polymers containing thiophenylDPP in the main chain exhibited absorption maxima between 600 and 900 nm. Because of their small band gaps and high charge-carrier mobilities, the polymers are interesting for applications in field-effect transistors (FETs) and organic photovoltaic cells.

Conclusion

Diaryldiketopyrrolopyrroles are insoluble red pigments, which on N-alkylation of the lactam groups and bromine substitution of the aryl groups can be converted into readily soluble monomers suitable for Pd-catalyzed polycondensation reactions. Using Suzuki, Stille, Heck and other aryl-aryl coupling reactions, new conjugated polymers with good solubility in common organic solvents, high molecular weight, high thermal stability and application potential for optoelectronic devices became accessible. Polymers containing diphenylDPP units in the main chain exhibit brilliant orange, red or purple colours, intense luminescence, high luminescence quantum yields, and Stokes shifts of up to 110 nm. Some of the polymers were studied as active layers in electroluminescent devices and showed a brightness of up to 500 cd m−2. Polymers with dithiophenylDPP moieties in the main chain show broad absorption in the visible, exhibiting blue or dark green colours, small band gaps and high charge-carrier mobilities. They are suitable as electron donors in bulk heterojunction solar cells with PC60BM or PC70BM as electron acceptors, giving maximum power conversion efficiencies of about 5%. In field-effect transistors they exhibit ambipolar charge transport with large hole and electron mobilities. Variation of the comonomer units or the aryl groups in DPP monomers might further improve the device properties.
The Orbital Dynamics of Synchronous Satellites: Irregular Motions in the 2 : 1 Resonance
The orbital dynamics of synchronous satellites is studied. The 2 : 1 resonance is considered; in other words, the satellite completes two revolutions while the Earth completes one rotation. In the development of the geopotential, the zonal harmonics J20 and J40 and the tesseral harmonics J22 and J42 are considered. The order of the dynamical system is reduced through successive Mathieu transformations, and the final system is solved by numerical integration. The Lyapunov exponents are used as a tool to analyze the chaotic orbits.

Introduction

Synchronous satellites in circular or elliptical orbits have been extensively used for navigation, communication, and military missions. This fact justifies the great attention that has been given in the literature to the study of resonant orbits characterizing the dynamics of these satellites since the 1960s [1-14]. For example, the Molniya series satellites, used by the former Soviet Union for communication, form a constellation of satellites, launched since 1965, which have highly eccentric orbits with periods of 12 hours. Another example of missions that use eccentric, inclined, and synchronous orbits includes satellites to investigate the solar magnetosphere, launched in the 1990s [15].

The dynamics of synchronous satellites are very complex. The tesseral harmonics of the geopotential produce multiple resonances which interact, resulting in significantly nonlinear motions when compared to nonresonant orbits. It has been found that the orbital elements show relatively large oscillation amplitudes differing from neighboring trajectories [11].

Due to the perturbations of the Earth gravitational potential, the frequencies of the longitude of the ascending node Ω and of the argument of pericentre ω can make the presence of small divisors, arising in the integration of the equations of motion, more pronounced. This phenomenon depends also on the eccentricity and the inclination of the orbit plane. The importance of the node and pericentre frequencies is smaller when compared to the mean anomaly and the Greenwich sidereal time; however, they also contribute to the resonance effect. The coefficients l, m, p which define the argument φ_lmpq in the development of the geopotential can vary, producing different frequencies within the resonant cosines for the same resonance. These frequencies are slightly different, with small variations around the considered commensurability.

In this paper, the 2 : 1 resonance is considered; in other words, the satellite completes two revolutions while the Earth completes one rotation. In the development of the geopotential, the zonal harmonics J20 and J40 and the tesseral harmonics J22 and J42 are considered. The order of the dynamical system is reduced through successive Mathieu transformations, and the final system is solved by numerical integration. In the reduced dynamical model, three critical angles, associated with the tesseral harmonics J22 and J42, are studied together. Numerical results show the time behavior of the semimajor axis, the argument of pericentre, and the eccentricity. The Lyapunov exponents are used as a tool to analyze the chaotic orbits.

Resonant Hamiltonian and Equations of Motion

In this section, a Hamiltonian describing the resonant problem is derived through successive Mathieu transformations.
Consider the expansion of the Earth gravitational potential written in classical orbital elements [16, 17],

V = (μ/a) Σ_{l=2}^{∞} Σ_{m=0}^{l} Σ_{p=0}^{l} Σ_{q=−∞}^{+∞} (a_e/a)^l J_lm F_lmp(I) G_lpq(e) cos φ_lmpq,   (2.1)

where μ is the Earth gravitational parameter, μ = 3.986009 × 10^14 m³/s²; a, e, I, Ω, ω, M are the classical Keplerian elements (a is the semimajor axis, e is the eccentricity, I is the inclination of the orbit plane with respect to the equator, Ω is the longitude of the ascending node, ω is the argument of pericentre, and M is the mean anomaly); a_e is the Earth mean equatorial radius, a_e = 6378.140 km; J_lm is the spherical harmonic coefficient of degree l and order m; and F_lmp(I) and G_lpq(e) are Kaula's inclination and eccentricity functions, respectively. The argument φ_lmpq(M, ω, Ω, θ) is defined by

φ_lmpq = (l − 2p)ω + (l − 2p + q)M + m(Ω − θ − λ_lm),   (2.2)

where θ is the Greenwich sidereal time, θ = ω_e t (ω_e is the Earth's angular velocity and t is the time), and λ_lm is the corresponding reference longitude along the equator.

In order to describe the problem in Hamiltonian form, Delaunay canonical variables are introduced,

L = (μa)^(1/2),  G = L(1 − e²)^(1/2),  H = G cos I,  ℓ = M,  g = ω,  h = Ω,   (2.3)

where ℓ, g, and h are the angular coordinates and L, G, and H their conjugate momenta. Using the canonical variables, one gets the Hamiltonian F, with the disturbing potential R_lm expressed through coefficients B_lmpq(L, G, H) and the argument φ_lmpq written in the Delaunay variables (2.4)-(2.7). The Hamiltonian F depends explicitly on the time through the Greenwich sidereal time θ. A new term ω_e Θ is introduced in order to extend the phase space; in the extended phase space, the extended Hamiltonian is H = F + ω_e Θ (2.8).

For resonant orbits, it is convenient to use a new set of canonical variables. Consider the canonical transformation of variables defined by relations (2.9), where (X, Y, Z, Θ; x, y, z, θ) are the modified Delaunay variables. The new Hamiltonian H′, resulting from the canonical transformation (2.9), is given by (2.10), where the disturbing potential R_lm is written as a sum of terms B_lmpq(X, Y, Z) cos φ_lmpq(x, y, z, θ) (2.11).

Now, consider the commensurability between the Earth rotation angular velocity ω_e and the mean motion n = μ²/X³. This commensurability can be expressed as

q n − m ω_e ≈ 0,   (2.12)

with q and m integers. The ratio q/m defining the commensurability will be denoted by α. When the commensurability occurs, small divisors arise in the integration of the equations of motion [9]. The periodic terms in the Hamiltonian H′ with frequencies q n − m ω_e are called resonant terms; the other periodic terms are called short- and long-period terms.
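To make the commensurability concrete for the 2 : 1 case studied here, the resonant value of the semimajor axis can be computed from Kepler's third law with n = 2ω_e. This is an illustrative check added here, using the μ quoted above and a standard numerical value of ω_e (assumed, since it is not stated explicitly in the text):

mu = 3.986009e14           # Earth gravitational parameter [m^3/s^2], as quoted above
omega_e = 7.292115e-5      # Earth rotation rate [rad/s]; standard value, assumed here
n_res = 2.0 * omega_e      # 2:1 resonance: two satellite revolutions per Earth rotation
a_res = (mu / n_res**2) ** (1.0 / 3.0)   # Kepler's third law: n^2 a^3 = mu
print(f"{a_res / 1e3:.1f} km")           # -> about 26564 km

The result, roughly 26,564 km, falls exactly within the range of initial semimajor axes (26,562.0 to 26,565.0 km) examined in the numerical results below.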
The short- and long-period terms can be eliminated from the Hamiltonian H′ by applying an averaging procedure [12, 18] (2.13). The variables ξ_sp and ξ_lp represent the short- and long-period terms, respectively, to be eliminated from the Hamiltonian H′. The long-period terms contain a combination in the argument φ_lmpq which involves only the argument of pericentre ω and the longitude of the ascending node Ω; from (2.10) and (2.11), these terms are represented in the new variables by (2.14). The short-period terms are identified by the presence of the sidereal time θ and the mean anomaly M in the argument φ_lmpq; in this way, from (2.10) and (2.11), the term H_sp in the new variables is given by (2.15). The term ζ_p represents the other variables in the argument φ_lmpq, including the argument of pericentre ω and the longitude of the ascending node Ω or, in terms of the new variables, y − z and z, respectively.

A reduced Hamiltonian H_r is obtained from the Hamiltonian H′ when only secular and resonant terms are considered; the reduced Hamiltonian H_r is given by (2.16), with resonant terms of the form B_lmp(αm)(X, Y, Z) cos φ_lmp(αm)(x, y, z, θ). The dynamical system generated from the reduced Hamiltonian (2.16) is given by (2.17), and the equations of motion dX/dt, dY/dt, and dZ/dt are given by (2.18)-(2.20). From (2.18) to (2.20) one can determine a first integral of the system governed by the Hamiltonian H_r: rewriting (2.18) in the form (2.24), the canonical system of differential equations governed by H_r has the first integral (2.25) generated from (2.24), where C_1 is an integration constant. Using this first integral, a Mathieu transformation can be defined, given by equations (2.26)-(2.27). The subscript 1 denotes the new set of canonical variables; note that Z_1 = C_1 and that z_1 is an ignorable variable, so the order of the dynamical system is reduced by one degree of freedom.

Substituting the new set of canonical variables into the reduced Hamiltonian (2.16), one gets the resonant Hamiltonian. The word "resonant" is used to denote the Hamiltonian H_rs, which is valid for any resonance; the periodic terms in this Hamiltonian are resonant terms. The Hamiltonian H_rs, given by (2.28), contains all resonant frequencies relative to the commensurability α, the argument φ_lmp(αm) being given by (2.29)-(2.30). The secular and resonant terms are given, respectively, by B_{2j,0,j,0}(X_1, Y_1, C_1) and by B_lmp(αm)(X_1, Y_1, C_1) cos φ_lmp(αm) (2.31). Each of the frequencies contained in dx_1/dt, dy_1/dt, and dθ_1/dt is related, through the coefficients l, m, to a tesseral harmonic J_lm; by varying the coefficients l, m, p and keeping q/m fixed, one finds all frequencies dφ_{1,lmp(αm)}/dt concerning a specific resonance.

The Hamiltonian H_1 is defined considering a fixed resonance and three different critical angles associated with the tesseral harmonic J22; the critical angles associated with the tesseral harmonic J42 have the same frequency as those associated with J22, but a different phase. The other terms in H_rs are treated as short-period terms. Table 1 shows the resonant coefficients used in the Hamiltonian H_1.
Finally, a last transformation of variables is performed, with the purpose of writing the resonant angle explicitly; this transformation is defined by (2.41). Since the term ω_e Θ_4 is constant, it plays no role in the equations of motion, and a new Hamiltonian H_4 can be introduced (2.42). The dynamical system described by H_4 is given by (2.43)-(2.45). In Section 4, some results of the numerical integration of (2.43) are shown.

Lyapunov Exponents

The estimation of the chaoticity of orbits is very important in studies of dynamical systems, and possible irregular motions can be analyzed by Lyapunov exponents [23]. In this work, the Gram-Schmidt method, described in [23-26], is applied to compute the Lyapunov exponents; a brief description of this method follows.

The dynamical system described by (2.43) can be rewritten as (3.1); introducing (3.2), the equations can be put in the form

dz/dt = Z(z).   (3.3)

The variational equations associated with the system of differential equations (3.3) are given by

dζ/dt = J ζ,   (3.4)

where J = ∂Z/∂z is the Jacobian. The total number of differential equations used in this method is n(n + 1), where n is the number of motion equations describing the problem, in this case four. In this way, there are twenty differential equations: four are the motion equations of the problem, and sixteen are the variational equations described by (3.4).

The dynamical system represented by (3.3) and (3.4) is numerically integrated, and neighboring trajectories are studied using Gram-Schmidt orthonormalization to calculate the Lyapunov exponents. The Gram-Schmidt orthonormalization method is described in more detail in [25, 26]; a simplified description is as follows. Considering the solutions of (3.4) as u_κ(t), the integration over a time interval τ begins from the initial conditions u_κ(t_0) = e_κ(t_0), an orthonormal basis. At the end of the time interval, the volumes of the κ-dimensional parallelepipeds (κ = 1, 2, ..., N) produced by the vectors u_κ are calculated by (3.5), where ∧ denotes the outer product and ‖·‖ a norm. The vectors u_κ are then orthonormalized by the Gram-Schmidt method; that is, new orthonormal vectors e_κ(t_0 + τ) are calculated according to (3.6). With the new vectors u_κ(t_0 + τ) = e_κ(t_0 + τ), the integration is reinitialized and carried forward to t = t_0 + 2τ, and the whole cycle is repeated over a long time interval. The theorems guarantee that the κ-dimensional Lyapunov exponents are calculated by (3.7) [25, 26], and the theory states that if a Lyapunov exponent tends to a positive value, the orbit is chaotic.

Results

Figures 1, 2, 3, and 4 show the time behavior of the semimajor axis, the x_4 angle, the argument of perigee, and the eccentricity, according to the numerical integration of the equations of motion (2.43), considering three different resonant angles together: φ2201, φ2211, and φ2221, associated with J22, and three angles, φ4211, φ4221, and φ4231, associated with J42, having the same frequency as the resonant angles related to J22 but a different phase. The initial conditions corresponding to the variables X_4 and Y_4 are defined for e_o = 0.001, I_o = 55°, and a_o given in Table 3. The initial conditions of the variables x_4 and y_4 are 0° and 0°, respectively. Table 3 shows the values of C_1 corresponding to the given initial conditions.
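The renormalization cycle described in the previous section can be summarized in a short numerical sketch (added here for illustration; the explicit Euler integrator and all names are placeholders, not the authors' code). It integrates the state together with the n tangent vectors (the n(n + 1) equations mentioned above) and periodically re-orthonormalizes the tangent vectors with a QR factorization, which is equivalent to Gram-Schmidt:

import numpy as np

def lyapunov_spectrum(f, jac, z0, dt, n_steps, renorm_every=10):
    # f(z) returns dz/dt = Z(z); jac(z) returns the Jacobian J = dZ/dz.
    n = len(z0)                        # n = 4 here, so 4 + 16 = 20 equations in total
    z = np.asarray(z0, dtype=float)
    U = np.eye(n)                      # tangent vectors u_k(t0) = e_k(t0)
    log_growth = np.zeros(n)
    for step in range(1, n_steps + 1):
        # one explicit Euler step (a sketch; use a higher-order scheme in practice)
        J = jac(z)
        z = z + dt * f(z)
        U = U + dt * (J @ U)
        if step % renorm_every == 0:
            Q, R = np.linalg.qr(U)     # QR factorization = Gram-Schmidt
            log_growth += np.log(np.abs(np.diag(R)))
            U = Q                      # restart from the orthonormal frame
    return log_growth / (n_steps * dt) # estimates of lambda_1 >= ... >= lambda_n

Accumulating the logarithms of the growth factors (the diagonal of R) over the renormalizations and dividing by the elapsed time yields the exponent estimates, in the spirit of (3.7).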
Figures 5, 6, 7, and 8 show the time behavior of the semimajor axis, the x_4 angle, the argument of perigee, and the eccentricity for two different cases. The first case considers the critical angles φ2201, φ2211, and φ2221, associated with the tesseral harmonic J22, and the second case considers the critical angles associated with the tesseral harmonics J22 and J42; the angles associated with J42 (φ4211, φ4221, and φ4231) have the same frequency as the critical angles associated with J22 but a different phase. The initial conditions corresponding to the variables X_4 and Y_4 are defined for e_o = 0.05, I_o = 10°, and a_o given in Table 4. The initial conditions of the variables x_4 and y_4 are 0° and 60°, respectively. Table 4 shows the values of C_1 corresponding to the given initial conditions.

Analyzing Figures 5-8, one can observe a correction in the orbits when the terms related to the tesseral harmonic J42 are added to the model. In terms of percentage, the contribution of the amplitudes of the terms B_{4,4211}, B_{4,4221}, and B_{4,4231} to each critical angle studied ranges from about 1.66% to 4.94%. In fact, in studies of the perturbations of the artificial satellite motion, accuracy is important, since by adding different tesseral and zonal harmonics to the model one can obtain a better description of the orbital motion.

A third case (Figures 9-12) considers initial conditions corresponding to X_4 and Y_4 defined for e_o = 0.01, I_o = 55°, and a_o given in Table 5. The initial conditions of the variables x_4 and y_4 are 0° and 60°, respectively. Table 5 shows the values of C_1 corresponding to the given initial conditions.

Figures 13 and 14 show the time behavior of the Lyapunov exponents for two different cases, according to the initial values of Figures 1-4 and 9-12. The dynamical system involves the zonal harmonics J20 and J40 and the tesseral harmonics J22 and J42; the method used in this work for the study of the Lyapunov exponents is described in Section 3. In Figure 13, the initial values are C_1 = −1.467778013 × 10^11 m²/s and C_1 = −1.467819454 × 10^11 m²/s, with x_4 = 0° and y_4 = 0°. In Figure 14, the initial values are C_1 = −1.467765786 × 10^11 m²/s and C_1 = −1.467821043 × 10^11 m²/s, with x_4 = 0° and y_4 = 60°. In each case, two different values of the semimajor axis are used, corresponding to the neighboring orbits shown previously in Figures 1-4 and 9-12.

Figures 13 and 14 thus show Lyapunov exponents for neighboring orbits. The time used in the calculation of the Lyapunov exponents is about 150,000 days. For this time span, it can be observed in Figure 13 that λ_1, corresponding to the initial value a_0 = 26565.0 km, tends to a positive value, evidencing a chaotic region. On the other hand, λ_1 corresponding to the initial value a_0 = 26563.5 km does not stabilize around some positive value within this time; probably the time is not sufficient for stabilization at some positive value, or this λ_1 tends to a negative value, evidencing a regular orbit. Analyzing Figure 14, it can be verified that λ_1, corresponding to the initial value a_0 = 26564.0 km, tends to a positive value, in contrast with the exponent for a_0 = 26562.0 km. Comparing Figure 13 with Figure 14, it is observed that the Lyapunov exponents in Figure 14 have an amplitude of oscillation greater than those in Figure 13; it is therefore probable that the time necessary for the Lyapunov exponent λ_2 in Figure 14 to stabilize at some positive value is greater than the corresponding time for λ_2 in Figure 13.
Rescaling the axes of Figures 13 and 14, as shown in Figures 15 and 16, respectively, the Lyapunov exponents tending to a positive value can be better visualized.

Conclusions

In this work, the dynamical behavior of three critical angles associated with the 2 : 1 resonance problem in the artificial satellite motion has been investigated. The results show the time behavior of the semimajor axis, the argument of perigee, and the eccentricity. In the numerical integration, different cases are studied, using three critical angles together: φ2201, φ2211, and φ2221 associated with J22, and φ4211, φ4221, and φ4231 associated with J42.

In the simulations considered in this work, four cases show possible irregular motions, for C_1 = −1.467778013 × 10^11 m²/s, C_1 = −1.467819454 × 10^11 m²/s, C_1 = −1.467765786 × 10^11 m²/s, and C_1 = −1.467821043 × 10^11 m²/s; studying the Lyapunov exponents, two cases show chaotic motions, for C_1 = −1.467819454 × 10^11 m²/s and C_1 = −1.467821043 × 10^11 m²/s.

Analyzing the contribution of the terms related to J42, it is observed that, for the value C_1 = −1.045724331 × 10^11 m²/s, the amplitudes of the terms B_{4,4211}, B_{4,4221}, and B_{4,4231} are greater than for the other values of C_1. In other words, for larger values of the semimajor axis a smaller contribution of the terms related to the tesseral harmonic J42 is observed.

The theory used in this paper for the 2 : 1 resonance can be applied to any resonance involving an artificial Earth satellite.
Figure 1: Time behavior of the semimajor axis for different values of C_1 given in Table 3.
Figure 2: Time behavior of the x_4 angle for different values of C_1 given in Table 3.
Figure 3: Time behavior of the argument of pericentre for different values of C_1 given in Table 3.
Figure 4: Time behavior of the eccentricity for different values of C_1 given in Table 3.
Figure 5: Time behavior of the semimajor axis for different values of C_1 given in Table 4.
Figure 6: Time behavior of the x_4 angle for different values of C_1 given in Table 4.
Figure 7: Time behavior of the argument of pericentre for different values of C_1 given in Table 4.
Figure 8: Time behavior of the eccentricity for different values of C_1 given in Table 4.
Figure 9: Time behavior of the semimajor axis for different values of C_1 given in Table 5.
Figure 10: Time behavior of the x_4 angle for different values of C_1 given in Table 5.
Figure 11: Time behavior of the argument of pericentre for different values of C_1 given in Table 5.
Figure 12: Time behavior of the eccentricity for different values of C_1 given in Table 5.
Figures 13 and 14: Time behavior of the Lyapunov exponents for the two cases described in the text.
Figures 15 and 16: The Lyapunov exponents of Figures 13 and 14 with rescaled axes.
Table 2: The zonal and tesseral harmonics. The zonal harmonics used in (2.34) and (2.35) and the tesseral harmonics used in (2.36) to (2.41) are shown in Table 2; the constant of integration C_1 in (2.34) to (2.41) is given in terms of the initial values of the orbital elements a_o, e_o, and I_o.
Table 3: Values of the constant of integration C_1 for e = 0.001, I = 55°, and different values of the semimajor axis.
Table 4: Values of the constant of integration C_1 for e = 0.05, I = 10°, and different values of the semimajor axis.
Table 5: Values of the constant of integration C_1 for e = 0.01, I = 55°, and different values of the semimajor axis.
Considering Condensable Particulate Matter Emissions Improves the Accuracy of Air Quality Modeling for Environmental Impact Assessment
This study examines environmental impact assessment considering filterable particulate matter (FPM) and condensable particulate matter (CPM) to improve the accuracy of air quality modeling. Air pollutants and meteorological data were acquired from Korea's national monitoring stations near a residential development area in the target district and at a background site. Seasonal emissions of PM2.5, including CPM, were estimated using the California puff (CALPUFF) model, based on Korea's national emissions inventory. These results were compared with traditional environmental impact assessment results. For the residential development area, the seasonal PM2.5 concentration was predicted by considering FPM and CPM emissions in the target area as well as in the surrounding areas. In winter and spring, air quality standards were not breached when only FPM was considered; however, when CPM was included in the analysis, the results exceeded the air quality standards. Furthermore, it was predicted that air quality standards would not be breached in summer and autumn even when CPM is included. In other words, conducting an environmental impact assessment of air pollution including CPM affects the final environmental decision. Therefore, it is concluded that PM2.5 should include CPM for greater accuracy of the CALPUFF model in environmental impact assessment.

PM can be classified into primary PM (directly emitted from the source) and secondary PM (formed by photochemical reactions in the atmosphere after being emitted in the gaseous phase) [12-14]. In Korea, the national emission inventory estimates the annual emissions of major air pollutants, including carbon monoxide (CO), nitrogen oxides (NOx), sulfur oxides (SOx), total suspended particles (TSP), particulate matter less than 10 µm (PM10), particulate matter less than 2.5 µm (PM2.5), black carbon, and volatile organic compounds (VOCs), based on emission sources and regions [15,16]. The emission inventory for PM (TSP, PM10, and PM2.5) considers filterable PM (FPM), which is collected on a filter. However, unlike ambient air, the primary PM emitted from emission sources can be classified into FPM and condensable PM (CPM), and the aggregate of FPM and CPM is considered the total PM [17]. CPM is in the gas phase under high-temperature conditions and condenses into PM immediately after being emitted from the emission source. Corio and Sherwell [18] estimated that CPM accounts for about 76% of total PM10 emissions from large stationary emission sources. Although the US EPA recognized the CPM issue in the early 1980s and developed a measurement method for stationary sources, it was not considered a severe issue at the time [19,20]. Sulfur trioxide (SO3) in flue gas can react with water vapor to form sulfuric acid mist, which has often been misunderstood as CPM formation [21]. In recent years, not only studies on CPM emitted from stationary sources [22-24] but also studies using CPM as a modeling input have been conducted [25]. In particular, Morino et al. [25] revealed the contribution of CPM to ambient PM by measuring CPM at stationary sources and confirmed the improvement in the prediction of ambient PM by including CPM as a model input. Thus, it is necessary to include CPM in the air quality modeling of environmental impact assessments.
In atmospheric studies, AERMOD (the AMS/EPA regulatory model) is one of the most widely used models for environmental impact assessment [26,27]. It is a steady-state plume model that calculates atmospheric diffusion based on the concept of turbulent structure and scaling in the atmospheric boundary layer, and it can be used in simple (planar) terrain scenarios as well as complex terrain scenarios [28]. The CALPUFF (California puff) model is another model used in the evaluation of large atmospheric emission sources, such as industrial complexes, power plants, and incinerators [26,29]. CALPUFF is a multi-layered, multi-species, unsteady puff diffusion model that simulates the effects of temporally and spatially varying weather conditions on the transport of pollutants [30], and it can be applied to rough and complex terrains. Although the CALPUFF model is useful, it does not consider photochemical reactions and the chemistry of secondary pollutants; therefore, chemical transport models such as CMAQ [31], which consider atmospheric chemical reactions, have recently been used in atmospheric environmental impact assessments. To improve the CALPUFF model, this study considered CPM in the emission inventory and applied it to the seasonal environmental impact assessment of the target district.

Materials and Methods

In this study, the 2013 emission inventory provided by the National Center for Fine Dust Information [32] was used as input data for the CALPUFF model. Total suspended particles (TSP) and PM2.5 (particulate matter with an aerodynamic diameter less than 2.5 µm) were selected as the target air pollutants. Moreover, CPM emission factors for stationary sources, as published by the National Institute of Environmental Research [33], were used (Table 1). Liquefied natural gas (LNG), diesel, and B-C oil were measured at a boiler without a control device, and the bituminous coal emission factor was measured at the end of the control devices of a power plant facility. The concentration of CPM emissions was calculated by multiplying the CPM-to-FPM emission factor ratio by the PM2.5 obtained from Korea's emission inventory data.

Figure 1 shows a schematic of the CALPUFF modeling system, which consists of the CALMET meteorological module and the CALPUFF air pollution module. The CALMET module uses a land cover map, meteorological data, and aerological data as input to produce the meteorological fields, and the emission inventory is then supplied to derive the CALPUFF model results; a control file serves to input the commands controlling each module. To simulate the FPM and CPM emission behavior, software was developed to calculate the amount of PM emissions from the major emission sources in the target district and link this with the meteorological data acquired from the automatic weather system data of the four monitoring stations near the target area. This software was used to establish a methodology to verify the accuracy of the concentrations of FPM and CPM emissions and to understand atmospheric behavior prediction through case studies.
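The CPM estimation step just described reduces to scaling the inventory's filterable PM2.5 by a fuel-specific CPM/FPM emission factor ratio. The sketch below illustrates the bookkeeping only; the ratio and inventory values are hypothetical placeholders, not the measured values of Table 1:

# CPM = inventory PM2.5 (FPM) x (CPM emission factor / FPM emission factor).
# All numbers below are illustrative placeholders, NOT the Table 1 values.
cpm_fpm_ratio = {"LNG": 4.0, "diesel": 1.5, "B-C oil": 1.2, "bituminous coal": 0.8}
inventory_fpm = {"LNG": 120.0, "diesel": 310.0}        # inventory PM2.5 (FPM) [t/yr]

cpm = {fuel: fpm * cpm_fpm_ratio[fuel] for fuel, fpm in inventory_fpm.items()}
tpm = {fuel: fpm + cpm[fuel] for fuel, fpm in inventory_fpm.items()}  # TPM = FPM + CPM
print(cpm, tpm)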
The target area was the Bugok residential development district in Gyeonggi-do, with a project area of 470,000 m² (residential area of 200,795 m², commercial and business area of 6104 m², and public area of 263,101 m²), accommodating approximately 9300 people. To compare and verify the accuracy of the PM concentrations from the model calculations, real-time data provided by the Korea Environment Corporation (KEC) monitoring stations in and around the target district were used.

Results and Discussion

The CPM conversion factor management program developed in this study provided a function for searching and managing conversion factors by fuel type. Additionally, CPM emissions were calculated by applying the CPM conversion factor for each fuel to point, line, and surface emission sources. In the CPM emission factor management program of each fuel type, information on the FPM and CPM emission factors of PM2.5 emission sources, including construction, buildings, and vehicles, was provided and managed during the development of the residential area. Thus, it was meticulously configured so that details on emission factors for each construction instrument, vehicle type, and building energy fuel could be searched. Seasonal variation is one of the most important parameters for environmental impact assessment, so we applied one month of each season [34,35].

Seasonal Results for Environmental Impact Assessment (Winter, January)

Figure 2a-d show the modeling results (5.5 × 5.5 km) by segregating the FPM and CPM emissions generated during the operational stage of the Bugok residential development district, based on the area to be analyzed and on whether the emission sources outside the target district were considered. The FPM and CPM emissions were first calculated by considering the emission sources in the Bugok residential development district: the FPM concentration was estimated to be 0.0038 µg/m³, and the CPM concentration was predicted to be 0.27 µg/m³. Meanwhile, PM2.5 was analyzed by considering emission sources outside the target district and excluding the emission sources in the Bugok residential development district: the FPM concentration was predicted to be 10.63 µg/m³, and the CPM concentration was estimated to be 14.82 µg/m³. Therefore, the TPM (CPM + FPM) concentrations due to emission sources inside and outside the target district were calculated to be 0.27 and 25.45 µg/m³, respectively.
The modeling results were compared with PM2.5 data acquired from the air pollutant monitoring stations near the target district. Figure 2e shows the results of this study, the monthly averages of the monitoring stations, and the traditional environmental impact assessment results. In the traditional environmental impact assessment, the PM2.5 concentration at the study site was estimated to be 47.28 µg/m³; this was the sum of the annual average PM concentration inside the target district and the PM2.5 concentration data acquired from the monitoring station outside the target district. The modeling result of this study was 58.87 µg/m³, comprising the long-range transboundary contribution (33.15 µg/m³), the FPM concentration inside the target district (0.0038 µg/m³), the FPM concentration outside the target district (10.63 µg/m³), the CPM concentration inside the target district (0.27 µg/m³), and the CPM concentration outside the target district (14.82 µg/m³). In particular, the average concentration of TPM, the sum of FPM and CPM, in the target district was 25.72 µg/m³, which was approximately two times lower than the concentration measured in winter at the monitoring station (58.87 µg/m³).

Table 2 shows the PM2.5 concentrations in winter at Deokjeok, Seogwipo, and Seosan, the national background concentration monitoring stations located on the far west and south sides of Korea. Because of their location, data collected from these stations are used to evaluate the long-range transport of air pollutants from polluted areas [36,37]. The observed PM concentrations at the Deokjeok, Seogwipo, and Seosan stations were 31.55, 41.77, and 47.26 µg/m³, respectively. The difference between the modeling results and the observed values estimated in this study was considered to be due to the effect of long-range transboundary emissions from other countries.
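The winter total quoted above is simply the sum of the five contributions; the following one-liner (added here as a check) reproduces the reported 58.87 µg/m³:

contributions = {                     # winter values quoted above [ug/m3]
    "long-range transboundary": 33.15,
    "FPM inside the district":   0.0038,
    "FPM outside the district": 10.63,
    "CPM inside the district":   0.27,
    "CPM outside the district": 14.82,
}
print(round(sum(contributions.values()), 2))   # -> 58.87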
The FPM and CPM concentrations due to emission sources inside the target district were evaluated as 0.0002 µg/m³ and 0.04 µg/m³, respectively. Estimating the PM concentrations outside the target district, the FPM concentration was predicted to be 8.69 µg/m³ and the CPM concentration 7.67 µg/m³. The concentrations of TPM from emission sources inside and outside the target district were 0.04 µg/m³ and 16.36 µg/m³, respectively, so the TPM concentration around the target district was 16.40 µg/m³.

Figure 3e shows the predicted results of this study, the spring measurements of the monitoring station, and the traditional environmental impact assessment results. The modeling result of this study, including the CPM concentration, was 28.65 µg/m³, which comprised the long-range transboundary emissions (12.24 µg/m³), the FPM concentration inside the target district (0.0002 µg/m³), the FPM concentration outside the target district (8.69 µg/m³), the CPM concentration inside the target district (0.04 µg/m³), and the CPM concentration outside the target district (7.67 µg/m³). The concentration of TPM in the target district was estimated to be 16.40 µg/m³, which was 1.7 times lower than the average concentration measured at the monitoring station in summer (28.65 µg/m³). Differences in the predicted summer PM concentrations were considered to be due to the effect of the long-range transboundary emissions; the observed PM concentrations at the three background stations of Deokjeokdo, Seogwipo, and Seosan were 23.45 µg/m³, 34.58 µg/m³, and 23.79 µg/m³, respectively (Table 2).
Seasonal Results for Environmental Impact Assessment (Autumn, October)

Figure 5e shows the predicted results of this study, the autumn measurements of the monitoring station, and the traditional environmental impact assessment results. The predicted value of this study, including the CPM concentration, was 36.06 µg/m³, which comprised the long-range inflow (14.95 µg/m³), the FPM concentration inside the target district (0.0017 µg/m³), the FPM concentration outside the target district (11.29 µg/m³), the CPM concentration inside the target district (0.20 µg/m³), and the CPM concentration outside the target district (9.63 µg/m³). In particular, the average of the FPM and CPM concentrations at the study site was predicted to be 21.12 µg/m³, which was 1.7 times lower than the average concentration measured in autumn at the monitoring station (36.06 µg/m³). The difference in the predicted autumn PM concentrations was considered to be due to the effect of the long-range transboundary emissions; the observed PM concentrations at the Deokjeokdo, Seogwipo, and Seosan stations were 30.45 µg/m³, 28.19 µg/m³, and 29.53 µg/m³, respectively (Table 2).

The PM prediction model including both CPM and FPM showed a less than 5% difference compared to the monitoring station data, while the results of the traditional environmental impact assessment showed a difference of 20-40% compared to the monitoring station data. Ghim et al. [38] performed an evaluation of a PM2.5 prediction model including only FPM emissions, and the predicted value of PM2.5 was found to be 69% of the monitoring station data. Thus, including CPM emissions in the PM prediction model is one way to increase the accuracy of the model for environmental impact assessment. Measurements of CPM and FPM at large stationary emission sources likewise show that CPM accounts for more than 80% of the total [39], which demonstrates that the PM concentration should be predicted using both CPM and FPM emissions. In addition, the US EPA recommends that the interim guidance for new source review permit programs should include CPM in determining permission for a new major stationary source [40].
Thus, environmental impact assessment should consider CPM as one of the factors of air quality analysis.

Conclusions

In this study, the PM concentration in the atmosphere was predicted by including CPM emissions in the environmental impact assessment. For the residential development area, the seasonal PM2.5 concentration was predicted by considering the FPM and CPM emissions in the target area as well as in the surrounding areas. In winter and spring, when only FPM was considered, the air quality standards were not breached; however, when the CPM results were included in the analysis, the air quality standards were exceeded. In summer and autumn, it was predicted that the air quality standards would not be breached even when CPM was included. This means that including CPM in air quality forecasting may alter the outcome of the assessment. In addition, the sum of the predicted seasonal CPM and FPM values was 1.7 to 3 times lower than the actual measurements; comparison with the background concentration measurements indicates that this difference results from long-range transport. Therefore, it is necessary to consider CPM in the emission inventory when carrying out environmental impact assessment, air quality modeling, analysis and diagnosis of emissions according to the characteristics of each sector's emission sources, and prediction of the PM2.5 concentration in the surrounding areas. In this study, the environmental impact assessment considered only primary PM using the CALPUFF model; an environmental impact assessment including secondary PM remains to be undertaken by future research.
Strategic delegation in a sequential model with multiple stages
We analyze strategic delegation in a Stackelberg model with an arbitrary number, n, of firms. We show that the n−1 last movers delegate their production decisions to managers whereas the first mover does not. Equilibrium incentive rates are increasing in the order with which managers select quantities. Letting u_i^* denote the equilibrium payoff of the firm whose manager moves in the i-th place, we show that u_n^* > u_{n−1}^* > ... > u_2^* > u_1^*. We also compare the delegation outcome of our game with that of a Cournot oligopoly and show that the late (early) moving firms choose higher (lower) incentive rates than the Cournot firms.

Introduction

The Stackelberg model of market competition is a benchmark model of industrial economics. In this model, firms select their market strategies (quantities or prices) sequentially. One of the most important issues in this framework concerns the relation between the timing of commitment and the relative profitability of firms. For the case of two players, Gal-Or (1985) showed that if reaction functions are downwards-sloping then the first mover earns a higher payoff than his opponent; in the case of upwards-sloping reaction functions, by contrast, the advantage lies with the second mover. Further studies showed that this result is not robust to variations of the model. Gal-Or (1987) studied a Stackelberg duopoly where firms compete under private information about market demand; in this model the first mover might earn a lower profit than his opponent, as he produces a relatively low quantity in order to send a signal of low demand. Liu (2005) analyzed a model where only the first mover has incomplete information about demand and showed that in some cases the first mover loses the advantage. For the case of n ≥ 2 symmetric firms, Boyer and Moreaux (1986) and Anderson and Engers (1992) showed that the i-th mover obtains a higher profit than the (i+1)-th mover, for i = 1, 2, ..., n−1. Pal and Sarkar (2001) analyzed a model with n ≥ 2 cost-asymmetric firms under the assumption that the later a firm commits to a quantity, the lower its marginal cost; they showed that if cost differentials are sufficiently low, the firm that moves in stage i obtains a higher payoff than its successor i+1, and otherwise the ranking of profits is reversed.

Recently, an integration of the Stackelberg model with the theory of endogenous objectives of oligopolistic firms has taken place. The latter theory was launched with the works of Fershtman and Judd (1985), Vickers (1985) and Sklivas (1987). These works endogenized the objective functions of firms in a context of management/ownership separation by postulating that firms maximize a combination of revenue and profit or of quantity and profit. This framework was applied by Kopel and Loffler (2008) to a Stackelberg duopoly with homogeneous commodities (which give rise to downwards-sloping reaction functions). Their paper analyzed the impact of delegation on the structure of first- versus second-mover advantage; the authors showed that only the second mover delegates the production decision to a manager, and as a result the second mover produces a higher quantity than the first mover and earns a higher profit.

The current paper analyzes strategic delegation in a Stackelberg model with an arbitrary number of firms. It assumes a fixed order of play and perfect observability of choices at each stage.
Our work is an extension of the strategic delegation setup presented in Kopel and Loffler (2008). Our aim is to determine the relations among (i) the timing of commitment to quantities, (ii) the equilibrium delegation decisions, and (iii) the relative performance of firms. Moreover, we are interested in comparing the equilibrium of the sequential market with that of a corresponding Cournot market.

The main results of the paper are as follows. First, we show that all firms delegate their production decision to managers except for the firm whose manager is the first to commit to a quantity. Moreover, the equilibrium incentive rate is an increasing function of the order of commitment: the later a manager selects a quantity, the higher the incentive rate he is given. More importantly, letting u_i^* denote the equilibrium payoff of the firm whose manager commits in stage i, we show that u_n^* > u_{n−1}^* > ... > u_2^* > u_1^*. This ordering of profits is due to the result that the managers who commit at late stages choose relatively high quantities (as they are given relatively high incentive rates).

Delegation in a Cournot model leads to an equilibrium where all firms end up with a lower payoff compared to the case of non-delegation. This is not true, though, for the Stackelberg model: firms whose managers decide on quantities after a threshold stage prefer the delegation regime over non-delegation. Nonetheless, we show that if the number of firms is n ≥ 3, each firm in the Stackelberg market earns a lower payoff than a Cournot firm.

The rest of the paper is organized as follows. Section 2 describes the model and Section 3 presents the results. Section 4 concludes.

The framework

Consider an n-firm sequential oligopoly. Firms face the inverse demand function P = max{a − Q, 0}, where P is the market price and Q is the total market quantity, Q = q_1 + q_2 + ... + q_n, with q_i the quantity of firm i = 1, 2, ..., n. The production technology of firm i is represented by the cost function C(q_i) = c q_i, i = 1, 2, ..., n. Firms are characterized by ownership-management separation. The task of firm i's manager is to select a quantity by maximizing an objective function delegated to him by the owners of the firm. We assume that this objective function is a combination of profit and quantity (Vickers 1985),

T_i = (P − c)q_i + a_i q_i,  a_i ≥ 0,  i = 1, 2, ..., n,

where a_i is manager i's incentive rate.

The time structure of the interaction among firms and managers is as follows. In stage 0, the firms' owners decide simultaneously on the incentive rates of their managers. In particular, firm i's owners choose a_i so as to maximize the profit function

u_i = (a − Q − c)q_i,  i = 1, 2, ..., n.

These choices are made publicly known. Then play becomes sequential: in stage 1, the manager of firm 1 selects (and commits to) a quantity for his firm, and his choice is observed by all other players; in stage 2, firm 2's manager selects a quantity, which is observed by all other players; and the process continues this way in stages 3, 4, ..., n−1, n. We denote the above interaction by G_S. In the next section we identify the subgame perfect Nash equilibrium (SPNE) outcome of this game.

Quantity stages

Working backwards, we first analyze the quantity competition stages of G_S. We first note that, in essence, managers choose quantities as if their firms faced asymmetric marginal costs given by (c_1, c_2, ..., c_n) = (c − a_1, c − a_2, ..., c − a_n).
Thus, depending on the stage-0 choices of (a_1, a_2, · · · , a_n) and the resulting asymmetries we can have, a priori, some managers selecting zero quantities. We will show though that any configuration with one or more managers selecting zero quantities cannot be part of a SPNE outcome of G_S.

We begin by describing the managers' reaction functions. It will be useful for our analysis to define not only the standard reaction function but also the auxiliary concept of a step-k reaction function (where k is a positive integer), on which we elaborate below. Let Q_i = q_1 + q_2 + · · · + q_{i−2} + q_{i−1}. Consider first stage n. We will denote by f_n^1(q_1, ..., q_{n−1}) the step-1 reaction function, or simply the reaction function, of manager n, defined by

$$f_n^1(q_1, \ldots, q_{n-1}) = \arg\max_{q_n \ge 0} T_n(q_1, \cdots, q_n),$$

where T_n(q_1, · · · , q_n) = (a − Q − c + a_n)q_n. For the moment we do not discuss the positiveness or not of the reaction functions; we will turn to this (critical) issue later on in the analysis. Moving to stage n − 1, the (step-1) reaction function of manager n − 1 is

$$f_{n-1}^1(q_1, \ldots, q_{n-2}) = \arg\max_{q_{n-1} \ge 0} T_{n-1}\big(q_1, \ldots, q_{n-1}, f_n^1(q_1, \ldots, q_{n-1})\big).$$

Then the step-2 reaction function of manager n is derived from f_n^1 when q_{n−1} is replaced by f_{n−1}^1, i.e.,

$$f_n^2(q_1, \ldots, q_{n-2}) = f_n^1\big(q_1, \ldots, q_{n-2}, f_{n-1}^1(q_1, \ldots, q_{n-2})\big).$$

Moving on to stage n − 2, the step-1 reaction function of manager n − 2 is defined by

$$f_{n-2}^1(q_1, \cdots, q_{n-3}) = \arg\max_{q_{n-2} \ge 0} T_{n-2}\big(q_1, \ldots, q_{n-2}, f_{n-1}^1, f_n^2\big).$$

Plugging f_{n−2}^1 into f_{n−1}^1 and f_n^2 gives us the step-2 reaction function of manager n − 1 and the step-3 reaction function of manager n, respectively. We can iteratively continue this way and define the reaction functions up to stage 2. Then, in stage 1, manager 1 solves max_{q_1 ≥ 0} T_1(q_1), where every q_k, k = 2, · · · , n, has been replaced by f_k^{k−1}(q_1). (To be consistent, when dealing with T_1 we set Q_1 = 0.)

The above description will be useful in order to examine what type of quantity configurations can support an SPNE outcome of G_S. To this end, consider the generic stage i. Using our description, manager i selects q_i in order to maximize his objective with all subsequent quantities replaced by the corresponding step reaction functions; either (i) the non-negativity constraint binds and manager i selects q_i = 0, or (ii) an interior solution with q_i > 0 obtains.

We argue that case (i) cannot be part of any SPNE outcome. To this end, consider a vector (ã_1, ã_2, · · · , ã_n) of stage-0 choices. Assume that these choices are such that all managers select positive quantities except for one, say manager i. Let (q̃_1, · · · , q̃_{i−1}, 0, q̃_{i+1}, · · · , q̃_n) denote this market outcome. Given that q̃_k = f_k^{k−i}(q̃_1, · · · , q̃_{i−1}, 0), k = i + 1, i + 2, · · · , n, and since we are in case (i), we have Σ_{j≠i} q̃_j ≥ a − c + ã_i. But then the profit of any firm j, j ≠ i, in stage 0 is non-positive. To put it differently, a configuration of the form (q̃_1, · · · , q̃_{i−1}, 0, q̃_{i+1}, · · · , q̃_n) cannot support an SPNE outcome, as such a configuration would make the market price fall below the marginal cost. To see this, notice that the conditions Σ_{j≠i} q̃_j ≥ a − c + ã_i and ã_i ≥ 0 imply that a − Σ_{j≠i} q̃_j ≤ c. A similar argument holds for outcomes under which more than one manager selects a zero quantity. Hence in what follows we can focus our attention on the case where all firms produce positive quantities.

Since in any SPNE outcome all managers produce positive quantities, we can use the results of Pal and Sarkar (2001), who computed the equilibrium quantities in an n-stage Stackelberg market with cost-asymmetric firms (but without delegation) under the assumption that all firms are active. By adjusting their analysis to ours, the manager of firm i chooses the quantity given by the corresponding Pal–Sarkar expression evaluated at the effective marginal costs (c − a_1, · · · , c − a_n), with the market price following from P = a − Q.

Lemma 1. Consider the delegation stage of G_S. In equilibrium, a_1^* = 0 and a_n^* > a_{n−1}^* > · · · > a_2^* > 0.

Proof. Appears in the Appendix.

By Lemma 1, all firms except for firm 1 delegate in equilibrium.
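To see the delegation logic at its simplest, the following sympy sketch (illustrative code written for this note, not the paper's appendix derivation) works the n = 2 case backwards and recovers the Kopel and Loffler (2008) duopoly result that only the second mover delegates:

```python
import sympy as sp

a, c = sp.symbols('a c', positive=True)
a1, a2, q1, q2 = sp.symbols('a1 a2 q1 q2')

# Stage 2: manager 2 maximizes T2 = (a - q1 - q2 - c + a2) * q2
T2 = (a - q1 - q2 - c + a2) * q2
q2_br = sp.solve(sp.diff(T2, q2), q2)[0]          # best reply: (a - c + a2 - q1)/2

# Stage 1: manager 1 anticipates q2_br and maximizes T1 = (a - q1 - q2_br - c + a1) * q1
T1 = (a - q1 - q2_br - c + a1) * q1
q1_br = sp.solve(sp.diff(T1, q1), q1)[0]

# Stage 0: owners simultaneously pick incentive rates to maximize true profits u_i = (a - Q - c) * q_i
q2_eq = q2_br.subs(q1, q1_br)
Q = q1_br + q2_eq
u1 = sp.expand((a - Q - c) * q1_br)
u2 = sp.expand((a - Q - c) * q2_eq)
sol = sp.solve([sp.diff(u1, a1), sp.diff(u2, a2)], [a1, a2], dict=True)[0]

print(sol)                                         # {a1: 0, a2: (a - c)/3}: only firm 2 delegates
print(sp.factor(u1.subs(sol)), sp.factor(u2.subs(sol)))  # (a-c)**2/18 < (a-c)**2/12, so u2* > u1*
```

Both features — the first mover keeping a zero incentive rate and the later mover earning the higher profit — are the patterns that Lemma 1 and the profit ranking below generalize to arbitrary n.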
Moreover, the later a manager commits to a quantity, the higher the incentive rate he is given. To give some intuition behind this result, we first note that there is a negative relation between any a_i and the market price. For example, consider firms i and i + j with j > 0. The corresponding effects of a_i and a_{i+j} on the equilibrium market price satisfy

$$\frac{\partial P^*}{\partial a_i} < \frac{\partial P^*}{\partial a_{i+j}} < 0.$$

To comprehend why the above inequality holds, let us go back to the managers' (step-1) reaction functions: the rate a_i appears in the step-1 reaction functions of q_1, · · · , q_{i−1}, q_i, whereas the rate a_{i+j} appears in the step-1 reactions of more terms, i.e., q_1, · · · , q_i, · · · , q_{i+j−1}, q_{i+j}. Furthermore: (i) the relation between a_i and any of q_1, · · · , q_{i−1} is negative, and so is the relation between a_{i+j} and any of q_1, · · · , q_i, · · · , q_{i+j−1}; (ii) the market price depends negatively on quantities. Points (i) and (ii) explain why a_{i+j} has a smaller negative impact on the price than a_i has. As a result, the owners of firm i + j have an incentive to make their manager more aggressive than firm i's owners.

Using Lemma 1, the equilibrium market price and the individual and total market quantities follow in closed form. Let u_i^* denote the equilibrium profit of firm i, i = 1, 2, · · · , n, in G_S. Our next result ranks these profits: u_n^* > u_{n−1}^* > · · · > u_2^* > u_1^*.

One question raised at this point is how the performance of firms in G_S compares with their performance in a sequential market without any delegation activities. Let ū_i denote the equilibrium profit of the i-th firm in the latter market. We have the following.

Corollary 1. There exists a stage i′ = i′(n) such that u_i^* > ū_i if and only if i > i′(n).

Proof. Appears in the Appendix.

Therefore, firms whose managers select quantities after the i′(n)-th stage prefer the delegation regime over non-delegation; the opposite holds for the remaining firms. This result is explained by our previous finding that the late-moving managers are relatively aggressive at the expense of the early movers.

Comparison with Cournot competition

In this section we compare the equilibrium outcome of G_S with the outcome of the corresponding Cournot market. In the latter framework, we have a two-stage interaction which evolves as follows: in stage 0, the firms' owners choose the incentive rates of their managers. These choices are made publicly known. Then, in stage 1, the managers of the n firms simultaneously select quantities for their firms, using the incentive schemes decided upon in stage 0. Let G_C denote this game (which was first analyzed by Vickers, 1985). It is known that in the absence of delegation, the Stackelberg market produces a higher total quantity than the Cournot market (Anderson and Engers, 1992). When delegation is introduced: (i) in G_S not all firms delegate; (ii) in G_C all firms delegate. Hence a direct ranking of the Stackelberg and Cournot total market quantities under delegation is not obvious. Corollary 2 below provides this comparison. It also compares incentive rates and profits across the two frameworks (in what follows, Q_C^*, a_C^* and u_C^* denote the equilibrium total market quantity, incentive rate and profit, respectively, under Cournot competition).

Corollary 2. Consider the games G_S and G_C. The following hold: (i) a_i^* < a_C^* for i = 1, · · · , n − 1, whereas a_n^* > a_C^*; (ii) Q_S^* > Q_C^*; (iii) if n ≥ 3, then u_i^* < u_C^* for every i.

Proof. Appears in the Appendix.

Corollary 2 shows an interesting relation: all firms in G_S, except for the last mover, choose lower incentive rates than firms in G_C.
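The general-n claims can also be checked numerically. The sketch below (illustrative, not the paper's proof) solves the quantity stages in closed form by backward induction — linear demand keeps every continuation affine — and then iterates best responses over the owners' incentive rates for n = 3; the Cournot benchmark uses the standard linear-demand delegation rate a_C^* = (a − c)(n − 1)/(n² + 1):

```python
import numpy as np
from scipy.optimize import minimize_scalar

A, C, N = 1.0, 0.0, 3      # demand intercept, marginal cost, number of firms

def quantities(rates):
    """Quantity stages by backward induction for given incentive rates, assuming
    interior (positive) solutions, as holds on the SPNE path described above.
    T_i(S) = alpha_i + beta_i * S is the total output of stages i..n when the
    quantities already committed to sum to S."""
    m = A - C + np.asarray(rates, float)
    alpha, beta = 0.0, 0.0
    nxt = []                                  # (alpha_{i+1}, beta_{i+1}) seen by each stage i
    for i in range(N - 1, -1, -1):            # backward pass
        nxt.append((alpha, beta))
        alpha, beta = (m[i] + alpha) / 2.0, (beta - 1.0) / 2.0
    nxt.reverse()
    q, S = np.zeros(N), 0.0
    for i, (a_n, b_n) in enumerate(nxt):      # forward pass
        q[i] = (m[i] - a_n - (1.0 + b_n) * S) / (2.0 * (1.0 + b_n))
        S += q[i]
    return q

def profit(i, rates):
    q = quantities(rates)
    return (A - q.sum() - C) * q[i]

rates = np.zeros(N)
for _ in range(200):   # stage-0 Nash by best-response iteration (well behaved here, not guaranteed in general)
    for i in range(N):
        res = minimize_scalar(
            lambda ai: -profit(i, np.concatenate([rates[:i], [ai], rates[i + 1:]])),
            bounds=(0.0, A - C), method='bounded')
        rates[i] = res.x

q = quantities(rates)
u = (A - q.sum() - C) * q
print("incentive rates:", rates.round(4))     # expect ~0 for the first mover, then increasing
print("profits        :", u.round(5))         # expect u_3* > u_2* > u_1*
print("Cournot rate   :", (A - C) * (N - 1) / (N ** 2 + 1))  # 0.2; cf. Corollary 2(i)
```

Under this linear specification the script reproduces the qualitative content of Lemma 1 and Corollaries 1–2; it is a sanity check, not a substitute for the appendix proofs.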
Nonetheless, regarding consumers, the Stackelberg market remains more efficient than the Cournot market, as it results in a higher market quantity.

In the absence of delegation, Anderson and Engers (1992) compared the profitability in the Stackelberg and Cournot models: for n = 2, the first (second) mover earns a higher (lower) profit than the Cournot duopolists; for n ≥ 3, all Stackelberg firms earn a lower profit than the Cournot firms. When delegation is introduced and n = 2, the second (first) mover earns a higher (lower) profit than the Cournot duopolists (this case is analyzed by Kopel and Loffler, 2008); for n ≥ 3, all Stackelberg firms earn a lower profit than the Cournot firms (Corollary 2(iii)).

Let us, at this point, recall our assumption that each stage's choices are perfectly observable. The issue of imperfect observability in strategic games has been analyzed in a series of works. Katz (1991) demonstrated that if delegation choices are not observed by rivals then delegation has no value. Bagwell (1994) showed that observing the rival's action with noise destroys the impact of the first mover's commitment. Vardy (2004) analyzed a sequential game where observing the first mover's choice is costly. He showed that being the first mover has no value, no matter how small the observation cost is. Other authors delivered more positive results: Fershtman and Kalai (1997) provided a framework where the value of delegation can be restored, provided there is a positive probability that the delegation contracts are accurately observed. van Damme and Hurkens (1997), Guth et al. (1998) and Maggi (1999) showed that commitment under imperfect observability has an impact on the outcome of the game if one allows for either mixed strategy equilibria (first two papers) or for private information on behalf of the first mover (last paper).

Contributing to the above discussion is not a goal (or an ambition) of the current paper. Just to provide some real-world facts, we quote from Scalera and Zazzaro (2008): "· · · the assumption of contract observability seems in some cases to be quite realistic. When firms compete to hire managers, it is likely that contractual clauses are publicly declared." Further, the same authors present the argument that "· · · in many countries, at least as regards quoted companies, firms are obliged by regulators to announce manager compensations to the market and this eases their commitment to the contracts signed with managers."

Conclusions

We analyzed strategic delegation in a Stackelberg model with an arbitrary number of firms. We showed that the later a firm's manager commits to a quantity, the higher his firm's profit. Delegation improves the payoff of the late movers and hurts the early movers. Namely, firms whose managers commit late (early) to a quantity end up with a higher (lower) payoff compared to the non-delegation regime. This is different from the case of delegation under Cournot competition, where all firms are hurt by delegation. Our paper has analyzed a framework with linear demand and cost functions. Introducing a more general framework would allow us to examine the robustness of our results. Further, the analysis of a market where incentive contracts are imperfectly observed is of special interest.
2012-02-09T13:55:52.000Z
2011-07-15T00:00:00.000
{ "year": 2011, "sha1": "c7d7bbebd693e5ea9ff3eb6856a88dcc25e81864", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1107.3198", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4f4c94547a5a293fb60b60f8aa42ccdd1c36c2bf", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [ "Economics", "Computer Science" ] }
119238080
pes2o/s2orc
v3-fos-license
The dynamically disrupted gap in HD 142527 The vestiges of planet formation have been observed in debris disks harboring young and massive gaseous giants. The process of giant planet formation is terminated by the dissipation of gas in the protoplanetary disk. The gas-rich disk around HD142527 features a small inner disk, a large gap from \sim10 to \sim140AU, and a massive outer disk extending out to \sim300AU. The gap could have been carved-out by a giant planet. We have imaged the outer regions of this gap using the adaptive-optics camera NICI on Gemini South. Our images reveal that the disk is dynamically perturbed. The outer boundary of the roughly elliptical gap appears to be composed of several segments of spiral arms. The stellar position is offset by 0.17+-0.02"from the centroid of the cavity, consistent with earlier imaging at coarser resolutions. These transient morphological features are expected in the context of disk evolution in the presence of a perturbing body located inside the cavity. We perform hydro-dynamical simulations of the dynamical clearing of a gap in a disk. A 10Mjup body in a circular orbit at r = 90AU, perturbs the whole disks, even after thousands of orbits. By then the model disk has an eccentric and irregular cavity, flanked by tightly wound spiral arms, but it is still evolving far from steady state. A particular transient configuration that is a qualitative match to HD142527 is seen at 1.7Myr. Introduction The lifetime of disks sets the time available for the planet formation process. In the Solar System, chronology of the oldest solid components based on radionuclides (e.g., Al 26 ) indicates that the duration timescale of the Solar Nebula was short, of order 2-3 Myr (Montmerle et al. 2006). Direct imaging of a 9 ± 3 M jup planet (Lagrange et al. 2010) inside the debris disk of β Pic proves that large gas giants can form by 12 +8 −4 Myr. In protostellar systems, by 5-10 Myr no gas is left (Pascucci et al. 2006). The evidence points at early giant planet formation, as in the candidate massive protoplanets LkCa 15 b (Kraus & Ireland 2012) and T Cha b (Huélamo et al. 2011) (although both remain to be confirmed, e.g. by direct imaging). Radial gaps in gas-rich protoplanetary disks are currently thought of as possible signposts of planet formation. Previous H and K band coronographic images of HD 142527 reveal a hole in the disk, about 100 AU in radius (Fukagawa et al. 2006). This inner cavity is in fact a gap with an inner boundary, abutting on an inner disk (van Boekel et al. 2004), roughly 10 AU in radius (Anthonioz et al., in preparation), and up to 30 AU (Verhoeff et al. 2011), that accounts for the large near-IR excess of HD 142527 (Fukagawa et al. 2010). Radiative transfer modelling of the spectral-energy distribution (SED) and NIR images are consistent with an inclination angle of ∼20 deg (close to face-on, Verhoeff et al. 2011). The youth of HD 142527A, at ∼ 140 pc 1 , is evident from the copious amounts of gas (Öberg et al. 2011) in its ∼0.1 M ⊙ circumstellar disk (Verhoeff et al. 2011 both maskless and exposed to 0.38s, for a total of 1 h. All setups were acquired in stare mode, compensating for parallactic angle rotation. The result of our NICI imaging is shown in Fig. 1, after PSF subtraction 2 . SINFONI integral field spectra. We also conducted complementary integral-field-unit spectroscopy with the SINFONI instrument on the VLT in order to search for possible planetary objects in the gap. 
In our setup we chose a very narrow jittering throw that samples the inner 0.45 × 0.45 arcsec −2 only. We processed the ESO pipeline datacubes using spectral deconvolution techniques (Thatte et al. 2007), and also through conventional PSF subtraction (using a PSF standard star). The detection limits we achieve are consistent with those achieved by Thatte et al. (2007) in AB Dor with the same instrument (∼9 mag contrast at 0.2 ′′ separation). 2 we used a Moffat profile fit to the wings of the PSF -5 - Observational results 3.1. Size and morphology of the cavity and outer disk inner rim The NICI images clearly reveal the non axi-symmetric shape of the cavity. This is the highlight of our observations. The shape is roughly elliptical, albeit with significant departures, with a representative eccentricity of e c = 1 − (b c /a c ) 2 ≈ 0.5 (where a c and b c are the ellipse axes). But, as illustrated in Fig. 2, the exterior ring can be described as being composed of four arms overlapping in their extremities, except for two small, ∼0.2 arcsec gaps or intensity nulls along the ring, due north and south-south-east from the star. The observation of these morphological feature is based on the shape of the inner rim of the ring seen in direct images (even without subtraction of the PSF glare). Additionally, an unresolved brightness increment, or knot, can be seen along the north-western segment (which is also the brightest segment) -the alignment of this unresolved feature in all filters is illustrated in the RGB image, although we cannot determine from these data alone if this knot is real or perhaps only a ghost or a chance alignment of speckles. The south-western spiral arm segment (labelled arm 2 on Fig. 2) sprouts away from the outer ring at a PA of -45 deg (East of North), where it extends into the outer arm seen by Fukagawa et al. (2006). This outer arm is at much fainter levels than the outer ring -our dataset has finer resolution but is shallower, so that only its root is seen in our figures. As discussed in Fukagawa et al. (2006), this outer arm could have been triggered by a stellar encounter, but the putative partner remains to be identified. A nearby point source turned out to be a background star (Fukagawa et al. 2010) -in our NICI images this source is located at a separation of 5. ′′ 44 from HD 142527, and a PA of 220 deg (East of North). The inner radius of the ring varies from 0.7 ′′ to the north, to about 1.0 ′′ to the south, and a projection effect is discarded since the central star is clearly offset by 0.17 ± 0.02 ′′ -6from the centre of this approximate ellipse. This offset was first noted by Fukagawa et al. (2006), albeit at lower angular resolution. In K-band the west side of the rim we see is brighter, in agreement with Fukagawa et al. (2006). Fujiwara et al. (2006) and Verhoeff et al. (2011) already noted that in the thermal IR, N and Q bands, it is the eastern side that is brightest, which is consistent with a projection effect (Fujiwara et al. 2006). At the shorter wavelength K-band, the front (i.e. nearest) side of the disk appears brighter because of forward scattering at the surface of the disk. On the opposite, the thermal emission from the larger projected surface of the rim of the back side of the disk appears brighter than the slimmer (in projection) rim of the near side. This implies a major axis of the disk in the North-South direction and a minor axis in the E-W direction (as observed here). 
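Since the rim radii and offsets above are quoted in arcseconds, a quick conversion at the adopted ~140 pc distance helps fix the physical scales (a small illustrative snippet written for this note, not part of the paper):

```python
d_pc = 140.0              # adopted distance to HD 142527 [pc]
au_per_arcsec = d_pc      # by definition, 1 arcsec subtends d[pc] AU at distance d

scales = {"inner rim (north)": 0.70, "inner rim (south)": 1.00,
          "star-cavity offset": 0.17, "background star": 5.44}
for name, theta in scales.items():
    print(f"{name:>20}: {theta:4.2f} arcsec -> {theta * au_per_arcsec:6.1f} AU")
# 0.7"-1.0" maps to ~100-140 AU, consistent with the ~140 AU gap radius in the abstract;
# the 0.17" stellar offset corresponds to a projected ~24 AU.
```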
The emission null we detect due North is seen on both sets of Subaru images, at K-band and in the mid-IR as well. Detection limits in the range 14-35 AU No companion is clearly detected in our NICI images. An upper limit on a binary companion is difficult to obtain because of the speckle noise, and because of the lack of accurate NICI zero points. However, the SINFONI observations allow us to place a lower limit flux ratio at 0.2-0.3 ′′ separation of 1905 at 3 σ in K; Fig. 3 illustrates this upper limit. Modeling the SED (with MCFOST, Pinte et al. 2006) we estimate that half of the total flux density of HD 142527 in K can be assigned to the disk. In that case, the SINFONI flux ratio limit of 1905 corresponds to an absolute magnitude M K > 8.2 at 3 σ. As a comparison point we refer to the COND03 tracks (Baraffe et al. 2003) at 1 Myr of age, which would imply a mass of less than 12 M jup given this M K limit. However the age of HD 142527 and the models are not precise enough to ascertain this upper limit mass value. We stress again that this limit constrains the presence of massive gaseous giants at stellocentric radii of 14 Gap clearing by single or multiple planets could potentially explain wide gaps observed in transitional disks (Rice et al. 2003). We thus focus on the simplest possible scenario to interpret the morphology of the cavity in HD 142527's disk, i.e. dynamical clearing by one single embedded and massive-Jupiter-size planet opening a gap in its gaseous disk. The maximum gap opening width will depend on the mass and the orbital parameters of this planet (Dodson-Robinson & Salyk 2011). The cavity forms due to the torque exerted by the star-planet binary (Artymowicz & Lubow 1994). Elliptical cavities have been predicted with e c ≈ 0.25 in the case of circular binaries with mass ratios > 3 × 10 −3 , in which case the gap becomes large enough to deplete the region in which the eccentricity-damping outer 1:3 Lindblad resonance is produced (Kley & Dirksen 2006). Further numerical work (Hosseinbor et al. 2007) shows that, for eccentric binaries, the outer edge of the cavity becomes even more eccentric. The observed eccentric cavity is then a recurrent feature of gap-clearing models. Fargo hydrodynamic simulations of gap clearing by a planet In order to test further the single-planet origin for the wide gap in the gaseous disk of HD 142527, we conducted hydrodynamic simulations with FARGO. Our goal here is not to fit the data but to verify qualitatively the tenability of this scenario for HD 142527. FARGO is a dedicated grid-based 2D code publicly available on the web (Masset 2000), and specifically designed for planet-disk interactions. The run presented in this paper corresponds to a system evolution equivalent to 1.7 Myr. We picked common fiducial parameters for the model, as detailed below. The model disk is initially axisymmetric, its surface density is initially a power law of radius, star, while the outer boundary is such that we allow for a steady state mass transfer into the disk. Accretion across the disk is modeled by using an α prescription (Shakura & Sunyaev 1973). We adopted α = 0.002 following Dodson-Robinson & Salyk (2011). From the SINFONI limits described above we know that any planet present in the inner 14 to 35 AU cannot have a mass exceeding ∼12 Jupiter masses. We assume no such planet is present further out in the disk either, and chose a planet mass of 10 M Jup . We placed a 10 M Jup planet in a fixed circular orbit at 90 AU in the model disk. 
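For orientation, the following back-of-envelope sketch (illustrative; the stellar mass of ~2 M_sun is an assumption introduced here, not a value quoted above) estimates the planet's Hill radius and the number of orbits covered by the 1.7 Myr run:

```python
import math

M_star = 2.0                 # stellar mass [M_sun] -- assumed here for illustration
M_planet = 10 * 9.543e-4     # 10 Jupiter masses, in solar masses
a_orbit = 90.0               # orbital radius of the perturber [AU]

r_hill = a_orbit * (M_planet / (3 * M_star)) ** (1 / 3)   # Hill radius [AU]
period = math.sqrt(a_orbit ** 3 / M_star)                 # Kepler's third law [yr]

print(f"Hill radius       ~ {r_hill:.1f} AU")             # ~10 AU; a few r_H spans tens of AU
print(f"Orbital period    ~ {period:.0f} yr")             # ~600 yr at 90 AU
print(f"Orbits in 1.7 Myr ~ {1.7e6 / period:.0f}")        # ~2800, i.e. 'thousands of orbits'
print(f"100 orbits        ~ {100 * period:.0f} yr")       # a few 10^4 yr, cf. the transient phases
```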
This radius was chosen so that the planet's sphere of influence, given by a few times its Hill radius -9 - The resulting gas surface density field is shown in Fig. 4. This specific dynamically perturbed morphology will persist over a timescale of a few tens of thousand years (i.e, around 100 orbits) but is not in steady state. The rim will remain asymmetric over time, and is composed of tightly wound spiral arms, whose superposition forms a roughly elliptical cavity. The cavity centroid is persistently offset from the star, but any particular morphology of the gap is transient and lasts about 100 orbits. In this snapshot view ( fig. 4), the outer edges of the cavity are most reminiscent of those seen in the NICI images. The cavity eccentricity is ∼0.6, and its centroid is offset from the star by ∼16 AU. As seen in Fig. 4, this model produces a massive inner disc, extending out to 50-60 AU in radius, which is not consistent with the observations. Additional ingredients are required to deplete that region in the framework of a dynamical clearing model, e.g., additional planets closer to the star (as in Dodson-Robinson & Salyk 2011). Fitting the inner disc is left for future work. We stress that the purpose of these simulations is to provide a plausible explanation of the NICI observations of the inner edge of the outer disk only. Summary and discussion Our observations of HD 142527 indicate that the inner rim of its outer disk is roughly elliptical, and composed of an overlapping set of spiral arm segments. We confirm that the star is offset by 0.17 ± 0.02 ′′ from the centroid of the cavity. No companion is detected in the range of 14 to 35 AU, with a lower limit M K > 8.2 at 3 σ (for a distance of 140 pc). The observed morphology of the disk in HD 142527 suggests a dynamically perturbed state. The segmented spirals, and the offset between the cavity centroid and the star, can be seen among the varied morphologies predicted by hydrodynamic simulations of HD 142527. To explore, at least qualitatively, the origin of the cavity and its peculiar shape, -10we reported on a simple case. A massive protoplanet on a circular orbit whose radius is comparable to the disk's outer edge perturbs the entire disk, which does not reach steady state even after thousands of orbits. The global shape of the cavity is a long-standing feature; the general asymmetries (off-center star, eccentricity of the cavity) are imprinted early and remain for the duration of the calculations. However, the specific configuration of the model run, position of the streamers, and position and number of spiral arms, form a specific morphology that is only a transient phase. A particular morphology may recur, but lasts for a limited number of orbits. That phase which qualitatively matches HD 142527 lasts for a hundred orbits or so. It is tempting to suggest that the observed shape of the disk rim of HD 142527 today will also be brief compared to its age, with features that are unlikely to be found in other systems that are also undergoing dynamical clearing. In this scenario, snapshot imaging of transition disks are likely to look different in their fine details. However, there should be common features like off-centering of the central star, non axi-symmetry of the disk rim, and presence of spiral arms. Current imaging campaigns are indeed revealing these structures, as in HD 100546 (Grady et al. 2001); HD 135344B (Muto et al. 2012); AB Aur (Hashimoto et al. 2011). 
Interestingly, the list of known transition disks is rapidly increasing with improving observational capabilities, both in direct imaging at optical/near-IR (see, e.g., the citations above) and in the sub-mm regime (Andrews et al. 2011). We should be able to verify these ideas in the near future, specifically that the observed morphological variety of transition disks derives from a common dynamical history in this evolutionary stage, when there is large-scale feedback between the planet formation process and its parent disk. A key test of this transient scenario, for HD 142527 and transition disks in general, would be to detect the gap-crossing planetary accretion flow and link it to the outer-disk
2012-07-09T14:15:36.000Z
2012-07-09T00:00:00.000
{ "year": 2012, "sha1": "d259b4bf8b3648691535e02a059fe2d422d4b44c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1207.2056", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d259b4bf8b3648691535e02a059fe2d422d4b44c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
219067334
pes2o/s2orc
v3-fos-license
Effects of Meteorological Factors and Anthropogenic Precursors on PM 2.5 Concentrations in Cities in China

Fine particulate matter smaller than 2.5 µm (PM 2.5) in size can significantly affect human health, atmospheric visibility, climate, and ecosystems. PM 2.5 has become the major air pollutant in most cities of China. However, influencing factors and their interactive effects on PM 2.5 concentrations remain unclear. This study used a geographic detector method to quantify the effects of anthropogenic precursors (AP) and meteorological factors on PM 2.5 concentrations in cities of China. Results showed that impacts of meteorological conditions and AP on PM 2.5 have significant spatio-temporal disparities. Temperature was the main influencing factor throughout the whole year, which can explain 27% of PM 2.5 concentrations. Precipitation and temperature were primary impacting factors in southern and northern China, respectively, at the annual time scale. In winter, AP had stronger impacts on PM 2.5 in northern China than in other seasons. Ammonia had stronger impacts on PM 2.5 than other anthropogenic precursors in winter. The interaction between all factors enhanced the formation of PM 2.5 concentrations. The interaction between ammonia and temperature had the strongest impact at the national scale, explaining 46% (q = 0.46) of PM 2.5 concentrations. The findings comprehensively elucidate the relative importance of driving factors in PM 2.5 formation, which can provide basic foundations for understanding the meteorological and anthropogenic influences on the concentration patterns of PM 2.5.

Datasets

According to CAAQS, annual average PM 2.5 concentrations are limited to 15 µg m−3 (Grade I) and 35 µg m−3 (Grade II). The daily average concentration limits are 35 µg m−3 (Grade I) and 75 µg m−3 (Grade II) [39]. Grade I refers to the concentration limit required for scenic spots, nature reserves, and other areas requiring special conservation in China. Grade II refers to the concentration limits required for rural areas, residential areas, industrial areas, cultural areas, and mixed-use residential areas. The daily PM 2.5 concentrations of 366 cities in China were obtained from the China Environmental Monitoring Center. Due to the availability of data, data for Hong Kong, Macau, and Taiwan were not included. Figures S3 and S4 show maps of PM 2.5 concentrations [29]. We acquired meteorological data (839 sites) from the China Meteorological Data Network throughout the whole year of 2016. The daily meteorological data included surface air pressure (PS, hPa), air temperature (TE, °C), relative humidity (RH, %), wind velocity (WI, m s−1), sunshine duration (SS, h), and accumulated precipitation (PE, mm) (Figures S5-S11) [29]. The monthly anthropogenic emissions of VOCs, ammonia (NH 3), sulfur dioxide (SO 2), and nitrogen oxides (NOx) were monitored in 2016. They are usually considered AP of PM 2.5 and were collected from MEIC (multi-resolution emission inventory for China, http://www.meicmodel.org/). The MEIC includes emission data of the four sub-sectors of transportation, power, industry, and residential [52], and has been widely used in research on air pollution [53][54][55][56]. Figure S12 indicates that the highest anthropogenic precursor emissions are mainly distributed in EC, MYR, SC, NC, and MUYR.

GeoDetector

In this study, the q statistics of GeoDetector were used to quantitatively analyze the impacts of AP and MCs on PM 2.5 in China.
GeoDetector supports a series of statistical methods that can explore spatial differences and identify driving factors. The main idea is based on the assumption that if an independent variable (X) causes a dependent variable (Y), then the spatial distributions of the independent variable and the dependent variable should be consistent [57][58][59][60]. GeoDetector can handle both qualitative data and numerical data. Compared with traditional linear statistical methods, this is a major advantage of GeoDetector. Another unique advantage of GeoDetector is the ability to detect the interaction between two factors acting on the dependent variable. GeoDetector includes four detectors: factor detection, risk area detection, ecological detection, and interaction detection. In this study, factor detection and interaction detection were used.

The factor detector uses the q statistic to detect the influence of X (e.g., MCs and AP) on Y (e.g., the PM 2.5 concentrations). The expression is:

$$q = 1 - \frac{\sum_{h=1}^{L} N_h \sigma_h^2}{N \sigma^2} = 1 - \frac{SSW}{SST}$$

In the formula, h = 1, . . . , L indexes the strata into which X or Y is classified; N and N_h are the numbers of units in the whole region and in stratum h, respectively; σ² and σ_h² are the variances of Y over the whole region and within stratum h, respectively. SST = Nσ² and SSW = Σ_h N_h σ_h² are the total variance of the whole region and the sum of the within-strata variances, respectively. Greater values of q (0–1) indicate that the stratification explains more of the spatial variation in Y. If the classification is based on X, a higher q value indicates a stronger influence of X on Y (i.e., explaining power: 100 × q%).

Interaction detection can identify the impact of the interaction between potential driving factors. Based on that, we can assess whether the interaction between X_1 and X_2 will strengthen or weaken the explaining power on Y, or whether the influences of these factors on the dependent variable Y are independent of each other. There are five types of interactions; please refer to [59] for more information. In addition, in order to identify the positive or negative correlations between PM 2.5 concentrations and influencing factors, this study calculated their Pearson correlation coefficients at different temporal and spatial scales.
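The two detectors are straightforward to reproduce. Below is a minimal, self-contained sketch of the q statistic and the two-factor interaction detector as defined above (illustrative code written for this article, not the authors' implementation; the data and strata are synthetic):

```python
import numpy as np

def q_statistic(y, strata):
    """GeoDetector factor detector: q = 1 - SSW/SST."""
    y, strata = np.asarray(y, float), np.asarray(strata)
    sst = len(y) * y.var()                        # N * sigma^2
    ssw = sum((strata == h).sum() * y[strata == h].var()
              for h in np.unique(strata))         # sum_h N_h * sigma_h^2
    return 1.0 - ssw / sst

def interaction_q(y, strata1, strata2):
    """Interaction detector: q of the overlay (pairwise intersection) of two stratifications."""
    combined = np.array([f"{h1}|{h2}" for h1, h2 in zip(strata1, strata2)])
    return q_statistic(y, combined)

# Synthetic demo: a PM2.5-like response driven mainly by a 'temperature' stratification
rng = np.random.default_rng(0)
te_class = rng.integers(0, 5, 1000)               # e.g. five temperature strata
pe_class = rng.integers(0, 4, 1000)               # e.g. four precipitation strata
pm25 = 40 + 8 * te_class + rng.normal(0, 10, 1000)

q_te, q_pe = q_statistic(pm25, te_class), q_statistic(pm25, pe_class)
q_int = interaction_q(pm25, te_class, pe_class)
print(q_te, q_pe, q_int)
# If q_int > max(q_te, q_pe) the pair 'enhances'; q_int > q_te + q_pe is a nonlinear enhancement.
```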
Effects on PM 2.5 Concentrations at the National Scale

The influence of each driving factor on PM 2.5 concentrations was acquired by calculating the corresponding q value (the power of determinant, Figure 1a), which indicates the contribution of each impacting factor to PM 2.5 concentrations. Figure 1 shows that there were obvious seasonal and annual differences in the factors' impacts on PM 2.5. Meteorological conditions were the dominant impacting factors in PM 2.5 formation at the annual time scale. TE (q = 0.27) was the primary impacting factor, followed by PE (q = 0.22) and PS (q = 0.17). Meteorological factors were dominant driving forces in spring, such as PE (q = 0.12) and PS (q = 0.10). In summer, AP showed stronger impacts on PM 2.5 concentrations than meteorological conditions, and NOx, NH 3, and VOCs were the three dominant factors (q > 0.10). In autumn, the meteorological factors and AP showed comparable influence on PM 2.5. The dominant impacting factor in autumn was TE (q = 0.18), followed by PE (q = 0.13), NOx (q = 0.11), and NH 3 (q = 0.10). Similar to autumn, meteorological factors and AP had comparable impacts on PM 2.5 concentrations in winter. TE (q = 0.25) was the dominant factor, followed by NH 3 (q = 0.18), SS (q = 0.17), PS (q = 0.16), and VOCs (q = 0.13). This indicated that AP were among the dominant impacting factors in winter.

Figure 2 shows that the effects of AP and MCs on PM 2.5 varied significantly at regional and seasonal scales in China. In general, meteorological factors were the major driving forces in China. PE and TE were primary driving forces in southern and northern China, respectively. Figure 2 shows that meteorological factors were primary drivers of PM 2.5 formation in spring, which is similar to most regions at the annual time scale. The dominant meteorological factor in spring was PE in southern China and SS in the regions of MYR and NC. Meteorological factors and AP showed comparable impacts on PM 2.5 concentrations in summer, except for in XJ and QTP. TE was the dominant driving factor on PM 2.5 concentrations in autumn in UYR, NE, and MYR, but PE and WI played the dominant role in the regions of MUYR and SC, and of NC and EC, respectively. WI was the primary driving factor in NC, NE, EC, and UYR in winter, but PS was the major driving factor in MUYR and MUPR. However, in QTP, PS was the major impacting factor on PM 2.5 concentrations throughout the whole year.
Interactive Effects on PM 2.5

This study explored interactive effects on PM 2.5 by using the interaction detector with a total of 45 pairs of interactions. The interaction of any two factors was analyzed by comparing their combined contribution with their individual contributions to PM 2.5 concentrations. Figure 3 shows the q values of each pair of impact factors and their interaction through the whole year at the national scale. Interactions of PE ∩ TE, PS ∩ TE, PS ∩ VO, PS ∩ NO, VO ∩ NO, VO ∩ NH, and NO ∩ NH belong to bivariate enhancements, and the other interactions belong to nonlinear enhancements (Figure 3). Generally, the interaction between NH and TE (q value = 0.46) was the strongest interaction among all impacting factors. Figure S14 indicates that there were obvious seasonal disparities in the interactive influence. In spring, fall, and winter (but not summer), the interactions between meteorological factors played major roles in PM 2.5 concentrations. The interaction WI ∩ RH (q value = 0.38) had the strongest effect on PM 2.5 in spring. In autumn and winter, the interaction between SS and TE (autumn: q value = 0.54; winter: q value = 0.42) played the strongest role in PM 2.5 concentrations. However, NH ∩ RH (q value = 0.35) was the highest in summer.
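The five interaction categories of [59] referenced above reduce to simple comparisons between q(X1), q(X2), and q(X1 ∩ X2). A compact sketch (illustrative; the q value assumed for NH in the demo call is hypothetical, as it is not reported above):

```python
def interaction_type(q1: float, q2: float, q12: float) -> str:
    """Classify a two-factor interaction into GeoDetector's five categories [59]."""
    if q12 < min(q1, q2):
        return "weaken, nonlinear"
    if q12 < max(q1, q2):
        return "weaken, univariate"
    if q12 == q1 + q2:               # exact equality is rare with real data
        return "independent"
    if q12 > q1 + q2:
        return "enhance, nonlinear"
    return "enhance, bivariate"      # max(q1, q2) <= q12 <= q1 + q2

# Hypothetical illustration in the spirit of Figure 3: q(TE)=0.27, q(NH)=0.15 (assumed), q(NH ∩ TE)=0.46
print(interaction_type(0.27, 0.15, 0.46))   # 'enhance, nonlinear', since 0.46 > 0.27 + 0.15
```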
There were obvious regional disparities in all the interactions in China (Figure 4). The interactions between meteorological factors were the strongest in all regions. In the interactions between AP and meteorological factors, the interaction of AP ∩ TE played a primary role in EC, MUYR, and northern China, and the interaction of AP ∩ PE played a dominant role in MUYR, MUPR, and SC. Figures S15-S18 show the seasonal interactive effects of each pair of driving factors at the regional scale in China, indicating that there were also obvious regional and seasonal differences in all the interactions.
This was because PE in southern China was higher than in other regions [29]. The increasing PE had scavenging effects on PM 2.5 by wet deposition, and could lower the PM 2.5 concentration [66,67]. WI was the main influencing factor in EC and northern China in winter, which is similar to some reports that WI was the most important and negative impacting factor on PM concentrations [72,73]. This was because weaker East Asia winter monsoons could slow wind speeds and increased the frequency of static wind, which had made it more difficult for PM to disperse [74]. Previous studies have indicated that AP were also crucial driving factors on PM 2.5 [48,63,75]. We found that the influence of AP on PM 2.5 in winter were higher than in other seasons in XJ and northern China (MYR, UYR, NC). This might be due to the high anthropogenic emissions from winter carbon-fired heating and less surface vegetation cover in winter in northern China, which significantly increased pollutant emissions in the atmosphere [76,77]. In addition, NH 3 had important impacts on PM 2.5 among AP factors in winter at the national scale and in some individual regions (NC, SC, MUYR, and MYR). Ammonia is an important precursor, and emissions of ammonia had stronger associations with PM 2.5 concentrations than other anthropogenic precursors; this is similar to results of previous studies [78]. Ammonia participates in photochemical reactions as an atmospheric alkaline gas, which is important in the SIA (secondary inorganic aerosol) formation of compounds such as ammonium salts, sulfate, and nitrate [79][80][81][82][83]. In China, SIA was an important driving factor of PM 2.5 pollution, especially during severe smog events [80,84]. SIA accounted for 32% of PM 2.5 mass concentration in China (e.g., Beijing, Guangzhou, Shanghai, and Xi'an) during the 2013 haze pollution events [84]. There was a 5.7% reduction in the annual concentration of PM 2.5 concentrations when ammonia emissions were cut by 47% in the Beijing-Tianjin-Hebei region of China [79]. PM 2.5 concentrations are affected by complex interactions between AP and meteorological conditions. This study found that interactions between any driving factors at all time and space scales had significant enhancement effects on PM 2.5 concentrations. The leading interactive effect between AP and MCs was between AP with RH and PE at the national scale in summer. This might be due to the fact that PE and RH have scavenging and suppression effects on PM 2.5 by wet deposition in summertime. However, the primary interactive effect between AP and MCs was AP ∩ TE at the national scale in winter. This was due to increased temperature inversion under the lower winter temperature, which weakened the diffusion and dispersion of pollution. The interaction between AP had the dominant effect on PM 2.5 concentrations over most regions in summer, which indicated that accelerated photochemical reactions between AP occurred under high-temperature conditions. Interactions between AP and factors of PE and PS were important at annual and seasonal scales in southern China. This might be due to the high precipitation and surface pressure in southern China [29]. Conclusions The effects of AP and MCs on PM 2.5 in Chinese cities were systematically analyzed in this study. The findings revealed significant seasonal and regional disparities in the impacts of examined factors and how they interacted on PM 2.5 . 
The results can help us to better understanding the relative importance of the driving factors in the formation of PM 2.5 . The study indicated that local AP and meteorological factors had important impacts on PM 2.5 in China, and had obvious regional and seasonal variations. Meteorological conditions played a leading role in determining PM 2.5 concentrations at the regional and national scales throughout the whole year. At the seasonal time scale, WI was the primary factor on PM 2.5 concentrations in winter in northern China and XJ, but PE was the major driver on PM 2.5 concentrations during most seasons in southern China. However, AP had stronger impacts on PM 2.5 during winter than in other seasons in XJ and most regions of northern China. NH 3 had a stronger effect on PM 2.5 concentrations during winter than other anthropogenic precursors. Interactions between all influencing factors have enhanced effects on PM 2.5 concentrations. In addition, the interaction between MCs and AP played a leading role at the national scale throughout the whole year and in summer and winter. The results could provide a basis for the government to develop more precise air pollution control strategies. Some limitations to the study should be clarified to assist future studies. First, land use and land cover, socioeconomic conditions, elevation, and topography were not considered to assess their influence on PM 2.5 concentrations. Second, the uncertainty of MEIC emission inventory may lead to some uncertainties in the research results. When processing emissions inventory data, because the inventory only has monthly emissions data, so as to match the daily PM 2.5 concentration and daily weather monitoring data, we took the arithmetic average of the monthly inventory emissions data, which also increased the uncertainty of the analysis results. Third, the data used in this study is limited to 2016, and does not include data analysis of other years. This is because the emission inventory data that we could obtain were for 2008, 2010, 2012, 2014, and 2016, but the PM 2.5 data in 366 cities were available only from 2015 to 2017. In order to maintain consistency between the data, we selected 2016 as the research period in this study. There was no comparative analysis of inter-annual variability. Therefore, we should comprehensively consider other factors on PM 2.5 concentrations including socioeconomic, land use, terrain, and elevation in the future. In addition, inter-annual change analysis based on multi-year data needs to be added. Figure S14: The seasonal interactive q values and the original q value of each pair of factors. Figure S15: The interactions between impacting factors in spring at the Sustainability 2020, 12, 3550 9 of 13 regional scale in China. Figure S16: Interactions between impacting factors in summer at the regional scale in China. Figure S17: Interactions between impacting factors in autumn at the regional scale in China. Figure S18: Interactions between impacting factors in winter at the regional scale in China. Table S1: Effect of various factors on PM 2.5 in China in 2016. Table S2: Effect of various factors on PM 2.5 throughout the whole year at the regional scale. Table S3: Effect of various factors on PM 2.5 in spring at the regional scale. Table S4: Effect of various factors on PM 2.5 in summer at the regional scale. Table S5: Effect of various factors on PM 2.5 in autumn at the regional scale. Table S6: Effect of various factors on PM 2.5 in winter at the regional scale.
2020-04-30T09:07:57.663Z
2020-04-27T00:00:00.000
{ "year": 2020, "sha1": "a9ad3522fec19444f3456ed4ba5a14ad3b64a615", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/12/9/3550/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "859780ae8819ae116fa130177ad899a329dbb2cf", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
54126767
pes2o/s2orc
v3-fos-license
New Shape Function for the Bending Analysis of Functionally Graded Plate

The bending analysis of thick and moderately thick functionally graded square and rectangular plates, as well as plates on a Winkler–Pasternak elastic foundation, subjected to sinusoidal transverse load is presented in this paper. The plates are assumed to have an isotropic, two-constituent material distribution through the thickness, and the modulus of elasticity of the plate is assumed to vary according to a power-law distribution in terms of the volume fractions of the constituents. This paper presents the methodology of the application of the high order shear deformation theory based on the shape functions. A new shape function has been developed, and the obtained results are compared to the results obtained with 13 different shape functions presented in the literature. Also, the validity and accuracy of the developed theory were verified by comparing those results with the results obtained using the third order shear deformation theory and 3D theories. In order to determine the procedure for the analysis and the prediction of behavior of functionally graded plates, a new program code in the software package MATLAB has been developed based on the theories studied in this paper. The effects of transversal shear deformation, side-to-thickness ratio, and volume fraction distributions are studied, and appropriate conclusions are given.

Introduction

Failure and delamination at the border between two layers are the biggest and the most frequently studied problems of conventional composite laminates. Delamination of layers due to high local inter-laminar stresses causes a reduction of stiffness and a loss of structural integrity of a construction. In order to eliminate these problems, improved materials such as functionally graded materials (FGM), which are getting more and more popular, are used for innovative engineering constructions. FGM is a composite material consisting of two or more constituents with a continuous change of properties in a certain direction. In other words, these materials can also be defined as materials which possess a gradient change of properties due to material heterogeneity. A gradient property can go in one or more directions, and it can also be continuous or discontinuous from one surface to another depending on the production technique [1][2][3]. One of the most common uses of FGM materials is found in thermal barriers, one surface of which is in contact with high temperatures and is made of ceramic, which can provide adequate thermal stability, low thermal conductivity, and fine antioxidant properties. The low-temperature side of the barrier is made of metal, which is superior in terms of mechanical strength, toughness, and high thermal conductivity. Functionally graded materials, which contain metal and ceramic constituents, improve thermo-mechanical properties between layers, because of which delamination of layers should be avoided due to the continuous change between the properties of the constituents. By varying the percentage of volume fraction content of the two or more materials, FGM can be formed so that it achieves a desired gradient property in specific directions. Figure 1 shows a schematic of a continuously graded microstructure with metal-ceramic constituents [4]. Depending on the nature of the gradient, functionally graded materials may be grouped into fraction gradient type, shape gradient type, orientation gradient type and size gradient type (Figure 2) [5].
With the expansion of the FGM material application area, it was necessary to improve the fabrication methods for the mentioned materials. Various fabrication methods have been developed for the preparation of bulk FGMs and graded thin films. The processing methods are commonly classified into four groups: powder technology methods (dry powder processing, slip casting, tape casting, infiltration process or electrochemical gradation, powder injection molding, self-propagating high temperature synthesis, etc.), deposition methods (chemical vapor deposition, physical vapor deposition, electrophoretic deposition, slurry deposition, pulsed laser deposition, plasma spraying, etc.), in-situ processing methods (laser cladding, spray forming, sedimentation and solidification, centrifugal casting, etc.), and rapid prototyping processes (multiphase jet solidification, 3D-printing, laser printing, laser sintering, etc.) [6]. The basic difference between the mentioned production methods can be made according to whether the obtained materials have a stepwise or a continuous gradient.
Taking the aforementioned into consideration, developing and using 2D shear deformation plate theories, which consider the effects of the previously mentioned shear and normal strains and provide precision comparable to 3D models, represents a trend in the analysis of FGM plates. This paper presents, in detail, the methodology of the application of the HSDT theory based on shape functions. A new shape function has been developed, and the obtained results are compared to the results obtained with 13 different shape functions presented in the papers from the reference list. Also, the results have been verified through comparison with the results obtained with the TSDT and 3D theories. In order to determine a procedure for the analysis and the prediction of the behavior of FGM plates, a new program code has been developed in the software package MATLAB (MATrix LABoratory) based on the theories studied in this paper.

Finally, the ultimate goal and purpose of all the previously mentioned studies and analyses is the application of FGM in different areas of engineering and branches of industry. Although FGMs were initially used as materials for thermal barriers in space shuttles, today they are becoming widely used in the fields of medicine, dentistry, the energy and nuclear sectors, the automotive industry, the military, optoelectronics, etc.

Description of the Problem

The subjects of the analysis in this paper are an FGM plate (Figure 3a) and an FGM plate on an elastic foundation (Figure 3b). The plate (length a, width b and height h) is made of a functionally graded material consisting of two constituents, namely, metal and ceramics.
It is assumed that the mechanical properties of the FGM change in the thickness direction of the plate according to the power-law distribution (Figure 4a):

P(z) = (P_c − P_m)(z/h + 1/2)^p + P_m. (1)

This law defines the change of the mechanical properties as a function of the volume fraction of the FGM constituents in the thickness direction of the plate. In Equation (1), h represents the total thickness of the plate, and P(z) represents a material property in an arbitrary cross-section z, −h/2 < z < h/2. P_c represents the material property at the top of the plate (z = h/2, ceramic), and P_m represents the material property at the bottom of the plate (z = −h/2, metal). The index p is the exponent which defines the volume fraction of the constituents in the FGM. Practically, by varying the index p, homogeneous plates as well as FGM plates with a precisely determined gradient structure can be obtained, as presented in Figure 4b:

• when p = 0 the plate is homogeneous, made of ceramics,
• when 0 < p < ∞ the plate has a gradient structure,
• theoretically, when p = ∞ the plate becomes homogeneous again, made of metal, although the plate can be considered homogeneous even when p > 20.

Kinematic Displacement–Strain Relations and Constitutive Equation of Elasticity for FGM

According to the HSDT based on shape functions, the displacements can be presented in the following way:

u(x, y, z, t) = u_0(x, y, t) − z ∂w_0(x, y, t)/∂x + f(z) θ_x(x, y, t),
v(x, y, z, t) = v_0(x, y, t) − z ∂w_0(x, y, t)/∂y + f(z) θ_y(x, y, t), (2)
w(x, y, z, t) = w_0(x, y, t),

where u_0, v_0, w_0 are the displacement components in the middle plane of the plate, ∂w_0/∂x and ∂w_0/∂y are the rotation angles of the transverse normal in relation to the x and y axes, respectively, θ_x and θ_y are the rotations of the transverse normal due to transverse shear, and f(z) is the shape function.
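Before moving on to the shape functions, the power-law grading of Equation (1) can be made concrete with a short Python sketch that evaluates the modulus of elasticity through the thickness for several values of the index p. The alumina/aluminum values and the plate thickness are illustrative assumptions, not data taken from this paper.

```python
import numpy as np

def property_profile(z, h, P_c, P_m, p):
    """Power-law grading, Eq. (1): P(z) = (P_c - P_m) * (z/h + 1/2)**p + P_m."""
    V_c = (z / h + 0.5) ** p          # ceramic volume fraction
    return (P_c - P_m) * V_c + P_m

h = 0.1                               # plate thickness [m] (illustrative)
E_c, E_m = 380e9, 70e9                # alumina / aluminum moduli [Pa] (assumed values)
z = np.linspace(-h / 2, h / 2, 11)

for p in (0, 1, 5, 20):
    E = property_profile(z, h, E_c, E_m, p)
    # p = 0 gives a homogeneous ceramic plate; large p approaches pure metal
    print(f"p = {p:2d}: E(top) = {E[-1]/1e9:6.1f} GPa, E(bottom) = {E[0]/1e9:6.1f} GPa")
```

Running the sketch reproduces the limiting cases listed above: p = 0 yields the ceramic modulus at every z, while p = 20 leaves the bottom surface at the metal modulus.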
In the reference literature there are many shape functions, which can be polynomial, trigonometric, exponential, or hyperbolic. Some examples of the shape functions are given in Table 1.

Table 1. Examples of shape functions from the literature:

… [54]: (h/π) sin(πz/h)
SF 4, Mantari et al. [55]: sin(πz/h) e^(cos(πz/h)/2) + πz/(2h)
SF 5–6, Mantari et al. [45]: tan(mz) − zm sec²(mh/2), m = {1/(5h), π/(2h)}
SF 7, Karama et al. [56], Aydogdu [44]: z exp(−2(z/h)²), z exp(−2(z/h)²/ln α), ∀α > 0
SF 8, Mantari et al. [46]: z · 2.85^(−2(z/h)²) + 0.028z
SF 9, El Meiche et al. [47]: ξ[(h/π) sin(πz/h) − z], ξ = {1, 1/(cosh(π/2) − 1)}
SF 10, Soldatos [43]: h sinh(z/h) − z cosh(1/2)
SF 11, Akavci and Tanrikulu [49]: z sech(z²/h²) − z sech(π/4)[1 − (π/2) tanh(π/4)]
SF 12, Akavci and Tanrikulu [49]: (3π/2) h tanh(z/h) − (3π/2) z sech²(1/2)
SF 13, Mechab et al. [48]: z cos(1/2)…

This paper proposes a new shape function. The introduced shape function is an odd function of the thickness coordinate z and satisfies the zero-stress conditions for the out-of-plane shear stresses. Observing the shape functions in Table 1, one may see that the proposed function belongs to the group of simple mathematical functions. This fact makes the integration process easier and thus considerably reduces the calculation time. Since the function is analytically integrable, there is no need to switch to numerical integration, which additionally increases the precision of the obtained results. The verification of the above claims is shown in the comparative diagrams (Figure 5) of the newly introduced shape function and the shape functions given in Table 1. These shape function diagrams can be categorized into two groups of functions. In both cases it can be seen in the diagram that, for the ratio z/h = 0.5, all shape functions have extreme values, which are different (Figure 5a).

For small displacements and moderate rotations of the transverse normal in relation to the x and y axes, the normal and shear strain components are obtained from the well-known linear-elasticity relations between displacements and strains, where f′(z) = df(z)/dz is the first derivative of the shape function in the thickness direction of the plate. The elastic constitutive relations for FGM are given as:

{σ} = [C(z)]{ε},

where the coefficients of the constitutive elasticity tensor can be defined through the engineering constants:

C_11 = C_22 = E(z)/(1 − ν²), C_12 = ν E(z)/(1 − ν²), C_44 = C_55 = C_66 = E(z)/(2(1 + ν)).

Due to the gradient change of the plate structure in the direction of the z coordinate, based on (1), the modulus of elasticity can be defined as:

E(z) = (E_c − E_m)(z/h + 1/2)^p + E_m,

while Poisson's ratio ν is considered constant due to its small variation in the thickness direction of the plate, ν = const. As can be seen, the coefficients of the constitutive tensor are functionally dependent on the z coordinate, which practically means that for p ≠ 0 each plane parallel to the middle plane has different values of the constitutive tensor C_ij.

Bending of FGM Plates and FGM Plates on Elastic Foundation

It is assumed that the plate is loaded with an arbitrary transverse load q(x, y). The work done by the external load is defined as:

W = ∫_A q(x, y) w_0 dx dy, (9)

where q(x, y) = q_0 sin(πx/a) sin(πy/b) is the sinusoidal transverse load with an amplitude q_0.
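Returning to the shape functions of Table 1, a quick way to screen candidate functions is to check numerically that f′(±h/2) = 0, which is what makes the transverse shear stresses vanish on the free surfaces. The following Python sketch does this for three functions from the table; the selection of functions and the finite-difference check are ours, not part of the paper's MATLAB code.

```python
import numpy as np

h = 1.0  # normalized thickness

# A few shape functions from Table 1 (the sine function [54], Soldatos [43], Karama [56])
shape_functions = {
    "sine":     lambda z: (h / np.pi) * np.sin(np.pi * z / h),
    "soldatos": lambda z: h * np.sinh(z / h) - z * np.cosh(0.5),
    "karama":   lambda z: z * np.exp(-2.0 * (z / h) ** 2),
}

def dfdz(f, z, eps=1e-6):
    """Central finite-difference derivative of the shape function."""
    return (f(z + eps) - f(z - eps)) / (2.0 * eps)

for name, f in shape_functions.items():
    top, bot = dfdz(f, h / 2), dfdz(f, -h / 2)
    # f'(+/- h/2) should vanish so that the transverse shear strain,
    # gamma_xz = f'(z) * theta_x, is zero on the free surfaces
    print(f"{name:9s}: f'(+h/2) = {top:+.2e}, f'(-h/2) = {bot:+.2e}")
```

All three derivatives come out at the round-off level, confirming analytically known results such as f′(z) = cos(πz/h) for the sine function, which vanishes at z = ±h/2.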
The plate strain energy is defined as in Equation (11), where the force, moment and higher-order moment vectors are obtained in the developed matrix form of Equation (12). In Equation (12), by grouping the terms with the elements of the constitutive tensor, new matrices with the corresponding components can be defined, and the load vectors can then be written accordingly. By substituting the plate strain energy (11) and the work done by the external load (9) into the equation which defines the minimum total potential energy principle,

δΠ = δ(U − W) = 0, (16)

and then substituting the strain components (5) and applying the calculus of variations, the following equilibrium equations are obtained:

δu_0: N_xx,x + N_xy,y = 0,
δv_0: N_yy,y + N_xy,x = 0,
δw_0: M_xx,xx + 2M_xy,xy + M_yy,yy + q = 0,
δθ_x: P_xx,x + P_xy,y − R_x = 0,
δθ_y: P_xy,x + P_yy,y − R_y = 0, (18)

which can be further solved through analytical and numerical methods. In the case of a plate on an elastic foundation, the deformation energy of the elastic foundation should be taken into consideration in Equation (16); using the Winkler–Pasternak model it is defined as:

U_ef = (1/2) ∫_A [k_0 w_0² + k_1 ((∂w_0/∂x)² + (∂w_0/∂y)²)] dx dy. (19)

Using the previously mentioned minimum total potential energy principle, the equilibrium equations of the plate on the elastic foundation are obtained in the same way.

Analytical Solution of the Equilibrium Equations

Although analytical solution methods are limited to simple geometrical problems, boundary conditions and loads, they provide a clear understanding of the physical aspects of the problem, and their solutions are very precise. Since analytical solutions are extremely important for developing new theoretical models, primarily due to the understanding of the physical aspects of the problem, and considering that a new HSDT theory based on a new shape function has been developed in this paper, the analytical solution of the equilibrium equations for a rectangular plate is presented in the following part of the paper. For complex engineering calculations, which include solving systems of a large number of equations, it is necessary to use numerical methods which provide approximate but satisfactory results.

For a simply supported rectangular FGM plate, the boundary conditions are defined based on [57]. In order to satisfy these kinematic boundary conditions, the assumed forms of Navier's solutions are introduced:

u_0 = Σ U_mn cos(αx) sin(βy), v_0 = Σ V_mn sin(αx) cos(βy), w_0 = Σ W_mn sin(αx) sin(βy),
θ_x = Σ X_mn cos(αx) sin(βy), θ_y = Σ Y_mn sin(αx) cos(βy), α = mπ/a, β = nπ/b.

The equilibrium equations are thereby developed into a system of linear algebraic equations,

[L]{Δ} = {F}, (24)

and through multiplication of Equation (24) with [L]⁻¹ the following is obtained:

{Δ} = [L]⁻¹{F}. (25)

Equation (25) fully defines the amplitudes of the assumed displacement components. The displacement components are obtained when the displacement amplitude matrix is multiplied by the vector of trigonometric functions which depend on x and y.

Numerical Results

In order to apply the previously obtained theoretical results to the simulation of real problems, a new program code for the static analysis of FGM plates has been developed within the software package MATLAB. The material properties of the used materials are shown in Table 2 [58].

Table 2. Material properties of FGM constituents: elasticity modulus E [GPa] and Poisson's ratio ν.

Normalized values of the vertical displacement w (deflection), normal stresses σ_xx and σ_yy, shear stress τ_xy, and transverse shear stresses τ_xz and τ_yz are given using the HSDT theory based on the new shape function.
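The Navier procedure used above can be illustrated on a much simpler surrogate: for a simply supported homogeneous Kirchhoff (CPT) plate under the single-harmonic load q = q0 sin(πx/a) sin(πy/b), the 5×5 system collapses to a single equation with a closed-form deflection. The Python sketch below is only such a sanity check with assumed input values, not the paper's HSDT implementation.

```python
import numpy as np

def cpt_center_deflection(a, b, h, E, nu, q0):
    """Navier solution for a simply supported Kirchhoff plate under
    q(x, y) = q0 * sin(pi x / a) * sin(pi y / b): a single (m = n = 1) harmonic,
    w_11 = q0 / (D * pi**4 * (1/a**2 + 1/b**2)**2), maximal at the plate center."""
    D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))      # bending stiffness
    w11 = q0 / (D * np.pi ** 4 * (1.0 / a ** 2 + 1.0 / b ** 2) ** 2)
    return w11                                      # the sine terms equal 1 at the center

# Illustrative ceramic plate (assumed values, not the paper's data)
print(cpt_center_deflection(a=1.0, b=1.0, h=0.1, E=380e9, nu=0.3, q0=1e4))
```

Because the load contains exactly one harmonic, only the m = n = 1 term of the Navier series survives, which is why a single closed-form amplitude suffices here.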
Normalization of the aforementioned values has been conducted according to (26). Table 3 shows the comparative results of the normalized values of the displacement and stresses of a square plate for two different length-to-thickness ratios (a/h = 5 and a/h = 10) and for different values of the index p. Verification of the results obtained in this paper has been conducted by comparing them to the results from the reference papers for a/h = 10. Based on that, the results for a/h = 5 are provided for different values of the index p, i.e., different volume fractions of the constituents in the FGM. Using the HSDT theory with the new shape function, the obtained results are compared to the results obtained using 13 different shape functions, as well as to the results obtained using the quasi-3D theory of elasticity [59] and the TSDT theory [58]. The results based on the CPT theory are also presented [60] in order to point out certain disadvantages of that theory. Based on the comparative results of the displacement and stresses provided in this paper and in the previously mentioned theories, it can be seen that there is a match with both the TSDT theory and the quasi-3D theory of elasticity. On the other hand, it is clearly seen that there are significant differences in the results obtained by the CPT theory, especially for the stress σ_xx, which shows that the CPT theory does not provide satisfying results in the analysis of thick and moderately thick FGM plates. A comparative review of these results with the results obtained using 13 different shape functions shows that the newly given shape function provides almost identical results. However, since these results are given for the plane at a certain height z (for example, the stress σ_xx at the height h/3, etc.), a real insight into the values obtained with the new function is offered by presenting the stress distributions across the thickness of the plate, which is done through the appropriate diagrams.

Figure 6 shows the distribution of the normal stresses σ_xx and σ_yy across the thickness of the plate for different values of the index p. By analyzing the diagrams, it can be noticed that the curves representing the two stresses are identical. Also, the basic property of FGM can be noticed, namely, the shift of the neutral plane in relation to the plane z/h = 0.

Figure 7 shows the distribution of the shear stress τ_xy across the thickness of the plate for different values of the index p (Figure 7a) and for different shape functions (Figure 7b), for the fixed values a/h = 10 and a/b = 1. While analyzing the diagrams, it should be kept in mind that for p = 0 the plate is homogeneous, made of ceramics; for p = 20 the plate is practically homogeneous, made of metal; and for 0 < p < 20 the plate is made of FGM. By analyzing the diagram in Figure 7a, it can be noticed that for all values of the index p the stress τ_xy achieves its maximum value at the upper edge of the plate. The ceramic plate has the lowest maximum value; with an increase of the metal volume fraction (p = 1) the maximum stress value also increases, and the highest value is achieved when the plate is homogeneous, made of metal. Moreover, apart from affecting the maximum stress values, the variation of the index p also affects the shape of the τ_xy stress distribution curve across the thickness of the plate.
In order to conduct a comparative analysis of the results for different shape functions and to estimate the applicability of the new shape function to the given problems, Figure 7b shows the distribution of the shear stress τ_xy obtained using the newly developed shape function and the shape functions given in Table 1. It is clearly seen that all the previously mentioned shape functions give results identical to those obtained with the new shape function.

Figure 8 shows the distribution of the transverse shear stresses τ_xz and τ_yz across the thickness of the plate for different values of the index p and for different shape functions. By analyzing the transverse shear stresses in Figure 8a,c, a basic distinction between homogeneous and FGM plates can be noticed. When the plates are made of ceramics (p = 0) or metal (p = 20), both stresses achieve their maximum values in the plane at the height z/h = 0, due to the homogeneity of the material. On the other hand, when FGM plates are considered, there is an asymmetry in relation to the plane z/h = 0; therefore, for p = 1 the stresses achieve their maximum values in the plane z/h = 0.15, and for p = 5 in the plane z/h = 0.3. In contrast to the homogeneous ceramic plate, where the stress distribution curve is a parabola with the maximum value in the plane z/h = 0, plates with a larger volume fraction of metal (p = 10) also achieve the maximum stress value at z/h = 0, but the distribution curve is not a parabola. With a further increase of the metal volume fraction (p = 20), and although the plate can practically be considered homogeneous, the diagram still shows a curve which is not a parabola. Generally, due to the insignificant but still present ceramic fraction in the upper part of the plate, there is a slight deformity of the curve.
By conducting a comparative analysis of the stresses τ_xz and τ_yz for different shape functions, with the fixed values a/h = 10, a/b = 1 and p = 5, it can be seen in Figure 8b,d that, unlike the stress τ_xy, the results do not match for all the shape functions. The most significant deviation can be noticed in the results for El Meiche's and Karama's shape functions. Akavci's function also shows a slight deviation, achieving its maximum stress value at the height z/h = 0.25, while the results for all the other shape functions are almost identical, achieving the maximum stress value in the plane z/h = 0.25.

In order to understand the effects of increasing the index p, as well as the effects of thickness and geometry, Figure 9 shows the diagram of the normalized values of the displacement w for different a/h and a/b ratios and values of the index p. By analyzing Figure 9a,b, it can be noticed that the displacement values w are the highest for the metal plate (p = 20), the lowest for the ceramic plate, and somewhere in between for the FGM plate. Moreover, by varying the volume fraction of metal or ceramics, a desired bending rigidity of the plate can be achieved. In Figure 9a, it can be seen that the curves gradually become closer when a/b > 4. In contrast to that, Figure 9b shows that with an increase of the ratio a/h the curves do not become closer; namely, the difference of the displacement ratio remains constant regardless of the change of the index p. This conclusion comes from the fact that in thin plates it is less possible to vary the volume fraction of the FGM constituents in the thickness direction of the plate and, thus, the index p has less effect.

In order to determine the effect of the elastic foundation on the displacements and stresses of the FGM plate, the results for different combinations of the FGM constituents have been presented, as well as for different combinations of the Winkler (k_0) and Pasternak (k_1) coefficients of the elastic foundation. Apart from the normalization given in (26), it is necessary to apply the normalization of the coefficients k_0 and k_1 in the following form:
K_0 = k_0 a⁴/D, K_1 = k_1 a²/D,

where the bending stiffness of the plate is D = E_c h³/(12(1 − ν²)).

Tables 4 and 5 show the results of the normalized values of displacements and stresses of the square plate on elastic foundation for p = 5 and p = 10, different values of the k_0 and k_1 coefficients, and two different length-to-thickness ratios of the plate (a/h = 10 and a/h = 5). In order to determine the effect of the elastic foundation on the displacements and stresses of the plate, the values of the displacements and stresses for k_0 = 0 and k_1 = 0 are shown first, which practically matches the case of the plate without the elastic foundation. Afterwards, the values of the given coefficients are varied in order to conclude which of the two has the greater influence. Based on the results, it is concluded that the introduction of the coefficient k_0 has less influence on the change of the displacement and stress values than the introduction of the k_1 coefficient alone. By introducing the k_0 and k_1 coefficients, the bending stiffness of the plate increases, i.e., the displacement and stress values decrease, and the influence of the Winkler coefficient is smaller than the influence of the Pasternak coefficient. This phenomenon is especially noticeable in the diagrams shown later.

Table 4. Normalized values of displacement and stresses of the square plate on elastic foundation for p = 5, different values of k_0 and k_1, and the ratio a/h (a/b = 1).

Figure 10 shows the effect of the Winkler coefficient k_0 on the distribution of the normal stress σ_xx, shear stress τ_xy and transverse shear stresses τ_xz and τ_yz across the thickness of the plate on the elastic foundation.
By analyzing the diagram, it can be seen that the values of the stresses σ_xx and τ_xy equal zero for z/h = 0.15. On the other hand, the maximum values of the τ_xz and τ_yz stresses are at z/h = 0.2 when the newly proposed shape function is applied, while for Karama's shape function the maximum values of these stresses are at z/h = 0.15 and z/h = 0.25, respectively.

Figure 11 shows a comparative review of the distribution of the transverse shear stresses τ_xz and τ_yz across the thickness of the plate on elastic foundation for different shape functions. As in the case of the bending of the plate without the elastic foundation, the shape functions do not give the same results. It can be seen that for Mantari's and Akavci's shape functions the stresses achieve their maximum values in the plane z/h = 0.25, and for El Meiche's function in the plane z/h = 0.15, while for all the other shape functions, as well as the newly proposed function, the maximum values of the stresses are in the plane z/h = 0.2.

In order to get a clear insight into the effect of the Winkler and Pasternak coefficients of the elastic foundation, Figure 12 shows the diagram of the normalized values of the displacement w of the plate on the elastic foundation for different values of the index p and the coefficients k_0 and k_1. By comparing the two diagrams, it can be seen that the change of the displacement value w is larger with an increase of the coefficient k_1 than with an increase of the coefficient k_0. For example, for the FGM plate with p = 5, with an increase of the coefficient from k_0 = 0 to k_0 = 100, the value of the deflection changes by a factor of two. On the other hand, with a change of the coefficient from k_1 = 0 to k_1 = 100, the value of the deflection changes by a factor of eight.
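The relative influence of the Winkler and Pasternak coefficients described above is easy to reproduce with the simplified Kirchhoff surrogate introduced earlier, since for a single-harmonic load the foundation terms enter the Navier denominator additively. The Python sketch below uses illustrative coefficient values and is not the paper's HSDT code.

```python
import numpy as np

def cpt_foundation_deflection(a, b, h, E, nu, q0, k0, k1):
    """Center deflection of a simply supported Kirchhoff plate on a
    Winkler-Pasternak foundation under a single-harmonic sinusoidal load:
    w_11 = q0 / (D*pi^4*(1/a^2 + 1/b^2)^2 + k0 + k1*pi^2*(1/a^2 + 1/b^2))."""
    D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))
    s = 1.0 / a ** 2 + 1.0 / b ** 2
    return q0 / (D * np.pi ** 4 * s ** 2 + k0 + k1 * np.pi ** 2 * s)

a = b = 1.0; h = 0.1; E = 380e9; nu = 0.3; q0 = 1e4
D = E * h ** 3 / (12 * (1 - nu ** 2))
# Dimensionless coefficients K0 = k0*a^4/D and K1 = k1*a^2/D, as in the normalization above
for K0, K1 in [(0, 0), (100, 0), (0, 100)]:
    w = cpt_foundation_deflection(a, b, h, E, nu, q0,
                                  k0=K0 * D / a ** 4, k1=K1 * D / a ** 2)
    print(f"K0 = {K0:3d}, K1 = {K1:3d}: w_center = {w:.3e} m")
```

Even in this crude model, the Pasternak term is multiplied by π²(1/a² + 1/b²) and therefore stiffens the plate far more than an equal Winkler coefficient, consistent with the trend reported above.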
Conclusions

The results obtained in previously published papers have been a starting point for developing and applying the new shape function. They have emphasized the importance and topicality of the research on the application of functionally graded materials. A thorough and comprehensive systematization and investigation of the literature on the matter have been conducted according to the type of problem which the authors tried to solve during FGM plate analysis. Special attention and focus have been given to the different deformation theories which the authors used in their analyses. The new shape function has been presented along with a comparative review against 13 different shape functions, which were primarily developed by different authors for the analysis of composite laminates but, in this paper, have been adjusted and implemented in the appropriate relations for the analysis of FGM plates. Based on the obtained results of the static analysis of moderately thick and thick plates, it can be concluded that the newly developed shape function can be applied in the analysis of FGM plates. By analyzing the obtained results, the following can be concluded:

• The values of the vertical displacement w (deflection) and the corresponding stresses, which were obtained in this paper by using the HSDT theory based on the new shape function, match the results of the same values obtained in the reference papers by using the TSDT theory [58], the quasi-3D theory of elasticity [59] and the HSDT theories based on 13 different shape functions. In contrast to that, there are significant deviations of the results for the vertical displacement, and especially for the stresses σ_xx, from the results obtained by the CPT theory in the reference papers [60].
• The diagram of the distribution of the transverse shear stresses τ_xz and τ_yz across the thickness of the plate shows the difference in behavior between a homogeneous (ceramic or metal) plate and an FGM plate. A basic property of FGM can be clearly seen, namely, the asymmetry of the stress distribution in relation to the middle plane of the plate (z = 0). The maximum values of the stresses, depending on the volume fraction of the constituents, are shifted in relation to the plane z = 0, which represents the neutral plane in homogeneous plates.
• The highest values of the displacement w are obtained for a metal plate, the lowest for a ceramic plate, and for an FGM plate the values are somewhere in between, depending on the volume fraction of the constituents. Based on that, it can be concluded that by varying the volume fractions of metal and ceramic, a desired bending rigidity of the plate can be achieved.
• A comparative analysis of the change of the transverse shear stresses τ_xz and τ_yz across the thickness of the plate shows that, unlike the stress τ_xy, their values do not match for all the shape functions.

Funding: This research received no external funding.
Predicting the Second Caustic Crossing in Binary Microlensing Events

We fit binary lens models to the data covering the initial part of real microlensing events in an attempt to predict the time of the second caustic crossing. We use approximations during the initial search through the parameter space for light curves that roughly match the observed ones. Exact methods for calculating the lens magnification of an extended source are used when we refine our best initial models. Our calculations show that the reliable prediction of the second crossing can only be made very late, when the light curve has risen appreciably after the minimum between the two caustic crossings. The best observational strategy is therefore to sample as frequently as possible once the light curve starts to rise after the minimum.

Second, the caustic crossing induces very high magnifications, and therefore crossings are useful for intense photometric and spectroscopic follow-ups; such observations can be used to study stellar atmospheres, and to probe the age and metallicity of main sequence stars in the Galactic bulge (e.g. Albrow et al. 1999a; Lennon et al. 1996, 1997; Sahu & Sahu 1998; Minniti et al. 1998; Heyrovsky, Sasselov & Loeb 2000). Thirdly, the caustic crossings always come in pairs, so once we observe the first crossing, if we can predict the second caustic crossing, then we can time our observations more accurately. A question naturally arises: … year, perhaps 5% of these will be caustic-crossing binary microlensing events. The primary motivation of this paper is to address this question.

The layout of the paper is as follows. In section 2 we first give the lens equation and outline our numerical methods. In section 3 we describe the algorithm for searching the binary lens parameter space. And in section 4 we apply our methods to two real-time binary events discovered by the OGLE II collaboration. In section 5 we discuss several issues in fitting binary lenses.

… circle according to the source profile (see Dominik 1998). The details of our implementation can be found in Mao & Loeb (2001).

In the numerical experiments we use only the data representing the early parts of the light curves. We also change the amount of data, including observations made on subsequent nights, and check how it influences our predictions. We assume that the data already acquired show the characteristics of a caustic crossing event, i.e.
a strong increase of brightness followed by a slower decline resembling the beginning of the typical "U-shaped" light curve. We also assume that the inter-caustic minimum of brightness is already covered by the data. If this is the case, one can estimate the total brightness (energy flux) at three characteristic instants of time: long before the event ("base flux" F_0), shortly before the first caustic crossing (F_1) and at the inter-caustic minimum (F_12), where A_1 and A_12 are the lens magnifications corresponding to F_1 and F_12. Since the source contributes less than 100% of the flux (f ≤ 1), the following conditions apply: … One can also obtain the following inequalities: … According to this approach the lens magnification has a generic form near the caustic, and the observed flux can be expressed accordingly.

Fitting the "square root" formula

For comparison we use a method which is only applicable to the part of the data representing the flux increase toward the caustic. For a point source close to the crossing one can use a generic form of the light curve given by the formula:

F(t) = F_2 + K / sqrt(t_c − t), (10)

There are three unknown parameters: the flux measured shortly after the crossing F_2, a constant K, which is proportional to K of eq. (7), …

REFERENCES

Afonso C., Alard C., Albert J.N. et al., 2000, ApJ, 532, 340
Alcock C. et al., 2000a, ApJ, 541, 270
Alcock C. et al., 2000b, ApJ, 541, 734
Alcock C. et al., 2000c, ApJ, 542, 257
Alcock C. et al., 2000d, ApJ, 542, 281
Alcock C. et al., 2000e, astro-ph/0011506
Albrow M.D. et al., 1999a, ApJ, 522, 1011
Albrow M.D. et al., 1999b, …

In the upper row we show the results based on the data for JD < 2451734, six nights before the actual second caustic crossing. The left panel is based on our best fit and shows the source plane with the caustic pattern, the source path corresponding to the used data and the projected position of the binary (stars). The middle panel shows the corresponding theoretical light curve (solid line) and the data (error bars). In the right panel we show the predicted time of the second caustic crossing for the best fit (big dot; in this case out of the range) and other acceptable models (small dots). Similar results based on the data for JD < 2451738 and JD < 2451739 are shown in the middle and lower rows, respectively.

Figure 3. Simulations of fits using the approximate formula (eq. 10) to predict the time of the second caustic crossing. The upper panel corresponds to simulations neglecting the observational errors. In the simulations presented in the lower panel, a Gaussian scatter in the measured stellar magnitudes, m = 0.04, is assumed. Each curve shows the distribution of the predicted time of crossing based on observations made in a limited span of time during the source brightening. The predictions shift systematically from the left ("early") to the right ("late") if done later. On average the predictions give a crossing time that is too early, so they are "safe". (See text for details.)
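As a rough illustration of how the square-root rise of eq. (10), as reconstructed above, can be fitted in practice, the Python sketch below fits F(t) = F_2 + K/sqrt(t_c − t) to synthetic pre-crossing photometry with SciPy. The data, noise level, and starting values are invented for the example; this is not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def sqrt_rise(t, F2, K, tc):
    """Generic point-source flux near a fold caustic, eq. (10):
    F(t) = F2 + K / sqrt(tc - t), valid only for t < tc."""
    return F2 + K / np.sqrt(tc - t)

rng = np.random.default_rng(1)
true = dict(F2=1.0, K=0.5, tc=10.0)

# Synthetic photometry on the rising branch before the second crossing
t = np.linspace(0.0, 9.0, 40)
F = sqrt_rise(t, **true) + rng.normal(0.0, 0.02, t.size)

# Bounds keep tc above the last observed epoch so the model stays finite
popt, pcov = curve_fit(sqrt_rise, t, F, p0=(0.9, 0.4, 9.5),
                       bounds=([0.0, 0.0, 9.01], [2.0, 2.0, 20.0]))
print("fitted F2, K, tc:", popt)
print("predicted crossing time tc = %.3f +/- %.3f" % (popt[2], np.sqrt(pcov[2, 2])))
```

The fitted t_c and its formal uncertainty give the kind of crossing-time prediction whose systematic behavior is summarized in the Figure 3 caption above.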
Cooperative regulation of coupled oncoprotein synthesis and stability in triple-negative breast cancer by EGFR and CDK12/13

Significance

Triple-negative breast cancer (TNBC) is an aggressive and lethal subtype of the disease. For almost two decades, the epidermal growth factor receptor (EGFR) has been speculated to play an important role in disease progression, but clinical trials of EGFR inhibitors have been disappointing, suggesting that these tumors may possess mechanisms of intrinsic resistance. Here, we identify that this resistance is driven by cyclin-dependent kinases 12 and 13 (CDK12/13) and that, as such, combination therapies targeting EGFR and CDK12/13 exhibit potent and synergistic activity in TNBC models. This combination therapy functions through a surprising mechanism involving disrupted synthesis and stability of driver oncoproteins. Together, these findings expand our understanding of pathophysiological cell signaling in TNBC and illuminate a promising therapeutic approach.

Triple-negative breast cancer (TNBC) is an aggressive subtype of the disease that constitutes 15 to 20% of all breast cancers. TNBCs are clinically defined by their lack of expression of the three main targetable receptors in breast cancer: the estrogen receptor (ER), the progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). The genetic and molecular heterogeneity within this pooled disease subtype has made patient stratification and targeted treatment particularly challenging (1-4). While attempting to identify oncogenic drivers in TNBC, immunohistochemical and large-scale genomic studies have suggested that EGFR signaling may be frequently activated and associated with poor prognosis (5-8). These findings have long positioned EGFR as an intriguing target in TNBC. Efforts to target EGFR in unselected TNBC patients have, however, yielded low response rates (9-15). This is indicative of possible intrinsic resistance, whose underlying mechanisms are a major impediment to more widespread use of EGFR inhibitors in TNBC.

Broad dysregulation of gene expression is one of the hallmarks of cancer, including TNBC, pointing to the potential utility of targeted therapeutics that alter gene regulation (16, 17). Indeed, the development of specific inhibitors targeting multiple key players in transcriptional regulation has provided opportunities for therapeutic interventions (18-20). Cyclin-dependent kinases (CDK) 12 and 13 are well known for regulating transcriptional and posttranscriptional processes, and CDK12 has also recently been shown to play a role in the regulation of cap-dependent translation (21-27). THZ531, a selective inhibitor of CDK12/13, has been reported to suppress the expression of genes that support malignant progression and to induce apoptosis in cancer cell line models (28, 29). This work, and our broadening understanding of CDK12/13's regulation of DNA damage response pathway genes, has driven the recent development of these kinases as both biomarkers and therapeutic targets (30-40). However, the functional interactions between CDK12/13 and most major oncogenic signaling pathways have remained largely unexplored.
Motivated by the hypothesis that CDK12/13 may functionally interact with major oncogenic signaling pathways in TNBC, we performed a candidate drug screen to identify synergistic drug combinations between THZ531 and inhibitors targeting possible oncogenic disease drivers. This work led to the unexpected finding that intrinsic resistance to EGFR inhibition in TNBC, a long-standing and unexplained observation, is mediated by CDK12/13. Studies into the mechanism underlying the profound synergy between EGFR and CDK12/13 inhibitors revealed that the stability of driver oncoproteins in TNBC is subject to translation-coupled regulation by these kinases, thereby nominating a mechanistically distinct approach for targeting oncogenic dependencies in this important disease subtype.

Results

CDK12/13 Inhibition Sensitizes TNBC Cells to EGFR Inhibition. As there is no known single, targetable oncogenic driver in TNBC, we designed a panel of inhibitors targeting an array of key molecular pathways that are frequently implicated in cancer cell proliferation, survival, differentiation, and apoptosis. We tested two TNBC cell lines with this panel of inhibitors in the presence versus absence of a low, sublethal dose of THZ531 (SI Appendix, Fig. S1A). Both TNBC lines were markedly sensitized to the EGFR inhibitors gefitinib and lapatinib, as reflected by 10- to 100-fold reductions in GI50 values in the presence of THZ531 (Fig. 1A). Further studies showed consistent THZ531-mediated sensitization to EGFR inhibitors across each member of a panel of eight diverse TNBC cell lines, decreasing their GI50 values to the submicromolar range in each case (Fig. 1 B and C). The sensitization effect was specific to TNBC cell lines and was not observed in luminal breast cancer cell lines (BT474 and SK-BR-3) or the immortalized mammary epithelial line MCF10A (Fig. 1C). Long-term combined EGFR and CDK12/13 inhibition suppressed colony formation and cell growth in multiple TNBC cell lines (Fig. 1D). Using an established analytic tool, SynergyFinder 2.0, with the Loewe additivity model and the Bliss independence model, additional quantitative analyses of our drug combination screening data confirmed synergy between EGFR inhibition and THZ531 in multiple TNBC cell lines (SI Appendix, Fig. S1B) (41-43). To confirm that the effects of THZ531 were exerted through CDK12/13 inhibition, we examined a stereoisomer (THZ531R) and a derivative of THZ531 (THZ532), each of which spares CDK12 and CDK13 (28); neither compound synergized with gefitinib in cell viability studies (SI Appendix, Fig. S1C). Additionally, we observed that the sensitizing effect of THZ531 was lost in cells expressing a mutant version of CDK12 (CDK12-AS) (44, 45), generated by endogenous CRISPR-mediated gene editing, that is not inhibited by the drug (SI Appendix, Fig. S1D). (We note that we were unable to isolate TNBC cells harboring a CDK13-AS mutation.) The loss of CDK12 or CDK13 impeded both the toxicity of THZ531 alone and THZ531-induced sensitization to gefitinib (SI Appendix, Fig. S1E), suggesting that inhibition of both CDK12 and CDK13 is required for the observed synergistic effect. Collectively, these findings demonstrate that EGFR is an oncogenic driver in TNBC and that intrinsic resistance to EGFR inhibitors in TNBC can be mitigated by CDK12/13 inhibition.
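For readers unfamiliar with the synergy scoring mentioned above, the short Python sketch below shows the core of the Bliss independence calculation: two independent drugs with fractional inhibitions E_A and E_B are expected to yield E_A + E_B − E_A·E_B, and an observed combination effect above that expectation is scored as synergy. The numbers are invented for illustration and are not measurements from this study.

```python
def bliss_excess(e_a: float, e_b: float, e_ab: float) -> float:
    """Bliss independence: the expected combined inhibition of two independent
    drugs is E_A + E_B - E_A * E_B (inhibitions as fractions in [0, 1]).
    A positive excess of the observed combination effect over this expectation
    indicates synergy; a negative excess indicates antagonism."""
    expected = e_a + e_b - e_a * e_b
    return e_ab - expected

# Hypothetical single-agent and combination inhibitions at one dose pair
print(bliss_excess(e_a=0.20, e_b=0.30, e_ab=0.75))  # 0.75 - 0.44 = 0.31 -> synergy
```

Tools such as SynergyFinder apply this comparison (and the Loewe dose-additivity counterpart) across full dose-response matrices rather than a single dose pair.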
Concurrent EGFR and CDK12/13 Inhibition Decreases the Levels of Key Oncogenic Proteins in TNBC Cells. The global involvement of CDK12/13 in transcription elongation, mRNA splicing, and intronic polyadenylation (24, 26, 44-46) prompted us to examine the genes whose expression was selectively affected by the drug combination. Using an unbiased transcriptomic approach, we performed RNA-seq analyses in two TNBC cell lines treated with vehicle control, the single agents gefitinib and THZ531, and the combination. Consistent with CDK12/13's role in transcriptional regulation, THZ531 treatment resulted in the differential expression of thousands of genes. However, relatively few genes were further differentially expressed in the combined gefitinib plus THZ531 condition (Fig. 2A, SI Appendix, Fig. S2A, and Dataset S1 A and B). This result led us to hypothesize that the synergistic activity of the combination therapy may be due, at least in part, to nontranscriptional mechanisms. To examine this hypothesis, we evaluated the levels of key oncogenic proteins in TNBC cells treated with vehicle control, the single agents, and the combination. In three TNBC cell lines, combined treatment with gefitinib and THZ531 was accompanied by markedly reduced levels of MYC and MCL-1 proteins (Fig. 2B). These proteins are notable, as extensive studies have documented their roles as driver oncoproteins in TNBC (47-60).

MYC Protein Levels Are Suppressed through both Decreased Protein Synthesis and Increased Ubiquitin-Proteasome-Dependent Protein Degradation in EGFR- and CDK12/13-Inhibited Cells. Given the well-established role of MYC as a driver oncoprotein in TNBC (47-57), we sought to understand the basis for its loss following coinhibition of EGFR and CDK12/13. Specifically, we surveyed each step of MYC biogenesis: transcription, translation, and degradation. Direct analysis of MYC mRNA transcription using quantitative real-time PCR showed increased mRNA expression in gefitinib plus THZ531 treated cells (SI Appendix, Fig. S3A), a surprising result given the loss of MYC protein with the same combination. Although previous studies suggested that MYC mRNA levels may be acutely suppressed with CDK12/13 inhibition (28, 37), our data reveal that on the timescale of therapeutic effects, MYC transcript levels are increased and thus cannot account for the observed reductions in MYC protein levels.

We next examined the effects of gefitinib and THZ531 on mRNA translation using sucrose density gradient polysome profiling. Polysome profiles showed reduced levels of heavy polysomes in cells treated with THZ531 alone and with the drug combination, consistent with global suppression of mRNA translation in the drug-treated cells (Fig. 3A, boxed polysome fractions). To determine whether the global changes in polysome profiles included reductions in MYC mRNA translation, [35S]methionine labeling followed by MYC immunoprecipitation was performed in cells treated with gefitinib + THZ531. As observed in the phosphorimage and normalized quantification, [35S]methionine incorporation into nascent MYC protein was considerably decreased in the combination-treated versus the vehicle-treated samples (Fig. 3B), providing direct evidence of MYC translational suppression in the presence of combined gefitinib and THZ531.
To examine MYC stability under the combination treatment protocol, we performed a cycloheximide chase experiment. MYC protein stability was unaffected by gefitinib but slightly decreased with THZ531 alone and to a greater extent with the combination treatment (Fig. 3C and SI Appendix, Fig. S3B). We further investigated the canonical pathway of MYC protein degradation by the ubiquitin-proteasome system. Following MYC protein immunoprecipitation, we observed a substantial increase in MYC ubiquitination in cells treated with gefitinib plus THZ531 (Fig. 3D and SI Appendix, Fig. S3C). Consistent with these data, proteasome inhibition by bortezomib rescued the decline in MYC protein levels seen in the drug combination condition, indicating proteasome-dependent degradation of MYC (Fig. 3E). Together, these data demonstrate that the cumulative loss of MYC protein following combined EGFR and CDK12/13 inhibition results from suppressed MYC translation and increased MYC protein degradation.

EGFR and CDK12/13 Inhibition Synergistically Decrease MYC Protein Stability through Their Regulation of 4E-BP1 Phosphorylation. It was recently reported that CDK12 acts as a positive regulator of cap-dependent translation through direct phosphorylation of the mRNA 5′ cap-binding repressor, 4E-BP1 (25). This led us to consider whether the suppression of MYC protein synthesis following combined EGFR and CDK12/13 inhibition occurs via the dephosphorylation of 4E-BP1. In line with this hypothesis, we observed dephosphorylation of the four well-established 4E-BP1 phosphosites (T37, T46, S65, and T70) with the drug combination (Fig. 4A). This indicates that combined CDK12/13 and EGFR inhibition prevented phosphorylation of 4E-BP1, thus retaining cap binding on mRNA and suppressing cap-dependent mRNA translation, consistent with the polysome profile data illustrated in Fig. 3A.

Work in model systems has demonstrated that alterations in protein synthesis rates can affect the stability of nascent polypeptides (61-64). To understand whether 4E-BP1-dependent suppression of cap-dependent translation drives the reduction in MYC stability, we used a 3′ UTR-targeted shRNA to suppress the expression of endogenous 4E-BP1, and simultaneously expressed a dominant negative, nonphosphorylatable mutant of 4E-BP1 (T37A, T46A, S65A, and T70A) under doxycycline-responsive control (SI Appendix, Fig. S4A). In this experimental system, MYC half-life was substantially suppressed, indicating that suppression of the MYC synthesis rate is alone sufficient to destabilize the protein (Fig. 4B). Further, we observed that the nonphosphorylatable mutant 4E-BP1 blocked the ability of gefitinib + THZ531 to further destabilize MYC, suggesting that the drug combination destabilizes MYC through its effects on 4E-BP1 phosphorylation (Fig. 4 B and C).

Beyond its effects on MYC, the overall cytotoxic effect of the combination therapy was also dependent on 4E-BP1, as replacement of endogenous 4E-BP1 with the nonphosphorylatable mutant blocked the cellular response to the drug combination (Fig. 4D). Consistent with this finding, TNBC cells cultured chronically in media containing gefitinib and THZ531 until they developed resistance were insensitive to drug-induced suppression of 4E-BP1 phosphorylation as well as of MYC and MCL-1 levels (Fig. 4E and SI Appendix, Fig. S4 B and C). Consequently, an expected and pronounced dependence on MYC was observed in these TNBC cells (SI Appendix, Fig. S4D).
Together, these findings demonstrate that combined EGFR and CDK12/13 inhibition leads to MYC destabilization and cell death through the cooperative regulation of 4E-BP1 activity (Fig. 4F).

CNOT1 Is Required for Combination Therapy-Induced MYC Translational Suppression, MYC Degradation, and Cell Death. To gain further insight into the mechanisms underlying the biological activity of the combination therapy, we performed genome-wide CRISPR/Cas9-based loss-of-function screens in cells treated with vehicle control or gefitinib + THZ531 (Fig. 5A). We focused our analysis on genes whose knockouts were enriched in the combination-treated arm, as genes scoring in this group can be interpreted as being required for the full activity of the combination therapy. Analysis of the screen revealed that knockouts of multiple CNOT family genes were enriched in cells treated with gefitinib + THZ531 (Fig. 5B and Datasets S2 and S3). The CCR4-NOT complex, which is comprised of multiple CNOT family proteins, is reported to function in posttranscriptional mRNA deadenylation, translational quality control, and protein ubiquitylation (65-72). Among the CNOT family genes, CNOT1, a central scaffolding component of the CCR4-NOT complex, was the top-scoring gene in our screen. CNOT1 knockout led to a nearly complete rescue of the cooperativity between EGFR and CDK12/13 inhibitors as well as of the overall toxicity of the drug combination (Fig. 5 C and D). CNOT1 loss also rendered cells insensitive to drug-induced loss of both 4E-BP1 phosphorylation and the consequent loss of MYC and MCL-1 proteins (Fig. 5E). MYC loss in CNOT1 knockdown cells expectedly hindered colony formation, consistent with the notion that these cells maintain MYC dependence (SI Appendix, Fig. S4E). Further, CNOT1 protein expression was lost in cells with naturally evolved resistance to the combination therapy (Fig. 5F), suggesting that it may be responsible for the resistance of these cells to drug-induced 4E-BP1 dephosphorylation and death.

Discussion

The finding over a decade ago of EGFR hyperactivation in TNBC and its correlation with aggressive disease, chemoresistance, and poor prognosis positioned EGFR as a prime therapeutic target in this disease subset. However, the subsequent observation of poor clinical responses to EGFR inhibitors in TNBC patients dampened enthusiasm for this therapeutic target and raised a fundamental question: is EGFR a driver of TNBC pathogenesis whose importance is obscured by mechanisms of intrinsic resistance, or is it simply a bystander signaling event (14)? Here, we resolve this question, demonstrating that the inhibition of CDK12/13 reveals an exquisite dependence of diverse TNBC models on EGFR signaling. Further, these studies reveal that EGFR and CDK12/13 cooperate to drive TNBC not through transcriptional regulation, but rather by promoting the synthesis and associated stabilization of key driver oncoproteins, including MYC (47-57). As such, the coupled nature of protein synthesis and stability, which has been well explored in model systems (61-64), is shown here to underlie the therapeutic activity of a promising anticancer strategy.
Several key open questions remain. First, while the drug combination under study clearly functions through the modulation of 4E-BP1-regulated, coupled protein synthesis and stability, the relative contributions of decreased oncoprotein synthesis versus decreased stability to the observed toxicity have not been clarified. Further, the extent to which EGFR- and/or CDK12/13-regulated transcriptional events or proteasome modulation may template the observed mechanism of action has not been resolved. Second, a number of reports have described cooperativity between EGFR blockade and inhibition of receptor tyrosine kinase-PI3K-mTOR signaling in TNBC (73–79). A recent report also demonstrated that the effects of EGFR inhibition can be potentiated through blockade of Elongator complex-mediated MCL-1 translation (15). It remains to be determined whether these processes regulate, or are regulated by, CDK12/13. Third, although this study identifies MYC and MCL-1 protein loss as likely key events downstream of combined EGFR and CDK12/13 inhibition, unbiased proteomic approaches may reveal additional TNBC driver oncoproteins that are similarly affected. Fourth, the precise mechanisms by which the CCR4-NOT complex impacts EGFR- and CDK12/13-regulated 4E-BP1 activity and downstream oncoprotein stability remain to be defined. The CCR4-NOT complex has been shown to regulate mRNA metabolism directly through miRNA-mediated deadenylation of mRNAs, and to regulate translation by interacting with translational regulators such as eIF4E and DDX6 and blocking the decapping machinery (65–71). Further, it also functions in the ubiquitination of nascent, translationally arrested polypeptides and the maintenance of 26S proteasome integrity (66, 72), suggesting that its regulatory roles in the phenomena under study here may be multifactorial, including at the transcriptional level. Finally, the full therapeutic potential of combined EGFR and CDK12/13 inhibition has not been evaluated in preclinical animal models because THZ531 is not amenable to in vivo administration. Interestingly, our findings suggest that as-yet undefined criteria must be met in order to achieve sensitization to EGFR blockade by CDK12/13 inhibition, as in our hands, neither dual knockdown of CDK12/13 nor an alternative CDK12/13 inhibitor, SR-4835, phenocopied the effects of THZ531 (38), despite clear evidence that THZ531 functions in a CDK12/13-dependent manner (SI Appendix, Fig. S1 C-E).
The inability of CDK12/13 knockdown to phenocopy THZ531 may be explained by the fact that CDK12/13 function as components of larger complexes, and loss of the proteins may destabilize or alter the makeup of these complexes in ways that are not phenocopied by kinase inhibition, a result observed in other contexts by our group and others (80, 81). Additionally, kinase inhibitors act immediately, whereas genetic knockdowns take effect over longer periods of time, which potentially allows for compensatory effects. The fact that EGFR inhibitor sensitization by THZ531 is not phenocopied by SR-4835 may be attributable to the fact that the latter compound is a noncovalent, ATP-competitive inhibitor that is mechanistically distinct from the covalent, allosteric THZ531. SR-4835 is also likely to exhibit a different spectrum of off-target effects than THZ531. Ongoing and future studies are expected to provide mechanistic clarity to explain these observations and to precisely define the criteria that must be met for an in vivo bioavailable CDK12/13 inhibitor to potentiate the activity of EGFR blockade.

In summary, by revealing a long-debated EGFR dependence in TNBC, we have identified a therapeutic approach that functions through an unexpected mechanism of action and holds promising translational potential for the treatment of this difficult-to-treat disease subtype.

Materials and Methods

Cell Lines, Reagents, and Inhibitors. BT20, BT474, BT549, CAL51, HCC1143, HCC1806, HeLa, MCF10A, MDA-MB-231, MDA-MB-468, SK-BR-3, and SUM149PT cell lines were purchased from the Duke University Cell Culture Facility or the American Type Culture Collection. HeLa-CDK12 AS cells were kindly provided by Dr. Arnold Greenleaf (Duke University). All cell lines were authenticated using short tandem repeat profiling by the Duke University DNA Analysis Facility and tested negative for mycoplasma contamination using the MycoAlert™ PLUS Mycoplasma Detection Kit (Lonza). All cell lines were cultured at 37 °C in 5% CO2. See SI Appendix, Table S1 for specific culture media.

Drugs were purchased from SelleckChem or Apexbio Technology. THZ531R and THZ532 were generously gifted by Nathanael Gray (Harvard University, Dana-Farber Cancer Institute).
Evolving a Drug-Resistant CAL51 Cell Line. To evolve resistance to the gefitinib + THZ531 combination in vitro, CAL51 cells were exposed to the combined drugs at increasing concentrations. Cells were first drugged at a dose approximately equal to their GI75 value (the concentration producing 25% of maximal inhibition of cell viability). As CAL51 cells were insensitive to gefitinib, an arbitrary starting dose of 500 nM gefitinib was selected, while 50 nM of THZ531 was used. The growth rate was monitored by cell counts with passaging every 3 to 5 d. Once the growth rate had stabilized, the concentration of each drug was increased until the maximal preset synergistic dose of 1 μM gefitinib and 200 nM THZ531 was reached, yielding CAL51-R_GT (CAL51, resistant to gefitinib + THZ531). A paired vehicle control was cultured with DMSO-containing media in parallel (CAL51-parental). Resistant cells were obtained over 8 wk with gradual dose increments.

GI50 and Sensitization Assay. Cells were seeded in 96-well plates at a density of 3,000 to 5,000 cells per well and treated with a 10-fold serial dilution of the indicated drug. The calculated drug dilution series yielded final drug concentrations starting with vehicle (DMSO) at 0, followed by 0.000002, 0.00002, 0.0002, 0.002, 0.02, 0.2, and 2 µM. The CellTiter-Glo luminescent viability assay (Promega) was used to measure cell viability after 72 h of drug incubation. Luminescence from each well of each plate was measured using a Tecan plate reader (Infinite M1000 PRO). Each treatment condition was performed in triplicate per plate, and the presented data represent three technical replicates. Relative viability was calculated by normalizing raw luminescence values to vehicle-treated wells. GI50 values were defined as the dose at which cell viability equates to 50% of DMSO-treated viability and were determined by fitting each individual experiment to a four-parameter logistic drug-response curve using GraphPad Prism 9 software.

For two-drug combinations, the concentration of a second background drug was kept constant across all wells. Sensitization scores were calculated as the log10-transformed fold change of GI50 values obtained with vehicle versus with the second background drug; thus, sensitization scores >0 indicate increased sensitivity to the first, serially diluted drug. GI50 assays were first performed singly to obtain dose-response curves for THZ531 with each cell line. Background doses for THZ531 were then chosen based on the curves, at doses yielding no less than 80% viability, to ensure adequate cellular representation of the response to the first serially diluted drug.

Loewe and Bliss Synergy Score Calculation. To quantitatively assess synergy, GI50 assays were first performed for each inhibitor (e.g., gefitinib or erlotinib) with a range of four or more fixed concentrations of the second background drug (e.g., THZ531 at 0, 50, 100, and 200 nM). Relative cell viability was calculated as described earlier. Data were tabulated according to the SynergyFinder 2.0 User Documentation and uploaded to the web application for analysis (41). Four-parameter logistic regression (LL4) was selected as the curve-fitting algorithm, and outlier detection was turned on (82). The Loewe and Bliss methods (42, 43) were selected, separately, for synergy calculation with the 'Correction' option switched on to eliminate detected outliers and apply a baseline correction to the single drug-dose responses.
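The study itself performed the four-parameter logistic fitting in GraphPad Prism 9; purely as an illustration of the quantities described above, the sketch below extracts a GI50 from a four-parameter logistic fit of relative viabilities and derives a sensitization score as the log10 fold change of GI50 with versus without the background drug. The function names and viability values are our own, hypothetical constructs.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(log_dose, top, bottom, log_gi50, hill):
    """4PL dose-response curve on log10(dose); decreases with dose.
    With top ~ 1 and bottom ~ 0, the inflection point approximates the
    dose giving 50% of control viability (the GI50 as defined above)."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_dose - log_gi50) * hill))

def fit_gi50(doses_uM, rel_viability):
    """Fit a 4PL curve and return the GI50 (µM). Zero-dose (vehicle)
    wells are excluded because log10(0) is undefined."""
    doses = np.asarray(doses_uM, float)
    viab = np.asarray(rel_viability, float)
    mask = doses > 0
    popt, _ = curve_fit(four_param_logistic, np.log10(doses[mask]), viab[mask],
                        p0=[1.0, 0.0, np.log10(np.median(doses[mask])), 1.0],
                        maxfev=10000)
    return 10 ** popt[2]

def sensitization_score(gi50_vehicle, gi50_with_background):
    """log10 fold change of GI50 (vehicle over background drug);
    scores > 0 indicate increased sensitivity to the diluted drug."""
    return np.log10(gi50_vehicle / gi50_with_background)

# Hypothetical relative viabilities (fraction of DMSO control):
doses = [0, 2e-6, 2e-5, 2e-4, 2e-3, 0.02, 0.2, 2.0]          # µM
viab_alone = [1.0, 1.0, 0.99, 0.97, 0.90, 0.75, 0.55, 0.40]
viab_bg = [1.0, 0.98, 0.95, 0.85, 0.60, 0.35, 0.15, 0.05]    # + background drug
score = sensitization_score(fit_gi50(doses, viab_alone), fit_gi50(doses, viab_bg))
print(f"Sensitization score: {score:.2f}")
```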
Clonogenic Growth Assay. To measure long-term effects of the inhibitors and their combination on cell growth, cells were seeded at 500 cells/well in 12-well tissue culture plates or 1,000 cells/well in six-well tissue culture plates in complete media. For cells expressing shRNA(s) or CRISPR construct(s), cells were seeded at least 3 d posttransduction. Twenty-four h after seeding, media were aspirated, and drugs were added in fresh media to each well. Media and drugs were refreshed every 5 d, and cells were cultured for 10 to 15 d. Drug media were then removed, and plates were fixed and stained with 0.5% w/v crystal violet in 80% v/v methanol solution for 20 min at room temperature. Plates were rinsed with distilled water and scanned.

Time-to-Progression Assay. To evaluate the relative ability of treatments to delay the reemergence of logarithmic cell growth in vitro, cells were plated in triplicate wells in six-well plates at 1E5 cells per well in normal growth media. After 24 h, the growth media were replaced with the indicated treatment. At the time points indicated, the cells were lifted with 0.25% trypsin (Life Technologies) and counted using a Z2 Coulter Particle Count and Size Analyzer (Beckman Coulter, Pasadena, CA). For each replicate in each treatment condition, all cells were centrifuged at 1,200 rpm for 5 min, supernatants were removed, cell pellets were resuspended in fresh media, and then up to 1E5 cells were replated in a well with fresh treatment. This procedure was repeated every 5 to 7 d for about 8 to 10 wk, depending on the kinetics of resistance and cell growth. Weekly growth rates (μ) were calculated from the number of cells plated the previous week (N0) and the number counted the current week (N) according to the formula ln N = ln N0 + μt, where t is the elapsed time. These growth rates were then used to estimate the total cell number.

Immunoprecipitation. Cells were seeded in 15-cm plates and treated with DMSO or the indicated drugs for 18 h, to yield at least 1 mg of total protein for immunoprecipitation. All subsequent steps were performed on ice. At the time of harvest, cells were washed with PBS, pelleted (3,000 rpm, 4 °C, 5 min), resuspended, and incubated on a rotator for 1 h at 4 °C in IP buffer (150 mM NaCl, 0.5% NP-40, 20 mM EDTA, 1 mM dithiothreitol (DTT), and 40 mM Tris-HCl, pH 7.4) supplemented with protease/phosphatase inhibitor cocktail (ThermoFisher). After lysis was complete, lysates were clarified at 13,000 rpm, 4 °C, 20 min. Protein was quantified using the Bradford method and normalized to the lowest protein amount among the samples. Input controls were also saved, and samples were prepared by combining with 4× Laemmli Sample Buffer (Bio-Rad) accordingly. 2 to 4 µg of primary antibody (c-MYC, Ab#32072) or the appropriate isotype control was added to the clarified cell extracts and incubated overnight on a rotator at 4 °C. 40 µL/sample of recombinant Protein-G-Sepharose-4B beads (ThermoFisher) was washed thrice with IP buffer and added to each sample for equilibration on a rotator for 4 h at 4 °C. Immunoprecipitates were collected by centrifugation at 3,000 rpm for 5 min at 4 °C. The bead pellets were washed a total of five times. After the last wash, immunoprecipitated proteins were eluted with 4× Laemmli Sample Buffer, vortexed briefly, and heated at 95 °C for 5 min. Samples were collected (13,000 rpm, 2 min), subjected to SDS-PAGE, and transferred to PVDF membrane as described above.
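To make the time-to-progression bookkeeping above concrete: since only up to 1E5 cells are replated each passage, the estimated cumulative total is obtained by chaining the per-passage growth rates from ln N = ln N0 + μt. The following is a minimal sketch of that arithmetic; the passage records are illustrative, not data from the study.

```python
import math

def growth_rate(n_plated, n_counted, days_elapsed):
    """Per-day growth rate mu, from ln N = ln N0 + mu * t."""
    return (math.log(n_counted) - math.log(n_plated)) / days_elapsed

def estimated_totals(passages):
    """Chain per-passage growth rates into an estimated cumulative
    cell number, as if no cells had been discarded at replating.
    `passages` is a list of (n_plated, n_counted, days) tuples."""
    total = passages[0][0]
    history = []
    for n_plated, n_counted, days in passages:
        mu = growth_rate(n_plated, n_counted, days)
        # exp(mu * days) equals n_counted / n_plated for that passage
        total *= math.exp(mu * days)
        history.append((mu, total))
    return history

# Illustrative passage records: (cells plated, cells counted, days)
records = [(1e5, 4e5, 7), (1e5, 3e5, 7), (1e5, 6e5, 7)]
for week, (mu, total) in enumerate(estimated_totals(records), start=1):
    print(f"week {week}: mu = {mu:.3f}/day, estimated total = {total:.2e}")
```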
RNA-Seq Sample Preparation and Analysis. RNA sequencing was performed with External RNA Controls Consortium (ERCC) spike-in normalization as previously described (86). Briefly, CAL51 and MDA-MB-231 cells were seeded in 10-cm plates and incubated in media with DMSO or the indicated drugs for 12 h, in triplicate. Cell counts were determined using C-Chip disposable hemocytometers (Bulldog Bio, DHC-N01) and equalized across all samples before lysis and RNA extraction. Total RNA from 1E6 cells per replicate was isolated using the RNeasy 96 kit (Qiagen). ERCC ExFold RNA Spike-In Control Mixes (Invitrogen #4456740) (4 µL/sample, diluted 1:100; ERCC User Guide, Table 4) were added after the cell lysis step. The extraction was then continued according to the manufacturer's instructions, and RNA was eluted in 50 µL nuclease-free water. Total RNA was quantified using the Qubit™ RNA Broad Range Assay Kit (Invitrogen) and analyzed on an Agilent 4200 TapeStation for integrity. Samples with an RNA Integrity Number above 9.0 were normalized to 500 ng total RNA and selected for library preparation using the TruSeq® stranded mRNA sample prep kit (Illumina, #20020595). After library preparation, samples were quantified using the Qubit™ assay, checked for fragment sizes on the Agilent 4200 TapeStation, normalized, and pooled. Libraries were sequenced on the Illumina HiSeq 2000 sequencing system using 50-bp single-end reads at the Duke University Genome Sequencing Facility.

Sequences were processed using Trimmomatic v0.32 (87), and reads that were 20 nt or longer after trimming were retained for further analysis. Reads were aligned using STAR v2.4.1a (88), following the proposed 2-pass strategy to first identify a splice junction database and thereby improve the overall mapping quality. Alignment was performed to GRCh38/hg38 of the human genome and transcriptome, with ERCC synthetic spike-in RNA sequences (annotations from the product webpage manuals, https://assets.thermofisher.com/TFS-Assets/LSG/manuals/cms_095046.txt) appended for mapping. The TPM (transcripts per million) was computed for each mapped gene and synthetic spike-in RNA using RSEM v1.2.25 (89). Differential expression analysis was performed using DESeq2 v1.22.0 (90) running on R (v3.5.1). Briefly, raw counts were imported and filtered to remove genes with low or no expression, that is, keeping genes having two or more counts per million in two or more samples. Filtered counts were then normalized with the DESeq function, using the counts for the ERCC spike-in probes to estimate the size factors. To find significant differentially expressed genes, the nbinomWaldTest was used to test the coefficients in the fitted negative binomial GLM using the previously calculated size factors and dispersion estimates. Genes having a Benjamini-Hochberg false discovery rate less than 0.05 were considered significant (unless otherwise indicated). Differential gene expression was tested for all possible drug pairwise comparisons within each cell line, for example, single drug versus DMSO control, combination versus DMSO control, combination versus single drug, and so on.
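The size-factor estimation above was done with DESeq2 in R; as a language-agnostic illustration of the underlying idea, the sketch below reproduces the median-of-ratios size-factor calculation restricted to spike-in rows, using a hypothetical count matrix. This mirrors the logic of DESeq2's normalization, not its exact implementation.

```python
import numpy as np

def spikein_size_factors(counts, is_spikein):
    """Median-of-ratios size factors computed only on spike-in rows.

    counts: (genes x samples) array of raw counts.
    is_spikein: boolean array marking ERCC spike-in rows.
    """
    spike = counts[is_spikein].astype(float)
    # Build a pseudo-reference sample as the per-row geometric mean;
    # rows with any zero are dropped, as in the median-of-ratios method.
    keep = (spike > 0).all(axis=1)
    spike = spike[keep]
    log_geo_mean = np.log(spike).mean(axis=1, keepdims=True)
    log_ratios = np.log(spike) - log_geo_mean
    return np.exp(np.median(log_ratios, axis=0))

# Hypothetical counts: 4 ERCC probes + 2 endogenous genes, 3 samples.
counts = np.array([
    [100, 210, 95],
    [50, 104, 52],
    [200, 398, 210],
    [80, 165, 78],
    [500, 400, 900],  # endogenous gene, ignored for size factors
    [30, 20, 100],    # endogenous gene, ignored for size factors
])
is_spikein = np.array([True, True, True, True, False, False])
sf = spikein_size_factors(counts, is_spikein)
normalized = counts / sf  # each sample's counts divided by its size factor
print("size factors:", np.round(sf, 3))
print("normalized endogenous gene 1:", np.round(normalized[4], 1))
```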
Quantitative Real-Time PCR. Cell counts were determined and normalized across all samples before lysis. RNA was extracted using the RNeasy Mini kit (Qiagen). After the cell lysis step, samples were spiked with the ERCC spike-in controls (2 µL/sample, diluted 1:100; Invitrogen #4456740) and treated with on-column DNase digestion according to the manufacturer's specifications (Qiagen). RNA purity and concentration were measured by absorbance at 260 nm (A260/A280). cDNAs were reverse-transcribed using the SuperScript™ VILO™ cDNA Synthesis kit (Invitrogen) with 100 ng to 1 µg of RNA template, as directed by the manufacturer's protocol. qRT-PCRs were carried out in triplicate using the TaqMan assay (Applied Biosystems) and a CFX96 or CFX384 Touch Real-Time PCR Detection System according to the manufacturers' recommendations (Bio-Rad). Average cycle threshold (Ct) values were calculated for each gene, and the maximum Ct value was set at 40 cycles. Average Ct values of technical replicates were normalized to the exogenous spike-in or reference gene (ERCC-00096 or GAPDH, respectively), and relative gene expression was determined using the comparative ΔΔCt method. Averages and SDs are the results of at least three independent experiments. Specific TaqMan gene expression assay IDs were as follows: ERCC-00096 (Ac03459927_a1), GAPDH (Hs02786624_g1), and MYC (Hs99999003_m1).

Sucrose Density Gradient Sedimentation of Polysomes. CAL51 cells were seeded in 10-cm plates and cultured until 80 to 85% confluence at the point of harvest. Drug treatments with DMSO or the indicated drugs were performed over a 12-h period prior to processing. Sucrose density gradients (15 to 50% sucrose in 200 mM KCl, 25 mM K-HEPES, pH 7.4, 15 mM MgCl2, 0.2 mM cycloheximide, 1 mM DTT, and 10 U/mL RNaseOut™) were prepared prior to cell harvesting. All subsequent steps were performed on ice. Untreated or treated cells were washed twice with ice-cold PBS and lysed on ice for 10 min with 1 mL/plate lysis buffer (200 mM KCl, 15 mM MgCl2, 1% NP-40, 0.5% sodium deoxycholate, 0.2 mM cycloheximide, 1 mM DTT, 40 U/mL RNaseOUT™, 1× protease inhibitor, and 25 mM K-HEPES, pH 7.4). Lysates were clarified at 13,000 ×g, 4 °C, 10 min. Clarified supernatants were overlaid onto the sucrose gradients. The samples were centrifuged in a swinging-bucket rotor (SW41 Ti, Beckman) at 35,000 rpm, 4 °C, for 3 h. Sucrose gradients were fractionated on a Teledyne-ISCO density gradient fractionation system with continuous A260 monitoring. Photomultiplier output was continuously sampled and converted to digital values with a TracerDAQ™ A/D converter.

CAL51 cells were seeded in six-well plates, grown to 80 to 85% confluence, and subsequently treated with DMSO or the indicated drugs. After 12 h of vehicle or drug incubation, cells were washed twice with PBS and methionine-starved by incubation in serum-supplemented methionine-free media (Gibco) for 30 min at 37 °C. Cells were labeled with 150 µCi/mL [35S]methionine (Perkin-Elmer, NEG772002MC) in methionine-free media for 45 min. [35S]Methionine incorporation was terminated by washing cells twice with 100 µg/mL cycloheximide (Sigma-Aldrich) in serum-free, methionine-free media, incubating for 10 min at 37 °C during the second wash, followed by two washes with 100 µg/mL cycloheximide in PBS. Cells were then lysed with IP buffer on ice for 10 min. Samples were precleared by addition of rabbit IgG isotype control (Ab #172730) and Protein-G-Sepharose-4B beads for 1 h at 4 °C. MYC immunoprecipitation was performed as described above. Following washing, immunoprecipitated proteins were eluted in 4× Laemmli Sample Buffer (Bio-Rad), vortexed briefly, and heated at 95 °C for 5 min. Two microliters of each sample was added to 3 mL of liquid scintillation fluid, and [35S] radioactivity was measured and recorded. The remaining sample volume was resolved by SDS-PAGE and transferred to PVDF membrane as described above. Membranes were exposed on a phosphorimaging screen overnight and visualized on an Amersham™ Typhoon™ NIR Biomolecular Imager (GE Healthcare). See SI Appendix, Table S2 for antibodies.
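Returning to the qRT-PCR workflow above, the comparative ΔΔCt quantification it describes can be written out in a few lines. The sketch below uses hypothetical triplicate Ct values; only the normalization logic (target minus reference within each condition, treated minus control across conditions, then 2^-ΔΔCt) follows the protocol as stated.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Comparative ddCt method: relative expression = 2^-(ddCt), where
    dCt = Ct(target) - Ct(reference) within each condition and
    ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = np.mean(ct_target) - np.mean(ct_ref)
    d_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical triplicate Ct values (capped at 40 cycles, per the protocol):
myc_combo, ercc_combo = [26.1, 26.3, 26.2], [21.0, 21.1, 20.9]  # treated
myc_dmso, ercc_dmso = [24.0, 24.2, 24.1], [21.0, 20.9, 21.1]    # vehicle
fold = relative_expression(myc_combo, ercc_combo, myc_dmso, ercc_dmso)
print(f"MYC expression relative to DMSO: {fold:.2f}-fold")
```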
CRISPR Homology-Directed Repair (HDR) Generation of CAL51 CDK12 AS Mutant Clones. Cells harboring a CDK12 AS mutation were generated using the CRISPR/Cas9 system and HDR as previously described in ref. 45. Briefly, 5E5 cells were transfected in a six-well plate with DNA for the PX458 plasmid (Addgene ID 48138) encoding a Cas9 nuclease and an sgRNA targeting the CDK12 gatekeeper residue (4 µg), along with an HDR donor template containing the sequence of the CDK12 AS mutation (1 µg), utilizing 8 µL Lipofectamine 3000 (ThermoFisher) supplemented with 8 µL P3000 reagent (ThermoFisher) in concordance with the product literature. Cells were cultured in the presence of Alt-R™ HDR Enhancer V2 (IDT #10007910), collected, and sorted based on GFP expression via FACS to generate single-cell clones. Clones were screened via PCR amplification of the CDK12 gatekeeper locus from gDNA isolated using the DNeasy Blood & Tissue Kit (Qiagen), followed by Sanger sequencing. A heterozygous CDK12 AS mutation was confirmed in the selected clone. See SI Appendix, Table S2 for sgRNA sequences.

Cloning of CRISPR and shRNA Constructs. CRISPR constructs were cloned using published methods (91, 92) with characterized sgRNAs from the TKOv3 genome-wide library (93). Detailed cloning steps were as previously described (94). In brief, unique 20-mer sgRNA inserts targeting genes of interest were synthesized by CustomArray with flanking sequence adaptors. The synthetic oligo (diluted 1:100) was amplified using NEB Phusion Hot Start Flex PCR master mix and array primers. Amplified inserts were bead-cleaned at a 1.8× ratio by volume (Axygen AxyPrep™ Mag PCR Clean-up kit). Ninety microliters of magnetic beads was added to 50 μL of each PCR product, vortexed vigorously, incubated at room temperature for 10 min, and separated on a magnetic tube stand. Clear liquid was aspirated from the magnetic beads, and the beads were washed three times with freshly prepared 70% ethanol. After the final wash, PCR products were eluted in molecular-grade water. lentiCRISPRv2 (Addgene ID 52961), PX458 (Addgene ID 48138), or FUW-U6-enhanced gRNA-hUbC-mCherry-PuroR [kindly provided by Charlie Gersbach (Duke University)] was digested with Esp3I (BsmBI) (NEB) and size-selected by 1% agarose gel electrophoresis and extraction. Linearized gRNA expression vectors were then annealed with the cleaned, array-amplified sgRNA oligos by Gibson assembly. The reaction mixture was then transformed by chemical or electroporation methods into Stbl3 (ThermoFisher) or E. cloni 10G (Lucigen) competent cells, respectively. Transformed cells were recovered and spread on LB-ampicillin plates for overnight incubation. Single colonies were picked, cultured overnight in liquid LB, and plasmids were extracted using a plasmid miniprep kit (Qiagen). Plasmid DNA sequences were checked by Sanger sequencing (Eton Bioscience) for the sgRNA inserts to confirm successful cloning. See SI Appendix, Table S2 for sgRNA sequences.
Glycerol stocks for shRNAs targeting genes of interest and bacterial stab cultures of plasmids were obtained from the Duke Functional Genomics Core Facility and the Addgene Plasmid Repository, respectively. Inoculants from glycerol stocks or stab cultures were cultured overnight in liquid LB at 37 °C, and plasmids were extracted using the plasmid miniprep kit (Qiagen). See SI Appendix, Table S2 for shRNA identities, TRC numbers and target sequences, and plasmid Addgene IDs.

Lentivirus Production and Transduction. Lentivirus production was adapted from ref. 92. HEK293FT cells were grown to ~80% confluency in 10-cm or six-well plates, for a 10 mL or 2 mL final viral media harvest, respectively, and transfection reagents were scaled according to seeding area. For a 10-cm plate, 3.5–4E6 cells were seeded and incubated for 24 h (37 °C, 5% CO2). Transfection reagents were prepared in Opti-MEM™ reduced serum medium (Gibco), and transfection was performed using 94.2 µL Lipofectamine 2000 (ThermoFisher), 103.6 µL PLUS™ reagent (ThermoFisher), 8.2 µg psPAX2, 5.4 µg pMD2.G, and 10.7 µg construct DNA. The mixture was incubated at room temperature for 5 min and gently added to the HEK293FT cells for a 4-h incubation (37 °C, 5% CO2). The medium was then replaced with prewarmed harvest media (DMEM, 30% FBS). Forty-eight h after the start of the transfection, the lentivirus supernatant was collected and filtered (0.45 µm). Transductions were conducted directly at the time of lentivirus harvest or with freshly thawed frozen aliquots. 0.5 to 1 mL of virus media and polybrene (1 µg/mL) were added to cells seeded in a six-well plate in 1 to 1.5 mL of growth media. Cells were spinfected at 2,250 rpm for 1 h at room temperature (25 °C) and incubated overnight (37 °C, 5% CO2). Twenty-four h posttransduction, cells were selected with puromycin (2 µg/mL) for 48 h.
Pooled Genome-Wide CRISPR Positive Selection Screen and Analysis. The TKOv3 pooled library was obtained from Addgene (Addgene ID 90294) and amplified as previously described (92, 93). Lentivirus production of the TKOv3 library was scaled up and conducted as described above. CAL51 cells were seeded into six-well plates at a density of 0.5E6 cells per well and transduced at an MOI of less than 0.2. A total of 60E6 cells were transduced in 24× six-well plates. Twenty-four h posttransduction, cells were selected with puromycin (2 μg/mL) for 48 h. Puromycin-selected cells were collected and counted to confirm at least 100× library coverage. Transduced cells were propagated in puromycin-containing media for a total of 7 d and split into vehicle (DMSO) and gefitinib (750 nM) + THZ531 (100 nM) combination treatment conditions, in duplicate. The screen was conducted over a total of 3 wk, for approximately 15 cell doublings. Cells were counted and passaged with replenished drug every 3 d. Each treatment condition and replicate was represented by a minimum of 10E6 cells to maintain at least 100× library coverage (>100 cells per unique sgRNA) during each split throughout the screen. A total of 12E6 cells were collected at 48 h after puromycin exposure, at screen initiation (t0), and at every passage until screen termination (tfinal). DNA was extracted from cell pellets (DNeasy Blood & Tissue Kit, Qiagen) and stored at −80 °C until completion of the screens. Samples were further processed for sequencing as previously described (91). Screen libraries were sequenced on the Illumina NovaSeq 6000 sequencing system (50-bp, single-end reads) at the Duke University Genome Sequencing Facility to achieve 20 million total reads per sample (~200 reads per guide).

Pooled samples were matched by barcoded reads, and guide-level counts were computed using the bcSeq (v1.12.0) Bioconductor package (95) in the R (v3.5.1) programming environment. As the screen was designed for positive selection, resistance to the gefitinib + THZ531 combination was determined by evaluating differential guide compositions between the vehicle control (DMSO) and combination-treated (GT) cell populations at tfinal. Cells that survived the GT combination were enriched for guides targeting genes that we coined 'resistor' genes, which are required for the drugs' synergistic activities. Differential analysis was carried out using the DESeq2 (v1.22.0) Bioconductor package in the R (v3.5.1) programming environment. Of the 18,053 genes in the TKOv3 library, 29 genes (0.16%) were excluded due to low counts. Enrichment effects in the combination-treated arm were expressed as log2(fold change) for GT versus DMSO (vehicle control as the denominator).

Quantification and Statistical Analysis. Statistical analyses were performed in Prism 9 (GraphPad) software or R (v3.5.1) (https://www.r-project.org/). All results are shown as mean ± SD. P values were determined using unpaired, two-tailed Student's t tests and considered significant at a threshold of <0.05, unless otherwise stated.

Data, Materials, and Software Availability. The RNA-seq data have been deposited in the Gene Expression Omnibus (GEO) under accession number GSE221475 (96) and are publicly available as of the date of publication. The RNA-seq counts table after normalization with synthetic ERCC spike-in RNA is available in Dataset S1 A and B. The raw counts table for the TKOv3 positive selection screen of CAL51 cells treated with DMSO or the gefitinib + THZ531 combination is available in Dataset S2. Analyzed screen data are available in Dataset S3. All study data are included in the article and/or supporting information.
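To make the screen readout described above concrete: enrichment is a per-guide log2 fold change of GT over DMSO at tfinal, which can then be summarized per gene. The study used DESeq2 for this; the sketch below is a simplified stand-in that normalizes to counts per million and averages guides per gene, with hypothetical guide names and counts.

```python
import numpy as np

def guide_log2fc(dmso_counts, gt_counts, pseudocount=1.0):
    """Per-guide log2 fold change (GT vs. DMSO) after counts-per-million
    normalization. The study used DESeq2; CPM is a simplified stand-in."""
    dmso = np.asarray(dmso_counts, float)
    gt = np.asarray(gt_counts, float)
    dmso_cpm = (dmso + pseudocount) / dmso.sum() * 1e6
    gt_cpm = (gt + pseudocount) / gt.sum() * 1e6
    return np.log2(gt_cpm / dmso_cpm)

def gene_scores(genes, log2fc):
    """Aggregate guide-level log2FCs to a per-gene mean score."""
    scores = {}
    for gene in set(genes):
        idx = [i for i, g in enumerate(genes) if g == gene]
        scores[gene] = float(np.mean(log2fc[idx]))
    return scores

# Hypothetical guide-level counts at t_final (4 guides per gene):
genes = ["CNOT1"] * 4 + ["control_gene"] * 4
dmso = [120, 90, 150, 110, 100, 95, 130, 105]
gt = [900, 700, 1200, 850, 110, 90, 140, 100]
lfc = guide_log2fc(dmso, gt)
for gene, score in sorted(gene_scores(genes, lfc).items()):
    print(f"{gene}: mean log2FC = {score:.2f}")
```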
Fig. 3. MYC protein loss is driven by decreased protein synthesis and increased ubiquitin-proteasome-dependent protein degradation. (A) Polysome gradient profiles of CAL51 cells treated with DMSO, gefitinib (1 µM), THZ531 (200 nM), or gefitinib + THZ531 for 12 h. Polysome fractions are indicated by the black box. Representative analysis of the polysome distribution of n = 2 independent experiments yielding similar results. (B) Relative incorporation of [35S]methionine in MYC protein, determined by phosphorimager analysis and densitometry quantification of immunoblots, following [35S]methionine labeling and MYC protein immunoprecipitation in the absence or presence of the proteasome inhibitor bortezomib (20 nM) in CAL51 cells treated with DMSO or gefitinib (1 µM) + THZ531 (200 nM) for 12 h. ns = not significant, ****P ≤ 0.0001 by Student's t tests; n = 3. Data are mean ± SD of three biological replicates. (C) Immunoblot analysis of MYC and vinculin over a time course as indicated, in the absence or presence of cycloheximide (20 µg/mL), in CAL51 cells treated with DMSO, gefitinib (1 µM), THZ531 (200 nM), or gefitinib + THZ531. Representative immunoblot of n = 3 independent experiments. Relative MYC protein levels at time 0 and 30 min with the indicated treatment conditions, derived from densitometry quantification of immunoblots from the cycloheximide chase experiment in CAL51 cells. ns = not significant, **P ≤ 0.01, ****P ≤ 0.0001 by Student's t tests; n = 3. Data are mean ± SD of three independent experiments. (D) Immunoblot analysis of ubiquitin, MYC, and vinculin on immunoprecipitated MYC protein and input control in the absence or presence of the proteasome inhibitor bortezomib (20 nM) in CAL51 cells treated with DMSO or gefitinib (1 μM) + THZ531 (200 nM) for 18 h. Representative immunoblot of n = 3 independent experiments yielding similar results. (E) Immunoblot analysis of MYC and vinculin protein levels over a time course as indicated, in the absence or presence of bortezomib (20 nM), in CAL51 cells treated with DMSO, gefitinib (1 µM), THZ531 (200 nM), or gefitinib + THZ531. Representative immunoblot of n = 3 independent experiments yielding similar results.
Assessing the learning process in transdisciplinary research through a novel analytical approach

ABSTRACT. Inter- and transdisciplinary research projects bring with them both challenges and opportunities for learning among all stakeholders involved. This is a particularly relevant aspect in social-ecological research projects, which deal with complex real-world systems and wicked problems involving various stakeholders' interests, needs, and views, while demanding expertise from a wide range of disciplines. Despite its importance in such research efforts, the learning process is often not the primary focus of investigation, and therefore the knowledge about it remains limited. Here, we put forward an analytical framework that was developed to assess the learning process of both the research team and other participating stakeholders within the scope of an international transdisciplinary project dealing with urban green and blue infrastructure. The framework is structured around five dimensions of the learning process: "Why learn?" (the purpose of knowledge generation and sharing); "What to learn about?" (the types of knowledge involved); "Who to learn with?" (the actors involved); "How to learn?" (the methods and tools used); "When to learn?" (the timing of different stages). We developed an interview protocol to operationalize the framework and tested our approach through interviews with project researchers. Based on our empirical results, we draw main lessons learned that can inform other transdisciplinary projects. These include capitalizing on what already exists, addressing trade-offs inherent to different types of knowledge, fostering inter- and transdisciplinarity, engaging stakeholders, supporting a learning environment, and fostering reflexivity. Besides the empirical insights and the lessons we present, the main contribution of this research lies in the analytical framework we developed, accompanied by a protocol to apply it in practice. The framework can capture the learning process taking place in transdisciplinary research more comprehensively than similar existing frameworks. The five intertwined dimensions it covers are essential to understand and plan such learning processes.

INTRODUCTION

Mutual learning and self-reflexivity are key for transdisciplinary knowledge production (Polk and Knutsson 2008, Jahn et al. 2012, Wittmayer and Schäpke 2014), which in turn is an important process underlying the resilience and sustainability of social-ecological systems (Brandt et al. 2013, Clark et al. 2016, Hoffmann et al. 2017, Evely et al. 2012). Transdisciplinarity is, at its core, "both critical and self-reflexive: It not only systematically scrutinizes in which ways knowledge is produced and used by different societal actors in support of their concerns; it also methodically challenges how science itself deals with the tension between its constitutive pursuit of truth and the ever increasing societal demand for the usefulness of its results" (Jahn et al. 2012:9). A greater recognition of the different ways of understanding and working with knowledge is thus needed. So is moving away from merely technical approaches to knowledge exchange, limited to unidirectional, linear exchanges (Reed et al. 2014). Knowledge that supports action toward sustainable development should be perceived by stakeholders as salient (relevant to their needs), credible (scientifically adequate), and legitimate (unbiased, fair, and respectful of stakeholders' divergent values and beliefs; Cash et al. 2003).
Existing research suggests that the attributes of knowledge co-production processes, tightly linked with knowledge legitimacy, are important determinants of whether that knowledge leads to action (Posner et al. 2016). Approaches to assess such attributes are therefore needed, in line with calls for monitoring, reflecting on, and continuously refining knowledge exchange as a flexible process (Reed et al. 2014). The learning process often refers to the production of knowledge as a joint process among stakeholders, including scientists, building on the notion of mutual learning, defined as "the basic process of exchange, generation, and integration of existing or newly developing knowledge in different parts of science and society" (Scholz 2001:118). Despite its importance in transdisciplinary social-ecological research efforts, the learning process is often not the primary focus of investigation, and therefore the knowledge about it remains limited. Literature presenting self-reflections by researchers on the learning taking place in transdisciplinary efforts is rare, while empirical studies of learning often remain implicit regarding who learns about what and why (van Mierlo et al. 2020). Empirical evidence from the different parties involved in transdisciplinary research is needed to improve the existing body of knowledge and better support guidance for knowledge exchange (Reed et al. 2014). Hence, several authors stress the need for more studies focusing on learning, for example in the context of sustainability transitions research (van Mierlo et al. 2020). In this article we put forward an analytical framework that was developed to assess the learning process of both the research team and other participating stakeholders within the scope of an international transdisciplinary project dealing with urban green and blue infrastructure (GBI).

We drew on existing literature to develop our analytical framework for assessing the learning process taking place in transdisciplinary research. As noted by Hoffmann et al. (2017), various frameworks have been developed to structure evaluations (ex ante or ex post; formative or summative) of transdisciplinary research. However, most of them are unsuitable or too limited for our purpose. For example, very few differentiate types of actor involvement at different stages of the research (Hoffmann et al. 2017). Our framework draws more heavily on the works of Enengel et al. (2012), Hoffmann et al. (2017), and Roux et al. (2017). In a study about the specific challenges for implementing co-production of knowledge in doctoral studies, Enengel et al. (2012) developed an analytical framework to compare transdisciplinary case studies, consisting of the following elements: (1) typology of actor roles: Who?, (2) research phases: When?, (3) objectives and forms of actor integration: Why?, and (4) types of knowledge: What? Hoffmann et al. (2017) adapted the framework by Enengel and colleagues to compare transdisciplinary integration across four synthesis processes regarding different types of generated knowledge (what?), different types of involved actors (who?), and different levels of actor involvement (how?) at different stages of the processes (when?). The study by Roux et al. (2017) is the most aligned with our purpose because it focuses on mutual learning within transdisciplinary research, specifically on three aspects that could guide researchers in designing and facilitating such learning: "who to learn with," "what to learn about," and "how to learn."
The development of the analytical framework was supported by joint reflections and feedback from all consortium researchers in internal workshops throughout the ENABLE project (described below). Our analytical framework is structured around five dimensions of the learning process: Why learn? (the purpose of knowledge generation and sharing); What to learn about? (the types of knowledge involved); Who to learn with? (the actors involved); How to learn? (the methods and tools used); and When to learn? (the timing of different stages).

The purpose of knowledge generation and sharing (Why learn?) can reflect a real-world approach vs. an inner-scientific approach to transdisciplinary efforts (Jahn et al. 2012), linked with local or context-specific knowledge vs. generalized knowledge (Enengel et al. 2012). This dimension differs from the "Why?" by Enengel et al. (2012), which was focused on different types of stakeholder involvement.

The knowledge, insights, ideas, and perspectives involved in the learning process (What to learn about?) can be limitless, so several authors have developed typologies of knowledge. We consider three mutually dependent types of knowledge inherent to transdisciplinary research as particularly relevant (Hadorn et al. 2008): (i) systems knowledge of empirical processes and interactions of factors, addressing questions about the origin, development, and interpretation of life-world problems; (ii) target knowledge concerning questions related to determining and explaining the need for change, desired goals, and better practices; (iii) transformation knowledge dealing with questions about technical, social, legal, cultural, and other means of action to transform existing practices and introduce desired ones. We were also interested in "knowledge on how to create knowledge," which refers to a reflection about the research approach (e.g., how different methods can be combined to generate knowledge on GBI-related issues in different case study cities; Dunford et al. 2018).

Regarding the participants of the learning process (Who to learn with?), we identified two main groups: (a) the research team; (b) other project stakeholders. These can be further detailed, following the five categories developed by Ritter et al. (2010) and adopted by Enengel et al. (2012): (i) core scientists: the main scientific actors throughout the course of a project; (ii) scientific consultants: academic experts who support the core group; (iii) professional practice experts: practitioners who are often very familiar with the practical and political aspects of the issues investigated, but not necessarily with the specific local case context; (iv) strategic case actors: practitioners at case level with a specific formal or informal responsibility, or professional competence; (v) local case actors: all other actors involved in the processes at the case level. It can be relevant to consider alternative ways of categorizing participants, for example according to the types of knowledge they represent (Roux et al. 2017) or to their level of interest and influence on the project (Reed 2008).

The methods and tools used in the learning process (How to learn?) will vary across projects and case studies, but two concepts are important here (Opdam et al. 2015, Roux et al. 2017): (i) boundary concepts: a special case of boundary objects (e.g., models, indicators, and maps). Co-production of these objects can establish shared interest and bridge understanding across multiple knowledge domains.
Similarly, boundary concepts, which are non-material, can play a mediating and translating role in a transdisciplinary context, by creating a discursive space in settings with a common urgency, but without consensus or a common knowledge base; (ii) "third places": in a transdisciplinary sense, a third place represents a learning space at the interface between academia and practice, where academics and nonacademics can have an equal voice when they engage to find common ground regarding particular social-ecological issues. We consider that third places refer not only to physical spaces, but more widely to settings that can promote such a learning space (e.g., through a set of rules of engagement).

The timing of different stages in the learning process (When to learn?) can have several implications, like influencing the policy uptake of scientific knowledge according to policy windows (Rose et al. 2020). The key stages of a knowledge co-creation process can be categorized as (1) problem history, (2) problem identification and structuring, (3) research design and selection of methods, (4) data collection, (5) data analysis and triangulation, (6) reflection/interpretation and synthesis, and (7) dissemination of results (Pohl and Hadorn 2007, Enengel et al. 2012).

The framework's dimensions do not necessarily follow a specific sequence; there can be different and multiple entry points, depending on the features of a specific application (like aims, scope, or time of assessment). For example, if applying the framework ex ante to support the design of a learning process, it might make sense to start with what motivates the learning process (why learn?), followed by who should be involved (who to learn with?) and what knowledge, insights, or perspectives are expected or wanted from participants (what to learn about?), and finally considering methods and tools to support the process (how to learn?), always keeping in mind the time dimension (when to learn?).

The ENABLE project

ENABLE was a research project funded under the 2015-2016 call from BiodivERsA, a network of national and regional funding organizations promoting pan-European research on biodiversity and ecosystem services. The project aimed at enabling GBI potential in complex social-ecological regions using a systems perspective and engaging local actors in five case studies: Barcelona (Spain), Halle (Germany), Łódź (Poland), Oslo (Norway), and Stockholm (Sweden). New York City (USA) was also included as an external node for benchmarking. The research reported in this article targeted the five European case studies. As a transdisciplinary project, ENABLE represented an opportunity to foster learning among all participants, including members of the consortium and the different stakeholders who were engaged in the process, mainly in each case study. Project partners developed approaches tailored to each urban region to achieve that aim (as illustrated by many of the articles in this Special Feature) under ENABLE's common overarching conceptual framework (Andersson et al. 2019).

Interviews

We developed an interview protocol to operationalize (ex post) the analytical framework (Appendix 1). Joint reflections and feedback from all ENABLE researchers in internal workshops throughout the project supported the development of the interview protocol (in line with the analytical framework). Some questions differed depending on the main group of participants (the research team or the project stakeholders).
To contextualize the results regarding the five dimensions of the framework, it was also important to gather information on what participants found interesting and useful about the process. This can be used to identify relevant aspects that influence learning (Restrepo et al. 2018). To capture that information we added questions drawing on the Most Significant Change technique (Serrat 2017), a story-based, qualitative method for uncovering the most significant project impacts experienced by individuals. The main guiding question used to open a conversation through this technique was: "What did you find most interesting and useful from the project? What were the main 'take-home messages'?" Two other questions included in this technique drew on the work by Cvitanovic et al. (2016), one on barriers preventing knowledge exchange and one on suggestions for improving knowledge exchange.

Table 1. Main topics emerging from the interviews for each dimension of the analytical framework. The contextualization and details given in the main body of text are essential to interpret the topics listed.

Why learn?
- To improve actionability and relevance for green and blue infrastructure planning and management
- To build a shared systems understanding
- To provide a platform for discussion among stakeholders
- To actively enact the integration of transdisciplinary research
- To guide future research
- To expand the individual researchers' conceptual understanding or methodological toolbox

What to learn about?
- How to work together with different stakeholders (benefits, challenges, limitations, needs)
- How to apply different methods to specific issues or contexts
- How to (co-)create knowledge (contextualized meaning and use of concepts, boundary concepts)
- Opportunities to extend and amplify learning processes

Who to learn with?
- Local authorities (especially planning, environmental, green space management, or similar departments)
- Initiatives and organizations at sub-municipal scale (e.g., neighborhood)
- Citizens in general
- Colleagues in and outside the research consortium
- Private actors
- Politicians
- Marginalized groups
- Grassroots groups

How to learn?
- Workshops with local stakeholders
- Participation in expert groups
- Thematic meetings with individual stakeholders
- Training events
- Consortium workshops
- Using boundary concepts
- Devising "third places"

When to learn?
- Project preparation phase
- Temporal alignment with real ongoing processes
- Follow-up on stakeholder engagement events
- Dissemination and assessment of new knowledge

Ten members of ENABLE's research team were interviewed via online calls toward the end of the project, between June 2019 and April 2020. We aimed for individual perspectives (as opposed, for example, to having a spokesperson per case study) because we see them as most relevant in an inter- and transdisciplinary learning process, where researchers within the same team have had different roles. Because of practical constraints, it was not possible to conduct the interviews with project stakeholders across case study cities. The first author of the present article conducted all the interviews and was not included as an interviewee, whereas the remaining co-authors were. Interviews were conducted in English and lasted between 30 and 60 minutes. Interviewees signed an informed consent form (Appendix 2). The first author transcribed and manually coded the interviews, supported by the software MAXQDA Plus 2020, release 20.0.8 (VERBI Software 2019).
The remaining co-authors verified the coding in a subsequent stage, to identify potential inconsistencies or deviations in interpretation. Interviews were transcribed in a way that stayed close to what the interviewees said, but not fully verbatim, because it was the content of what was being said that was of interest and not the wording (Kuckartz and Rädiker 2019). We anonymize interviewees when presenting results in this article, using identifiers composed of the initials of the case study city (BAR: Barcelona; HAL: Halle; LOD: Łódź; OSL: Oslo; STO: Stockholm; CC: cross-case) followed by an ordinal number (e.g., BAR1). This retains the identification of different case studies and interviewees, which is relevant for the analysis of results.

RESULTS AND DISCUSSION

We present results according to, first, the different dimensions of our analytical framework (see Table 1 for a summary), and second, topics cutting across dimensions. Because the analytical dimensions are closely interrelated, we cross-reference dimensions along the text when pertinent, for example by flagging content that is relevant for other dimensions with "→[dimension's short designation]." Given the qualitative nature of this research, we have tried to highlight recurring topics from the interviews, while capturing the diversity of topics brought up by interviewees. However, it is not possible to cover all points raised by interviewees, so we refer readers to the coded interview transcriptions in Appendix 3.

Why learn?

Applicability for policy, society, and science

Findings on the usefulness of the knowledge, insights, or perspectives resulting from ENABLE varied across case study cities while covering the applicability for policy, society, and science. In terms of applicability for policy and society, because of the scope of ENABLE, most of its outputs and outcomes were aiming to be relevant for GBI planning and management in the case study cities, or in other words, to be salient (Cash et al. 2003). Overall, the project raised general awareness about GBI benefits, enhanced the focus on the social dimension (distributional issues) of GBI, and provided planning authorities with data and analyses that they probably could not accomplish themselves because of time constraints or lack of technical capacities. For example, in Oslo, three tools were developed that are already being taken up in practice: a model-based tool to prioritize where green roofs fill demand gaps most effectively, which supports planning and zoning decisions; a Nordic standard for tree valuation, which can equip Oslo's municipality with an up-to-date tree damage compensation assessment that includes ecosystem services; and a blue-green factor standard that can be used as a policy instrument to integrate GBI in new property developments (Horvath et al. 2017). In Łódź, research on children's exposure to green spaces while walking to school and the production of a digital sociotope map (a map of social functions of public green spaces; Łaszkiewicz et al. 2020) are among the outputs that have "started to inform the local authorities on different green space availability and accessibility standards" [LOD1].
In Stockholm, through the resilience assessment, researchers have promoted "more of a systems understanding" that GBI is not only about the infrastructure itself, "but very much a question of how you think about the city and its inhabitants, around those green and blue spaces" [STO1] (see Borgström et al. 2021; Andersson et al., in press). That process also raised stakeholders' awareness that GBI "will change and be impacted by change - demographic, economic, governance changes, climate change, environmental change" [STO2] (see Borgström et al. 2021). ENABLE has started a discussion (and provided supporting knowledge) about how to move beyond the dichotomy of conservation only in natural areas vs. densification only in urban areas. In Barcelona, among other efforts that are aligned with policy concerns, a direct contribution to the new municipal resilience strategy (De Luca et al. 2021) was highlighted as a relevant ENABLE outcome for the planning and management of GBI. In both the Stockholm and Barcelona cases, the joint learning process itself, rather than the knowledge as such, was noted as useful: the knowledge "is very intangible in a way, but we speak now the same language, we understand each other in these forums, and I understand the city's needs and they understand where we are heading, this is very critical and a fundamental way of bringing in new concepts, new critical ideas into the discussion" [BAR1]. The learning process provided a platform for stakeholders "to meet and discuss things that they normally do not have room for discussing in their daily work-life context" [STO2] (→How). This focus on the learning process itself supports the notion that knowledge is not a package that can simply be transferred from producers to users; instead it is better seen as "a process of interaction characterized by multiple changing meanings and interpretations about what the knowledge is about, and how relevant, challenging, or good it is considered to be" (Tuinstra et al. 2019:135). Related to this, we argue that the saliency of the knowledge produced, apparent in several of the ENABLE cases, was tightly linked to its legitimacy, i.e., being respectful of stakeholders' values and beliefs in an unbiased and fair way (Cash et al. 2003), which in ENABLE was actively sought through its transdisciplinary approach (→How, →Who, →When). Nevertheless, we note the transitory nature of solutions to societal problems, inherent to transdisciplinary research (Jahn et al. 2012). It also became apparent that differences across ENABLE cases in terms of knowledge applicability for policy and society reflect the notion that actors in the learning process "enter a setting that has already been shaped by previous experts and past advisory practices, including formal and informal rules and codes of working, as well as a certain understanding of what counts as authoritative knowledge" (Tuinstra et al. 2019:128).
Similarly, interviewees from Halle and Barcelona highlighted the thinking around filters through ENABLE's conceptual framework , together with the concepts of availability, accessibility, and attractiveness of GBI (Biernacka and Kronenberg 2019) in relation to environmental justice (Langemeyer and Connolly 2020) as useful for science but also for policy and society, which underlines their potential to act as boundary concepts (→How). All these insights speak to the notion of integration, considered to be the main cognitive challenge of transdisciplinarity and defined as "the cognitive operation that establishes a novel, hitherto non-existent connection between distinct entities of a given context" (Jahn et al. 2012:7). Considering the insights reported throughout this article (see also , it becomes apparent that the ENABLE learning process entailed the three levels of integration suggested by Jahn et al. (2012): epistemic (understanding the methods, notions, and concepts of other disciplines and recognizing and explicating the limits of one's own knowledge); social-organizational (explicating and connecting different interests or activities of participating researchers, subprojects, and larger organizational units); communicative (establishing some kind of common language that advances mutual understanding and agreement). In this regard, ENABLE's outcomes could be useful for funding bodies because they show "what interdisciplinary research can be about and what different parts are needed, ... other capacities than [ordinary] research projects" [STO2]. Finally, interviewees also noted that the learning from ENABLE can support the writing of new research proposals and how they conduct future similar research ("why it worked or why it didn't work" [CC1]), teaching and writing of scientific publications, work as experts in other processes, the ability to engage with emerging topics, like the role of GBI during the COVID-19 crisis (see Barton et al. 2020), or promoting further collaboration with local stakeholders. Interviewees also reflected on the usefulness of knowledge for themselves. Most answers referred to expanding one's conceptual understanding or methodological toolbox, related to: the concept of filters, "quite useful ... for the way you engage with the benefits of green and blue infrastructure" [HAL2]; the framework (and assessment methods) of GBI availability, accessibility, and attractiveness; having "a more operational idea of the actual Ecology and Society 26(4): 19 https://www.ecologyandsociety.org/vol26/iss4/art19/ design of transdisciplinary science, ... what are the critical things that need to be integrated, how can they be integrated and how can I describe how to do that and the resources needed" [STO1]; a deeper understanding of preferences, values, and perceptions of citizens concerning the design of green infrastructure; the Blue-Green Factor assessment; or thinking about "the beneficial overlaps between different techniques and methods" [HAL2] (→What). Expanding one's conceptual understanding can support individuals in adapting mental models and promote double-loop learning (Fazey et al. 2005). These self-reflections can stimulate individual researchers to orient their work toward favoring learning over knowing, which is one of the ways to help build improved capacity for social learning in a sustainability context (Clark et al. 2016). What to learn about? 
Different types of knowledge

The most recurring topic emerging from the interviews regarding this dimension related to insights on working with stakeholders. These included (i) the benefits, challenges, and limitations implied ("For the first time we were doing this exercise with stakeholders ... and I think this is something that we learned is very useful and that we would like to do in the future as well" [LOD1]; "I was reminded of the challenges of working with stakeholders, in terms of problem understanding, the time budget and capacity in total" [HAL2]; "I see much more the limitations linked to that and the bias that the selection of stakeholders brings with it" [BAR1]); (ii) a better understanding of stakeholders ("Knowing who the actors are, how they view the system, how they think about other actors" [STO1]); and (iii) how to better align the research with stakeholders' needs ("Getting the research from the lab to the end-users and practitioners, that is definitely what we have learned a lot about" [BAR1]). These insights represent target knowledge as well as "knowledge on how to create knowledge." Related to the latter, but also to systems knowledge, another topic that emerged was learning about applying different methods to specific issues or contexts, which is closely related to the scope and goals of the project ("How different aspects can be studied using different research methods and how manifold methods have been applied to different extents in the different cities and also with different outcomes" [CC1]). A third emerging topic that links with different types of knowledge concerned governance issues with a spatial expression. In one case this had to do with a disconnect between the city-wide scale of planning and the problems at neighborhood scale (related to transformation knowledge and systems knowledge). The other case concerned the surprisingly large impact of formal administrative boundaries on how people talk about values (more related to target knowledge).

The researchers gained further trans- and interdisciplinary "knowledge on how to create knowledge" through ENABLE. They learned new terms, which can act as boundary concepts (Opdam et al. 2015; →How). These were mainly the concept of filters (infrastructural, perceptual, institutional) mediating the benefits flowing from GBI, put forward in ENABLE's conceptual framework (see Andersson et al. 2019); flows (of benefits) and barriers, both closely associated with the filters (Wolff, Mascarenhas, Haase, et al., unpublished manuscript); and the triad of availability, accessibility, and attractiveness of GBI (see Biernacka and Kronenberg 2019). Several interviewees stressed that it was not so much about learning new terms per se, but rather about trying to operationalize them and "having a deeper understanding of what the terms could mean" [STO1], particularly in the different contexts of each case study city. This happened for example with the concepts of environmental justice, nature-based solutions, sustainability, and resilience. In line with a process perspective of learning (Beers and van Mierlo 2017), several interviewees identified not only knowledge, ideas, insights, or perspectives as such, but referred to learning opportunities that the project offered them, often related to the conceptual approaches and different methods that were applied in the project (→How).
For example: "approaching the green infrastructure planning and the benefits of green infrastructure under a framework of resilience and environmental justice" [BAR2], "looking more in-depth into the mapping of preferences and values ... try and test and adjust the Q-methodology for the first time on our own" [CC1], or more generally "learning by doing, learning by mistakes in trying to develop tools for discussing these things along the way" [STO2]. The latter counters the fear of failure, one of the most critical shortcomings that transdisciplinary sustainability research has to navigate (Lang et al. 2012).

Who to learn with?
Diversity of perspectives

ENABLE researchers engaged with various stakeholders throughout the project and drew different learning insights from that engagement. Because of the scope of ENABLE, focused on the benefits flowing from GBI in urban areas, partners engaged mainly with local authorities, especially their planning, environmental, green space management, or similar departments. Engaging with those stakeholders was seen as particularly beneficial to learn about "what is going on in terms of policy" [BAR2], "how processes actually work, what are the real obstacles" [STO1], and "the realities and challenges of planners" [BAR1]. Another type of stakeholder involved in several of the case study cities were initiatives or organizations at a very local scale, e.g., the neighborhood. This was considered useful to learn, for example, about the multiple perspectives of residents in a neighborhood facing several social challenges like unemployment or poor integration of migrants [HAL2]. In some cases, stakeholders also included citizens in general, who were "there on their free time just because they cared about the area or had a specific interest in the area" [STO2]. Engaging with stakeholders generally provided an opportunity for critical reflection among the researchers and for gaining a better understanding of how to design participatory processes in a transdisciplinary research context (including insights on requirements or different degrees of inclusiveness) or how to apply methods coming from research to specific contexts, "so that it is still understandable and can also create meaningful results" [CC1] (→What, →Why).

Interactions with colleagues within the project consortium promoted learning on a more abstract level. This included conceptual development of aspects related to the ENABLE framework, like the notion of barriers (Wolff, Mascarenhas, Haase, et al., unpublished manuscript), learning how to conduct integrated research or work with different epistemologies, ontologies, and different researchers' backgrounds, or stimulating reflexivity to extract lessons from what worked or not in each city. Learning also took place through discussions with other scientists, e.g., in conferences or case study workshops, where insights and experiences from ENABLE can be compared with those from similar projects [STO1] (→What). This supports the notion that mutual learning among researchers during a research process needs to be actively established, and that learning processes beyond the boundaries of individual projects must take place for a comprehensive embedding of one's own case and a contribution to extant knowledge (Lang et al. 2012). Interviewees identified stakeholders who could have been beneficial to the learning process, but who were not engaged.
Private actors were mentioned several times, for example, "stakeholders from private housing companies ... who actually have quite decisive impact on GBI benefits" [HAL2]. Politicians were noted as a type of stakeholder with similarly high influence. Difficult-to-reach stakeholders were also mentioned, namely marginalized groups representing a specific kind of GBI users who influence "the functionality and perceptions of green and blue infrastructure" [STO1]. Other stakeholders included grassroots groups or neighborhood associations, as well as the general public, which includes people who might be engaged in societal issues but not necessarily through organized groups. Insufficient contact with stakeholders (mainly decision makers and practitioners) from case study cities other than one's own was also noted. Related to this, "also maybe direct interaction between cities could be beneficial for the project" [LOD2]. Engaging with other projects running under the same funding scheme was also seen as potentially beneficial, "to exchange, see what is their research focus and if there may be some overlaps or similarities" [CC1].

How to learn?
Framings, boundary concepts, and third places

The project partners promoted a variety of events or opportunities to foster learning within ENABLE. Across case studies, this included workshops with local stakeholders, participation in expert groups, thematic meetings with individual stakeholders, or training events. Additionally, consortium workshops in each case study city brought together the project partners, allowing them to internally discuss different aspects of the project (including self-reflections on the transdisciplinary process itself), as well as to get to know each case study better through field trips and direct interactions with local stakeholders. Common across case study cities was an effort to meet project needs through the events and learning opportunities promoted, while aligning them with ongoing "real" local GBI planning and management processes and challenges (→When, →Why). This guided the framing of each event and the choice of appropriate boundary concepts around which to focus discussions. For example, in Barcelona the concept of nature-based solutions (linked to GBI) served as an overarching boundary concept. Each event was then framed around specific topics related to it, such as the evaluation of effective green roof strategies (Langemeyer et al. 2020) or the resilient flow of ES (De Luca et al. 2021). In Stockholm, a resilience assessment process provided an overarching framing, with each event serving as a stepping stone in the process (Borgström et al. 2021). Researchers there made an effort to "find a language and commonalities, common boundary objects to talk about. We've had to work very hard to find something that they could start their dialogue about" [STO2]. They also conducted "constant framing exercises that we had to do to explain what we were doing and also for us to learn about the system. The framing was everything from writing invitations, writing documentation, having the first presentation at all the workshops that we had ... all these meetings have a very careful thinking about how we start them, how we talked about the system that we wanted to discuss with the actors. So using words that we know that they know about but also then linking them to the conceptual framework within the project, that was a very tricky part" [STO2].
The Oslo case offered an example of another kind of approach. There, the leading ENABLE researcher engaged with ongoing processes as a member of expert groups. Such collaborative approaches can promote a genuine bridging of research and practice, hence addressing a critical challenge for knowledge exchange, that of providing access to research knowledge in ways that meet stakeholders' needs and constraints (Hurley et al. 2016), and enhancing knowledge utilization (Hoffmann et al. 2019). This is aligned with the notion of problem solving organized around a particular application, an attribute of transdisciplinary knowledge production (Gibbons et al. 1994). Framing issues persuasively is an integral part of responding to policy windows, increasing the chances that the research is taken up by policy (Rose et al. 2020). Boundary concepts such as the ones described here can help find shared interests and bridge understanding across multiple knowledge domains (Opdam et al. 2015, Roux et al. 2017).

Across different framings, goals, and formats, several interviewees stressed the fact that the events described here promoted learning both for researchers and for other stakeholders ("It's also learning for us, because we always use these forums for giving key stakeholders the opportunity to present and discuss their work ... There's also a learning process in two directions" [BAR1]; "we had a nice exchange [with a local stakeholder], which I would count as a learning event for both sides. For us as researchers as well as the local stakeholders" [HAL2]) (→Who). This illustrates the efforts of ENABLE partners in promoting third places (Roux et al. 2017), and is aligned with the notion that collaboration between individuals is needed to gain a fuller understanding of dynamic social-ecological systems (Olsson et al. 2004, Fazey et al. 2005). In an urban planning context like the one in ENABLE, planning practice benefits from new perspectives and improved understanding of problems and solutions from research, while research benefits from being informed by practice problems and practical knowledge (Hurley et al. 2016). This also helps build informal and formal linkages between the project team and other stakeholders, which can play a key role in enhancing the use of knowledge coming from the project (Hoffmann et al. 2019).

When to learn?
Key stages, temporal alignment

The most relevant topic emerging from the interviews related to this dimension was the temporal alignment of the research project with ongoing processes in each case study city, in order to maximize the relevance of the former to the latter (→How). This shows recognition that timing influences both the extent to which research findings are likely to be perceived as relevant by decision makers, and the way that knowledge from research is used in the decision-making process (Reed et al. 2014), aligned with the notion of "policy windows" (Rose et al. 2020). It played a relevant role in guiding the "research design and selection of methods" (one of the key stages introduced in the analytical framework), and it seemed to play a bigger role in the cases where stakeholder engagement was more extensive. For example, in Barcelona, with stakeholder workshops taking place around three times a year, the topics of the meetings varied "depending on the needs of the project at some point, at the same time we try also to talk about topics that are relevant for the stakeholders" [BAR2].
However, aligning the project's timeline with those of others involved some trade-offs: "At times the two timelines did not align too smoothly, so we tried to bring in ENABLE inputs at specific times that we thought were relevant. So trying to address different stakeholders' needs and desires in terms of outcomes, which has sometimes maybe detracted from the more pedagogical design of the process" [STO1]. The time preceding the project's beginning often played an important role in aligning the project with the needs and interests of local stakeholders, thereby increasing its relevance. In most cases, ENABLE was part of broader, pre-existing processes involving the researchers and local stakeholders. There were also consultations with stakeholders in the project's preparation phase, "about their needs, what are the priority questions, what are the key topics they want to work on through this process and also thinking about key areas in the city for interventions" [BAR1]. This kind of setting the scene and determining what was relevant for the city was seen as a "critical phase" and "a very useful approach in making the entire stakeholder engagement process worth the effort for the stakeholders" [BAR1] (→Why, →How). This illustrates the key stage of "problem identification and structuring" (Pohl and Hadorn 2007, Enengel et al. 2012), being analogous to the "problem transformation" process, the first phase in Jahn et al.'s model of an ideal transdisciplinary research process, whereby societal and scientific problems are linked to form a common research object (Jahn et al. 2012). The time following stakeholder engagement events was also stressed, particularly in the Barcelona and Stockholm cases, as important for contacting stakeholders, requesting feedback from them, and for focused internal reflection: "We test our ideas and approaches with the stakeholders in the individual meetings. And then we have the reporting back phase, where we presented results to the stakeholders and asked for additional feedback. Depending on the study this is more or less intensive" [BAR1] (→What, →How). This is more related to the stages of "data analysis and triangulation," "reflection/interpretation and synthesis" or assessing new knowledge, and also "dissemination of results/new knowledge" (Pohl and Hadorn 2007, Enengel et al. 2012, Hoffmann et al. 2019). The latter two stages were also the main focus of the stakeholder workshops organized across cities toward the end of the project.

Cross-cutting topic: barriers to learning

Several barriers to learning within the project were pointed out. Concerning interactions between the project team and other stakeholders, barriers included the following: different "cultures of participation" and different starting points across cities (in some cities there were previous collaborations between the ENABLE researchers and local stakeholders, in others not, or the general willingness to participate was low); reaching stakeholders "who do not see themselves as stakeholders" [STO1]; conflicts in scheduling, particularly relevant for stakeholders like grassroots groups, neighborhood associations, or NGOs (→Who); and ENABLE's level of abstraction, which made it hard for stakeholders to grasp its conceptual framework and demanded extra effort to make it more concrete through illustrative examples. Some stakeholders who could have been beneficial to the learning process were not engaged (→Who).
Reasons included changes in personnel within local organizations, which demand renewing contacts and rebuilding trust with researchers; bad or unwanted relationships between researchers and stakeholders; issues of trust among stakeholders ("If you involve people with very strong and very different opinions ... it could take a long time just to find common ground and start to build trust" [STO1]); lack of time from stakeholders like politicians or businesses; and different schedules (e.g., between stakeholders participating on a professional vs. voluntary basis). In this respect, one interviewee noted that "[w]e do have a gap in cooperating with stakeholders from the private sector, that would be in theory and in practice I am not really sure if that would have been helpful for this stakeholder process to learn more. Obviously we could have learned different things, but probably we would have missed out others" [BAR1]. This reflects the need to consider the best form, level, and scale of participation, tailored to the research topic and the preferences and capacities of different stakeholders, instead of assuming that more participation is always better (Enengel et al. 2012, Lang et al. 2012).

Within the consortium, the parallel evolution of a common theoretical framework during the project was thought to have negative implications for the design and integration of empirical methods. A similar issue has been experienced by other authors, for example, in the context of transdisciplinary synthesis projects (Hoffmann et al. 2017). The level of consistency between case studies was often mentioned as unsatisfactory. There was the feeling that different teams were working using different approaches "and because of this the opportunities for mutual learning are not as big as they could have been had everyone worked on much more similar things" [LOD1], or if there had been "a more joint comparative analysis" [LOD2]. For one interviewee there was a tension between trying to understand the system and then also adding the aspects of change; the focus on the former left the researchers with little capacity to address the latter. Finally, time and resource constraints (of both researchers and other stakeholders) were also seen as a barrier. We hypothesize that the barriers described here can be associated with the explorative nature of the project, and the different research teams iteratively working toward a joint understanding of it, making the end goal less clear.

Difficulties related to the use of terms or jargon, including different interpretations thereof, also posed a barrier to learning, mainly within the consortium, but also in engaging with stakeholders. "Sometimes we managed to reach some sort of consensus, in other cases we just had to step back and leave the differences where they were" [STO1]. The triad of GBI availability, accessibility, and attractiveness was mentioned most often. Some partners struggled with the exact definition of each of those concepts, and to some extent different teams used the concepts differently, posing a challenge when it came to cross-case integration. Similar issues of coherence in interpretation were noted for the concepts of perceptions, institutions, governance, or justice. These are known communicative integration challenges in transdisciplinary research (Lang et al. 2012).
Regarding possible reasons underlying such difficulties, interviewees mentioned not putting enough effort into discussing terminology, and differences in how different people express their ideas. One partner who works in applied research felt there was an overload of complex theoretical terms. In relation to stakeholder engagement, the language also needed adjustments according to stakeholders' backgrounds. For example, in Barcelona, stakeholders were concerned about the concept of nature-based solutions, because they were more familiar with the concepts of ecosystem services or environmental services and green infrastructure. Although the difficulties described above posed barriers to learning, discussions on finding common ground for definitions were "particularly insightful for all" [LOD1] and resulted in "a deeper understanding of what the terms could mean" [STO1]. This is a positive learning outcome and is aligned with the idea that a "learning zone" can emerge out of a situation of discomfort (beyond the comfort zone), as conceptualized by Freeth and Caniglia (2020). Establishing some kind of common language that advances mutual understanding and agreement also supports integration in transdisciplinary research (Jahn et al. 2012).

It is also useful to identify unmet expectations and the reasons behind them. In ENABLE's learning process these were mainly related to four issues: (i) Several interviewees were expecting more comparative work (using joint approaches like common scenario development) to be conducted during the project than was eventually carried out. Reasons for this included the constellation of disciplines and expertise in the project, different interests across research partners, or the need to be pragmatic in the face of the existing amount of work. This provides an alternative expression of the concern that "transdisciplinary settings allow for mutual learning but not for joint research" (Maasen and Lieven 2006:406); (ii) The balance between a more theoretical or empirical approach. Whereas one researcher thought that ENABLE ran too much as a scientific project, thereby missing more contact with stakeholders from other cities to learn "from those who deal with realities" [LOD2], another researcher would have wanted "more in-depth discussion on how do we best connect methods, theories, frameworks" [STO1]. This mirrors the two contrasting approaches to transdisciplinarity found in the literature: a lifeworld approach vs. an inner-scientific approach (Jahn et al. 2012), which are linked with a tension between local or context-specific knowledge vs. generalized knowledge (Enengel et al. 2012). Hoffmann et al. (2019) regard these as two processes of knowledge production, which transdisciplinary research processes strive to combine: a societal one, where stakeholders address a particular sustainability problem, and a scientific one, where researchers develop research on that particular problem; (iii) Not being able to conduct some analyses, or at least not reaching as far as desired. This was noted, for example, for system and agent-based modeling, as "data gathering was so hard" [HAL1], or for learning about justice and resilience together, which was not entirely possible, because "it has been so much work just to link green-blue infrastructure just to these two dimensions" [STO2].
Related to this, one interviewee noted that researchers had possibly tried to address too many topics and that "we might have gotten further if we focused on fewer issues" [OSL1]; (iv) There were difficulties in implementing a planned mobility scheme for young researchers across the cities. This was seen by some as a missed opportunity because it "is a very fruitful way of learning and understanding and exchange" [STO2]. It is a very concrete example of an effort to foster conditions for collaborative learning, in line with suggestions by Freeth and Caniglia (2020). One interviewee noted that expectations changed several times over the course of the project, which is not necessarily negative, as illustrated by the Barcelona case, where most of the studies conducted were carried out as they emerged as relevant during the project's lifetime.

Cross-cutting topic: role of context

The role of context, in a project like ENABLE analyzing real complex urban social-ecological systems, became apparent in several responses. Different cities are at different stages in terms of capacities, existing data, and knowledge. The starting point in each city determines to a smaller or greater extent how far one can go in terms of testing new ideas or approaches. "Maybe ecosystem services and green infrastructure are two examples for that: Barcelona has incorporated that already, other cities have not, so if you now come up with new concepts and you elaborate further on this, but the baseline is not given to work with these concepts, then obviously that is much more difficult" [BAR1]. As another interviewee put it, "I would love to be advanced but first I need to have a basic database" [LOD2]. There are also different cultures of participation, shaped by the levels of trust and interest in such participatory processes. This became apparent when comparing the stakeholder engagement that took place, for example, in the Nordic cities (Oslo, Stockholm) represented in the project and in post-socialist cities (Halle, Łódź). Other contextual factors inherent to stakeholders, like cultural differences, e.g., different languages, or different interests, had to be dealt with when engaging with them. Political changes or changes in personnel within stakeholder organizations, like local authorities, can imply contextual changes in perspectives or attitudes and demand building new relationships between project researchers and other stakeholders. Even among project researchers, "your personal background and legacies play a role how you see things and how you understand progress, conflicts, dependence, weakness, success," so that it becomes relevant "to see how previous learning shapes recent learning" [HAL1]. These insights corroborate the notion that "[t]ransdisciplinarity is a context-specific negotiation" (Klein 2004:521).

Study's limitations and strengths

A relevant limitation of our application was the inclusion of only the consortium partners, or core scientists (Enengel et al. 2012). Including the views of other stakeholders involved in the project would allow us to assess the learning process more comprehensively. It would also contribute to our approach's ability to, at least partly, assess social learning, i.e., a change in understanding in the individuals involved and how the process occurred through social interactions and processes between actors within a social network. However, this was not possible for practical reasons.
In Appendix 1 we provide the interview protocol developed specifically for that purpose, for future applications.

The double role of the co-authors also as researchers in the ENABLE project demands some clarification and reflection. The first author was part of the research team leading the case study for the city of Halle (Saale) in Germany. This allowed him to be more actively involved in, and consequently gain deeper insights about, the project activities taking place in that city than for the remaining case studies. However, he took more of a secondary role in his involvement in most of the activities specific to the Halle case study, allowing for a rather more distanced perspective. Nevertheless, it is impossible to equate this to a situation where the first author would be external to a specific case study or even to the whole project consortium. In principle that would allow for a more distanced perspective, but it could also carry disadvantages with it, most notably a lower level of trust between interviewer and interviewees, with negative impact on the (quality of) information given by interviewees or on their willingness to be interviewed at all by someone external to the project. Aware of the limitations inherent to this study's context, we took some precautions. The first author strove to draw his analysis solely from the material resulting from the interviews. He also wrote the draft manuscript of the article, while the remaining co-authors contributed at a later stage and were not involved in processing interview data. This was important because they were also interviewed for the study. By appending the coded interview transcripts to the article (Appendix 3), we also give readers the opportunity to make their own judgments on our findings and claims, in the face of the underlying data.

The analytical framework developed in this research proved useful to us for capturing the learning process. It enables a broader analysis than each of the frameworks adapted for its development (Enengel et al. 2012, Hoffmann et al. 2017, Roux et al. 2017) because it covers more dimensions. For example, "including the 'who' and 'when' may lead to a more sophisticated conceptualization of knowledge that goes beyond simply categorizing different types of knowledge and instead emphasises knowledge as more as a process that can be modelled" (Evely et al. 2012:7, unpublished manuscript). Also, the questions developed to guide the interviews elicited from the interviewees the information needed to operationalize the framework. We argue that our approach can be useful for future transdisciplinary research projects with similar scope and in different geographic contexts, not only for ex post analysis as we did, but also ex ante, to consider the different aspects of the learning process explored here at a planning stage. As one interviewee put it: "One thing that could be very beneficial for us researchers who are aiming at these very complex research and knowledge processes is to find tools for ourselves to capture this, like having this interview got me thinking about things that I would not necessarily have time or room or acknowledged that I would need to reflect upon. Because if I have that self-reflexive routine that would make this transfer of experiences and insights between projects and processes more clear and visible for me and maybe for others as well" [STO2].
This statement is aligned with the notion that learning outcomes may lead to increased reflexivity, but they can also result from changes in reflexivity (Beers and van Mierlo 2017). Applying our approach in other projects would allow gathering additional empirical data to build a more robust body of evidence regarding the findings of this exploratory research. Whereas the analytical framework supporting our analysis can be used in different stages of a learning process, the interview protocol we developed to operationalize the framework is suitable for an ex post analysis. Nevertheless, we acknowledge the importance of continuous reflexivity throughout transdisciplinary research efforts (Polk and Knutsson 2008, Lang et al. 2012). In the ENABLE project, this was pursued in different ways, for example in meetings among case study teams, or through time slots in project workshops dedicated to joint reflection. However, reporting on the whole reflection process is beyond the scope of this article.

Fostering a learning process within transdisciplinary research projects: take-home messages

Interviewees reflected on what were the main take-home messages from the project. Based on their answers and further reflection among the authors, we present a set of lessons learned, aiming to support future similar transdisciplinary research projects. Regarding their validity, we acknowledge the exploratory nature of this research. Nevertheless, one should note that transdisciplinarity is "problem solving capability on the move," so it is hard to predict "where this knowledge will be used next and how it will develop" (Gibbons et al. 1994:13). The following emerged as the main lessons learned, clustered around six themes:

1. Capitalizing on what already exists: (a) Assess what sort of systematic learning can be gained from already existing data and knowledge, e.g., feeding it into dynamic models, before collecting new data. There is often the tendency to add more data rather than learn from what already exists. (b) Take advantage of opportunities to engage with ongoing policy-related processes, instead of designing stakeholder engagement processes from scratch that do not have a policy-driven purpose or relevance.

2. Addressing trade-offs inherent to different types of knowledge: (a) Find a balance between addressing local stakeholders' concerns and conducting comparative research. Transdisciplinary urban research should be relevant for stakeholders, building on their needs, if it is to be impactful. Nevertheless, comparing problems across cities helps put the magnitude of local problems in perspective and in context, and helps sort out priorities. It also helps thinking about future scenarios, because one can see alternative states that a given city could be in. Approaching different case studies with a common approach is particularly useful for learning among scientists. These goals can be achieved, for example, by establishing cross-case working groups targeting specific sets of issues and promoting interactions between researchers and local stakeholders from other cities. Being part of a multi-city endeavor can also leverage stakeholder engagement (higher willingness to participate if people know the same effort is being conducted in other cities, especially "model"/frontrunner cities). (b) Take into account the important role of context in real complex urban social-ecological systems.
This relates to the previous point and is particularly relevant when trying to draw more general insights from different case studies.

3. Fostering inter- and transdisciplinarity: (a) For integrated research running in multiple case studies, promote a dialogue between the different research teams that is as continuous as possible. In ENABLE, conducting a deeply integrated transdisciplinary project over a dispersed network proved challenging in this regard. Having a mobility scheme in place, which allows extended stays of researchers in partner organizations, might be helpful. ENABLE had such a scheme but it was not fully realized, so reflecting on its potential was part of the learning process. (b) Embrace different views, expectations, the variety of knowledge people have, and the way they use this knowledge. Accept that there are multiple possible pathways toward a certain desirable state or goal. This might require stepping out of one's comfort zone, e.g., in terms of one's academic background, which can be useful to stimulate learning in interdisciplinary collaborative research (Freeth and Caniglia 2020). Paying attention to how one frames issues and looking for ways to find common ground can prove useful to deal with such differences. This demands being aware of and assuming certain researcher roles, like that of a process facilitator (facilitating the learning process), knowledge broker (mediating between different perspectives), or self-reflexive scientist (being reflexive about one's positionality and normativity, as part of the system or process under study; Wittmayer and Schäpke 2014). (c) Assign different roles within the team promoting the learning process. This can enable different team members to have different perspectives on the same process. It requires the respective human resources; for example, one person will in most cases not be enough to cover all the different needs of the process, like facilitating and being an observer. Constant reflection on researchers' roles is also advisable; see the previous point and Wittmayer and Schäpke (2014) for additional roles.

4. Engaging stakeholders: (a) Consider the pros and cons of different stakeholder engagement formats when designing the engagement process. For example, smaller focus groups bring fewer perspectives together than a larger stakeholder workshop, but they can create a safer space for discussion among stakeholders, while also freeing the researchers from other roles (allowing them to act more as facilitators), with benefits for the learning process in both cases. A mix of different formats in different stages of the project, targeting specific objectives, can be most useful for the learning process. Choosing the best mix should take into account the distinct interests, roles, and practices of communication brought by stakeholders. (b) Accept that virtually no participatory process is perfect. Every project has its limitations, leading to trade-offs in terms of who is involved and what is learned. It might not always be needed or suitable to involve stakeholders in all phases of the project, because different stakeholders contribute differently to different stages of the research process. Participation is shaped by the research aims and should consider stakeholders' values, preferences, interests, power levels, or constraints. (c) Be explicit about what is on the agenda in terms of stakeholders or processes exerting pressure on GBI, underlying conflicts, or factors hindering research or initiatives to promote GBI.
5. Supporting a learning environment: (a) Promote exploration and researchers' own learning within the research team. This was seen as a very positive experience in ENABLE because of its flexibility, and as something not to be taken for granted, compared to other projects with a more rigid approach. (b) Acknowledge that different kinds of learning opportunities can be important to foster learning, each contributing its own benefits to the whole learning experience. ENABLE researchers identified various activities in this regard, for example, the writing of scientific articles as an interdisciplinary learning process, internal workshops providing a safe-to-fail environment, or workshops in other case study cities giving insight into other contexts. (c) Encourage learning also beyond the boundaries of the project. Strive to share the project's products and knowledge with stakeholders at different levels, enabling a sustained communication channel between the researchers and other stakeholders. (d) Acknowledge the importance of failure in both process and outcomes. Analyzing non-success can reveal the weak points of a system, which can put it onto an undesired pathway. Reflecting on failing efforts can be insightful not only for the internal learning process but also for others, to avoid making the same mistakes. In ENABLE, having safe-to-fail opportunities was seen as beneficial for learning, in line with the notion that a "learning zone" can emerge by going beyond an understimulating comfort zone (Freeth and Caniglia 2020).

6. Fostering reflexivity: Develop tools and routines to capture the learning process taking place in the project. Having a self-reflexive routine can facilitate the transfer of experiences and insights between projects and processes. Several ENABLE researchers found the exercise reported in this article useful for triggering thinking about issues they would not otherwise have had the time for, or acknowledged the need, to reflect upon.

CONCLUSION

Our analytical framework for capturing the learning process taking place in transdisciplinary research projects covers different dimensions of the learning process (Why, What, Who, How, When). It draws inspiration from and expands existing similar frameworks, and has been operationalized through an interview protocol across five European urban regions. The framework helped us distill a set of recommendations for future similar transdisciplinary research projects. These include capitalizing on what already exists, addressing trade-offs inherent to different types of knowledge, fostering inter- and transdisciplinarity, engaging stakeholders, supporting a learning environment, and fostering reflexivity. More generally, the case application also provided empirical insights for each of the framework's dimensions, and identified cross-cutting issues concerning barriers to learning and the role of context. Further research is needed to test and develop the framework's applicability for more diverse groups of stakeholders; the case only drew on the experiences of the researchers in the project consortium. Finally, while ours was an ex post application, the framework can also be used ex ante to plan transdisciplinary projects that enhance learning in its multiple dimensions, and throughout projects to identify and engage with barriers to learning and make best use of evolving insights.
This questionnaire aims at capturing the learning process that accompanied the project implementation, from the point of view of the different actors involved in the process.

1.2. What do we mean by "learning process"?
The learning process refers to the production of knowledge as a joint process among stakeholders and scientists, building on the notion of mutual learning, defined as "the basic process of exchange, generation, and integration of existing or newly developing knowledge in different parts of science and society" (Scholz, 2001). You can also think about "insights" or "perspectives" gained through the process.

Disclaimer on data handling
Results of this questionnaire will be used exclusively for research purposes under the scope of the ENABLE project. Presentation of results will not identify any respondent's name. No personal information such as phone number or bank details will be collected. E-mail address will only be collected if voluntarily given by the respondent (for purposes of receiving further information on the project), but will not be included in the presentation of results. By proceeding you consent to take the survey (you can revoke this at any time).

3. Questions related to the Most Significant Change
3.1. What did you find most interesting and useful from the project? What were the main "take-home messages"?
The questions below are relevant during the MSC interviews and should be introduced, but only if they are not spontaneously mentioned by participants.
3.2. Could you actually apply some of the new knowledge/insights/ideas resulting from the project in your own activities (e.g. in other research projects)?
3.3. Could you identify any barriers that prevented knowledge exchange between the research team and local actors?
3.4. Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?
3.7. Is there anything you want to add regarding your experience with the project, which has not been mentioned so far?

Interview protocol for stakeholders

1. Introduction
General ENABLE introduction
ENABLE is an EU-funded research project that aims to develop and test new methods and tools to leverage the potential of GBI interventions in neighbourhoods and across metropolitan regions while adopting a social and environmental justice perspective and taking into account the perceptions of local stakeholders. It tests possible GBI solutions to urban challenges in the metropolitan regions of Halle, Barcelona, Łódź, Stockholm and Oslo, while also exchanging with the city of New York.

Aim of this questionnaire
This questionnaire aims at capturing the learning process that accompanied the project implementation, from the point of view of the different actors involved in the process.

1.3. What do we mean by "learning process"?
The learning process refers to the production of knowledge as a joint process among stakeholders and scientists, building on the notion of mutual learning, defined as "the basic process of exchange, generation, and integration of existing or newly developing knowledge in different parts of science and society" (Scholz, 2001). You can also think about "insights" or "perspectives" gained through the process.

Disclaimer on data handling
Results of this questionnaire will be used exclusively for research purposes under the scope of the ENABLE project. Presentation of results will not identify any respondent's name.
No personal information such as phone number or bank details will be collected. E-mail address will only be collected if voluntarily given by the respondent (for purposes of receiving further information on the project), but will not be included in the presentation of results. By proceeding you consent to take the survey (you can revoke this at any time).

2. Questions on the learning process
2.1. In which role(s) did you get involved with ENABLE (e.g. practitioner in organization X; researcher at university Y; citizen with no particular affiliation)?

3. Questions related to the Most Significant Change
3.1. What did you find most interesting and useful from the project? What were the main "take-home messages"?
The questions below are relevant during the MSC interviews and should be introduced, but only if they are not spontaneously mentioned by participants.
3.2. Could you actually apply some of the new knowledge/insights/ideas resulting from the project in your own activities?
3.3. Did you feel that you could influence some aspects of the project (e.g. directing research questions; identifying issues to focus research efforts)?
3.4. Do you think that the project promoted interactions with other actors in the city?
3.5. Could you identify any barriers that prevented knowledge exchange between the research team and local actors?
3.6. Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?
3.7. Is there anything you want to add regarding your experience with the project, which has not been mentioned so far?

Informed consent form

Research project title: ENABLE
Research investigator: André Mascarenhas
Research Participant's name:

Within the ENABLE project, we are conducting a study on the learning process taking place during the project. For that study, we are conducting interviews with members of the research team, to gather their insights on that topic, based on their experience during the project. No personal data will be collected through this interview. The results of the study are to be published in the form of an open-access scientific article. This consent form is to ensure that you understand the purpose of your involvement and that you agree to the conditions of your participation.
Please read the information contained in this form and then sign it to certify that you approve the following:
• the interview will be recorded and a transcript will be produced;
• you will have access to the transcript and be given the opportunity to correct any factual errors;
• the results of the study are to be published in the form of an open-access scientific article;
• the transcript of the interview will be analysed by André Mascarenhas as research investigator;
• access to the interview transcript will be limited, during the writing of the scientific article, to the co-authors (André Mascarenhas, Johannes Langemeyer, Erik Andersson, Sara Borgström, Dagmar Haase), and afterwards will be made available as supplementary material to the scientific article;
• any summary interview content, or direct quotations from the interview, that are made available through academic publication or other academic outlets will be anonymized so that you cannot be identified, and care will be taken to ensure that other information in the interview that could identify yourself is not revealed;
• the actual recording will be deleted after the scientific article has been published;
• you have the right to stop the interview or withdraw from the research at any time;
• any variation of the conditions above will only occur with your further explicit approval.

R: Maybe in three contexts. I involved mostly non-ENABLE collaborators / stakeholders, which means that the learning is more dynamic and interactive, there's learning related to more method development, but that's more classical research, in this context it is also interesting the learning with users of the results outside of our institution. So the three contexts were the participation and development of a Norwegian standard for the blue-green factor or green point system, participation in the development of (a vision for) a standard for valuation of trees in Nordic countries, where we participated in an expert group on behalf of the project, and work on spatial modelling of green roofs as an input to the green roof strategy of Oslo. Those were the three where we had a lot of contact with external actors. Regarding the order in which we did these things, since the beginning of ENABLE we were arguing for the need for a blue-green factor standard for Norway, there was only one existing for Oslo and a few other municipalities, so we were part of the actors calling for a Norwegian standard, and also since the beginning of the project we were engaging with other Nordic tree-valuing researchers on the need to update an existing standard for valuation of trees, to take better account of ecosystem services. Those are a few things that are ongoing since 2016, and then the green roofs modeling with multi-criteria analysis was a later initiative which started a couple of years ago.

What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?
R: The fact that conclusions or hypotheses you might already have had working in your city, they are either rejected or reinforced by the possibility to compare with other cities. So I found it very useful the work that was done on comparing green space access and availability, the comparative mapping work across the cities, I found that very useful to sort out what was important and what wasn't important to focus on in Oslo.
Sounds a little bit contradictory in the sense that we should as researchers be working on city-specific needs, but it's easy to get the needs of the city and the research interests mixed up, and it was kind of easier to sort things out sitting together with other cities and hearing about their priorities and also comparing if access to green space was really a big issue in Oslo or if it's just a big issue locally. Those kinds of things become clear when it has this comparison possibility that the project gives you.

R: In ENABLE, more than in any other project previously, we tried to connect with ongoing initiatives or real decision-making processes outside the project, not so much constructed stakeholder discussion spaces within the project. We did that more than I've done before in other projects and that's why I was emphasizing the participation in these expert committees on the blue-green factor standard, or the standard for tree valuation or the green roof strategy in Oslo. Those were processes not designed by ENABLE, but where ENABLE contributed to either their set-up or just participated in something that was already established as expertise, and that gave us some insights we wouldn't already have, so the learning was a lot about how to make the research in ENABLE relevant for these external processes that already exist, instead of designing a stakeholder interaction space where the stakeholders were adapting to that research design. Often we design stakeholder workshops where we set the program and we invite the stakeholders into a space created by us as researchers, but at least in Oslo and ENABLE we were more than anything participating in spaces designed by other people and trying to make our research relevant for that process.

Through which project-related activities (e.g. workshops in other ENABLE cities, stakeholder workshops in own city) do you think you have learned the most? And the least?
R: See processes mentioned in 2.2. The learning was a lot about how to make ENABLE results relevant to these other processes.

2.5. Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?
R: I guess we spent quite a long time, but that was a complementary one, on deciding about these filters. I don't think I ever used the concept of filters before. (unclear) We were kind of participating in its definition from the start but possibly [STO1] had used this concept before, but it was a new concept to me or a new framework. Difficulties: I can't remember any situation where we were obviously talking past each other, but the discussion on the filters took a long time so that's maybe evidence that we weren't putting the same things into those concepts to begin with, but of course we set up a process to understand each other from the start, so that was normal.

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?
R: In every project on urban ecosystem services we continue to have to work with engagement with the municipalities, it's not like you just do it in one project and then everything stops.
You have to keep coming back to it, often several times within the same project, because the municipality - even the Oslo municipality, the best equipped municipality in Norway - has very tight budgets and personnel time to engage with research projects. And there's a lot of experts with whom we make a relationship, who quit and move on, then there's new personnel and you have to start the whole trust-building exercise from the beginning again. That has happened several times with different agencies (like the planning and building agency, the environment agency, the water and sewage agency and so on), so it's a constant effort to renew contacts. In variable ways I could have hoped for more engagement from the environment agency, but the reasons for that are due to personnel changes. It's not a structural thing about ENABLE or even a structural weakness of the environment agency or the municipality, it's just the reality that - if you want another learning experience - that's probably the kind of meta-experience in the background, that engaging with the stakeholders is a continuous and time-demanding process.

For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?
R: Those three processes - the blue-green factor, the tree valuation exercise and the spatial modeling of green roofs. One way of structuring the purposes of the research that I've used previously is informative purposes, decisive purposes and technical support purposes. For those three processes, the green roofs modeling was for the purpose of spatial prioritization of where green roofs fill demand gaps and provide most effect for the use of space, and that's a decisive purpose targeted at planning and zoning; and then for the valuation of trees, working on the Nordic standard that would be used by Oslo municipality, that's a technical support purpose because it's equipping the city with a tree damage compensation assessment that's up-to-date, including ecosystem services, and the same would go for the blue-green factor standard, it's a technical support purpose.

In which ways is the knowledge produced in the project useful for you (as support to your activities)?
R: I could come back to the three tools I was talking about before, which all will lead to future work because they're being integrated into standards or plans in the municipality or at national level, so I think I will come back to them in the future most definitely. But one thing recently was... there would be no way I could even have thought of reacting to the current shutdown and the relevance for green space research without having interacted with the cities in ENABLE and the researchers on access to green space, which was a very important aspect of the project. Maybe I wouldn't even imagine I have any relevance in that debate, whereas having been in ENABLE I felt like we could within a few days react to the situation, so that's definitely... not saying I'm gonna become a COVID-19 green space access researcher from now on - but you could maybe see, if our blog piece gets published relatively quickly then we might get contacted, some of us in our respective cities, to participate in further research on those topics - on resilience and in relation to pandemics. That would be a very direct result of ENABLE that couldn't have come about with any other project.
R: One expectation I had myself which I didn't fulfill was, I was hoping that it was possible to do more comparative modeling work between the cities, so we tried with multi-criteria analysis valuation. I thought maybe in the very beginning, when we were writing the proposal, that I would be working more on, for example, monetary valuation of the benefits of the green space, which is my core expertise, but coming to the project there weren't many other researchers within the team that had that background, so it didn't seem possible to do comparative monetary valuation, so we switched to multi-criteria analysis. For a while we were trying to see whether we could do something comparative on agent-based modeling. That didn't go anywhere, so I probably had excessive expectations on implementing the same kind of quantitative spatial monetary modeling across cities. That didn't turn out to be possible because of the combination of the constellation of disciplines and expertise in the project and also because of maybe over-ambition on my part. And then we didn't end up having enough capacity to do that and the other things we wanted to do in the project. So I think possibly we went out with too many topics from our side, but it is my fault; we might have gotten further if we had focused on fewer issues.

Could you identify any barriers that prevented knowledge exchange between the research team and local actors?

R: It's a bit of a chicken-egg problem. I think we spent a lot of energy working on our conceptual approach with the filters paper, and so we didn't have in place this common theoretical design until halfway, or even past halfway, in the project, which meant that it wasn't so easy to design the empirical methods within the context of the filters framework - although it's so general that you can always squeeze things back into that framework - but if the filters paper had existed and we had based the proposal on that as a theoretical framework, we could have maybe achieved more integration across methods. In the current project design there was kind of a letting-many-flowers-bloom approach, and the project has been very rich for that, but if we had had a framework earlier we might have been able to link across between methods and cases - this is a hypothesis.

3.4. Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?

R: I have enjoyed participating in policy guidance and policymaking processes on behalf of ENABLE in Norway and I will definitely encourage using that modality again in the future if it's possible. Sometimes it's a matter of taking advantage of opportunities that present themselves, because you don't always have a policy process you can connect into from a research project, and we obviously can't design a policy process with a research project. So I would say, to the extent possible or where possible, connect the project to ongoing policy processes rather than designing stakeholder interaction contexts which don't have a policy-driven purpose. Sometimes it feels like as researchers we're driving the policy agenda for the stakeholders - and we have to sometimes, there's a vacuum and there's no other way to do it - but if there is a process ongoing, try to connect to that instead of designing a separate space for interaction.

What did you find most interesting and useful from the project? What were the main "take-home messages"?
R: Comparing problems across cities helps you to put the magnitude of your local problems in perspective and in context and helps to maybe sort out priorities. So if you can complement doing that with prioritizing your research issues related to what local planners and stakeholders are saying is important, then that would be the optimal combination. If you only knew one or the other, then you might be focusing on too many problems; if you listen to all stakeholders with all their agendas, you might get bogged down in rabbit holes, so having the cross-city perspective helps you to kind of find out what problems you have in your city and what the real resilience issues are. It's really quite difficult to think about urban resilience if you are locked in your own city bubble, because it's really hard to think in terms of future scenarios when all you can see is your own city landscape at the present point in time. When you can compare and contextualise your ecosystem services or nature-based solutions by looking across cities, you get this space-time dimension which helps you to think more clearly about urban resilience, because you can see alternative states that your city could be in, and that's not possible, or at least much more difficult, when you only work in your own city bubble. If I think in a very conceptual way, that might be a take-home message from ENABLE. You can't really do urban resilience studies well unless you have a cross-city comparison approach.

..Most Significant Change ..Recommendations for future

HAL1

2.2. What events or other opportunities to foster learning did you promote in your ENABLE case study city? In which stages of the project did they take place?

R: Two workshops in Halle with stakeholders, one more at the beginning of the process, the other one more towards the end.

2.3. What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?

R: There was one big insight: the scales in the city operate independently. Existing problems at the local scale are not reflected by urban planners. For example the bad image of Halle Newtown, where people don't use the offers of the municipality. Urban planners don't understand that giving money to and engaging active people (like [local stakeholder name]) does not solve the issue. The issue is in the pattern of the population in the city, which is reinforced by the city government putting all neglected groups there. The point here is that sometimes one scale doesn't see the other and vice versa. This told me also that our core principles in landscape ecology or urban ecology of scale transparency or scale transmission might be correct on the natural science side but might be misleading in some spheres of the social and planning-economic side. Arjen Buijs with his mosaic approach might be closer to how this works.

Do you feel you learned something from the research team? And from other actors in the city? Can you identify what you have learned from each of them? (main items)

R: Yes, there were two issues that were not very positive and that unexpectedly did not work. The first issue is that the Q-method doesn't work. I was so optimistic that this method could yield additional knowledge and could be complemented with the mental mapping, and I am not sure what exactly went wrong. The second issue was the resilience assessment.
It is so much shaped to the conditions of the researchers who are developing this concept that it is hard to adapt it to any other context that might run under a slightly different regime. Also the last final workshop in Brussels showed that there are many different people interested. The lady from Oslo was very positive, active and 'about data'; the lady from Lodz was saying: "We have to take people at hand and guide them through the jungle of what we are doing". And the ladies from the EU want to push something. But there are, again, so many scales and levels that the green, the core issue, the ecosystem is almost unimportant. Human and societal impairments were the more present topic. We talked about green roofs, but nature, as a real intrinsic issue, did not play a role. Nobody talked about diversity. This was the same at the URBES project. The diversity and the real nature aspect, beyond the functions for humans, was missing. There is an ongoing bias, which makes us circle around our core issue. Maybe we don't understand it or have no knowledge about it. Or have a fear that we would discover something which is completely not working, like this virus or insects we don't want to have. We just touch the core issue in terms of it being an asset, a stock, and we have to plan and look for perceptions. But we do not look at it. (This was more a side line of learning and did not build too much on what was there.)

Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: I knew this term before, but I use it much more now: the flows term that [STO1] introduced in June, in the first meeting, because he introduced it very clearly. The other group of terms is related to accessibility, availability, attractiveness and barriers. I never used the word barriers so often as within ENABLE. The term barriers was very useful, because it shed light on an aspect that we don't discuss too much. It has its limitations but it is a missing link in terms of green space accessibility. Also the Q-method was new. Not new to me was mental mapping as a method, but the content was new. I really engaged with this method for the first time. Also, I used the word filter before in different contexts, like chemical or optical filters or energy budgets. But I applied the word filter in my science for the first time in terms of ecosystem services embeddedness or flow. Difficulties: the accessibility and availability terms that were defined by the Polish team, the exclusion or inclusion criteria, are not entirely clear to me. This was for me too static. We often talked about two different things when talking with the Polish team.

Leipzig / Halle and Eastern Berlin knowledge in. Here I listened to very crude perspectives of environmental injustice in other parts of eastern Europe, which I did not know to this extent. This opened my eyes on how relative the assessments of non-accessibility and barriers are in a part of a continent where you think there are many common legacies and other common things. That was the most impacting outside club of people during the ENABLE project. And what might not be a personalized actor, but what impacted my research, thinking and learning during ENABLE, was the paralyzed units of planning in Leipzig and Berlin during the hot summers 2018/19. This shaped my thinking towards: why establish new green if we cannot even care about the existing one?
What does it mean to have more and more green while having less and less water?

2.8. For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?

R: In general, we created new data for each city, which is always good for urban / regional planning departments or teams. Also, we contacted new stakeholders, and the other case studies contacted existing relationships, offering our support. Particularly for Halle, mental mapping was very important, because we played it back in the final workshop. And you could see helplessness; maybe we contributed a bit here, insofar as this was the purpose of shedding light on something. The focus was also to open up a new case study for our team. Our case studies were not really a joint approach. They were similar case studies, which could have been an interesting bundle of cases running through a certain lab with a certain sequence of methods. But this was not doable, because every case study has its interests etc. So the purpose of creating a cross-European lab for GBI flows, barriers and filters could not really be reached. But the process was started and we worked on theories and concepts with illustrative examples. We didn't come up with new guidelines for European cities with "do's and don'ts".

What new knowledge or new insights resulting from the project do you consider the most relevant for the planning and management of green and blue infrastructure in your case study city?

R: In Halle I think there are two things: one is the barriers and the thinking in filters. This can be nicely applied to planning, where a lot of planning is already conducted. This is about real assets, about real issues. And when understanding these units that are used and the different variables, this is really very helpful. The other thing is - to a smaller extent - the green roof issue, which was for Oslo very important, for other cities more complementary. For example, for Oslo the green roof issue that [OSL1] was running was very important and wanted. Maybe a bit outside the ENABLE context. And this filters, accessibility, availability and barrier issue was the key - which I think you should remember when thinking about ENABLE. Not so much the resilience assessment they did in Stockholm, because it was hard to see how other case studies who were doing this could really benefit from it. It was a really hard exercise without a clear big benefit.

2.9. In which ways is the knowledge produced in the project useful for you (as support to your activities)?

R: In terms of pure methods, the application of mental mapping in Stockholm and Halle, which really went well and we got interesting results. It is a kind of complement to the survey types I have used so far in green spaces, like PPGIS (public participation geographic information systems) or similar participatory observations or surveys and so on. This was really a gain. The second was to be aware of impairments: that we have so many issues that sit very loosely and that it's not really about the intrinsic functioning of nature. This is a very sad finding. It was not really about acknowledging the dangers for nature that we run into under climate change. The basic requirements for nature were simply ignored. We don't get into the systems and living organisms next to humans in this nature. We talk about "co-" but we try to push our impressions through everything and we see green as a servant, and this makes me sad.
Did the project meet your expectations regarding what you wanted to learn about? If not, what would you have liked to learn about, which was not possible through the project?

R: Firstly, we wanted to continue what we started in URBES. And secondly, related to the case study, I wanted to get away from Berlin and Leipzig and open up a new case study, which is still more fragile than these growing poles. So, this was a very regionally related issue. My expectations were that we get deeper into where we had to stop in URBES. And we did this with the accessibility and barriers, which is a nice continuation of the ecosystem service results from URBES. Also the resilience assessment, where our plan in URBES didn't work out. This didn't work out in ENABLE as well as the accessibility and barriers issue. I wanted to get the system modelling and agent-based modelling in, because the case studies were interesting and we have new case studies, but I saw that the data gathering was so hard. I was disappointed that we didn't manage to get the system- and agent-based models running in the lifetime of the project. We have a good pre-requisite now, but my expectations were higher than what we could achieve. They need more time and cannot be done with so many case studies. Maybe with one case study you could get a deeper understanding and establish a system as well as an agent-based model. But with so many case studies and so many spread workloads it seems impossible. If I could rewrite ENABLE, I would say we should collect the knowledge, compile it, structure it, and try to develop a desk-study on the knowledge we have acquired and get into a systematic learning on how to use real dynamic models, not look-up tables. To learn from what we already have, because we have a lot of data, but we tend to add more data rather than learn from what we have.

3.4. Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?

R: One of the core issues is to focus on failure, on non-success, which is better for understanding, instead of highlighting numbers of what increased or what got better. Because you come to the weak points of a system, which can turn a system onto a bad pathway. Another issue is honesty, which relates to the pandemic. We have nice and friendly communication, which should stay this way, but it's not touching the hot points, for different reasons. Like "I couldn't hire a person; I didn't have enough money; the stakeholders don't want to hear that". My clear statement for communication is: I think we should touch conflict points and give nature and humans in nature a stronger mandate, and not try to be polite to those who set nature and humans in nature under pressure. We have to say more clearly what is on the agenda. When we write a project proposal, we write about basics in terms of achievements; we don't clearly say what is not working. Maybe in a report or a round table, but not in the official communication of the project. We are always adding, but nobody is writing about problems and conflicts and no-go's and issues during the project. But this would be helpful - also for the funder. Insofar, honesty would be a big issue for me.

Could you identify any barriers that prevented knowledge exchange between the research team and local actors?

R: There is a certain mutual dependence that shapes the interaction, and it is characterized by limited resources and limited power.
And it is one part of a neo-liberally shaped system where a lot of deficits need to be fought. For example, in science we have half positions or 25% positions that create very fragile conditions, also for planners. Environmental and social budgets are the first to be cut, which makes us a very vulnerable group of people that try to make the best out of these situations. We are mutually dependent - they have to include science and we have to apply our knowledge to disseminate case studies, so we are relevant. The celebrating of the mutual relevance shapes our relationships in the same way as real interest. But I'm not sure if mutual dependence is more important than the interest in nature and in people. All in all, this vulnerability in the system shapes us all, since we are not the powerful actors; actors like e.g. those from the housing market won't listen. And we know this and we know that our suggestions are not valid if we don't include powerful actors. This general dependence became very clear in ENABLE.

..Barriers to learning

R: Firstly, that scales matter but they don't always communicate. Second, we are circling around the real co-habitation of humans and nature in cities. We look at nature as a stock or asset but forget about its real importance. And third, we need more empirical data, measurements and knowledge from the nature side. We were strong on the social side and weak on the nature side, and like this, co-working and co-learning cannot work.

3.7. Is there anything you want to add regarding your experience with the project, which has not been mentioned so far?

R: At the workshop with the Eastern Europeans from Romania, Hungary, Slovakia and Ukraine in Lodz it became clear to me that where you come from, your personal background and legacies play a role in how you see things and how you understand progress, conflicts, dependence, weakness, success. We saw this in ENABLE, comparing the restrictive opinions by [colleague name] with the "we know how this works" attitude from [colleague name]. Your local context plays a role in how you learn even if you see the same things. In another round of interviews after ENABLE you could ask people about what sources of remembering, of personal knowledge, of tacit knowledge they use to reflect on and mirror projects like ENABLE. It would be interesting to see how previous learning shapes recent learning. But this needs more preparation to formulate the right questions. The negative shape of change and overall loss shapes people's minds, as well as the experience of no real change or other changes. I saw this in ENABLE, but we need a concept to really articulate this in a structured and systematic way.

What events or other opportunities to foster learning did you promote in your ENABLE case study city? In which stages of the project did they take place?

R: At the beginning we had a big event (Halle workshop with ENABLE partners and local stakeholders) and a smaller flexible event centred on the topic of barriers at the end. But we also had a small and temporary visit to the ladies of the Quartiersmanagement, where we had a nice exchange, which I would count as a learning event for both sides, for us as researchers as well as the local stakeholders. When we went to Halle-Neustadt, we had the brainstorming and the exchange using a map. We also went to Neutopia, but this was more for informing each other, not necessarily capacity building in terms of learning. When the students were in the field for mental mapping, it was a little bit in between.
Part of the method, when engaging with people and asking for support, could be counted as learning. But the assessment itself I would purely count as an investigation method in the field.

What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?

R: There were various levels where I could gain some insights and ideas. First I was reminded of the challenges of working with stakeholders, in terms of problem understanding, the time budget and capacity in total. There is a vicious circle in the co-design process: the less capacity the municipality has, the less it is able or interested in contributing to the co-design process. This was a challenge for us working in Halle, working with the limited time availability and the limited problem understanding of the stakeholders. From my perspective, what I learned for the next project is that I would focus on a certain set of problems and not try to address the whole bunch of project goals to the stakeholders. This became clear to me at the barrier workshop, which was a little bit too small for us, but in general it was quite concentrated talk and debate and exchange about a certain set of barriers and how to benefit from GBI in its different facets, from physical, institutional and perceptional perspectives. This was also something new for stakeholders, to start thinking about the interlinkages. In summary: the capacity building as learning. And secondly, although you are focussing on one specific question, it is quite diverse when you start talking about different perspectives on the one hand and different overlaps on the other hand. And this is very interesting for stakeholders and decision-makers as well as from a scholarly perspective.

R: I can just speak for the workshops I participated in, and they were quite diverse. But I would say what was very interesting was the workshop in Stockholm, which was kind of different to the one we had in Halle, in that the Stockholm colleagues sought a huge group of different and diverse stakeholders. They had a longer tradition in talking and they were quite rooted in the way they exchanged. This was quite new to me but also exciting, since we could already talk in very much detail about specific challenges concerning our problem or even the solutions. At the same time this was challenging because of the diversity of the stakeholders, the perspectives, sectoral languages etc. Just to turn to perspectives, that was quite beneficial at the small barriers workshop in Halle, where we had few stakeholders and we could really talk in detail and consider one planning perspective. There were different shapes [of workshops] with goals and outcomes, but the Stockholm way, I think, is the next level we would like to have in Halle.

Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: I cannot recall specific terms I learned in addition. There were a few terms that were used with a slightly different meaning. I believe that the term 'barriers' was understood very broadly among the stakeholders, particularly in physical terms. And I think we managed to enlarge the understanding of this term. Not so much in terms of practical or implementation questions, but in terms of conceptualizing and theorizing, finding the overlaps in the language of resilience was quite interesting.
Especially the terms system and systemic factor, and to what extent they are equivalent to what we understood as filters. So, maybe filters and systemic factors were something new, and I think we arrived at a shared understanding of what we mean by that. That was something new and I need to learn to work and deal with that.

Do you feel you learned something from the research team? And from other actors in the city? Can you identify what you have learned from each of them?

R: Working with the stakeholders was quite interesting. To see to what extent our different methods target specific groups in the neighbourhoods or exclude certain groups. For example, children, young people or the elderly. I realized that they are more diverse; there are specific user groups that cannot really be labelled. And the extent to which this perspective came through with our method, how we assessed the different perspectives, was quite interesting - I'm thinking about the discussions with the Quartiersmanagement in Halle-Neustadt. From talking to consortium partners, of course due to the intense exchange with the colleagues in Lodz, we are able to further conceptualize the barrier perspective. This was something that we jointly further developed with a more socio-ecological touch and not so much in an institutional setting, but of course with plenty of overlaps and synergies.

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?

R: I remember how we started conceptualizing the system of our case studies, sitting in Berlin and drawing the system dynamics model and mapping different components in the model. And comparing that to who was actually at the table - when you engage with stakeholders you realize that you can hardly cover all those components. Of course that depends on how you set the boundaries of your system. From our perspective, talking about socio-ecological settings and the access to GBI benefits as one aspect in that, for instance we did not have any stakeholders from private housing companies or other more profit-oriented stakeholders, who actually have quite a decisive impact on GBI benefits. Smaller enterprises, for instance, who are really important for the sense of community within the neighbourhood. We had the Quartiersmanagement, the city administration and planning officials and local grassroots initiatives, but no private actors.

For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?

R: Everything we did is useful, but the question is for whom. A lot of what we did could be used for further scholarly work and case studies, but in terms of implementation, we created an extended understanding of barriers and the overlaps, embedded in a broader system - thinking about who the other actors in play are. This was interesting for the stakeholders to think about, and if they use it further, that might also lead to a certain implementation, which we haven't achieved with ENABLE. For example, one stakeholder from the city administration was very interested in the way we looked at barriers from different perspectives but also in the way we incorporated housing market mechanisms, which are important for the way people distribute in space, which is quite decisive for the actual accessibility of the benefits of green and blue infrastructure. This is what I meant with broader context.
In ENABLE we just started to work together with Halle stakeholders. We did what we could, but in terms of the available capacities on both sides, it would have been nice to go one step further towards implementation. We have not really contributed to a specific goal in the city but rather contributed to a more diverse problem understanding or awareness. Shifting the perspective away from implementation, I would say the conceptualization of barriers and thinking about systemic filters, and then bridging it to empirical observation, could usefully be further developed, be fed with more details. If you enrich this with more empirical material and within another co-design process, you come back to the stakeholders and you could continue the ping-pong between scholarly work and implementation. Not just on the level of city administration but also the local Quartiersmanagement, local initiatives: how can they make use of the results, like those from the mental mapping - looking at smaller pieces on smaller scales.

2.9. In which ways is the knowledge produced in the project useful for you (as support to your activities)?

R: The filters are quite useful, not just for the way you engage with the benefits of green and blue infrastructure, but also in terms of e.g. land use change detection or for urban studies in general. ENABLE managed to justify the usefulness of filters, not in the sense that they create something besides existing typologies. (unclear) So, putting forward a structure to work with and to feed it. I started working in this field when I started in ENABLE; ecosystem services was not the center of my previous work, so everything was new for me. But in our scientific papers we write that we are forced or enabled to think about synergies between different methods, yet we have not really followed this in a systematic sense. ENABLE provided a toolbox and encouraged everybody to think about the beneficial overlaps between different techniques and methods. Talking about filters, barriers, the ways we assess or the interrelations of findings. I do believe it was not the pure intention of ENABLE to come up with a set of comparative elements. I understood ENABLE more as an explorative way, trying to address as many locally specific challenges as possible and therefore losing sight of the comparative element, which is perfectly fine. But I would encourage everybody working on final products to be honest in this regard. I missed the willingness to actually work on the comparative part of the project. We focused on two scales - the urban/regional one and the more locally specific one, you could say the former is the more comparative - and I would have loved to work more on the comparative elements, for instance talking about scenarios, but I think this was a pragmatic way of saying "OK, we already have so much, let's stick with that".

What did you find most interesting and useful from the project? What were the main "take-home messages"?

R: The most interesting thing from a scientific point of view was this understanding of a system coming from theories to empirics and observations.

3.4. Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?
R: There are at least two different levels to this answer. One refers to arranging the internal project learning channels, where I would say that there should be more shared working organization, in the sense that managing specific cross-case working groups targeting a specific set of different aspects of the project would enrich this. But I also do see, in our case for instance, that it's also very pragmatic and sometimes a more efficient way to keep the organization more case-study related. And the other level refers to how you flag and use ENABLE material to encourage learning beyond the boundaries of the project, like a homepage. We have two strong partners in sharing products and knowledge coming from ENABLE towards the other communities, like stakeholders or policy or planning. I would like to have this on a smaller scale, like the case study level, to have some tools which enable a more sustained communication channel between scholars and stakeholders.

3.7. Is there anything you want to add regarding your experience with the project, which has not been mentioned so far?

R: What was very nice was that we did not only do deliverables and follow our duties, but that we also had the freedom to be understood in a more explorative way. Doing research in order to foster our understanding and our own learning, which is not something common, looking at other projects.

What events or other opportunities to foster learning did you promote in your ENABLE case study city? In which stages of the project did they take place?

R: The resilience assessment is interested in the outcomes, but it is a methodological approach to designing and running a process. The major outcome of the process is the knowledge co-creation that happens through the process. It is all about stimulating or trying to promote a good learning environment for social learning. There we have had different designs: interactive workshops and consultations. The different workshops have been designed to reach different types of outcomes and also reflect different ways of knowing your system (whether it is more system knowledge, target knowledge or operational knowledge). It has differed depending on the individual focus of each workshop. We often also contacted people after the workshops, when they have had time to digest and reflect a bit, to have a more individual reflection. I think this has been a nice complementary way of stimulating a learning process. We have also had an internal team reflection after each individual step in the process. In terms of the timeline: we tried to align the ENABLE timeline with larger ongoing processes in the region. Our workshops in Stockholm were a continuation of a previous pilot study on desirable futures, so a continuation of an existing learning process. It was a combination of trying to make use of the outputs of ENABLE - and failing, which is an interesting learning outcome in itself - and other processes, because the case study is not exclusive to ENABLE. At times the two timelines did not align too smoothly, so we tried to bring in ENABLE inputs at specific times that we thought were relevant. So trying to address different stakeholders' needs and desires in terms of outcomes, which has sometimes maybe detracted from the more pedagogical design of the process.

What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?
R: Many of the things covered through the resilience assessment (like the systems understanding, framings, etc.) were not really that much new to me (system knowledge). My primary take-home was more on target knowledge, on the operational side. Knowing who the actors are, how they view the system, how they think about other actors, and trying to understand what the barriers and enabling factors are for trying to do something about that system. The insights were probably not surprising as such, but they are newer knowledge. (Across the case studies:) It has reinforced how important or context-specific solutions and strategies are for trying to make best use of green and blue infrastructure, and how to better balance that context-specificity with more general ideas of how we can understand systems, how we can design them in different ways. We have seen across the cases not only different systems but also different possible solutions. And not just a list of important things to consider when trying to shift something, but the sequentiality of interventions (this is where you need to start because...) - so a better idea of causality and of designing a (change) process. That is something our cases have shown to a greater or lesser extent. The mixed-methods and multi-methods approaches we used were quite useful to think about how we can look at and address a specific issue through multiple lenses and still combine the insights from them. This has been a challenge, but as we are getting close to synthesis there are a number of interesting things we can do, to offer more transdisciplinary perspectives on a number of issues or already identified challenges where we could add additional perspectives or different angles. I think that is a nice learning outcome from ENABLE, which has been facilitated by our engagement with local stakeholders and our internal diversity in terms of what methods we use and how we ask questions.

Through which project-related activities (e.g. workshops in other ENABLE cities, stakeholder workshops in own city) do you think you have learned the most?

R: There was one internal workshop within the resilience assessment where we tried to build in a multi-criteria component, which did not quite work out. Just reflecting on why it did not work out was maybe the more interesting learning opportunity. When things work out the way you thought, that is a positive reinforcement of what you are doing, but the things that fail (and this was a safe-to-fail situation), allowing you to reflect upon them, that was interesting. It has taught us how to think about the logic behind different methods, especially ones that have a sequentiality, and to be aware that when timelines do not align it will be harder to integrate methods. That was a very good workshop. In terms of feedback, there was a series of conferences or one-off events (not necessarily with the stakeholders engaged in the longer process). They can be sometimes useful, sometimes confusing, but they allow you to get external perspectives to shed new light on your process when you are too deeply embedded. We decided to accommodate for that. I have been trying to not be too involved in the resilience assessment (while the other two colleagues were) to supply a different perspective on the process and the different outcomes. So recognizing that different people can, and maybe should, have different roles in this learning process.
These different roles have enabled us to have a different sort of learning process than if we had all entered the process with the same ambitions and ideas of what our mandates were. A take-home message there is that you need more than one person trying to do these things if you want to broaden what kind of learning you could hope to have yourself. It really has helped for us to be three people on board. So a minimum of two; one is not enough because then you will be very thinly stretched to cover all the different needs of the process. One of the things we did after the stakeholder meeting in Lodz was to talk through the different perspectives, from the facilitators to the process owners to the participants - how the expectations and the whole experience differed through these different roles.

Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: Trying to discuss our different frameworks and different concepts and terms, we ran into some interesting differences in terms of how we understand concepts and terms. Sometimes we managed to reach some sort of consensus, in other cases we just had to step back and leave the differences where they were. I have looked for boundary objects that could connect case studies. Some of them have worked and others apparently not. For example, there was a discussion on the framework of availability, accessibility, attractiveness where we could not agree on the scope of the attractiveness dimension and eventually we had to drop the discussion. So, not necessarily new terms, but we tried to operationalise some of the terms that we brought into the project, so having a deeper understanding of what the terms could mean. That shortlist of terms is highly relevant for me and for coming projects - what to build on and what terms are most useful to capture certain things, so what terms can be used for and which ones are more useful. Difficulties (links back to what was previously said): there is a constant struggle in transdisciplinary projects on how to best find a language that allows you to discuss beyond terms. I think we have made some progress there, although it is still a challenge.

Do you feel you learned something from the research team? And from other actors in the city? Can you identify what you have learned from each of them? (main items)

R: Within the group (interdisciplinary work) it has been more on the more abstract level of theories and how they connect. We did not discuss with other stakeholders at that level of abstraction. Much of it was the more operational side of things - how processes actually work, what the real obstacles are. Of course you get a reflection from colleagues within the project, but a "second hand" reflection. Sometimes I get a reflection from them on a given situation, but then I get a different reflection from their stakeholders. But to understand how the system works and why it works in certain ways, I would say that has been more of a combined listening to project members but also other stakeholders (primarily stakeholders in Stockholm).
In parallel with discussions within the consortium there have been discussions with other scientists (in our institute, at conferences, or in other networks), discussing insights and experiences from ENABLE and from similar projects. So, more of a scientific systems knowledge from the academic partners, and target / operational knowledge from other actors and sometimes the more action-oriented members of ENABLE.

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?

R: There are certainly other people whose opinions and needs are relevant for what we did in Stockholm. The first constraint is that people are very busy. So people like politicians, or business, have been harder to convince. Other actors we have just not been able to reach (like homeless people or criminals, who are a specific group of users of greenspace, influencing the functionality and perceptions of green and blue infrastructure). Setting up such a learning process, there are these issues of trust - who can actually be involved for the process to still be constructive. There are some really strong vested interests in some problems, at least when you have a limited amount of time to go through your process. If you involve people with very strong and very different opinions (like developers and conservation groups), it could take a long time just to find common ground and start to build trust. So we started somewhere where there is at least some trust already between the actors, which influences or restricts who you can involve. But even there it has been problematic for practical reasons to get this limited group of people, because some of them do this more on a professional basis, others more on an individual interest basis, which means they have very different timelines. Other case studies have reached out more to individual groups in different ways, not trying to bring the groups together into one venue. That is a different way of trying to handle this diversity, listening to multiple voices, even though as a researcher you become more of an active interpreter of their inputs to the processes instead of letting them sort things out themselves. You become the mediator or facilitator, which leads to different and maybe slightly biased outcomes.

For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?

R: It has elements of all of them (planning and management), but overall the most relevant contribution is how we design these joint learning processes, or how we think about the ways science and research can inform practice (can be planning practice or something else). This is something we are trying to get out with these policy options, where we try to think about what types of knowledge we have generated. Everything from the more factual, that could directly inform certain types of planning, to things that are more about project design or building your own solutions for governing (rather than planning) green and blue infrastructure as embedded in a larger system. I think that overall the consideration of how to build a comprehensive approach to both understanding and actively engaging with green and blue infrastructure and its functionalities and benefits, that is overall the most useful.
Then we have specific pieces of knowledge that could be of interest to specific cases or a set of specific cases or a specific target group. But overall it is that approach to how to think about what kind of knowledge we need and how you could make sure that you have an active process for producing, digesting and making reflective use of that knowledge.

In which ways is the knowledge produced in the project useful for you (as support to your activities)?

R: Starting from a general and vague idea of how I want to design and attempt transdisciplinary science, I am learning more and more how to frame different things and also what the strong points are that you could build on, and not just uniformly try to integrate or connect things, but really what are the most critically needed and also the easier to get at. I am developing a better language and feel like I can better describe in project proposals what it is I want to do, the challenges with it, the time demands, the resources needed. I am getting much better at articulating what transdisciplinary science is, and also better at understanding what type of transdisciplinary science it is I am doing and am more comfortable with. That is drawing on everything that has worked and not worked (or at least has been much more challenging). So I am starting to have a more operational idea of the actual design of transdisciplinary science: not just that everything should be integrated, but what the critical things are that need to be integrated, how they can be integrated, and how I can describe how to do that and the resources needed. ENABLE is just one project in an evolutionary line of trying to deepen transdisciplinary work and we are learning, slowly, how to do things better. It will help me make a more effective use of my time in the future when I have a better idea of which things are more likely to lead to the outcomes that I am hoping for. That said, it is always good to test new things, because sometimes what you thought would be the best alternative may not be. See also Q2.5.

...of who does what in terms of planning and management. What we tried to do with the resilience assessment was to point to interconnected issues that together will decide what you can expect from the system. We also started to discuss different ways of doing that. But just by making people more aware of that, you have a better basis for actually finding system-based solutions rather than individual or specific contributions, or projects that might not make much sense, or initiatives tied to one sector but not to other relevant sectors. I think we have promoted more of a systems understanding and also an understanding that green and blue infrastructure is not necessarily a question of just the green and blue spaces themselves, but very much a question of how you think about the city and its inhabitants around those green and blue spaces. People do recognize this, but I think where we helped is that we added more detail and nuance to that understanding and also a language for addressing those different connections, and that is something that other actors can take further - we left it at first fledgling strategies for trying to move towards an agreed-upon, desired target, but we think developing these strategies further is not really something for us as researchers to take the lead on, but something that we could support.
Something we thought about for Stockholm was how and when in the process you shift ownership and roles - when scientists can lead and when it is better for us to step back and support someone else, depending on different mandates and the specific design outcomes of a phase in the project. Which is also useful, I think, for planning processes: not very static or the responsibility of one specific actor, but how you could maybe shift a bit more smoothly between different actors and different processes. So, in the planning process, maybe at some stage you could delegate to someone else too (and I guess that happens to some extent), but I think there is more there to design a bit more flexible processes for planning and thinking about what to do with urban space, and there we have supported, not giving any final answers but at least pointing to a way of doing things differently and also a way of addressing more complexity within your system.

Did the project meet your expectations regarding what you wanted to learn about? If not, what would you have liked to learn about, which was not possible through the project?

R: I have an interest in transdisciplinary science and how to do transdisciplinary or sustainability science. We have had a number of workshops or exercises attempting to align methods, finding ways of synthesizing insights, etc., and we are working on a more theoretical level. But I would have wanted a bit more focus, more in-depth discussion on how we best connect methods, theories, frameworks and so on. That is still something I am pushing for, for one of our papers under preparation - a more theoretical discussion on how to do transdisciplinary science. It was something maybe not equally shared by each and every consortium member, so maybe we have not made it as far as I would have wanted.

3.1. What did you find most interesting and useful from the project? What were the main "take-home messages"?

R: In terms of challenges, I really realized (or it reinforced) how difficult it is to run a deeply integrated transdisciplinary project when you only meet infrequently or every six months or so. It is much easier when you can have a continuous dialogue or discussion with people. Trying to do integrated studies over a dispersed network has been a challenge and that is certainly something to take with me for future collaborations - how big can the consortium be, how tightly connected is the group, and based on that, what is a relevant ambition for integration. We are a nice multidisciplinary team, but there are some perspectives I would have loved to have within ENABLE, a couple of viewpoints that maybe we are missing, people with a different understanding of things that could have been beneficial to have in the discussions. Maybe in retrospect it could have been interesting (although there is an issue of time) to have not only our internal workshops but also workshops with other people joining us for a broader discussion of the ENABLE framework and how we do things. We have had it at some of our joint conferences, but they have not really been dedicated to this issue and were maybe sometimes a bit too big. It could have been interesting to have a clearer design of working within the core team and then connecting both to a larger academic world and to other stakeholders.

Could you actually apply some of the new knowledge/insights/ideas resulting from the project in your own activities (e.g. in other research projects)?

R: Yes, for further proposal writing.
ENABLE is just one of the things we do in Stockholm and I bring insights from ENABLE to all these other processes, but also into my interactions with other stakeholders. If I am not the organiser of workshops, I could be the expert member of someone else's process, and there I bring insights from ENABLE, both more factual ones about green and blue infrastructure, but also on how to think about co-creation and knowledge processes, to those processes. That is very useful. ENABLE came from similar insights from multiple processes and it will feed into a second generation of similar processes.

Could you identify any barriers that prevented knowledge exchange between the research team and local actors?

R: There are the classics of time and resource constraints. These things do take a lot of time and we had different starting points across case studies; for example, in Stockholm we have a long tradition ourselves of working together with others, but there is also a long tradition of trying to have some sort of joint processes and exchange in the Swedish system. In Lodz, for example, it is very different. Different cities, different countries have very different baselines or starting points. If there is no trust or direct interest in these processes, you face a very different situation. Sometimes we take it for granted that people are interested in participating, and that need not be the case. One main barrier is how to reach stakeholders who do not see themselves as stakeholders. What are good arguments for convincing them that this is a question that they both have a stake in and could allocate some time to, if we wanted to work together with them? We have become better at that, but there is still work needed to find good ways of reaching through to different communities or different interests.

Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?

R: I am a strong believer in being embedded in different environments. We had a mobility scheme in place, which would have allowed people to spend a bit more time in different environments (researcher stays in partner institutes) and to have an extended time period of constant exposure to an environment with different perspectives - not necessarily to agree with it, but to understand it. Those longer periods are something we did not have in ENABLE and they could have helped. Many of the reasons why we have the partners we have in the consortium is because someone has spent an extended stay somewhere (with another consortium partner).

What events or other opportunities to foster learning did you promote in your ENABLE case study city? In which stages of the project did they take place?

R: One thing that we have put a lot of effort into was to find a language and commonalities, because we've had a very diverse group of stakeholders, ranging from people being there in their free time just because they cared about the area or had a specific interest in the area, to people who had a strategic responsibility, not necessarily being locally anchored or having visited the area, but who had a formal responsibility. That said, we've had to work very hard to find something that they could start their dialogue about. For us that turned out to be activities taking place in the green-blue infrastructure in the landscape. Just to come to that was a learning for us. That also started for them to feel what the activity is, as a joint tool for them to learn about each other, others' perspectives and the project.
Another thing was the constant framing exercises that we had to do to explain what we were doing and also for us to learn about the system; at the same time we also tried to be a bit ahead of the process. So I would say the framing and finding common boundary objects to talk about, and not using the ones that are favoured in different groupings - so, say, is it the red list of species - no, that is not a go; is it ecosystem services - no, not necessarily - so finding something that is neutral and very generic, that was one thing we did in order to find a dialogue, and then indirectly that dialogue is supposedly leading to learning, or at least exchange. The framing was everything from writing invitations, writing documentation, having the first presentation at all the workshops that we had - I'm talking about the resilience thinking process that became five or six workshops - and all these meetings had very careful thinking behind how we start them, how we talked about the system that we wanted to discuss with the actors. So using words that we know that they know about, but also then linking them to the conceptual framework within the project, that was a very tricky part.

What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?

R: One of the main insights, a bit surprising but also confirming, was that the institutional barriers in terms of formal administrative boundaries have a huge, surprisingly large impact on how people talk about values. We had this landscape where we have this formally protected area in the middle - that was the setup - and all along, until the end, they had great difficulties in discussing the whole landscape. It was inside or outside that boundary, it was so strict, and I mean the whole outset of the project is to discuss the flows. I think both [STO1] and I were surprised about how difficult that was. The other thing that I learned was also that, when it comes in the Swedish context to nature conservation and green-blue infrastructure, it is a framing about things that are not built in our city; it is still very much on the conservation side. So, the notion of change - because that is something we really wanted and hoped that the stakeholders would start thinking of and help us to formulate: what is changing in the system and how can we then build capacity to handle that change - that was something that became very external and abstract, like climate change. Then we had to put a lot of effort into translating that climate change, big thing, into something like "what will happen in this particular landscape". And also to think about demographic changes. So change was very difficult and the institutional barriers were very difficult, and I would add learning by doing, learning by mistakes, in trying to develop tools for discussing these things along the way.

Through which project-related activities (e.g. workshops in other ENABLE cities, stakeholder workshops in own city) do you think you have learned the most?

R: If we stick to our case study, then I think the smaller settings with more homogeneous groups discussing things, where they could frame a story and there is not so much negotiating - so for example talking with environmental strategists from the municipality or with interest organisations - these smaller focus groups actually helped me better to understand the system.
If I were to re-design the process, I would actually be more careful in having these focus groups, and then great diversity, and then focus groups again, instead of trying to mix the perspectives at all stages. That is something that I bring with me from a methods point of view. And maybe also: one of our goals was to build capacity, and there is something in discussing things in more closed settings where you have more of a safe environment; and of course it is very tricky for the municipal officials to sit with their stakeholders and then be held responsible for things, these kinds of tensions in terms of mandate and responsibility. To handle that in the meetings, while they were going on - I think that impacted how freely people talked about things. So we were too naive and ambitious when it came to participation. I learned the most about the system and about the different perspectives when we actually had these focus groups, rather than when we had these huge diverse groups, because then as a researcher I became more of a facilitator, a negotiator, pedagogue, communicator person than actually someone learning more about the issue or the system. We had so much focus on being overly inclusive at all stages and I would not do that again.

Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: Starting from the latter: the concept of institutions has been super tricky for me throughout the project, and that comes from the consortium discussions, so the different ideas about institutions, that is one thing. The other thing has been the tensions between trying to understand the system and then also adding the aspects of change; so it is kind of similar to the struggles we have had in the Stockholm case, I also see them in the consortium, where we have had a lot of these "we need to describe, we need to understand, we need to map out" and then have had not so much capacity to add the change that is actually part of the core of the project. So that has been one thing. Then about the concepts that I have learned: one thing that stuck in my mind was when the municipality described - so we are dealing with a very complex, organised municipality, it is huge and it has lots of capacity in terms of money comparatively, but they explained that a lot of the challenges have to do with the internal dynamics of the municipality, and then you have not added other actors at all, just that very strong actor - and they described to us the tools that they have just to make sure that they know what each and everyone is doing. One is called "ledstång" in Swedish, which is actually the handrail in a staircase, so every time they encounter some kind of confusion, they go to that document, which clarifies who (which division within the municipality) has the right to do what; and then they have what they call the interface (unclear), which is a bit similar because it also says "OK, at this stage the other department takes over this issue". So they have tried to map out who is responsible for what, which is a way to artificially try to handle wicked problems like sustainability or landscape governance or the flow of people or whatever, but it is a quick fix compared to changing the organisational setup.
So, maybe not concepts as such, or words, but they are instruments, tools for the adaptation but not necessarily for the needed changes.

Do you feel you learned something from the research team? And from other actors in the city? Can you identify what you have learned from each of them? (main items)

R: The whole thing of working with participation, this question of how to design a co-creation, participatory process with different degrees of inclusiveness, where research is part of the process, how to do that - I learned quite a lot in terms of do's and don'ts. And also, you can look at the stakeholders in terms of their capacity to think about strategies or about really concrete local things, and how you need to recognize that and see how you can work with it while still keeping it inclusive; so lots of insights and learning about how to work on participatory processes. I had started pre-ENABLE, but for sure this process that we have been running in Stockholm has added a lot. And critical reflections too: what are these collaborative approaches, what do they require, how much competence and, even more specifically, what different competences are needed in that. So that has added to my knowledge. And I would say the same goes for the consortium: how to work with different epistemologies, ontologies, where people come from different backgrounds and traditions, both in terms of geography and the history of research in different countries or different university contexts, and what then the capacity is that different researchers have - not to say that it is more or less, but they are different capacities - and how you work with that, especially with the different researchers having different degrees of in-depth knowledge about the cases that you also want to involve. So a much more specific and a bit critical thinking and insights regarding capacity and competence and how you need to be aware of that. The other thing is the power of small, maybe trivial activities; so just at a meeting we had I proposed: could we create a figure for all the papers that we are writing - and that is a way to not have that as a product but as a tool. So all the tools you need in interdisciplinary work; I remember when we tried to decide on a logo, a very interesting process; when we tried to create a joint project folder and the text with it, also very interesting; these small things that seem like: can we not just leave this to some communication expert? I would rather say the opposite: this could be at the core of starting to find a common narrative, or at least of exploring the diversity of narratives within the consortium. And also not viewing paper writing processes as focused only on the product - of course that is the merit system we are in - but also seeing the writing process as interdisciplinary learning. [STO1] and I have had quite substantial time sitting together and I have tried to support him in terms of "maybe we should ask this question", "maybe we should clarify like this", "maybe we should ask people for one slide with bullet points about questions they have" - these small, pedagogical things that you can do in order for people to not just stick to their ordinary way of doing things but actually try to reach out and connect, finding common terms.

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?
R: At all meetings I am in, in the Stockholm region - and I have been working here for fifteen years or so - there are always the politicians, and I would not say that in the next project I would add them to it, because their relation to the civil servants is also a very intricate one that you need to be very careful about, and make sure that you know what you are doing as a researcher if you set up that kind of discussions or dialogues, or interactions. But that is a group that we seldom invite - the decision-makers - and they are of course crucial if you are also aiming for change and capacity-building; but in a way we trust the officials to grasp what they capture, what they feel is relevant, and then build that into their organisation or into the communication with the decision-makers; in a way we trust them to be that indirect link to the decision-makers. That is one group. The other aspect is that we have a challenge in engaging the public, where we go for the easy, or doable, feasible way, which is to engage with the interest organisations; and there is a lot of engagement in society that doesn't necessarily take the form of a very traditional Swedish association (like Facebook groups and so on), and we have not yet found ways to engage with them, and that is a big gap I would say. We clearly have an age bias towards elderly people in the interest organisations, those that actually have time and room in their life, or are used to coming to this kind of settings. Not necessarily that we want to capture that diversity, but in a way we captured it through other methods; we have had these public data collections - the Q-method and the mental mapping - so in a way we have captured that information. What would have been fantastic is if the Q-method and mental mapping had occurred before our resilience process, which was not the case. In the coming projects what would be great is if - we have learned a bit more about how to set up a series of different methods to play out, what should be done first and what should then build on that - it would have been fantastic to have that material to build the resilience assessment process on, but that was not the case.

For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?

R: On the larger picture I would say the approach that we have in ENABLE is based on systems thinking, and over and over again I encountered how great knowledge is produced, communicated, used, and then confusion about why it is not working out, and I see this as contextualization. So even if it is difficult, even if it is fuzzy, the systems thinking is an important contribution - not as a product but as a process to constantly be part of, reminding different discussions, dialogues, meetings, be it in Brussels or in the Flaten landscape, about the systems thinking. That, I think, is a very important contribution from this consortium and other similar consortia. Then, what we contributed to very much, and the actors say that in their evaluations and feedback: [STO1] and I, in our projects in Stockholm, provide a platform for these stakeholders to meet and discuss things that they normally do not have room for discussing in their daily work-life context. That is interesting because it is not necessarily something you think that research or research projects should do - or is that really our task - but it is just the way it is.
We allow these actors to actually get some space for thinking outside their immediate here-and-now problem-solving, handling fires here and there. Just at the conference I attended in the past days there was a lot of appreciation: you created this space for us to lift a little bit and look at things in another way. For an allotment gardener in Flaten to meet with a green infrastructure planner, there are very few other platforms for that to happen and create that listening and link. Then it is up to them to see what they want to do, but at least it is happening because we are running these different kinds of workshops. And then I think we have a very important task here as researchers to continuously develop our thinking, our different epistemologies, but at the same time be open to at least trying to understand others, and for that I think we need to have better tools. I mean, in the ENABLE project we would have really needed some process facilitating capacities, someone responsible for our meetings, our interactions, who knows what research is about; so it is not like a manager but more like a researcher's facilitator competence, to help us with posters, models, paper writing, to help us spread the word, listening, this kind of things. We tried to organize these processes with stakeholders, but we are rather bad at doing it internally in most cases, because we do not add that to the budgets.

2.9. In which ways is the knowledge produced in the project useful for you (as support to your activities)?

R: As I said, we have this very strong discourse of conservation on nature and densification on the urban side. The knowledge we are providing is to showcase that we need to move beyond that, and we have started a discussion about how to move beyond that dichotomy. That is something that is needed, and at least I think some of the actors that we have been involved with find it useful. And for us it is, together with them, trying to bridge that dichotomy and change the idea of land use or institutional arrangements in the city. I think the insights we have from the ENABLE process could contribute a lot to future research projects that are aiming to work from a systems thinking starting point, because we have learned a lot that could be useful for them to not make the same mistakes and to move further along the lines that we found successful. So there is also a clear message for the funding agencies at European and national level about what interdisciplinary research can really be about and what different parts are needed, like facilitation, more meetings maybe, other capacities than normal research projects. So those are other stakeholders who I think would benefit from what we have done.

What new knowledge or new insights resulting from the project do you consider the most relevant for the planning and management of green and blue infrastructure in your case study city?

R: Not so much about new knowledge or new insights, but more about the process that the project allowed, and the aspect of change (that the system will change). So, when I showed a map showing the thousands of new houses or dwellings around or in the green infrastructure that is there today, and then assisted the different actors to start to think "what will this mean in five, ten, twenty, fifty years?" And also regarding these activities.
So, the process and the notion of change, because the change is new in the Stockholm context: that the green-blue infrastructure will change and be impacted by change - demographic, economic, governance changes, climate change, environmental change. And also starting to unpack what this "parkification" that I talk about is, what this climate change is, what this segregation is, because it is rather immature among the people working with green-blue infrastructure - not to say it is immature among other stakeholders who work with social sustainability or climate change, but for the green-blue infrastructure people it is rather new ground.

Did the project meet your expectations regarding what you wanted to learn about? If not, what would you have liked to learn about, which was not possible through the project?

R: If I think about the different work packages and think about different themes, I clearly see that we have work around justice, around resilience, but a dream would have been to learn about justice and resilience together, and we did not really reach that. It has been so much work just to link green-blue infrastructure to these two dimensions. You also clearly see now that we have a special issue on justice and we have another one with the ENABLE conceptual framework, which will include resilience and methods and method integration. That says a lot, that we have not come that far. My core research is about governance and that has not been - that was so from the start, so I cannot say I expected it - but I would have been able to contribute more if we had had that broader idea with the governance and not just the institutions. I have outlets for that in other projects, but still I think it is a bit like cutting one of my arms off. And it pops up in the policy options, policy mapping, but it has not been so much part of the research but more of the background landscaping description, and then it is there and we try to feed into it, but it has not been research as such, as part of how we understand the system, and that has been frustrating. I can imagine that others feel the same, because their arms have also been cut off in different ways that I do not know about, because I do not know that theme (like econometrics or modelling, or whatever). I think those are the trade-offs that we make when we do interdisciplinary research. It is tough, but it is also how it needs to be, because you cannot really add the in-depth treatment of all the different aspects. Another thing, on a more practical note, which has to do with learning: we had lots of hopes, and added to the proposal that we wanted people to sit in different contexts, visiting each other, like in-residence Ph.D. students or post-docs, and we had this mobility money and it has been really hard to put that money into action. It's a bit surprising and a bit disappointing that we have not had this opportunity of young scholars visiting different areas, because I think that is a very fruitful way of learning and understanding and exchange. That has been much harder than I thought.

What did you find most interesting and useful from the project? What were the main "take-home messages"?

R: That stakeholder interaction is very context-dependent. It is not rocket science to understand that, but to really see it - visiting Lodz, sitting in the city hall and really seeing how our colleagues navigate that context, compared to how we do it in Stockholm, or when we sat at the meeting in Halle - that is a very important take-home message for me: to really understand what it means for interactive, transdisciplinary research with multiple case studies in dramatically different contexts.
And it is very vulnerable, since it is dependent on political processes in these different contexts. So say you have an enabling context in Barcelona, and in Stockholm as well so far, but we also had an election in between, which was a bit scary. In Lodz they had to struggle nearly on a weekly basis, with differences in how the approaches were received and people wanting or not wanting to participate. That is a very important take-home message. The other one is that there is, both in the interdisciplinary (within the consortium) and the transdisciplinary way, this carefulness of framing, finding commonalities, the small tools that can help you to find common ground. To focus on that is another take-home message, I think. When it comes to Stockholm, the need to constantly reflect on the role of the researcher, because we have hardcore experts as stakeholders; so what is then the role of the researcher when they come with much more in-depth data than we can provide within the project? And then we need to make sure that we are relevant in what we are doing, otherwise they will not participate. In some of the case studies they started off with mapping; here we had first to have a dialogue with the stakeholders about what the state of the art is when it comes to understanding the system, because it is not in our ownership.

Could you identify any barriers that prevented knowledge exchange between the research team and local actors?

R: The level of abstraction. I cannot really take the ENABLE conceptual model and trust that it really comes through to our stakeholders. It needs so much more concreteness and illustrative examples. That is a very strong barrier. Then, the stakeholders are also formed in different languages, which means that you also need to adjust what you are saying in relation to this degree of abstraction, in relation to the words they are using, so the story can be very different depending on the stakeholder you are talking to, or want to communicate to. Since this is a research-driven project, it is not co-designed beyond the fact that we know for our case study that these are relevant issues; but it is not necessarily relevant or timely to discuss them, meaning that there are huge trade-offs you need to make in order to become and stay relevant for the stakeholders. I think in some parts they have much more exchange with consultancies, where they have an assignment where they say: we want you to investigate this and we want to have this product. So they go to the ecological consultancies to get that. Why engage with the fuzziness of researchers who are exploring while they are running a process? That's a huge barrier as well. So finding your role: what is the role of research in a very expert-driven region like Stockholm, where the consultants are Ph.D.s? It is a very thin line between [STO1] and me being researchers or consultants in some sense.

Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?

R: I would set up a team including a process designer, a facilitator and a communicator. Meaning different competences; it can be two or three people, but with that setup of competences. And then let the researcher be more of a researcher, rather than trying to embrace all those roles. In the beginning, set out what learning we are aiming for.
And also finding ways, very early on, without losing trust, of asking what expectations are there. It is a balance there, because if you are too open and ask what do you want or what do you expect, then the actors think "they don't know what they are doing", so you need to frame it; but on the other hand it is necessary to know what the expectations are if you aim for learning. So have a strategy for learning, parallel to the research that is going on.

Could you actually apply some of the new knowledge/insights/ideas resulting from the project in your own activities (e.g. in other research projects)?

R: Before I started in ENABLE I worked in another project, and I referred many times to what we did in that project, when we wrote the proposal and also along the way. We often asked ourselves how things were done in that project and how we could do it in ENABLE. I am pretty sure that the same will happen in my coming project proposal writing, and if they get funded I will come back to ENABLE and how we did things. One thing that could be very beneficial for us researchers who are aiming at these very complex research and knowledge processes is to find tools for ourselves to capture this; like having this interview got me thinking about things that I would not necessarily have had time or room for, or acknowledged that I would need to reflect upon. Because if I had that self-reflexive routine, that would make this transfer of experiences and insights between projects and processes more clear and visible for me, and maybe for others as well.

What events or other opportunities to foster learning did you promote in your ENABLE case study city? In which stages of the project did they take place?

R: For us I think that the most important meetings with stakeholders, and the opportunities to learn from them and to ensure that they learn something from us, were related to the presentations of our research. We organized such meetings twice and we invited people from the different local authorities - mostly local authorities like the municipal planning office and the green space management authority and some other departments of the city office - with whom we discussed what we were doing, and then we sought additional questions from them. In a way they informed our research, but at the same time they had the opportunity to listen to what we are actually doing, which I think is very good, and we are planning to do something like this in the future as well, to share the findings of our work with the local authorities. In general we mostly publish our work only in English, and even though it might be of interest to the local stakeholders, they are definitely not reading academic papers, and to some extent everything is lost from the perspective of local developments. So in this way we were able to tell at least a little bit to them, so that they know what we are doing and how this may be of use for their work. I think that based on these meetings we had some further collaborations with some of these stakeholders, which was an additional benefit of this collaboration and mutual learning. Timing: one was, I think, December 2018 and the other was November 2019.

2.3. What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?
R: Of course we learn all the time, and the project offered us the opportunity to learn, but it's not very clear what particularly was the result of this project and not of some other work that we had at the same time. In terms of learning I would say that for the first time we were doing this exercise with stakeholders where we invited them to meetings where we presented our research, and I think this is something that we learned is very useful and that we would like to do in the future as well.

Through which project-related activities (e.g. workshops in other ENABLE cities, stakeholder workshops in own city) do you think you have learned the most?

R: I think we could learn different things from different kinds of meetings, depending on the audiences and depending on the format of the meeting. I mentioned the meetings that were organized by ourselves, which were relatively small and in which we had the closest interaction with the stakeholders, and that's why, in my opinion, they offered the best opportunities for learning, both to us and to the stakeholders. On the other hand, we also had this meeting with the local authorities within the ENABLE annual meeting, and that was of course also an important meeting, in that we could meet some stakeholders whom we had never met before, because we invited them broadly. But the people who came to that meeting were in reality not the ones with whom we are working on a more regular basis, because some institutions, for example, sent interns and never sent representatives, and they only participated part-time in this meeting (the workshop of stakeholders organized during the annual meeting). So that's why I'm mentioning these smaller meetings, within which we had the most direct contact with the stakeholders, as the most important ones.

Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: The term environmental justice is new to stakeholders in Lodz, so when we were referring to environmental justice this was something new to them, and they probably learned some other terms from us, but I'm not sure we learned anything in terms of new terms or jargon as researchers. Difficulties: one of the biggest challenges that we see in this project is the understanding of the basic terms, such as availability, accessibility and attractiveness of urban green spaces or green and blue infrastructure. The reason I'm mentioning this is that different teams in the project used these terms differently, and it is the challenge that we are now addressing in writing this joint paper on barriers, where we have to deal with the different definitions that we developed or used within the project in our own teams and somehow bring them to some common ground. So I think this is a very good example of where this exchange between different project partners was particularly insightful for all, ourselves included.

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?
R: In Lodz there is the idea to organize the Horticultural exposition in 2024, and there is a specific team working within the city office on this topic. They filed the application to the international organization, the Bureau of Expositions, that deals with this event, and then they were flying the world over to promote Lodz as a candidate for this exposition, and eventually they seem to have succeeded. The organization has agreed that the exhibition in 2024 will be held in Lodz. They are still not sure whether there will be enough funding for this from the government and from the local authorities, but there is already a green light to organize things and there is quite a lot of investment in this direction. But this team is composed of people who are from a completely different world, and they are really reluctant to work with us, and they are quite impolite, to be frank, in the way they treat us as well. I think that this is a team with which we should probably work, but we somehow don't want to; we refrain from working with them, which is a challenge, because they seem to be taking over green space issues in Lodz now, given that this event is seen as such a priority for the authorities. It's just that there is an institution that is responsible for a big forthcoming event with whom we don't have good contacts, because they are not interested in local knowledge and research; they are used to working in a completely different way. When they want to have something done they invite big consulting companies to work on these issues, and the way the big consulting companies typically work is to consult us as local stakeholders to develop the solutions; so we typically refuse to work with these big consulting companies, and this leads to a situation in which they consult other stakeholders, who are not from Lodz, who define design solutions for our city - which is a mess really, but this is how these people work; they are used to working in this way.

For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?

R: There are a couple of things like this. One is that we have another project called the sociotope map for Lodz (a sociotope map is a map of different social functions of public green spaces in our city), and we developed this ourselves but in collaboration with the local authorities, and we saw some interest on their part in this project, and we hope that at some point they will be using this map. It is something that could support communication with inhabitants, and this could also support public institutions' management practices, because it's all digitized; it's something that they have never had in digital format - they always only used this in paper files, and every office had a separate paper file for different things related to different green spaces. Now they have everything in one folder, in one electronic map. Another thing is that we somehow started to inform the local authorities on different green space availability and accessibility standards, and they seem to have been very interested in what we've done. This is what we know from those meetings, and they really want to develop some tools to be able to plan green spaces better in response to the needs of society, in terms of where green spaces should be to satisfy the needs of the population. And then the third thing is related to the work of [colleague name] primarily, and [colleague name] to some extent as well, and it's about children and their way to school.
They were working on whether children on their way to school are exposed to green spaces or to green space views, especially when they have to walk to school rather than be driven by car; and the local authorities, especially from the municipal planning office, are really interested in this, in the way they modelled it and in the way they studied it. In general, I think that they may be interested in using the same procedures for developing some broader green space accessibility standards, related not just to green spaces such as parks or green squares and forests, but also to very small green spaces like street-side greenery. These are the most relevant examples from the perspective of what we had to transfer to the local authorities. I think that this is something from which they benefit the most, and these are the areas in which we have the largest opportunity to really have an impact on local stakeholders with this specific project.

Did the project meet your expectations regarding what you wanted to learn about? If not, what would you have liked to learn about, which was not possible through the project?

R: I would say that the project met my expectations, I'm happy with it, and the only thing that I would probably want to happen differently is to have more consistency between the case studies. I think it would have been even better if we had defined very similar things, very similar sub-projects, in each of our case studies and were able to work on them in parallel; then the findings would probably be more informative for each other's case study. Now I feel that everyone was working on, in general, similar things, but slightly different - or not slightly but very different - and because of this the opportunities for mutual learning are not as big as they could have been had everyone worked on much more similar things.

What did you find most interesting and useful from the project? What were the main "take-home messages"?

R: A project such as this offers us leverage, because even from the perspective of our work with local stakeholders it's not that we are just carrying out some projects and things that we design ourselves and want to just study on our own; it's a part of a bigger endeavor. It's carried out in collaboration with serious partners from abroad, so even in light of our own work here locally it provides leverage, and people perceive this work differently if they know that someone did a similar study, that similar work is being carried out in Barcelona, in Halle, in Stockholm and so on. So I think this is really a big advantage of working in a project from the perspective of working with local stakeholders. But of course, for us as researchers it is also very important to work with other researchers and to see how they work, what they do and how they solve problems related to lack of data or insufficient data, what models they use and so on; this mutual exchange is also extremely important for us as researchers.

Could you actually apply some of the new knowledge/insights/ideas resulting from the project in your own activities (e.g. in other research projects)?

R: We definitely want to work on green space availability, accessibility and attractiveness further, and I think this is something that we started within ENABLE and we want to continue this work, so it is a legacy of this project.

Could you identify any barriers that prevented knowledge exchange between the research team and local actors?

R: See Q2.11.

3.4. Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?
R: I would try to have more consistency between the work carried out in different cities, so as to ensure that the work is really similar and comparable, directly comparable. I think that this would be interesting. I would insist that we work on more similar things together so that we can work more directly, prepare some joint papers in which we compare the situation in the same regard in different contexts; this would provide a broader overview of challenges related to some things and of opportunities to solve different problems. Like I know that in the Nordic countries, in Oslo and Stockholm, the colleagues worked on, or at least discussed, the idea of "white space" (related to snow), so they found a common topic, they addressed it and they tried to discuss it from the different perspectives of Nordic countries - you know, white space. This was to me an example of a joint initiative that was related to the project and somehow directly connected to some project partners with regard to exactly the same thing.

What events or other opportunities to foster learning did you promote in your ENABLE case study city? In which stages of the project did they take place?

R: Answers are a little bit tricky, because we had two projects simultaneously which were more or less focused on the same thing. So in fact I did not divide it too much between the projects. So what happened was that when the other project had some requirement to learn something or to teach something, we just organized the event. So it was kind of mixed, not necessarily entirely ENABLE messages or habits. It was more like conveying the message which was needed at the moment. Under both projects I would say that we mostly used the opportunity of having workshops and courses with stakeholders who are mostly decision-makers. But we also tried to address citizens, here both inhabitants and NGOs. Stages: not really, because our work as a Center started in 1997. So for us it's just a continuous process. So whatever our projects, it's just trying to continue some sequence of messages and knowledge which best fit the momentum, and the momentum is usually (unfortunately) related to some actions in the city which raise either controversies or a kind of uneasiness among people. Or sometimes it's just when we have a new set of decision-makers who need to learn something immediately. Like we had elections a year ago, so we kind of started from scratch in some sense, and now we have a new department of ecology and climate. So it's again a little bit of going back to repeat some messages at the city level. So there is a continuous process.

What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?

R: We spent some time on the needs of our citizens regarding Blue-Green Infrastructure. But definitely we never worked too much on justice issues. So this is something which was kind of new, and this was also the first time we looked closer at the accessibility of spaces. So these were two things. And the third one I would say is: even if we are somehow still in the process of the resilience assessment, we got a little bit more knowledge of what it is about.

Do you feel you learned something from the research team? And from other actors in the city? Can you identify what you have learned from each of them?

R: Definitely from the consortium, yes. Regarding other issues, rather not. I would say that we mostly act as a provider of information and knowledge, rather than gaining something back from outside.
represents mostly the side of the project and work package; I represent more the side of the city and the demos idea. So I don't think as a city we had too much contact with other cities. I would say almost none. So it was a little bit isolated as a case. So we provided information to work packages, but I don't feel we had much exchange between the cities. So, it was more through the activities taking place in Lodz. See also Q3.1.

2.5. Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: I don't think I faced something like that, because we work more or less in the same environment - also, with many of the people for a longer time. So there were not many mismatches in between. I would only say that regarding the resilience assessment it was a little bit eye-opening, because I think that usually we kind of mix up sustainability and resilience, which may not necessarily mean the same. I mean, sustainability is always positive, and resilience may sometimes reflect a reality we don't accept as ecologists or, you know, other specialists. But I did not really experience this kind of different understanding of issues.

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?

R: Well, I think that ENABLE ran too much as a scientific project - even if we have cities in it. So for me we maybe missed this input from at least the decision-makers. So we could get more from the cities and, as scientists, learn more from those who deal with realities, not only the scientific approach. And I think that also maybe direct interaction between the cities could be beneficial for the project.

2.8. For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?

R: I will not discover anything special, because basically for spatial planning... which is not bad, because for years we have been trying to just build up arguments for a certain type of planning against other types of planning. So with each project we try to make one step further, just to provide more evidence. And from this point of view ENABLE was very important, because we managed to pull together at least some data, some information about the Blue-Green Network.

In which ways is the knowledge produced in the project useful for you (as support to your activities)?

R: We are still in the process of analysing information, and hopefully we will deliver some more outputs soon. First of all, we learned some new methodologies; the Blue Green Factor could be one of those which we can promote throughout the city for different reasons and purposes. Another thing is city-specific: data were never really pulled together to get a kind of more holistic picture. So it's always important to build this picture, just to have a look at the city as such and then dig into particular issues. So ENABLE was important also for that reason. So hopefully we manage to analyze a little bit more about the ecological properties of the blue-green network and climate regulation, and also these planning options. And as we are all the time in the process of writing new proposals, of course it's a piece for other projects.
What was not really met was my expectation to collaborate more with other cities, to have more joint comparative analysis and to have more things which are a real outcome of the project. Because many things which we did, we did ourselves in the city. But, let's say, I could have done this having another project with a bit of money. In a project you also expect to build something together. And I missed a little bit this "together". So something which would tackle all the cities and analyse all the commonalities or differences or so. This joint work happened in a limited way between two or three cities only, while it could have been on a broader scale.

Could you identify any barriers that prevented knowledge exchange between the research team and local actors?

R: What was mentioned in Q2.11; and I would say that if we had agreed on common methodologies and then applied them in all the cities to gather a certain type of information and to make comparisons, all teams and all the cities would have got to a certain level of knowledge and expertise. While if we shared a methodology only to tackle certain aspects, it means that some cities did not fit - I don't know, green roofs or something else - because cities are different. And therefore we had no possibility to use a common methodology to get the kind of common results. So it was still interesting, but it was not, I would say, like a training for us and for the cities.

R: See Q3.3. It's always nice to have a part of the work which is built on the specificity of the city. So this can be done just separately, looking at good examples. But then there should be a part which is common, something which is applied everywhere across scales, so we can get a similar result to position ourselves among these other cities and processes and so on. So I would say that this push for more joint work would be very profitable. In another project, for example, learning about the Bayesian belief network and how to use it was much easier. There was not so much common learning and capacity building within ENABLE; in fact this was something which was delegated to another project together with our workers. It was more about team building and building the capacity to tackle the same problems across all the cities.

What did you find most interesting and useful from the project? What were the main "take-home messages"?

R: It's difficult to say one message. What was good and interesting, and maybe I missed it for other cities: I learned a lot from the visit in Halle, when we all saw the processes which go on there and how they are tackled in a life-world setting, like how the common space is built and so on. So I learned a lot from experience, and I missed the same examples, for example from Barcelona. So it would be nice to have the same overview. And another thing which is a kind of take-home message is this kind of... I'm an ecologist by education. And the situation put me and our team in the role of defenders of nature. So even if we try to keep the discussion about nature in the city and ecosystem services open, we are expected, even by citizens and by the city, to be defenders. And I would say that talking to the teams from Stockholm and Oslo and Halle made us more resilient, accepting different views and trying to find a good way through, seeing the variety of expectations and also the variety of knowledge people have and the way they use this knowledge. So there is no one trajectory towards a certain aim. Maybe we need to find different ways to get to the same point we would like to reach, while normally we try to be more rigid. This is what I would like to say.
We give up a little bit of our rigidity, although, as I said, this is also the role which was prescribed to us, not necessarily by ourselves, but from the expectations of other players and other stakeholders. My impression is that also the cities and teams within ENABLE have different histories of collaboration. So some team members were closer to each other because they continued some issues from previous projects, and some, like our city for example and us, were not. So I am really interested, let's say as beginners in this group, in what the perspective was of those who have sat there for a longer time and already managed to analyze much more, because our city in many projects is a kind of newcomer - not that we didn't have projects earlier, but in terms of missing information and knowledge gaps, which sometimes we need to fill really from scratch, while in other cities this knowledge or data already exists, so people can, you know, move much farther and be more advanced in the way they think. I would love to be advanced, but first I need to have a basic database.

BAR1

2.2. What events or other opportunities to foster learning did you promote in your ENABLE case study city? In which stages of the project did they take place?

R: The main events were our stakeholder workshops, taking place three times a year; they were important at all stages of the project. Stakeholder engagement took place not only within the ENABLE project, but we were well aware of the ENABLE project in this process. As a preparation for the ENABLE project we were talking to the stakeholders in those workshops about their needs, what the priority questions are, what the key topics are that they want to work on through this process, and also thinking about key areas in the city for interventions. This was a kind of setting the scene, and we had two separate workshops, one working on the spatial side, one on the content, to determine what is relevant in the city, and I think this was a very useful approach in making the entire stakeholder engagement process worth the effort for the stakeholders. That was the formal instrument that we used, for presenting results, where we stand, testing new ideas, new conceptual approaches. It's also learning for us, because we always use these forums for giving key stakeholders the opportunity to present and discuss their work, so it's not a one-way direction: not only us providing input, but also stakeholders interested in making their approaches more public, from different public entities (mainly), sometimes also private entities or NGOs. There's also a learning process in two directions. (I: Could you identify some key stages in the process?) The pre-phase (determining the needs and interests) was definitely one critical phase, before the project started, and then the first six months; for the others we did a bit of back and forth there - we tested some of the ideas, approaches, concepts in the early phase, but since we had different approaches this was going on in parallel. For particular studies we also had individual meetings with experts, city planners in the green space planning department or a public planning entity (Agencia de Ecologia Urbana). We test our ideas and approaches with the stakeholders in the individual meetings. And then we have the reporting-back phase, where we presented results to the stakeholders and asked for additional feedback. Depending on the study this is more or less intensive.
Then we have other studies where we are using stakeholder knowledge explicitly (the participatory resilience assessment, for example) to gather information, so stakeholder knowledge becomes part of the empirical work. We have done that for several studies, for example prioritizing the importance of ecosystem services provided by green roofs, to then come up with a city-wide green roof prioritization model. For the participatory resilience assessment we discussed what the change in supply and demand of ecosystem services is with regard to different extreme scenarios that the city could face, and what would be appropriate policy measures to deal with that.

2.3. What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?

R: A very general lesson is this: how to make your research really relevant to the people who are supposed to work with it. Getting the research from the lab to the end-users and practitioners, that is definitely what we have learned a lot about. In the future, and already now writing new projects, it is pretty much determining the way I start designing new studies; I am focusing much more on incorporating the needs of the end-users at the very beginning of the process. There are many more lessons, it is hard to pick the key ones. Maybe I have changed my way of thinking about participatory approaches in general. I see much more the limitations linked to them and the bias that the selection of stakeholders brings with it. I was more optimistic about these approaches in the past; I still believe they are super essential, but I'm understanding much better the bias that is involved. There is no perfect participatory process; we can have some standards or ideas of how to conduct participatory research, but we will never reach those ideals - we will always stay at an unsatisfying level. Maybe it is comparable to what modelers do: they always try to represent reality, but they will never be able to do it fully. In a participatory process that's very similar; you always have the constraint that networks already exist, that some people are key to engage with your work and others are not, and if you use the participatory processes not only for dissemination of your results but also for producing some new knowledge, these will always be critical limitations.

Through which project-related activities (e.g. workshops in other ENABLE cities, stakeholder workshops in own city) do you think you have learned the most?

R: It is important to gain some additional insights from other cities to put into perspective what is going on in your own city. What helped me most was probably the participation in the Resilient Cities conference in Bonn last month, where there was not a single city presenting their approach, but the reality of multiple city planners and practitioners in the same room - so a dominance of that group and not of scientists. There you could really notice that cities are at very different stages in incorporating ideas that we have in ENABLE. We ran a workshop on more justice-related issues and the inclusion of people, and I only realised in that moment how far advanced Barcelona is in these topics, even compared to northern European cities which, at least that was my stereotype, are more advanced when it comes to incorporating people in decision-making. Talking to the planners there made it very clear that they have very little experience and they are much more stuck in their departmental silos than what we see here in Barcelona.
That was interesting to reflect on what we are doing here; here it goes more hand in hand: the knowledge we produce and what the city is thinking about and where they stand and we stand. That started already in that they had the green infrastructure in place before we started with the URBES project in 2012, so they are really at the frontier of what we are working on. It's an interesting process to understand that what we produce may be really vanguard for other cities. I think that is why the Barcelona case worked so well; it is not only that we prepared it very well, it is also that there is right now this window of opportunity, that they are very advanced in their approaches, so they are super open to incorporating the new approaches coming from science, whereas other cities that are lagging behind and implementing what is already common sense on the scientific side may struggle if you come with new concepts now, because they still need to incorporate older ones. Maybe ecosystem services and green infrastructure are two examples of that: Barcelona has incorporated them already, other cities have not; so if you now come up with new concepts and you elaborate further on this, but the baseline is not given to work with these concepts, then obviously that is much more difficult.

Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: What was key for me was engaging much more with the term nature-based solutions; it was not new, but I had no relationship with that term and I didn't feel it. I think we have worked a lot with that term, with that approach, and have started giving it meaning in this context of Barcelona and also within our research. There are many more approaches from planning that I got more familiar with through this deep engagement with a lot of planners. Regarding difficulties in the communication: I think what we produced in ENABLE was sometimes a bit too theoretical, especially our frameworks, so we put a lot of effort into not making them too prominent in working with the stakeholders, or really doing the effort of translating things. I'm thinking particularly about the filters approach; that was not per se very intuitive to the stakeholders. Having those different dimensions (infrastructures, institutions, perceptions) is all fine and people are familiar with it, but merging them into this filters approach was something that people struggled with. The same applies to the resilience principles we tried to bring into the work: we noticed in the preparation meetings with our partner, in that case the city's resilience office, that even for them that was too abstract, so we mainly dropped them in the work with the stakeholders. Regarding within the consortium: people not working directly with the project coordinator struggle when he starts to lay down his theoretical thoughts, but I'm not sure whether that is an issue of communication of terminology and concepts or rather a personal way of communication. I include myself in that group for the beginning of the project, and then I started following his lines of thinking and now I am closer to his way of thinking; I understand how he thinks and how he expresses his thoughts.
We have had some misinterpretation of terms, that's for sure, and to some extent we still have it. We have still not reached a coherent justice framework; we are still struggling with the availability, accessibility, attractiveness framework from the Polish colleagues. I think working together on the concepts was something we always wanted to do and we never really did. Maybe we did in the context of the main ENABLE framework, but on side questions we struggled much more; the definition of institutions, for example - we still don't have a common understanding of institutions in the context of this project, and the same applies to attractiveness or perceptions. This is not specific to ENABLE, but what is ENABLE-specific is that we tried and we failed, so I think our internal methodologies to get these definitions done were not very efficient. I thought about that; I am not fully clear what would be a different approach there, but we definitely were lacking some of the main review tasks in the very beginning.

Do you feel you learned something from the research team? And from other actors in the city? Can you identify what you have learned from each of them? (main items)

R: I think the realities and challenges of planners are something I have learned through this deep engagement with the planners in the city and beyond the city, and that is shaping my way of doing interdisciplinary research in the future. Within the consortium, thinking about things in a more theoretical way, especially from the resilience-informed ideas, was good, and that was mainly due to learning from the project coordinator, even though the others were key for that, so it was not only the discussions with him but also the questions and discussion created with others. So it was not that I learned from the project coordinator, but that he helped me to trigger my own thoughts. I reflected a lot on the way of doing integrated research, integrating different methods, and that was mainly due to reflections with colleagues on why it is working in Barcelona - thinking in a more rational way about the things we do in a more intuitive way: it worked well, so why did it work well? That is a question I reflected on a lot with partners (like [LOD1], [OSL1]), thinking about their own processes and reflecting on the process we have going on here. So combining different methods and making them most useful, that's definitely a lesson I have learned and reflected upon more thoroughly.

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?

R: It goes in the direction of what I have said in the beginning, that I have learned that there is no perfect participatory process and that you will never reach that ideal. We do have a gap in cooperating with stakeholders from the private sector - that would be in theory, and in practice I am not really sure if that would have been helpful for this stakeholder process, to learn more. Obviously we could have learned different things, but probably we would have missed out on others. I would rather frame it as a trade-off, or shifting the emphasis: with the actors that you have engaged you learned something, and if you engage others you learn something different.
In this case maybe we could have learned more about the reality of the private sector (their needs and so on), and maybe that could have increased the knowledge we have in how to enable benefits for people, but I think we learned more about the public actors and some NGO actors and so on, and I would leave it there. All projects have their limits in what they can do, and we did well in not trying to do everything.

2.8. For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?

R: The general lesson is that the centrally produced products (that would be the centrally produced scenarios, or the centrally produced maps, but also the policy aspects) are not very relevant for our stakeholders on the ground. It's just too far away from their realities and it's not the questions that they're asking. So that is an important lesson: in the end the ones who make the impact for research are the ones that are in place working with the people on the ground. We have a good example for that: when [project partner] engaged in our stakeholder process at the very end, those are the policy outcomes that will be more relevant than what I produced before. So that's the general lesson. I think the processes are really key, these learning-together processes in this stakeholder workshop format, but also the individual meetings with key actors are crucial, so that's all the process learning and that is very intangible in a way, but we speak now the same language, we understand each other in these forums, and I understand the city's needs and they understand where we are heading. This is very critical and a fundamental way of bringing in new concepts, new critical ideas into the discussion, so I have the hope that we will influence the resilience discussion in Barcelona with what we did in our process: thinking about it in a less static perspective, thinking more about how multiple drivers change the system in a different way, and getting the focus a bit away from climate change as the main aspect of resilience. And I think again there we did not bring in a completely new message to the city, but we supported a very small group within the city planners, and gave them a forum to further develop their ideas, to have a scientific backup for them and to make them more known in the planners' world in Barcelona. Again, I don't think we are bringing in completely new concepts and ideas; we are really taking up or building this together with the city, and that is where we have an impact.

R: That is maybe the strongest long-term impact that we have now on the newly developed resilience strategy that is right now underway, and one of the core meetings to establish this was one of our stakeholder meetings. So that is the hope I have, that it is reflected there, but there are other things that are more concrete: we have done a valuation of ecosystem services from street trees, looking at justice aspects in their distribution, and that is definitely information that the city will take up because they were very keen on having this, making the value of street trees more visible in a city where we have lower green but many street trees. Another thing that I am expecting to have impact, and again it falls on fruitful ground in the city, is our work on gender differences in the use of parks.
The city has just this year brought up a document for incorporating gender perspectives in urban planning, and this is the first empirical evidence that they have for the gender inequalities in the use and the flow of benefits from urban parks, so this will definitely have some impact, and there are other things, like the green roofs study, that will give some guidance and so on. One thing is the process, and then there are other more concrete studies that tackle one specific need or specific knowledge the city was lacking.

In which ways is the knowledge produced in the project useful for you (as support to your activities)? (I: you already mentioned that for you the knowledge or the learnings you got from the project were useful, for example, in how you design new studies or new proposals and that you are more focused on incorporating the end needs of the users in the process, so that was a way in which the ENABLE experience was useful for you; are there any others that you can think of?)

R: As said before, there were some conceptual things that I have taken up, that have advanced my own perspective, and maybe there are also some methodological issues, even though that was relatively small in what I would assume I have learned here: a little bit on the Bayesian Belief Network modelling, and maybe the modelling the Berlin group is doing as well, understanding a bit more how that works; understanding how to create integrations with my own work.

Did the project meet your expectations regarding what you wanted to learn about? If not, what would you have liked to learn about, which was not possible through the project?

R: It was quite a messy process, it was difficult to navigate through this process, and I think the expectations have changed over the course of the project several times, so I cannot really remember what the initial expectations were. I think I don't have that very clear. I think the trajectory was quite surprising. As co-lead of work package 2 I thought I would work more on policies, which in the end I did not do, and we had those very strong focuses on the Barcelona case, and none of the studies that we did in the Barcelona case were really thought of in the beginning, so thinking about these mapping or gender issues or green roofs and so on. Those were all products where over the course of the project it became clear that they were relevant and possible to conduct and that we were interested in doing them, but nothing of that is in the initial proposal. Neither is the use of social media data. That was a message: making our work impactful and meaningful for the people on the ground is something we cannot define very closely in the very beginning, otherwise we lose the flexibility we need to reach this. That is for me a core message. So the expectations in the beginning were more in this direction of making this work impactful, and that is where we learned a lot and it worked pretty well.

3.1. What did you find most interesting and useful from the project? What were the main "take-home messages"?

R: Starting a research project with the needs of actors is the fundamental thing to make research impactful. We started the stakeholder process in March 2017 in the framework of the [other project name] and ENABLE projects. We have done so far about 8 or 9 sessions and the topics have been changing, but we call the process a dialogue on the implementation of nature-based solutions and green and blue infrastructure in the Metropolitan Area of Barcelona.
We try to bring together stakeholders from different organisations, both from public authorities like the city council of Barcelona but also the regional council, some departments of the Catalan Government, planning agencies, other research centers; we try also to have social organisations or neighbourhood associations, although it is more difficult for them to come because we organise these in the morning, so it is more difficult for people to come. The topics of the meetings have been changing depending on the needs of the project at some point; at the same time we try also to talk about topics that are relevant for the stakeholders. The first one was, for example, on the identification, mapping and prioritization of nature-based solutions; we did a kind of workshop to do this spatialisation exercise. The second was more on the opportunities and barriers to integrate nature-based solutions and green infrastructure in urban planning, so it was more related with ENABLE, with this barriers approach. Then the third one was on how we can increase green infrastructure in compact cities like Barcelona, and in all of these meetings we have presentations from participants who are working on different initiatives at the policy level or urban planning level or projects related to green space. In this third one the urban green infrastructure program of the city council was presented and discussed. Then in March 2018 the topic was more related to health, so what is the impact of nature-based solutions and green infrastructure on human health and how can we make healthy cities. Here we invited researchers from a center here who are doing a lot of research on this topic; they are environmental epidemiologists. Then in June 2018 we did a more ENABLE-related meeting, related to green roofs. We organised it on a rooftop garden from the city council; they are doing a very interesting project related to social integration of people with mental disabilities, and they work in this garden and other gardens in the city (they won a prize recently for this project). In this meeting there was a kind of prioritisation of where green roofs should be implemented in Barcelona based on different criteria, basically based on ecosystem services demand in the city. [BAR1] was more leading this study, which was recently published in a scientific journal. Then in December 2018, because our group is working more on environmental justice, we had a meeting on how we can integrate social and environmental justice and equity in urban greening. We opened that one a bit more than normal; we organised a public event in the morning (in the afternoon there was a workshop). Then, this year we have organised already two: in February we organised one that was very focused on [other project name], because it was the testing of the urban nature index, which is a kind of assessment framework that has been developed in the [other project name] project by some colleagues. The idea was to test the framework using case studies of Barcelona and to give feedback to improve the framework. The last meeting was last June (2019), more related to ENABLE because it was related to resilience. Actually we invited the city council of Barcelona, namely the people who are in charge of the resilience strategy (they are working on it and only next year they will approve it). We did a workshop on how resilience can be enhanced in Barcelona through green infrastructure planning.
One of the outputs of our ENABLE case study will be based on this workshop.

What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")?

R: I was involved in the ENABLE positioning paper published in the BioScience journal, so for me the learning process was more related to this ENABLE framework, so the filters at different levels, based on infrastructure, perceptions and institutions. Also to integrate environmental justice (although I had already worked a bit with that topic) and to consider the resilience lens in green infrastructure planning was also something more related to ENABLE, because [other project name] is not so much about resilience. So I would say these two aspects: the framework based on the filters, and approaching green infrastructure planning and the benefits of green infrastructure under a framework of resilience and environmental justice.

2.5. Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: It is not that I have learned any new terms; I have been working with concepts like green and blue infrastructure or ecosystem services or even nature-based solutions, so I wouldn't say that I learned new concepts, because for example in the ENABLE framework this idea of filters (infrastructures, perceptions, institutions) does not involve really new terms; it is more putting together different aspects that we have been working on in the last years. So I wouldn't say that specific new concepts or terms have been new to me. But then maybe it is true that it has been a bit difficult for me to understand this systems model, this framework, at the beginning. This has been especially led by [STO1], and I have the feeling that he knew quite well what the framework was in his mind, but it was a bit difficult for him to communicate it in the beginning. And it was a bit difficult for me to understand his idea of filters, their relationship with resilience and equity and so on. When I was working with that paper I just volunteered to do this kind of example box where we tried to describe how these systemic filters can be explained in a real example; for me this was quite a learning exercise, trying to bring this framework into a real case.

2.6. Do you feel you learned something from the research team? And from other actors in the city? Can you identify what you have learned from each of them? (main items)

R: Difficult to say, because we have been doing so many meetings. In terms of concepts I remember that at the beginning, during the first meetings, some of the stakeholders were a bit concerned about the concept of nature-based solutions, because in Barcelona they have been taking up the concepts of ecosystem services or environmental services and green infrastructure but not so much the nature-based solutions concept, so it was a bit difficult in terms of terminology, especially from the side of [other project name] in this case.
Besides the workshops, what we have been doing in these meetings is inviting many organisations to present initiatives related to green infrastructure, like strategic plans or urban planning instruments, the already mentioned resilience strategy of the city, so it has been a learning process also for me in relation to what is going on in terms of policy, both at the city level and the metropolitan level, and how they are trying to implement this re-naturing agenda in the city. Something that is clear is that Barcelona is giving more importance to these aspects of greening, not only in terms of the more traditional recreational aspects but also in terms of climate change adaptation, social benefits, things like that. I couldn't mention any specific thing, but more a general learning process from the stakeholders as well. (From the research team see Q.2.5)

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?

R: We have always had difficulties to reach grassroots groups, or social groups, more the beneficiaries of these policies, the neighbourhood associations, even NGOs; even if we have been inviting some of them, it has been more difficult for us to reach them, maybe because of this time issue, that we were organising most of these meetings during the week and in the morning, and many of these people have their jobs so it is difficult for them to just go to a meeting. This is different for public authorities or research centers or for agencies, because for them attending this kind of event is part of their work, so they can do it more easily. So in this case it has probably been a bit unbalanced in terms of this kind of stakeholders.

2.8. For which purposes do you see the knowledge created in the project useful (e.g. supporting GBI planning/managing processes)?

R: First would be simply awareness raising about the benefits of green infrastructure, maybe also the narrative aspects but also the benefits. Second could be more directly influencing policy, priority-setting. Those would be the two levels at which the outputs of the project could be more relevant. Actually in the last meeting in June about resilience we co-organised the workshop with the city council (urban resilience department), so it could be useful for the strategy of the city, because they are now developing this strategy. We always try to organise these meetings together with stakeholders who have a kind of mandate for integrating these concepts in urban policy. Even if sometimes it is more about knowledge transfer only.

2.9. In which ways is the knowledge produced in the project useful for you (as support to your activities)?

R: I don't know if it was for specific things; being involved in research projects like ENABLE is more a learning process in general. Gaining knowledge is also useful for teaching. Sometimes I teach at Master level, and the knowledge generated in these projects can be used, since it is something relevant and recent. In terms of benefits, the publications are also important for us as researchers. Then there are all the networks that we are maintaining in these projects with different research organisations in Europe. I would say these are the most relevant for me.

2.10. What new knowledge or new insights resulting from the project do you consider the most relevant for the planning and management of green and blue infrastructure in your case study city?
R: I would say that the ENABLE framework, with this idea of the three filters, is something new, even if it is more difficult to operationalise. At the level of research I think it is important. For me all these aspects related to environmental justice have been useful, the idea of availability, accessibility and attractiveness, because our group here is focussing on environmental justice aspects, so it has been a good synergy with the ENABLE project.

R: It is a bit difficult to answer this because I was supposed to be a bit more involved in the project than I could be in the end. I do not exactly remember if my expectations were very high or not. I guess something that I was more or less involved in at the beginning, and I'm not sure what the status is now, but it seems that it is not going to be studied at least in this project, is the aspect of functional traits that is related to resilience. We had this plan to assess the functional traits of plants in different cities and how we can assess the potential impacts of climate change events, extreme events like heat waves or flooding, on the ability of vegetation to provide ecosystem services. This is something we started working on in the beginning, but then it did not get very implemented during the project, so I hope that this can at some point be taken up again, maybe in another project.

3.1. What did you find most interesting and useful from the project? What were the main "take-home messages"?

R: Maybe the most important thing is that I have been able to work on this framework of green infrastructure and ecosystem services in relation to urban environmental justice aspects. As you know, we have been setting up a special issue on that topic in the Environmental Science and Policy journal, so all this work that has been done within ENABLE in relation to environmental justice has been, from my point of view, one of the most important things. I know that there has been a lot of work related to resilience, to policy, perceptions and so on, but in my case I have been particularly more interested in these aspects related to justice and how the ENABLE framework can address environmental justice and equity aspects.

2.3. What knowledge or ideas/insights/perspectives did you gain through your participation in the project (even if you don't consider them as something you have "learned")? Did you learn them from the research team or from other actors? (instead of 2.6)

R: Of course there were a lot of different research methods applied and tested in the project, which of course for us meant a lot of new things and new insights on how different aspects could be studied. I had no chance to dig into each of those methodologies and each of those outcomes, but a few things were very interesting for us that we looked into in depth, for example the mapping of preferences and values when it came to the citizens, and also how different green spaces have been used. There was for example work that was done by [project partner] that was presented also in Halle, to see the different techniques, because this is knowledge I think that could be easily transferred and used by a city. We also had a chance to try and test and adjust the Q-methodology for the first time on our own and to use that for example in two different spots (in Halle and Stockholm), which was also of course creating new insights for those case studies, and at the same time leading us to the question: how could this be used and translated. We were also very interested to see how the resilience assessment worked.
I think, not being involved in the process, it was a bit difficult to get the full picture of this super complex analysis. We just now got, from the city survey, that this was in several cities something really new, but also a method and results that had a good impact on green city development in several cities, so I think it's something we have to look at in more detail. This was discussed more theoretically among the case studies, and I think you only really understand it once you are at the workshop together with the stakeholders where this concept is being presented. Apart from the whole GIS analysis, which is also insightful, what was also new knowledge was the different understandings of different terms that we also discussed. I think it was especially quite different when it came to barriers, having different approaches more on the institutional side, looking at what was done by Lodz, and where we came with this more traditional approach where you have different types of barriers, from technical to financial to cultural; so I think it was quite interesting for us to see how different terms and specific issues can be approached, and I think the same also with governance and institutional issues, so I think that was a very enriching process, to have the discussion with the other researchers but also with the challenge of bringing the different minds together, also creating an umbrella concept that captures it all. There are a lot more papers that I have to read, I have not had the time yet, but it was really interesting to see how many different methods have been applied to different extents in the different cities and also with different outcomes.

Our contact was rather limited because everything happened through the city research partners. I think the closest contact we had was with the different actors in Halle, so with the city administration people and also with the people for example from Neutopia in Halle-Neustadt, and I think that was insightful as well. It made us start thinking about how you can really apply a method which is usually super theoretical and super research-driven to a specific context, so that it is still understandable and can also create meaningful results, at least results that could be presented and ideally also feed into ongoing processes, so I think this was something that we definitely learned. Also there were so many different factors, from culture, different habits of the people there and different interests, and also with the language of course; that's where we needed people who speak Arabic, which we luckily had, or Russian. For us it was a challenge, but it was a really good one, to embrace all those different factors. There was of course also contact with the city, but I think they were always so overwhelmed with work; especially in the last workshop, as I remember, there were unfortunately not as many people as expected, so I think that left us with a question mark: the people who were there were really motivated, but still the question remains what could have been achieved if other people had been involved. That was not fully clear from this contact. Our interaction in Lodz also enlightened us as to how, for example, the term green and blue infrastructure is being perceived and also operationalized, which can be super different, from technological solutions or just regeneration towards really green solutions taking into account multiple benefits. I think this is what we learned when engaging with the stakeholders, either through workshops or just through the study site visits.
2.4. Through which project-related activities (e.g. workshops in other ENABLE cities, stakeholder workshops in own city) do you think you have learned the most?

R: Maybe because it's fresh but also because I was more strongly involved: the latest series of workshops that happened in the second half of last year, what we call the co-creation workshops to discuss the policy options. We have been deeply involved also in the design of some of these, to see how we can really get the messages that we want, so I think we took a lot out of it of course. There's a lot of variety of different outcomes, but it also made us realize how different the processes are and also showed to some extent that each city had a different focus; some were regional in focus, some had a specific topic to follow up. You cannot just compare those results, but I think what was really good for my learning was to see the different approaches in terms of the people that they had invited, the discussions they had, but also to see how different the outcomes were, like when it came to potential policy options that have been discussed and that have a lower or higher chance to also feed into future policy development. I think this is what I remember the most. I was really amazed that even cities that were struggling, like Lodz, were super motivated; of course there were also difficulties to get the engagement from the city they wanted, but still they had the workshop and they also had good results after the workshop.

2.5. Did you learn any terms (like technical terms) that were new to you? If yes, how useful do you find them for your activities? Did you experience some difficulty communicating with/understanding others due to the terms/jargon used?

R: Not really new terms; there were some things that had been used already, like socio-ecological system. The challenge was more, for me, coming from an applied research perspective, that the concept of the project was, to my understanding, driven a lot by hypotheses and research theories, so I think that is the challenge itself if you work more on the applied research side, and sometimes I had a feeling there was an overload of terms being used, for example when we talked about the filters and the different principles. Sometimes I had the feeling there were like a hundred concepts, and in a paper each of them usually would need a page to explain it to people who are not familiar with that. I think sometimes it was a challenge to get a grasp on all those very theoretical definitions and terms. There were some new ones, like for example with the barriers; I think that was really eye-opening and I have a better understanding of institutional barriers. Others were not new, but the project was useful to fill them, like distributional effects, or learning more about equity and justice; the project really helped to fill those terms that we were aware of before, but to see how to give content to them and a better insight into these topics.
Difficulties: it is always this institutional and governance policy analysis, which of course, depending on where you come from, can be interpreted quite differently, and if you, as I do, work really closely with the European Commission and the ministries, it has a really clear meaning, really referring to the policies and instruments; but I had a feeling that, although I think all of the partners also had a really good understanding of this, sometimes when it came more to the in-depth discussions and with the different research foci of each of the partners, these had different meanings. I think what was helpful, and what we had in the Halle workshop, was the discussion in the different groups, with three groups (one was on preferences and values, the other on institutions with the filters, and the last one with the barriers). I think that was really helpful, at least to discuss it; even if of course you don't come to an agreement, at least you have a list of different terms and the different understandings of how they are being handled in the project. I think that's fine, and I don't think you need more; you don't need to bring them together. I think it's just important, and it's also good for learning, to sit together and everybody says what he or she understands about this term and how this concept is being applied in a project.

2.7. Do you think there were other actors, who could have been beneficial to the learning process, but who were not engaged in the project? Were there any particular reasons to not engage them?

R: We had an event in Brussels and invited the right people, because we had for example DG Environment and DG RTD present and other stakeholders. It would have been interesting to engage, to a certain extent, with one or two other projects that were running under the same funding scheme, to exchange, see what their research focus is and if there may be some overlaps or similarities. For the cities, usually you should always have the people who have a say in these topics, like the city officials from the city development or the environmental department. That was the case in some of our case studies, but you should also engage people who can also make a contribution, maybe people who work in education or in health. What was, I think, super interesting from the Oslo case, and maybe that was a special case, but they were I think quite successful, at least in some areas, because [OSL1] worked really closely with the technicians. That is quite interesting because I think there were no real technicians in the other cases; maybe it wouldn't have made sense, but it was interesting to see that depending on what you work on, you might need to engage other people.

What new knowledge or new insights resulting from the project do you consider the most relevant for the planning and management of green and blue infrastructure in the case study cities?

R: Starting with presenting and discussing a new holistic approach, also in terms of a resilience assessment that considers multiple benefits in a really integrated way. That was something new to the cities and to some extent also triggered other perspectives and views on green and blue infrastructure, so that helped in several cities to open their minds. I think that was something: increasing and handing over a more systemic understanding of green and blue infrastructure and what is included, so it's not just climate change but it could also be social aspects or other things that are being addressed.
Another thing, which is part of that but has been addressed more specifically, was having more focus on the social dimension of green and blue infrastructure, for example thinking about how green and blue spaces are distributed, how different people benefit or don't benefit from them, and then addressing this issue of equity, which is emerging but I think is still not something that's in the minds of the people who are planning, maybe because they don't have time to think about it. Another thing that was quite beneficial was just improving the evidence for decision-making. A lot of those things have been supported also by GIS analysis and providing data and maps (for example in NYC data on stormwater management), so providing planning authorities with analyses that maybe they couldn't do themselves, mostly maybe because they don't have the time capacities or the technical capacities, so I think these analyses also were a big step to maybe support or to inform future initiatives.

In which ways is the knowledge produced in the project useful for you (as support to your activities)?

R: Mostly from stuff we did ourselves: for the first time we started digging into this topic of preferences and values and perceptions of citizens when it comes to the design of green infrastructure. This was a super good opportunity for us to try and test a specific method but also to learn about other methods that have been applied by other partners in this regard, and that is also something that we will follow up in the future and in other projects, because we should take into account people's preferences and not presume that we know what they want. This is something that helped us enlarge the focus of our green and blue infrastructure work. Another thing we are super interested in is what is needed to establish and enable a good relationship and a good working basis between researchers and practitioners (and in this case the cities), because maybe compared to others we are not so much involved in scientific research (testing hypotheses, publishing papers), but we are very interested to look into what the cities have learned here, maybe over many years, helping us elicit the hampering but also the success factors, and also showcasing what results have come through these processes and why it worked or why it didn't work. I think that was something that's adding new knowledge to our work that is very important to take into account for future work, because you can much better encounter those factors and then adjust your method again as necessary; also when building your own processes with cities it's helpful to know what is really needed or what could be a good approach, for example to establish a good and trusting relationship, because that's key to create benefits for both sides and also to work towards a goal. I think this was not explicit in ENABLE, so we just had a discussion about using a transdisciplinary approach, but there are different definitions: if you take it easy you could just say it's a good big cooperation between researchers and administrations or citizens, but there are also concepts that say, no, well, actually you start already with a joint problem definition and then you create your research agenda and you work towards it.
I had the feeling we started with the methods and with the research, which was fine because this was the scope of the project, but in my understanding it was not a hundred percent a transdisciplinary approach, because then we would have started with a gap and a problem analysis in each of the cities and would have designed [the research] accordingly; but of course that was not the purpose, because the purpose was a different one.

3.4. Based on your experience with ENABLE, how should knowledge exchange strategies and processes be designed in the future to enhance the learning process?

R: There are always different interests in such a research project, so of course we have to advance with specific methods and different approaches and assessments; this has to be advanced, so this is one interest. On the other hand we should address societal challenges, seeing what the key problems are and then seeing what methods could be applied. Sometimes I felt the amount of methods we were dealing with was quite overwhelming, and the question would be how they fit together, because the idea was to have a complementary implementation of the different methods, and I think they all had good results, but maybe there could have been a few less methods applied to a specific target, of course with the freedom of the researcher also to test new things; so working with the instruments and having the researcher as a sort of knowledge broker, who has knowledge but knows that there's also knowledge from the city side. There is a lot of local knowledge that does not always necessarily go into our research. So start earlier with a discussion with the cities and then find a concept that fits the interest of the people being involved and that is still of course realistic, and maybe also don't commit to too many methods but a good set of methods that could be beneficial for most of the cities, and then try, if possible, to have a more frequent exchange, maybe with smaller meetings, to establish a small dialogue so that it's not only once a year or twice a year, but to keep the people informed. In the end what you deliver is something that could really be not just informing, but ideally something that could already directly feed into ongoing processes. I think that already happened in a few cities, but if you can even increase the number of dialogues, in a way which still works for both sides in terms of workload, I think that could be quite beneficial. And even more exchange between the cities, not just between the researchers. I think it would be interesting for the cities if they do not have just the role of a research subject, but you also give them the room to exchange, because it's also a learning experience and you learn the most if you go into the cities and see the places and hear the stories.

Did the project meet your expectations regarding what you wanted to learn about? If not, what would you have liked to learn about, which was not possible through the project?

R: First of all, we were newcomers to this team; there were a lot of relations already established, I think through the URBES project. I think we were lucky to get the opportunity to work with this new group of people, which was a new network for us, which is always a new exciting experience, because we know there's a lot of expertise that this network has and that we can just benefit from, because all partners had a lot of publications. We were really excited to get the opportunity to work with this team.
I don't think we had a specific expectation. I think there was a lot of interest when we had the concept, for example to learn more, from the very beginning, about the resilience assessment, which I still think we have to look at, probably now more through the concluding papers, to get a full understanding; maybe you don't get a hundred percent understanding but maybe 80 or 90 percent, to fully grasp this concept, but I think that was something that we were curious about from the very beginning. I think there were no specific things that we had in mind that we would like to learn, apart from exploring new methodologies on our own as part of this project, so I think we were really open to all the aspects; it was just the curiosity and also working with these new experts and with this new team in this setting. There is nothing that I would say there was no opportunity to learn about.

3.1. What did you find most interesting and useful from the project? What were the main "take-home messages"?

R: One of the things that we will definitely take away and take back to our work is this potpourri of possible assessments and methods that worked really well in practice together with the cities, and to take those into account when starting to work with other cities, just to learn what research can contribute to improve current planning when it comes to green and blue infrastructure, or nature-based solutions; having an overview of methods which you can pick from, I think that's really worth having, and then trying to implement them in other projects. Thinking about what it takes to create a good cooperation with a city that's beneficial for both sides, knowing about all the different factors that could impact such work and either hamper it or be success drivers, I think that's also very insightful. And then learning from the experiences of the cities, what specific research has made an impact in those cities, and taking those as good examples when going to other cities. It's always good to see what has worked in a given city, what the result was, and maybe this is something that we can try, of course adjusting to the local context, and I think that's also helpful with these different regional cases that we looked at, and of course also looking at the things that maybe worked less well; that's also like learning from the failures, or things that worked out differently than expected; this is quite valuable information for other cities, for example.
VBD-NLP at BioLaySumm Task 1: Explicit and Implicit Key Information Selection for Lay Summarization on Biomedical Long Documents

We describe our systems that participated in BioLaySumm 2023 Task 1, which aims at automatically generating lay summaries of scientific articles in a simplified way so that their content becomes easier to comprehend for non-expert readers. Our approaches are based on selecting key information by both explicit and implicit strategies. For the explicit selection strategy, we conduct extractive summarization by selecting key sentences for training abstractive summarization models. For the implicit selection strategy, we utilize a method based on a factorized energy-based model, which is able to extract important information from long documents to generate summaries and achieves promising results. We build our systems using sequence-to-sequence models, which enable us to leverage powerful, biomedical-domain pre-trained language models and to apply different strategies to generate lay summaries from long documents. We conducted various experiments to carefully investigate the effects of different aspects of this long-document summarization task, such as extracting different document lengths and utilizing different pre-trained language models. We achieve the third rank in the shared task (and the second rank excluding the baseline submission of the organizers).

Introduction

Lay summarization is a crucial task that is gaining increasing attention due to its potential to provide accessible and digestible scientific information to the general public (Guo et al., 2021). The task involves summarizing technical and specialized content into a format readable for non-expert readers. This task is particularly relevant in biomedical fields, where research findings have significant implications for public health (Vinzelberg et al., 2023). In order to help broaden access to technical texts and progress toward more usable abstractive summarization models in the biomedical domain, the BioLaySumm 2023 shared task (Goldsack et al., 2023) was organized for lay summarization of biomedical research articles. The challenges of this lay summarization task are twofold: 1) input texts are full articles containing up to 10k sentences, which requires models to capture long dependencies and extract key information fragments to generate summaries; 2) the lay summarization task requires us to generate summaries that not only convey the main meaning of the articles but also use non-expert vocabulary for readers.

We build our systems based on sequence-to-sequence models with different key information selection strategies to solve the lay summarization task on biomedical long documents. Our abstractive summarization systems are built using sequence-to-sequence (seq2seq) architectures, which have shown state-of-the-art (SOTA) performance in recent abstractive summarization models (Lewis et al., 2019; Zhang et al., 2019; Liu et al., 2022). In order to deal with the issues of long documents, we focus on two key information selection strategies. Specifically, for the first strategy, we explicitly select key sentences as the input for training abstractive summarization models. For the second strategy, long documents are used as inputs and important information is implicitly extracted based on a factorized energy-based model to generate summaries, for which we utilize a model called FactorSum (Fonseca et al., 2022), which has been shown to be effective in long-document abstractive summarization.
Furthermore, our systems are initialized with BioBART (Yuan et al., 2022) and LED (Beltagy et al., 2020) to take advantage of pre-trained language models, including the biomedical-domain BioBART. We evaluate our systems by conducting experiments on different aspects, such as the effects of sequence length selection, the pre-trained language models, and applying the SOTA model (Liu et al., 2022). We obtain the best performance with the implicit-selection FactorSum models and BioBART, separately trained on the two datasets of the shared task, i.e., PLOS and eLife. For the final results on the test set, we achieve the third rank in average scores. On individual metrics, our systems outperform the top teams on three of the seven metrics, which cover relevance (ROUGE (1, 2, and L) (Lin and Hovy, 2003) and BERTScore (Zhang et al., 2020)), readability (Dale-Chall Readability Score (DCRS) (Tanprasert and Kauchak, 2021) and Flesch-Kincaid Grade Level (FKGL) (Chernichky-Karcher et al., 2019)), and factuality (BARTScore (Yuan et al., 2021)).

PLOS: The Public Library of Science (PLOS) is an open-access publisher that hosts influential peer-reviewed journals across all areas of science and medicine. The journals in question focus specifically on Biology, Computational Biology, Genetics, Pathogens, and Neglected Tropical Diseases.

eLife: eLife is an open-access peer-reviewed journal with a specific focus on biomedical and life sciences. Similarly to PLOS, these digests aim to explain the background and significance of a scientific article in a language that is accessible to non-experts.

As the data statistics in Tables 1 and 2 show, this task is challenging: we need to generate summaries from long documents averaging 6k to 10k words each, with some exceeding 25k words. This requires models that capture long-range dependencies and extract important information fragments while avoiding out-of-memory issues. Regarding data sizes, PLOS contains 24k samples while eLife contains only 4k samples.

Our Approaches

We present the two different strategies that we investigate to build our systems to solve this long-document lay summarization task.

Explicit Selection Models for Summarization

Extracting Key Sentences. We first explicitly extract important information (key sentences) before feeding it to abstractive summarization models. We use the following approaches.

• ExSum(Lead): We extract the first three sentences (lead-3) and the last sentence of each article's section.

• ExSum(Key): We select the abstract, the conclusion, and the lead-3 sentences of the remaining sections.

Abstractive Summarization Models. The extracted sentences are then used to train our abstractive summarization models based on sequence-to-sequence models.
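As an illustration, both heuristics amount to a few lines of code. The following is a minimal Python sketch assuming the article has already been split into sections and sentences; the dictionary-based input format and the function names are our own assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the two explicit key-sentence selection heuristics.
# Assumption: an article is an ordered mapping of section name -> sentence list.

def exsum_lead(sections):
    """ExSum(Lead): lead-3 sentences plus the last sentence of each section."""
    selected = []
    for sentences in sections.values():
        picked = sentences[:3]               # first three sentences (lead-3)
        if len(sentences) > 3:
            picked.append(sentences[-1])     # plus the section's last sentence
        selected.extend(picked)
    return selected

def exsum_key(sections):
    """ExSum(Key): full abstract and conclusion, lead-3 of remaining sections."""
    selected = []
    for name, sentences in sections.items():
        if name.lower() in ("abstract", "conclusion"):
            selected.extend(sentences)       # keep these sections in full
        else:
            selected.extend(sentences[:3])   # lead-3 elsewhere
    return selected

article = {
    "abstract": ["Sentence a1.", "Sentence a2."],
    "methods": ["Sentence m1.", "Sentence m2.", "Sentence m3.", "Sentence m4."],
    "conclusion": ["Sentence c1."],
}
print(" ".join(exsum_lead(article)))
print(" ".join(exsum_key(article)))
```

The concatenated output of either function then serves as the (much shorter) input document for fine-tuning the seq2seq models described next.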
Implicit Selection Models for Summarization

Instead of explicitly selecting a subset of sentences, we feed the full text of articles to train abstractive summarization models.

FactorSum (FS). We utilize FactorSum (Fonseca et al., 2022), a recent abstractive summarization model which achieved SOTA results on several long scientific article datasets such as PubMed and arXiv (Cohan et al., 2018). FactorSum improves the quality and coverage of abstractive summaries through two steps: (1) generation of abstractive summary views covering salient information in subsets of the input document (document views); (2) generation of an abstractive summary from these views, following a budget (a threshold that limits the number of words used in the summary) and content guidance (information that guides the summarization system about what to focus on in the summary).

Data sizes. Ideally, we would like to train on the entire texts of articles. However, due to the limitation of our computation hardware, we limit the sequence length to two different sizes: 9k words and 12k words. From our analyses, both sizes cover more than 90% of the articles' texts.

Settings. For our implicit selection models, we utilize the FactorSum model, which is implemented in PyTorch. The model is initialized with BioBART (Yuan et al., 2022), a recent pre-trained language model trained on biomedical texts. We use the AdamW (Loshchilov and Hutter, 2019) optimizer with a learning rate of 5e-5. We set generation_max_length to 512 and generation_num_beams to 4, max_source_length to 1024, and max_target_length to 490 for PLOS and 512 for eLife. We set batch_size to 2 because of the limitation of GPU memory, and gradients are accumulated every 4 iterations. The maximum number of training iterations is 50,000 for all experiments on 1 GPU (NVIDIA RTX 2080Ti). During training, we save the best model, i.e., the one with the highest ROUGE-1 score on the validation set. For our explicit selection models, we implemented our abstractive summarization (Seq2Seq-AbsSum) systems using standard sequence-to-sequence models on the public PyTorch implementation from Transformers.
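For concreteness, the hyperparameters above map naturally onto the Hugging Face Seq2SeqTrainingArguments class. This is a minimal sketch of one plausible configuration, not the authors' actual training script; the output path is an assumed placeholder, and max_source_length/max_target_length would be applied at tokenization time rather than in these arguments.

```python
# A plausible mapping of the reported settings onto Hugging Face Transformers.
# Assumes a compute_metrics function that reports a "rouge1" score; the
# output_dir value is an assumed placeholder. max_source_length (1024) and
# max_target_length (490/512) are applied when tokenizing, not here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="out/biobart-plos",       # assumed checkpoint path
    learning_rate=5e-5,                  # AdamW is the Trainer's default optimizer
    per_device_train_batch_size=2,       # limited by the 11 GB RTX 2080Ti
    gradient_accumulation_steps=4,       # gradients accumulate every 4 iterations
    max_steps=50_000,                    # maximum number of training iterations
    predict_with_generate=True,          # generate summaries during evaluation
    generation_max_length=512,
    generation_num_beams=4,
    evaluation_strategy="steps",         # evaluate periodically on validation data
    save_strategy="steps",
    load_best_model_at_end=True,
    metric_for_best_model="rouge1",      # keep the checkpoint with best ROUGE-1
)
```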
Compared Systems

We compare the systems based on FactorSum trained with long texts (9k and 12k words) and the Seq2Seq-AbsSum systems trained with the ExSum approaches.

• FS_(9k, 12k): Our sequence-to-sequence systems based on the FactorSum model (Fonseca et al., 2022), which we described in Section 3.2. We limit the article length to 9k and 12k words as input to the FactorSum model. We only use pre-trained BioBART (Yuan et al., 2022) models for experiments with the FactorSum models.

Results are evaluated using the officially provided metrics: relevance (ROUGE (1, 2, and L) (Lin and Hovy, 2003) and BERTScore (Zhang et al., 2020)), readability (FKGL and DCRS), and factuality (BARTScore (Yuan et al., 2021)). We use the best systems on the validation dataset according to BARTScore (FS_nk, where n = 12 for PLOS and n = 9 for eLife) to generate test summaries for our submissions. The comparison results are presented in Section 4.3.

Table 4 shows the evaluation results of different experiments on the validation set of the PLOS dataset. The ExSum(Key) + BioBART model achieved the best results on the ROUGE-1, ROUGE-2, ROUGE-L, and DCRS metrics. The ExSum(Lead) + BioBART model achieved the best result on the FKGL metric. Meanwhile, the FS_12k model achieved the best results on the BERTScore and BARTScore metrics. For the PLOS dataset, our submission was chosen by selecting the best model according to the BARTScore metric, which is FS_12k. Furthermore, we can also see that the FactorSum-nk models show consistent results across the metrics used to evaluate the shared task.

Table 5 shows the evaluation results of different experiments on the validation set of the eLife dataset. For this dataset, we did not experiment with the FS_12k model because of time limitations during the competition. The ExSum(Key) + BioBART model achieved the best results on the ROUGE-L metric, while the ExSum(Lead) + BioBART model achieved the best results on the ROUGE-2 metric. Besides, ExSum(Key) + LED achieved the best results on the DCRS and BARTScore metrics, and FS_9k has the best results on the ROUGE-1, BERTScore, and FKGL metrics. Overall, on both the PLOS and eLife datasets, the FactorSum-nk models (n = 12 for PLOS and n = 9 for eLife) show the most promising results, which is why we selected them for submission to the leaderboard.

Table 3 shows the best results submitted on the leaderboard for PLOS, eLife, and both datasets using the FS_nk models (n = 12 for PLOS and n = 9 for eLife). Although our BARTScore is lower compared to the teams ranked higher (Top-1, Top-2), we achieved better results on other metrics such as ROUGE-1, ROUGE-L, and DCRS. We also show detailed results for each of the PLOS and eLife datasets in Table 3. Overall, our model achieves positive results across the evaluation metrics: relevance, readability, and factuality.

Conclusion

We have presented the systems with which we participated in the BioLaySumm shared task to generate lay summaries for long biomedical articles. We approach the task by focusing on two key information selection strategies: explicitly extracting key sentences to train abstractive summarization models, and implicitly extracting important information by utilizing the FactorSum model. The results show that the implicit selection model with FactorSum obtains the best performance. We achieve the third rank on the test set and obtain several promising results, which outperformed the top teams on several metrics.

Limitations

Though our systems achieve promising results in solving the summarization task for long documents, we believe that we can gain more improvement with the following further considerations. The current explicit key information selection strategy is somewhat heuristic; we could alternatively try learned extractive summarization methods. Also, this lay summarization task is interesting in that it helps non-expert readers understand scientific articles; however, specific strategies focusing on this aspect, such as using non-expert vocabulary or mapping to general knowledge, are not yet applied. Some minor parameters, such as the sequence lengths (9k, 12k), and tuning the SOTA BRIO model also need to be investigated more deeply.
Midline Cervical Cleft: Review of an Uncommon Entity

Introduction. Midline cervical cleft is a rare congenital malformation which nonetheless has a classic presentation. This study presents one of the largest single series of new patients with MCC and provides an exhaustive review and catalogue of publications from the international literature.

Materials and Methods. Retrospective chart review performed in two academic medical centers and literature review performed with primary verification of all quoted references.

Results. Ten patients with MCC were identified (8 boys and 2 girls). All patients presented with the classic findings of this congenital anomaly, and the length of the skin defect correlated with an increase in the patient's age. Surgical excision was complete in all cases. Thorough international literature review yielded only 195 verifiable previously reported cases.

Conclusions. This is one of the largest series of new patients with midline cervical cleft presented in the world literature. Although rare (with fewer than 200 cases published to date), this entity does have a reliable presentation that should lead to rapid and accurate diagnosis. Complete surgical excision at an early age is appropriate since the anomaly increases in length commensurate with the patient's age.

Introduction

Midline cervical cleft (MCC) is a rare congenital anomaly whose embryological origin is uncertain. Review of the international literature reveals at least 195 cases reported to date (not including the 10 patients in this series). Taken together, the information from previously published case reports has shown relatively consistent anatomic and pathologic findings, but there have been few series with more than several patients, and no one has undertaken a review of all the published literature. The purpose of this study was to evaluate the physical and pathologic findings associated with this condition and to provide a comprehensive catalogue of published cases in the world's literature. The findings in this large series of ten patients illustrate the clinical and operative findings, demographics, and treatment of this unusual entity.

Materials and Methods

This study was performed as a retrospective chart review of patients treated by the University of Southern California Keck School of Medicine Department of Otolaryngology-Head and Neck Surgery and the Duke University Division of Otolaryngology-Head and Neck Surgery. Ten patients having the clinical and pathologic diagnosis of midline cervical cleft were identified. IRB approval was obtained from both USC and Duke. The charts were reviewed for history, clinical and pathologic findings, and timing of surgical intervention and postoperative complications. Complete surgical excision of all components of the cleft was performed in every patient, and closure of the surgical wound was achieved using simple vertical closure or single versus multiple Z- or W-plasties depending on the length of the incision and the laxity of the neck tissue.

The literature search was performed by using the search terms "midline cervical cleft" and "congenital midline cervical cleft" in PubMed and then performing an exhaustive review of all possible publications (journal articles, textbook chapters, doctoral dissertations, and case reports) and references. Copies of original manuscripts were obtained from library and internet resources, although it was not possible to acquire some dissertations.
In these cases, utilizing email communication, librarians provided the number of new cases described in the dissertation. Whenever possible, authors were contacted directly to clarify whether or not cases had been previously reported and in some cases to provide the gender of the patients. If no information was available on gender, the case was tallied as gender not reported. Results Demographically, in this series, there were eight male and two female patients ranging in age from 2 months to 12 years. Seven children were Hispanic, two were Caucasian with Northern European lineage, and one was of Filipino descent. Clinically, there were six consistent findings: (1) a midline, vertical atrophic skin defect, (2) a lack of adnexal elements within this skin defect, (3) a superior skin tag, (4) an inferior blind sinus, (5) a midline subcutaneous fibrous cord, and (6) an increase in the size of the defect commensurate with an increase in the patient's age. Mucus could be expressed from the inferior sinus in almost all of the patients. The length of the skin defect ranged from 3 cm to 12 cm, and there was an almost direct correlation between the age of the patient and the length of the defect (Figure 1). Patients were treated with surgical excision at the time of presentation or at one year of age for those patients who presented prior to their first birthday. The fibrous cord also became more prominent as the age of the patient increased, with the older patients having some restriction of neck extension. Postoperatively, one patient had a wound infection that was treated with local wound care and healed without sequelae. There were no recurrences. Table 1 provides a catalogue of the 195 cases found in the international literature. Author, date of publication, number of cases, and gender are included as well as pertinent notes. Some authors included cases which had been presented before, and in this situation only new patients appear in the table for that publication. Overall, there were 195 cases: 77 females, 58 males, and 61 cases for which no gender was given. Cases were restricted to the accepted definition of an MCC presentation (superior skin protuberance, midline skin defect, underlying fibrous cord, and inferior blind sinus without involvement of the mandible or sternum). Cases not having at least most of these elements were not included in the final tally. Discussion Every patient in this series presented with the classic findings which define midline cervical cleft: a usually erythematous, vertical, and atrophic skin defect in the midline of the neck which lacks adnexal elements, a subcutaneous fibrous cord which is often longer than the overlying skin defect, a superior skin tag, and an inferior blind sinus. This constellation of clinical findings may be found clearly described in Ombredanne's work in 1949 [1]. Initially in the English literature Bailey in 1925 and Gross in 1940 called this entity "thyroglossal fistula," but their pictures and descriptions are consistent with MCC [2,3]. By the time he published on this subject again in 1953, Gross' nomenclature had changed to the term "midline cervical cleft" [4]. Luschka published the first case of MCC in 1848 under the description of "Congenital Fistula of the Neck" (translated) [5]. The drawing in his report is exactly the same as a picture of one of today's patients (Figure 2).
In 1864 and 1865, there appeared three reports of neck fistulas by Heusinger, but one is more consistent with a bronchogenic cyst (barrel chest, cyanosis, and sinus tract that extends from the left anterior chest toward the lower neck) and the other two have fistula tracts/sinuses involving the lateral aspect of the neck [6]. That midline cervical clefts are part of a continuum of midline defects can be seen from two cases that were excluded from the tally: Barsky's case in 1938 in which the patient had a thick midline cord but no epithelial defect [7] and Szenes' case from 1922 in which the cleft in the neck extends inferiorly into the sternum [8]. Early reports indicated a significant preponderance of female patients [9][10][11][12][13], but the inclusion of the new cases presented here helps to narrow the gap between the sexes. In 61 of the cases from the world literature, the gender was not reported. This could have an effect on the gender distribution if most of these cases were found in either girls or boys. There is no apparent explanation for why girls would be affected more than boys if in fact this gender predilection is true. Most patients with MCC do not have a family history of congenital anomalies or other birth defects [11,14], and this was true for the patients in this series as well. At birth, the external layer of the cleft may consist of a weeping, red membrane which then heals to produce cicatricial skin as the patient grows. The fibrous cord, which usually extends down to the pretracheal fascia as it did in these patients, also becomes more prominent as the child grows. This is because the affected tissues lag behind in vertical growth compared with the surrounding normal neck tissue. Those patients in whom the cord is apparent even without neck extension have difficulty extending their necks. When the fibrous cord extends to the level of the mandible, a bony spur is often seen on the anterior, inferior surface of the bone secondary to the traction placed on the mandible by this tethering cord, which may be severe enough to produce an open bite deformity [11,19,91,102,111]. This case series clearly demonstrates an important finding which impacts the timing of intervention in these patients. There was almost a direct correlation between the patient's age and the length of the defect (Figure 1). Whereas some previous publications recommended early excision only in those patients in whom the fibrous cord was prominent and severe, producing inability to extend the neck or remodeling of the mandible [10,11], early excision is recommended to prevent an increased scar length as well as the problems associated with a tethering midline cord [106,111]. We treated patients surgically at the time of presentation for those over the age of one year and waited until age one for those who presented very early in life. It is also important to completely excise the lesion. Simply transecting the fibrous cord or performing incomplete excision of the cutaneous and subcutaneous elements leads to recurrence [9][10][11][74]. Closure of the surgical defect is performed with a simple vertical closure if the defect is not long and the surrounding skin is lax. The use of single or multiple W- or Z-plasties is recommended for longer defects to break up the scar and improve the cosmetic and functional results. This has long been proposed as the best way to deal with the vertical defect created by excision of the MCC and has become the usual way in which many patients are closed [40].
Early on, some patients with a long defect treated with a vertical closure developed neck contractures and an open bite deformity secondary to scarring after the surgery [11]. Conclusion This case series demonstrates two interesting points. First, there was a preponderance of male patients (8/10) in contrast to previous case series in which females have predominated. Second, since the length of the defect increased as the patient's age increased, early excision of the lesion to minimize scarring is recommended. The catalogue of cases from the world literature also provides an organized list that may be helpful for future research. Conflict of Interests The author declares that there is no conflict of interests regarding the publication of this paper.
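A note on the age-versus-length relationship reported above (Figure 1): the claim of an "almost direct correlation" is the kind of statement a Pearson correlation coefficient quantifies. The following minimal Python sketch shows the calculation; the (age, length) pairs are hypothetical placeholders chosen only to illustrate the computation, not the series' actual measurements.

from statistics import mean
from math import sqrt

# hypothetical (age in years, defect length in cm) pairs
pairs = [(0.2, 3.0), (1.0, 4.0), (3.0, 6.0), (6.0, 8.0), (12.0, 12.0)]

def pearson_r(xy):
    xs, ys = zip(*xy)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in xy)
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"Pearson r = {pearson_r(pairs):.3f}")  # a value near 1.0 indicates a near-direct correlation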
2018-04-03T01:34:46.700Z
2015-04-23T00:00:00.000
{ "year": 2015, "sha1": "ff46eadce5623cf2590a9a2e924c783c0a293234", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/ijpedi/2015/209418.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "51a9c3b52802ec4e7e6415007065c32a463f7986", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
51955898
pes2o/s2orc
v3-fos-license
A Recombinant Newcastle Disease Virus (NDV) Expressing S Protein of Infectious Bronchitis Virus (IBV) Protects Chickens against IBV and NDV Infectious bronchitis virus (IBV) causes a highly contagious respiratory, reproductive and urogenital tract disease in chickens worldwide, resulting in substantial economic losses for the poultry industry. Currently, live-attenuated IBV vaccines are used to control the disease. However, safety, attenuation and immunization outcomes of current vaccines are not guaranteed. Several studies indicate that attenuated IBV vaccine strains contribute to the emergence of variant viruses in the field due to mutations and recombination. Therefore, there is a need to develop a stable and safe IBV vaccine that will not create variant viruses. In this study, we generated recombinant Newcastle disease viruses (rNDVs) expressing the S1, S2 and S proteins of IBV using reverse genetics technology. Our results showed that the rNDV expressing the S protein of IBV provided better protection than the rNDV expressing S1 or S2 protein of IBV, indicating that the S protein is the best protective antigen of IBV. Immunization of 4-week-old SPF chickens with the rNDV expressing S protein elicited IBV-specific neutralizing antibodies and provided complete protection against virulent IBV and virulent NDV challenges. These results suggest that the rNDV expressing the S protein of IBV is a safe and effective bivalent vaccine candidate for both IBV and NDV. Results Generation of rNDVs expressing S1, S2 or S protein of IBV. The expression cassettes containing the codon optimized S1, S2, S and non-codon optimized S genes of IBV were cloned into the cDNA encoding the complete antigenome of NDV strain LaSota, using the PmeI site, between the P and M genes (Fig. 1). The correct sequences of the genes cloned into the full-length cDNA of NDV were confirmed by nucleotide sequence analysis. Infectious recombinant NDVs containing the S1, S2 and S genes of IBV were recovered from all cDNAs. The sequences of the S1, S2 and S genes present in the rNDVs were confirmed by RT-PCR. To evaluate the genetic stability of rNDV expressing codon optimized S protein, the viruses were passaged five times in 9-day-old embryonated specific pathogen free (SPF) chicken eggs. The nucleotide sequence analysis of the S gene showed that the inserted ORFs were maintained without any adventitious mutations. Evaluation of the expression of the S1, S2 and S proteins of IBV. The expression of the codon optimized S2 and S proteins and the non-codon optimized S protein of IBV strain Mass-41 by the rNDV constructs was detected by Western blot analysis in DF-1 cells using a chicken polyclonal anti-IBV serum (Fig. 2A, upper panel, and Fig. 2B). As the expression of non-codon optimized S was not detected clearly in the first attempt (Fig. 2A), it was detected in a second attempt (Fig. 2B). The expression level of the codon optimized S protein of IBV was significantly higher than that of the non-codon optimized S protein of IBV. For the codon optimized S protein of IBV expressed from rNDV (Fig. 2A), the two bands (~170-220 kDa) on top probably represent uncleaved S protein (S0) or polymeric forms of S protein. The ~95 kDa band represents the S2 or S1 subunit of cleaved S protein of IBV. In the case of the rNDV/IBV-S2 strain (Fig. 2A, lane 1), there are two bands (~170-220 kDa) on top, representing polymeric folded forms of S2 protein, a ~105 kDa band and a ~95 kDa band representing the S2 subunit.
The expression of S2 protein from a transcription cassette in which the signal peptide sequence of S protein was not fused with the S2 gene was not detected (data not shown). Lane 4 of Fig. 2A and lane 3 of Fig. 2B represent rNDV as a control. Lane 5 of panel A represents non-infected DF-1 cells. These results showed that the codon optimized S and S2 proteins of IBV were expressed efficiently. The non-codon optimized S protein was also expressed from rNDV, but not efficiently and not consistently. A monoclonal anti-NDV/HN antibody was used to detect the ~70 kDa HN protein of NDV in lysates, confirming a similar level of NDV protein in each lane (Fig. 2A, lower panel). We further evaluated incorporation of the IBV S and S2 proteins into NDV virions. The rNDVs expressing codon optimized S and S2 proteins and the rNDV expressing non-codon optimized S protein were inoculated into eggs; 3 days after inoculation, viral particles in infected allantoic fluid were partially purified and analyzed by Western blot (Fig. 2C, upper panel). Two bands (~170-220 kDa) on top, representing S protein, a ~95 kDa band and a ~60 kDa band representing the S2 or S1 subunit of cleaved S protein, were detected in purified particles of rNDV expressing codon optimized S protein by Western blot analysis (Fig. 2C, lane 2). Lane 4 of Fig. 2C shows two bands (~170-220 kDa) on top, representing polymeric folded forms of S2 protein, a ~105 kDa band and a ~95 kDa band representing the S2 subunit. Lane 1 of Fig. 2C represents the purified rNDV control and lane 3 of Fig. 2C shows purified rNDV expressing non-codon optimized S protein. These results suggested that the codon optimized S and S2 proteins of IBV expressed by rNDVs were incorporated into rNDV particles. A monoclonal anti-NDV/HN antibody was used to detect the ~70 kDa HN protein of NDV in partially purified virions, confirming a similar level of NDV protein in each lane (Fig. 2C, lower panel). The expression of codon optimized S1 protein expressed from four individual rNDV constructs was detected by Western blot analysis in lysates (Fig. 3A) and supernatant (Fig. 3B) of infected DF-1 cells, using a chicken polyclonal anti-IBV serum. Lanes 1-5 represent infected DF-1 cell lysates of rNDV, rNDV/S1, rNDV/S1 + IBV-S-TM&CT, rNDV/S1(cs−) + NDV-F-TM&CT and rNDV/S1(cs+) + NDV-F-TM&CT, respectively. A ~130 kDa band representing expression of S1 by rNDV/S1 + IBV-S-TM&CT, rNDV/S1(cs−) + NDV-F-TM&CT, and rNDV/S1(cs+) + NDV-F-TM&CT in lysate of DF-1 cells (Fig. 3A, lanes 3-5) and rNDV/S1 in infected DF-1 cell supernatant (Fig. 3B, lane 2) was observed. Our attempts to detect the incorporation of the S1 protein into the NDV envelope were not successful, due to the difficulties in the detection of very low levels of S1 protein by Western blot analysis (data not shown). Our results showed that the S1 protein was expressed at very low levels by all the rNDVs based on Western blot analysis. Only the unmodified S1 protein was detected in the cell culture supernatant. Growth characteristics of rNDV constructs. The recovered rNDVs were passaged in 9-day-old embryonated SPF chicken eggs. All the viruses were able to replicate well in eggs (≥2^8 HAU/ml). rNDV/S1, rNDV/S1(cs+) + NDV-F-TM&CT, rNDV/S2, rNDV/codon optimized-S and rNDV were evaluated in the presence of exogenous protease in DF-1 cells (Fig. 4). Compared to the parental virus, rNDV expressing codon optimized S protein of IBV grew slightly less efficiently.
The maximum titer of the parental virus reached 10^7.5 TCID50/ml at 40 hours post-infection, whereas the maximum titer of rNDV expressing the codon optimized S gene of IBV reached 10^7.2 TCID50/ml at 40 hours post-infection. These results indicated that the presence of the S, S1 and S2 genes did not significantly affect the growth characteristics of rNDV. The protective efficacy of rNDVs expressing S1, S2 or S protein of IBV in chickens against a virulent IBV challenge. IBV protection experiment 1. To evaluate the protective efficacy of rNDVs expressing S1, S2 or S protein of IBV, SPF chicks were immunized at 1 day of age with each virus via the oculonasal (ON) route. At three weeks post-immunization, chickens were challenged with virulent IBV strain Mass-41. The severity scores of IBV clinical signs were recorded twice a day for 10 days post-challenge (Fig. 5A). Compared to chickens immunized with parental rNDV and chickens inoculated with PBS, chickens immunized with rNDVs expressing codon optimized S, S1 or S2 protein of IBV showed significantly less severe clinical signs (P < 0.05). [Figure 1. Schematic diagram of recombinant NDV constructs containing IBV genes: seven transcription cassettes, comprising four versions of the codon optimized S1 subunit of IBV strain Mass-41 (the S1 subunit alone (1614 nt); S1 (1611 nt) fused with the transmembrane and cytoplasmic tail of the S gene (255 nt); S1 (1611 nt) with the five putative cleavage site residues of S fused with the transmembrane and cytoplasmic tail of the NDV F gene (171 nt), in which the five C-terminal cleavage site residues of S1 (RRFRR) plus the first serine of the F tail provide the six putative cleavage site residues RRFRR/S; and S1 (1593 nt) without cleavage site residues fused with the F tail (171 nt)), the codon optimized S2 subunit (1878 nt) fused with the C-terminus of the S signal peptide sequence (69 nt), the codon optimized S gene (3489 nt), and the non-codon optimized S gene (3489 nt), each flanked into individual plasmids containing the LaSota cDNA between the P and M genes using the PmeI site. Each transcription cassette contains the foreign gene ORF plus a PmeI restriction enzyme site sequence, 15 nt of NDV UTR, the GE signal of NDV, one T nucleotide as an intergenic sequence, the GS signal of NDV, nucleotides for maintaining the rule of six and a Kozak sequence.] Among groups of chickens immunized with rNDVs expressing codon optimized S1, S2 or S protein, the group immunized with rNDV expressing codon optimized S protein showed the least severity of clinical signs (P < 0.05). In order to evaluate the efficacy of rNDVs expressing S1, S2 or S protein of IBV in preventing shedding of virulent IBV challenge virus in immunized chickens, on day five post-challenge, tracheal swab samples were collected from chickens of each group and were evaluated for the viral load by RT-qPCR. Our results did not show a significant difference in virus shedding among groups of immunized chickens at day five post-challenge (Fig. 5B).
However, the results of the inoculation of the tracheal swab samples into 10-day-old embryonated chicken eggs showed that 14 out of 15 (93.3%) chickens vaccinated with rNDV expressing codon optimized S protein of IBV and 0 out of 5 (0%) of non-infected chickens were shedding virus in the trachea, respectively, whereas 15 out of 15 (100%) of chickens of all other groups were shedding virus in the trachea (data not shown). IBV protection experiment 2. To evaluate the protective efficacy of rNDV expressing codon optimized S protein of IBV in adult chickens, SPF chickens were immunized at 4 weeks of age. The protective efficacy of rNDV expressing the codon optimized S gene of IBV was determined by challenging the immunized chickens with the World Organization for Animal Health (OIE) recommended dose (10^3.1 EID50) of virulent IBV strain Mass-41 at 3 weeks post-immunization [1]. The severity scores of IBV clinical signs were recorded twice a day for 10 days post-challenge (Fig. 6A). Compared to chickens inoculated with PBS, chickens immunized with rNDV expressing codon optimized S protein of IBV and chickens immunized with a commercial live attenuated IBV vaccine showed significantly less severe clinical signs (P < 0.05). In order to evaluate the efficacy of rNDV expressing S protein of IBV in preventing shedding of virulent IBV in immunized chickens, at day 5 following challenge with virulent IBV, tracheal swab samples were evaluated for viral load; chickens immunized with rNDV expressing codon optimized S protein of IBV and chickens immunized with the commercial IBV vaccine showed low levels of viral load in the trachea, whereas chickens inoculated with PBS showed high levels of viral load in the trachea (P < 0.05). [Figure 3. Western blot analysis of rNDV expressing S1 protein of IBV: codon optimized S1 from four individual rNDV expression cassettes detected in cell lysates (A) and cell supernatant (B) of infected DF-1 cells using a chicken polyclonal anti-IBV serum; lanes 1-5, rNDV, rNDV/S1, rNDV/S1 + IBV-S-TM&CT, rNDV/S1(cs−) + NDV-F-TM&CT and rNDV/S1(cs+) + NDV-F-TM&CT; a ~130 kDa band marks S1 expression (A, lanes 3-5; B, lane 2); the full-length gel is presented in Supplementary Figure S1.] [Figure legend: (A) severity scores of IBV clinical signs (ocular discharge, nasal discharge and difficulty in breathing; 0 = normal; 1 = mild ocular discharge, mild nasal discharge and/or sneezing; 2 = heavy ocular discharge and/or heavy nasal discharge with mild tracheal rales, mouth breathing and/or coughing; 3 = heavy ocular discharge and heavy nasal discharge with severe tracheal rales, mouth breathing, gasping, dyspnea and/or severe respiratory distress), recorded twice a day for each chicken for 10 days after challenge and reported as the average score per chicken; (B) relative viral load determined by RT-qPCR in tracheal swab samples at day five following virulent IBV challenge, expressed as mean reciprocal ± SEM log10.]
However, compared to chickens immunized with a commercial IBV vaccine, chickens immunized with rNDV expressing codon optimized S showed slightly less viral load in the trachea (Fig. 6B). IBV protection experiment 3. To evaluate the protective efficacy of rNDV expressing codon optimized S protein of IBV in adult chickens against a higher dose of virulent IBV challenge, SPF chickens were immunized at 4 weeks of age. The protective efficacy of rNDV expressing the codon optimized S gene of IBV was determined by challenging the immunized chickens with 10^4.7 EID50 of virulent IBV strain Mass-41 at 3 weeks post-immunization. The severity scores of IBV clinical signs were recorded twice a day for 8 days post-challenge (Fig. 7A). Compared to chickens immunized with rNDV and chickens inoculated with PBS, chickens immunized with rNDV expressing codon optimized S protein of IBV and chickens immunized with a commercial live attenuated IBV vaccine showed significantly less severe clinical signs (P < 0.05). In order to evaluate the efficacy of rNDV expressing S protein of IBV in preventing shedding of virulent IBV in immunized chickens, at day 4 following challenge with virulent IBV, the tracheal swab samples collected from five chickens of each group were analyzed for IBV-specific lesions in chicken embryos. Our results showed that 2 out of 5 (40%) chickens vaccinated with rNDV expressing codon optimized S protein of IBV and 1 out of 5 (20%) chickens vaccinated with a commercial IBV vaccine were shedding virus in the trachea, respectively, whereas 5 out of 5 (100%) of chickens immunized with parental rNDV and 5 out of 5 (100%) of chickens inoculated with PBS were shedding virus in the trachea (Fig. 7C). The tracheal swab samples collected from five chickens of each group were also analyzed for viral load by RT-qPCR. Our results showed that chickens vaccinated with rNDV expressing codon optimized S protein of IBV showed low levels of viral load in the trachea and chickens vaccinated with a commercial IBV vaccine showed very low levels of viral load in the trachea, whereas chickens inoculated with PBS and rNDV showed high levels of viral load in the trachea. Compared to chickens immunized with rNDV expressing codon optimized S protein, chickens immunized with a commercial IBV vaccine showed less viral load in the trachea (P < 0.05) (Fig. 7B). IBV protection experiment 4. To evaluate the effect of the route of inoculation of the virulent IBV challenge virus on the outcome of the protective efficacy of rNDV expressing codon optimized S protein of IBV, SPF chicks were immunized at 1 day of age. The protective efficacy of rNDV expressing the codon optimized S gene of IBV was determined by challenging the immunized chickens with 10^4 EID50 of virulent IBV strain Mass-41 by the intraocular route at 3 weeks post-immunization. This route of challenge has been specified in USDA-CFR-9 for IBV [33]. The severity scores of IBV clinical signs were recorded twice a day for 10 days post-challenge. Compared to chickens immunized with rNDV and unvaccinated chickens, chickens immunized with rNDV expressing codon optimized S protein of IBV and chickens immunized with a commercial live attenuated IBV vaccine showed significantly less severe clinical signs. However, compared to chickens immunized with the commercial IBV vaccine, chickens immunized with rNDV expressing S protein showed less severe clinical signs (P < 0.05) (Fig. 8A).
In order to evaluate the efficacy of rNDV expressing S protein of IBV in preventing shedding of virulent IBV in immunized chickens, at day 5 following challenge with virulent IBV, the tracheal swab samples collected from all chickens of each group were analyzed for IBV-specific lesions in chicken embryos. Our results showed that 2 out of 10 (20%) chickens vaccinated with rNDV expressing codon optimized S protein of IBV and 5 out of 10 (50%) chickens vaccinated with a commercial IBV vaccine were shedding virus in the trachea, respectively, whereas 10 out of 10 (100%) of chickens immunized with parental rNDV and 5 out of 5 (100%) of unvaccinated chickens, infected with IBV, were shedding virus in the trachea (Fig. 8B). The protective efficacy of rNDVs against a highly virulent NDV challenge. To evaluate the protective efficacy of rNDV expressing the S gene of IBV against a virulent NDV strain, groups of five 1-day-old chicks were inoculated with rNDV, rNDV expressing codon optimized S protein, or PBS. Three weeks after immunization, chickens were challenged with virulent NDV strain Texas GB in our BSL-3 plus facility. Our results showed that all chickens immunized with the rNDV and the rNDV expressing the codon optimized S gene of IBV survived after highly virulent NDV challenge, while all chickens in the PBS group died at days 5 and 6 post-challenge (Fig. 9A). Antibodies produced against IBV and NDV. A hemagglutination inhibition (HI) assay using a standard OIE protocol was used to assess the level of antibodies mounted against NDV in serum samples of chickens 21 days after immunization. The results showed that NDV HI titers were detected in serum samples of all chickens immunized with rNDV and rNDV expressing codon optimized S protein. There were no significant differences observed among HI titers against NDV in serum samples of chickens from groups immunized with rNDV and rNDV expressing S protein (Fig. 9B). A virus neutralization assay was performed according to a standard OIE protocol to assess the level of neutralizing antibodies mounted against IBV strain Mass-41 in serum samples of chickens at 21 days after immunization. The results showed that neutralizing antibodies against IBV were detected in serum samples of chickens immunized with rNDV expressing codon optimized S protein of IBV and with the commercial live attenuated IBV vaccine (Fig. 7D). Neutralizing antibodies against IBV were not detected in a 1:8 dilution of a serum sample from a chicken immunized with the empty rNDV vector. This result showed that the rNDV expressing codon optimized S protein of IBV induces neutralizing antibodies against IBV. Discussion This study was conducted to compare the protective efficacies of the S1, S2, and S proteins of IBV using rNDV as a vaccine vector. The S1, S2, and S genes of IBV strain Mass-41 were individually inserted between the P and M genes of rNDV strain LaSota. This site was chosen because it has been identified as the optimal site for insertion of foreign genes into the NDV genome [26,34-37]. Four different versions of the IBV S1 gene were used to identify the version that is expressed at the highest level and incorporated into NDV particles. We were able to recover all the recombinant viruses, and their growth characteristics were similar to rLaSota. However, the recombinant viruses containing the IBV S gene grew slightly more slowly than the parental virus. The viruses were stable after passages in SPF chicken embryos.
Western blot analysis showed that chicken codon optimized S2 and S proteins were expressed at much higher levels and were incorporated into NDV particles, whereas all four versions of the S1 protein were detected at very low levels. It is noteworthy that the unmodified S1 protein was detected in the infected cell culture supernatant, indicating that the modification of the S1 protein probably caused retention of the protein in the cell. These results suggest that the S2 protein acts as a chaperone to assist in the folding of the S1 protein. The S1 protein is folded incorrectly in the absence of S2 protein, and the new structure probably causes loss of some conformational epitopes for IBV antibodies. In the first IBV protection experiment, we found that immunization of 1-day-old chicks with rNDV expressing the S protein of IBV conferred better protection from disease than immunization with rNDVs expressing either S1 or S2 protein of IBV. Our results showed that the S protein, which contains both S1 and S2 proteins, is the best protective antigen of IBV. The S2 protein lacks the major neutralizing epitopes which are present in the S1 protein, hence it is not an effective antigen. The S1 protein contains major neutralizing epitopes, but it loses some conformational epitopes when expressed separately. Although in the first study we showed that rNDV expressing S protein provided enhanced protection, it could not reduce virus shedding, indicating that elimination of virus shedding would probably require either a much higher level of immune response than that induced by rNDV expressing S protein or optimization of the IBV protection study. In this study, our results support previous reports that rNDV vectored IBV vaccines prevent disease but do not stop virus shedding [23,24]. These results also support the recent report that a spike ectodomain subunit vaccine protects chickens against IBV [38]. In the second IBV protection experiment, we investigated whether age at immunization influences the outcome of IBV challenge. Our results showed that a single immunization of 4-week-old chickens with rNDV expressing S protein completely protected chickens against IBV challenge based on disease and viral load in tracheas. Indeed, the level of protection conferred by rNDV expressing S protein was similar to that of a commercial IBV vaccine. However, chickens immunized with either the commercial live attenuated IBV vaccine or rNDV expressing IBV S protein showed very low levels of tracheal viral load. This showed that protection was greater when the chickens were immunized at an age when their immune system is relatively well developed. In the third IBV protection experiment, we showed that rNDV expressing IBV S protein protects adult chickens against a higher dose of virulent IBV challenge. However, compared to the standard challenge dose of virulent IBV, a higher challenge dose of virulent IBV caused higher levels of tracheal viral load in adult chickens immunized with rNDV expressing IBV S protein and low levels of tracheal viral load in chickens immunized with the commercial live attenuated IBV vaccine. Our results showed that although both the age of immunization and the dose of challenge virus affect the results of IBV challenge, the influence of the age of immunization is greater than the effect of the dose of challenge virus.
Our results also showed that when we challenged adult immunized chickens with the standard challenge dose of virulent IBV, rNDV expressing S protein showed slightly better protection than a commercial live IBV vaccine, based on disease and viral shedding in the trachea, but when the adult immunized chickens were challenged with a higher dose of virulent IBV, the commercial live IBV vaccine showed slightly better protection than rNDV expressing S protein. Hence, to compare the efficacy of rNDV expressing S protein of IBV with the efficacy of the live attenuated IBV vaccine, a large IBV protection study using commercial chickens is needed. In the fourth IBV protection experiment, we showed that rNDV expressing IBV S protein protected young chickens against virulent IBV challenge by the intraocular route. This route of challenge has been recommended by USDA-CFR-9. Our results showed that compared to infection of chickens with virulent IBV by the oculonasal route, infection of chickens with virulent IBV by the intraocular route caused much lower levels of tracheal viral load in young chickens immunized with rNDV expressing IBV S protein and low levels of tracheal viral load in chickens immunized with the commercial live attenuated IBV vaccine. Our results showed that the route of challenge virus inoculation affected the results of tracheal virus shedding in young chickens immunized with rNDV expressing S protein following IBV challenge; however, it did not affect the severity of clinical signs. Although our studies showed that the rNDV expressing the S protein and the commercial live IBV vaccine provided comparable protection, rNDV expressing the S protein has several advantages over live IBV vaccines in controlling IB in the field: (i) the NDV vectored IBV vaccine is highly safe in 1-day-old chicks, (ii) it will not create new vaccine-derived variant viruses, which is a major concern in using live modified IBV vaccines, (iii) a single vaccine can be used to control both NDV and IBV, (iv) we believe that the level of immunity induced by the NDV vectored vaccine against IBV is probably sufficient to completely stop IBV infection in field conditions, and (v) the immune response of the NDV vectored vaccine can be enhanced by a prime-boost vaccination strategy. In summary, we have shown that although the S1 and S2 proteins of IBV are known to contain virus neutralizing epitopes, the presence of the whole S protein is necessary for eliciting a strong protective immune response. The S protein is the antigen of choice for any vectored IBV vaccine. NDV is an attractive vaccine vector for IBV, because it can be used as a bivalent vaccine. Our results suggest that a recombinant NDV vectored IBV vaccine is the vaccine of choice for controlling IBV infection in the field. Cells and viruses. Chicken embryo fibroblast (DF-1) cells and human epidermoid carcinoma (HEp-2) cells were obtained from the American Type Culture Collection (ATCC, Manassas, VA). They were grown in Dulbecco's minimal essential medium (DMEM) containing 10% fetal bovine serum (FBS). The recombinant avirulent NDV strain LaSota was generated previously in our laboratory using reverse genetics [39]. The rNDV and rNDVs expressing chicken codon optimized S1, S2 and S genes and the non-codon optimized S gene of IBV strain Mass-41 were grown in 9-day-old embryonated SPF chicken eggs at 37 °C. The virulent IBV strain Mass-41 was propagated in 10-day-old SPF embryonated chicken eggs and harvested five days after infection.
The titer of virus in harvested allantoic fluid was determined by the 50% embryo infectious dose (EID50) method. Briefly, ten-fold serial dilutions of IBV strain Mass-41 were inoculated into 10-day-old embryonated SPF chicken eggs. Seven days after inoculation, infected embryos were examined for IBV-specific lesions such as stunting or curling. The titer of virus was calculated using the Reed and Muench method [40]. The modified vaccinia virus strain Ankara expressing T7 RNA polymerase (MVA-T7) was propagated in monolayer primary chicken embryo fibroblast cells. Generation of rNDVs containing S1, S2 or S gene of IBV. A plasmid containing the full-length antigenomic cDNA of NDV strain LaSota has been constructed previously [39]. In order to develop an effective IBV vaccine, the maximum number of neutralizing epitopes needs to be displayed in the correct conformation. Most neutralizing epitopes are located in the S protein. In this study, seven transcription cassettes containing the S, S1 or S2 genes of IBV were constructed to identify the best protective antigen for the development of NDV vectored IBV vaccines. The S, S1 and S2 genes were chicken codon optimized for a higher level of expression in chickens. The following transcription cassettes were designed: (i) a transcription cassette containing the S gene of IBV strain Mass-41 (3489 nt) was designed to determine whether the expression of the whole S gene from NDV would lead to display of the maximum neutralizing epitopes in the correct conformation, (ii) a transcription cassette containing the S2 subunit of the S gene (1878 nt) of IBV fused with the C-terminus of the signal peptide sequence of the S gene (69 nt) was constructed for transport of the protein from the cell, (iii) a transcription cassette containing the S1 subunit of the S gene (1614 nt) was designed to determine the protective efficacy of the S1 protein, (iv) a transcription cassette containing the S1 gene (1611 nt) fused with the N-terminus of the transmembrane and cytoplasmic tail of the S gene (255 nt) was designed for incorporation into the NDV envelope, (v) a transcription cassette containing the S1 subunit of the S gene without the S1 protein cleavage site residues (1593 nt) fused with the N-terminus of the transmembrane and cytoplasmic tail of the NDV F gene (171 nt) was designed for incorporation of the S1 protein into the envelope of NDV, (vi) a transcription cassette containing the S1 subunit of the S gene containing the S1 protein cleavage site residues (1611 nt) fused with the N-terminus of the transmembrane and cytoplasmic tail of the NDV F gene (171 nt) was designed to incorporate the S1 protein into the NDV envelope and also to know whether adding the cleavage site residues has any effect on the fusion of the two proteins, and (vii) a transcription cassette containing the non-codon optimized S gene (3489 nt) was constructed to compare the level of protein expression between the codon optimized and non-codon optimized S genes. The NDV genome contains six genes: nucleocapsid (N), phosphoprotein (P), matrix (M), fusion (F), hemagglutinin-neuraminidase (HN) and large (L). The genes are ordered 3′-N-P-M-F-HN-L-5′. The beginning and the end of each gene contain conserved transcriptional sequences known as the gene-start (GS) and gene-end (GE), respectively. Between the genes, there are gene junctions [26]. Any of the gene junctions is a potential insertion site for the transcription cassette of a foreign gene.
However, we and others have found that the intergenic region between the P and M genes is a good site for expression of most foreign genes [26,34-37]. The transcription cassettes containing IBV genes contained a PmeI restriction enzyme sequence, 15 nt of untranslated region (UTR) of NDV, the NDV GE signal, one T nucleotide as an intergenic sequence, the NDV GS signal, extra nucleotides to maintain the rule of six [26,41], a Kozak sequence upstream of the foreign gene ORFs and a PmeI restriction enzyme sequence downstream of the foreign gene ORF. The transcription cassettes of the codon optimized and non-codon optimized S gene were digested from two commercially synthesized (GenScript; pUC57-IBV-Mass-41-S syn) plasmids containing the codon optimized (GenScript; optimization on Gallus gallus codons using the OptimumGene PSO algorithm) and non-codon optimized S gene of IBV strain Mass-41 (GenBank accession no. AY851295.1), respectively. The transcription cassettes of the codon optimized S1 and S2 genes were amplified from the commercially synthesized plasmid containing the codon optimized S gene of IBV strain Mass-41 and cloned into individual shuttle vectors (pGEM-T Easy Vector, Promega Corporation). The transcription cassettes were then digested out of the shuttle vectors. The transcription cassettes derived from the shuttle vectors were cloned into complete individual plasmids containing the cDNA of rLaSota at the P and M gene junction using the PmeI site (Fig. 1). The correct sequences of the foreign genes were confirmed by nucleotide sequence analysis. rNDVs containing the IBV genes were recovered by reverse genetics as described previously [39]. Briefly, each full-length cDNA was co-transfected with three expression plasmids containing the N, P or L gene of NDV strain LaSota into MVA-T7-infected HEp-2 cells. Three days post-transfection, 200 µl of supernatant of transfected cells was inoculated into 9-11-day-old SPF embryonated chicken eggs. After three days, a hemagglutination test was used to detect infected allantoic fluids collected from eggs. rNDVs containing the S1 gene, the S1 gene fused with the transmembrane and cytoplasmic tail of the IBV S gene, the S1 gene containing the cleavage site residues of the S gene of IBV fused with the transmembrane and cytoplasmic tail of the NDV F gene, the S1 gene without the cleavage site residues of the S gene fused with the transmembrane and cytoplasmic tail of the NDV F gene, the S2 gene, the codon optimized S gene and the non-codon optimized S gene were named rNDV/S1, rNDV/S1 + IBV-S-TM&CT, rNDV/S1(cs+) + NDV-F-TM&CT, rNDV/S1(cs−) + NDV-F-TM&CT, rNDV/S2, rNDV/codon optimized-S and rNDV/non-codon optimized-S, respectively. The IBV genes were amplified from the rNDV constructs by RT-PCR. To determine the incorporation of IBV proteins into the NDV envelope, rNDV, rNDV/S2, rNDV/codon optimized-S and rNDV/non-codon optimized-S were inoculated into 9-day-old embryonated SPF chicken eggs. Three days after incubation, recombinant viral particles from infected allantoic fluids were partially purified by sucrose density gradient centrifugation and analyzed by Western blot analysis. A monoclonal anti-NDV/HN antibody was also used to detect the HN protein of NDV in lysates and purified virions by an additional Western blot analysis.
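As context for the "rule of six" mentioned above: NDV, like other paramyxoviruses, replicates efficiently only when its total genome length is a multiple of six, so each cassette is padded with extra nucleotides to keep the inserted sequence a multiple of six (assuming the parental LaSota cDNA already satisfies the rule). The minimal Python sketch below shows this bookkeeping; the GE/GS and Kozak lengths are illustrative assumptions, while the 8-nt PmeI sites, the 15-nt UTR, the single T nucleotide and the 3489-nt S ORF come from the text.

def pad_to_rule_of_six(insert_len: int) -> int:
    # extra nucleotides needed so the insert adds a multiple of six bases
    return (-insert_len) % 6

elements = {
    "PmeI_sites": 2 * 8,   # two GTTTAAAC sites flanking the cassette
    "NDV_UTR": 15,         # 15 nt of NDV UTR, per the text
    "GE_signal": 12,       # illustrative length (assumption)
    "intergenic_T": 1,     # one T nucleotide, per the text
    "GS_signal": 10,       # illustrative length (assumption)
    "Kozak": 6,            # illustrative length (assumption)
    "S_ORF": 3489,         # codon optimized S gene, per the text
}
total = sum(elements.values())
extra = pad_to_rule_of_six(total)
print(total, extra, (total + extra) % 6)  # the padded insert length is divisible by six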
Growth characteristics of rNDV constructs. In order to determine the growth kinetics of rNDVs expressing S1, S2 or S protein of IBV, confluent monolayers of DF-1 cells in 6-well tissue culture plates were infected at an MOI of 0.1 with rNDV, rNDV/S1(cs+) + NDV-F-TM&CT, rNDV/S2 and rNDV/codon optimized-S and adsorbed for 90 minutes at 37 °C. After adsorption, cells were washed with PBS, then incubated with DMEM containing 2% FBS and 10% fresh SPF chicken egg allantoic fluid at 37 °C in the presence of 5% CO2. Aliquots of 200 µl of supernatant from infected cells were collected and replaced with fresh DMEM including FBS at intervals of 8 hours until 64 hours post-infection. The titer of virus in the harvested samples was determined by the TCID50 method in DF-1 cells in 96-well tissue culture plates. The protective efficacy of rNDVs expressing S1, S2 and S protein of IBV against virulent IBV challenge. Based on the level of expression of the S1, S2 and S proteins of IBV from rNDVs, the rNDV/S1(cs+) + NDV-F-TM&CT, rNDV/S2, and rNDV/codon optimized-S viruses were selected for animal studies to evaluate their protective efficacy against virulent IBV challenge. IBV protection experiment 1. In this study, the protective efficacy of rNDVs expressing S1, S2 or S protein of IBV strain Mass-41 was evaluated in 1-day-old SPF chicks. Briefly, a total of eighty 1-day-old chicks were divided into five groups of fifteen each and one group of five. Chicks of the first four groups were inoculated with 10^7 EID50 of the rNDV, rNDV/S1(cs+) + NDV-F-TM&CT, rNDV/S2 and rNDV/codon optimized-S strains via the oculonasal route. The fifteen chicks of group five and the five chicks of group six were inoculated with PBS. Three weeks after immunization, all immunized chickens were challenged with 10^3.1 EID50 of virulent IBV strain Mass-41. This challenge virus dose was determined by an experimental chicken infection study. The severity scores of clinical signs of IBV, including nasal discharge, ocular discharge and difficulty in breathing (0 = normal; 1 = mild ocular discharge, mild nasal discharge and/or sneezing; 2 = heavy ocular discharge and/or heavy nasal discharge with mild tracheal rales, mouth breathing and/or coughing; 3 = heavy ocular discharge and heavy nasal discharge with severe tracheal rales, mouth breathing, gasping, dyspnea and/or severe respiratory distress), were recorded twice a day for 10 days post-challenge. In order to evaluate the protective efficacy of rNDVs expressing the S1, S2 and S genes of IBV in preventing shedding of virulent IBV in immunized chickens, at day five post-challenge, tracheal swab samples were collected from fifteen birds of each group and placed in 1.5 ml serum-free DMEM with 10X antibiotics. The swab samples were analyzed for quantification of viral RNA using an IBV-N gene-specific RT-qPCR. IBV protection experiment 2. In this study, the protective efficacy of rNDV expressing codon optimized S protein of IBV was evaluated in 4-week-old SPF chickens against the OIE recommended dose of virulent IBV challenge [1]. A total of twenty 4-week-old SPF chickens were divided into four groups of five each. Five chickens of groups one and two were inoculated with 10^7 EID50 of rNDV and rNDV/codon optimized-S, respectively, via the oculonasal route. Five chickens of group three were inoculated with 10 recommended doses of a commercial live attenuated Mass-type IBV vaccine via the oculonasal route and chickens of group four were inoculated with PBS.
Three weeks after immunization, chickens of all groups were challenged with 10^3.1 EID50 of virulent IBV strain Mass-41 by the oculonasal route. The severity scores of clinical signs of IBV, described in IBV protection experiment 1, were recorded for 10 days post-challenge. In order to evaluate the efficacy of rNDV expressing S protein of IBV in preventing shedding of virulent IBV in immunized chickens, at day 5 post-challenge, tracheal swab samples were collected from the twenty chickens and placed in 1.5 ml serum-free DMEM with 10X antibiotics. The swab samples were analyzed for quantification of viral RNA using an IBV-N gene-specific RT-qPCR. IBV protection experiment 3. In this study, the protective efficacy of rNDV expressing codon optimized S protein of IBV was evaluated in 4-week-old SPF chickens against a higher dose of virulent IBV challenge. A total of thirty-two 4-week-old SPF chickens were divided into four groups of eight each. Eight chickens of groups one and two were inoculated with 10^7 EID50 of rNDV and rNDV/codon optimized-S, respectively, via the oculonasal route. Eight chickens of group three were inoculated with 10 recommended doses of a commercial live attenuated Mass-type IBV vaccine via the oculonasal route and chickens of group four were inoculated with PBS. Three weeks after immunization, chickens of all groups were challenged with 10^4.7 EID50 of virulent IBV strain Mass-41 by the oculonasal route. The severity scores of clinical signs of IBV, described in IBV protection experiment 1, were recorded for 8 days post-challenge. At day 4 post-challenge, three chickens from each group were euthanized for tracheal ciliostasis analysis (data not shown). In order to evaluate the efficacy of rNDV expressing S protein of IBV in preventing shedding of virulent IBV in immunized chickens, tracheal swab samples were collected from five chickens from each group and placed in 1.5 ml serum-free DMEM with 10X antibiotics. Each fluid was tested for IBV-specific lesions in chicken embryos by inoculating 0.1 ml into one 10-day-old embryonated SPF chicken egg. The swab samples were also analyzed for quantification of viral RNA using an IBV-N gene-specific RT-qPCR. The swab samples collected from two non-vaccinated SPF chickens involved in another IBV protection study were also used as controls. IBV protection experiment 4. In this study, the protective efficacy of rNDV expressing codon optimized S protein of IBV was evaluated in 1-day-old SPF chicks against virulent IBV challenge administered by the intraocular route. The intraocular route was used because this route of IBV challenge has been specified by USDA-CFR-9 [33]. A total of forty-five 1-day-old SPF chickens were divided into three groups of ten each and three groups of five each. Ten chickens of groups one and two were inoculated with 10^7 EID50 of rNDV and rNDV/codon optimized-S, respectively, via the oculonasal route. Ten chickens of group three were inoculated with one recommended dose of a commercial live attenuated Mass-type IBV vaccine via the oculonasal route and chickens of groups four to six were left non-vaccinated. Three weeks after immunization, chickens of groups one to four were challenged with 10^4 EID50 of virulent IBV strain Mass-41 by the intraocular route, chickens of group five were challenged with 10^4 EID50 of virulent IBV strain Mass-41 by the oculonasal route, and chickens of group six were left non-infected.
The severity scores of clinical signs of IBV, described in IBV protection experiment 1, were recorded for 10 days post-challenge. In order to evaluate the efficacy of rNDV expressing S protein of IBV in preventing shedding of virulent IBV in immunized chickens, at day 5 post-challenge, tracheal swab samples were collected from all chickens of each group and placed in 3 ml serum-free DMEM with 10X antibiotics. Each fluid was tested for IBV-specific lesions in chicken embryos by inoculating 0.2 ml into each of five 10-day-old embryonated SPF chicken eggs. The sample was considered positive for virus shedding if any of the five embryos showed IBV lesions. We performed all experiments involving virulent IBV in our USDA approved Biosafety level-2 and Biosafety level-2 plus facilities following the guidelines and approval of the Animal Care and Use Committee (IACUC), University of Maryland. The protective efficacy of rNDV expressing S protein of IBV against virulent NDV challenge. The protective efficacy of rNDV expressing S protein of IBV strain Mass-41 was evaluated against a virulent NDV strain GB Texas challenge in our biosafety level 3 (BSL-3) plus facility. Briefly, a total of fifteen 1-day-old chicks were divided into three groups of five each. Chicks of two groups were inoculated with 10^7 EID50 of rNDV and rNDV/IBV-codon optimized-S via the oculonasal route. The five chickens of group three were inoculated with PBS. Three weeks after immunization, blood samples of all birds were collected for NDV antibody response analysis, and the birds were challenged with one hundred 50% chicken lethal doses (CLD50) of the highly virulent NDV strain GB Texas via the oculonasal route. The chickens were observed daily for 10 days after challenge for mortality with clinical signs of disease (neurological signs included torticollis, paralysis, and prostration). We performed the experiment involving virulent NDV in our USDA approved Biosafety level-3 plus facility following the guidelines and approval of the Animal Care and Use Committee (IACUC), University of Maryland. Serological analysis. The levels of antibodies induced against NDV and IBV were evaluated. The serum samples were collected three weeks post-immunization. A hemagglutination inhibition (HI) assay using a standard OIE protocol was used to assess the antibody titers mounted against NDV in chickens immunized with rNDVs [27]. A virus neutralization assay according to the OIE was used to measure the level of neutralizing antibodies mounted against IBV [1]. Briefly, serum samples of three birds from the group immunized with rNDV expressing codon optimized S protein of IBV and serum samples of three birds from the group immunized with the commercial IBV vaccine were incubated at 56 °C for 30 minutes. One hundred EID50 of IBV strain Mass-41 was mixed with 2-fold dilutions of antiserum and incubated for 1 hour at 37 °C. One hundred µl of each serum and virus mixture was inoculated into three 10-day-old embryonated SPF chicken eggs. To confirm that at least 100 EID50 of virus was inoculated into each egg, three eggs were inoculated with 100 µl of PBS containing 100 EID50 of IBV. Three eggs were inoculated with 100 µl of PBS as a negative control. Three eggs were inoculated with a mixture of 100 EID50 of IBV and a 1:8 dilution of a randomly selected serum sample collected from a bird immunized with rNDV strain LaSota as a vector control.
The eggs were incubated at 37 °C and were observed daily for dead chicken embryos for 7 days post-inoculation. The serum titers were calculated according to the method of Reed and Muench [40], based on mortality and IBV-specific lesions in chicken embryos. Quantitative reverse transcription-polymerase chain reaction (RT-qPCR). RNA was extracted using TRIzol Reagent (Invitrogen) from tracheal swab samples collected from chickens. The first-strand cDNA was synthesized using Thermo Scientific RevertAid Reverse Transcriptase (RT). SYBR green RT-qPCR was performed using a specific primer pair set: (a) N gene -296 forward primer, 5′-GACCAGCCGCTAACCTGAAT-3′, and (b) N gene -445 reverse primer, 5′-GTCCTCCGTCTGAAAACCGT-3′, amplifying 150 nt of the N gene of IBV strain Mass-41. PCRs were performed using a Bio-Rad CFX96 Cycler. Each 20 µl reaction was carried out using 5 µl of cDNA, 10 µl of iTaq Universal SYBR Green Supermix (Bio-Rad), 2 µl of forward and reverse primers and 3 µl of nuclease-free water. Forty cycles of PCR at 95 °C for 10 s (denaturation), 58 °C for 20 s (annealing), and 72 °C for 30 s (elongation) were performed, followed by melting curve analysis consisting of 95 °C for 5 s and 65 °C for 60 s. A 10-fold serial dilution of cDNA synthesized from extracted RNA of an allantoic fluid stock of virulent IBV strain Mass-41 with 10^7.5 EID50/ml was used to establish the standard curve. The cDNA synthesized from extracted RNA of the allantoic fluid stock of virulent IBV strain Mass-41 and the cDNA synthesized from extracted RNA of swab sample solution served as positive and negative controls, respectively. Melting point analysis was used to confirm the specificity of the test. Statistical analysis. Data were analyzed among groups by one-way ANOVA. Student's t-test was used to compare two groups. To avoid bias, all animal experiments were designed as blinded studies.
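The Reed and Muench method cited repeatedly above (for EID50 titration and the neutralization titers) interpolates between the two dilutions that bracket 50% infection, using cumulative totals. The following minimal Python sketch implements that calculation; the infected/total counts per ten-fold dilution are hypothetical, not data from this study, and the interpolation assumes one log10 between the bracketing dilutions.

# (log10 dilution, infected embryos, total embryos), least dilute first
results = [(-4, 5, 5), (-5, 4, 5), (-6, 2, 5), (-7, 0, 5)]

def reed_muench(results):
    n = len(results)
    cum_inf = [0] * n    # infected accumulate from most dilute toward least dilute
    cum_uninf = [0] * n  # uninfected accumulate from least dilute toward most dilute
    running = 0
    for i in range(n - 1, -1, -1):
        running += results[i][1]
        cum_inf[i] = running
    running = 0
    for i in range(n):
        running += results[i][2] - results[i][1]
        cum_uninf[i] = running
    pct = [100.0 * a / (a + b) for a, b in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50 > pct[i + 1]:
            pd = (pct[i] - 50) / (pct[i] - pct[i + 1])  # proportionate distance
            return results[i][0] - pd                   # log10 of the endpoint dilution
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

log_ep = reed_muench(results)
print(f"endpoint dilution 10^{log_ep:.2f}; titer 10^{-log_ep:.2f} EID50 per inoculum volume")

For the example counts this returns an endpoint near 10^-5.7, i.e., a titer of about 10^5.7 EID50 per inoculated volume.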
2018-08-11T13:27:14.359Z
2018-08-10T00:00:00.000
{ "year": 2018, "sha1": "8742b08bdad42981418ffb838b1570fef124bc62", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-30356-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1579fbff7af9b156c6f49fee0526e48f852ea460", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology", "Medicine" ] }
34705786
pes2o/s2orc
v3-fos-license
Coassociation of Rap1A and Ha-Ras with Raf-1 N-terminal region interferes with Ras-dependent activation of Raf-1. Raf-1 is a major downstream effector of mammalian Ras. Binding of the effector domain of Ras to the Ras-binding domain of Raf-1 is essential for Ras-dependent Raf-1 activation. However, Rap1A, which has an identical effector domain to that of Ras, cannot activate Raf-1 and even antagonizes several Ras functions in vivo. Recently, we identified the cysteine-rich region (CRR) of Raf-1 as another Ras-binding domain. Ha-Ras proteins carrying mutations N26G and V45E, which failed to bind to CRR, also failed to activate Raf-1. Since these mutations replace Ras residues with those of Rap1A, we examined if Rap1A lacks the ability to bind to CRR. Contrary to the expectation, Rap1A exhibited a greatly enhanced binding to CRR compared with Ha-Ras. Enhanced CRR binding was also found with Ha-Ras carrying another Rap1A-type mutation, E31K. Both Rap1A and the Ha-Ras(E31K) mutant failed to activate Raf-1 and interfered with Ha-Ras-dependent activation of Raf-1 in Sf9 cells. Enhanced binding of Rap1A to CRR led to co-association of Rap1A and Ha-Ras with the Raf-1 N-terminal region through binding to CRR and the Ras-binding domain, respectively. These results suggest that Rap1A interferes with Ras-dependent Raf-1 activation by inhibiting binding of Ras to Raf-1 CRR. Ras belongs to a family of small GTP-binding proteins playing essential roles in cell proliferation and differentiation. Mammalian ras genes carrying activating mutations are found in many types of neoplastic tissue and are able to induce morphological transformation in vitro when transfected into fibroblast cell lines. However, the rap1A gene (1), encoding a 21-kDa GTP-binding protein with high homology to Ras, has been shown to induce reversion of the transformed phenotype in Ki-ras-transformed NIH3T3 cells (2). In addition to the overall structural homology, Rap1A shares two important structural features with Ras. One is that Rap1A has an identical effector domain (amino acids 32-40) to that of Ras. The effector domain of Ras is essential for the association with and activation of its effectors (3). The other is that Rap1A undergoes similar post-translational modification to Ras at its C terminus except that Ras is farnesylated and Rap1A is geranylgeranylated (4). This modification is essential for the function of Rap1A as observed for Ras (5,6). Raf-1, a serine/threonine kinase regulating the mitogen-activated protein kinase cascade, is a major mammalian Ras effector and is thought to play a key role in Ras-induced cellular transformation (7). Although the precise mechanism of Ras-dependent Raf-1 activation remains unclear, it is known that the effector domain of Ras interacts with the N-terminal RBD¹ (amino acids 51-131) of Raf-1 and that this interaction is essential for physical association between these proteins as well as for the activation of Raf-1 (7). Rap1A, too, has been shown to associate with the Raf-1 N-terminal fragment in vivo (8), and a recent x-ray diffraction study of the crystal of the complex between Rap1A and Raf-1 RBD has provided evidence for this association at the atomic level (9). These studies suggest the possibility that the suppression of Ras function by Rap1A is due to the competitive inhibition of the Ras-RBD interaction (10), although it is unclear why Rap1A cannot activate Raf-1.
We have recently identified Raf-1 CRR (amino acids 152-184) as another Ras-binding domain and demonstrated that interaction of Ras with both RBD and CRR is necessary for the activation of Raf-1 (11). Two mutations, N26G and V45E, were found to abolish the interaction of Ha-Ras with CRR and attenuate the activation of Raf-1 by Ha-Ras. The fact that both of these mutations replaced Ha-Ras residues with corresponding Rap1A residues prompted us to examine the possibility that the inability of Rap1A to activate Raf-1 is due to its failure to interact with Raf-1 CRR. Contrary to the expectation, we found that Rap1A exhibited a greatly enhanced ability to bind to CRR. EXPERIMENTAL PROCEDURES Expression and Purification of Rap1A and Ha-Ras Proteins-Rap1A cDNA was amplified from a human lung fibroblast cDNA library by polymerase chain reaction (12) using a pair of primers, 5′-CGGGATCCGATATGCGTGAGTACAAGCTAG-3′ and 5′-AACTGCAGCAGCTAGAGCAGCAGACATGATTTC-3′. After cleavage with BamHI and PstI in the primer sequences, it was cloned into matching cleavage sites of the baculovirus transfer vector pBlueBac III (Invitrogen Inc., San Diego, CA). The cDNA for an activated Rap1A, Rap1A V12, was prepared by oligonucleotide-directed mutagenesis (13) and cloned into pBlueBac III as for the wild-type cDNA. pV-IKS, another baculovirus transfer vector for expressing proteins as GST fusions, was provided by Dr. D. Midra (University of California, San Francisco, CA) through Dr. A. Kikuchi (Hiroshima University, Hiroshima, Japan) (14). For expression of Ha-Ras fused to GST, Ha-Ras cDNA was amplified by polymerase chain reaction using a pair of primers, 5′-CGCGTCTAGAATGACGGAATATAAGCTGGTG-3′ and 5′-GCCGGAATTCTCAGGAGAGCACA-. Assay for Rap1A and Ha-Ras Binding-MBP fusion proteins of Raf-1 N-terminal fragments were expressed in Escherichia coli and immobilized on amylose resin as described (11). The binding reaction was carried out by incubating 20 μl of the resin carrying various amounts of MBP-Raf-1 proteins with various amounts of GTPγS- or GDP-bound Ha-Ras or Rap1A in a total volume of 100 μl of buffer A (20 mM Tris/HCl, pH 7.4, 40 mM NaCl, 1 mM EDTA, 1 mM dithiothreitol, 5 mM MgCl2, and 0.1% Lubrol PX) as described (11). After incubation at 4°C for 2 h, the resin was washed, and the bound proteins were eluted with buffer A containing 10 mM maltose and subjected to SDS-PAGE followed by Western immunoblot detection with anti-Ras monoclonal antibody Y13-259 (Oncogene Science Inc., Manhasset, NY) or anti-Rap1A polyclonal antibody (Santa Cruz Biotechnology Inc., Santa Cruz, CA). Both anti-Ha-Ras and anti-Rap1A antibodies exhibited little cross-reactivity to Rap1A and Ha-Ras, respectively (data not shown). The assay for competitive inhibition of Ras binding to the MBP-Raf-1 fusion proteins by Rap1A was carried out by including a fixed amount of Ha-Ras and various amounts of Rap1A in the same binding reaction.
For the in vitro co-association of Rap1A with GST-Ha-Ras, GST-Ha-Ras in Sf9 cell lysate was first immobilized on glutathione-Sepharose and then loaded with GTPγS. The resin was then incubated with GTPγS-bound Rap1A in the absence or the presence of purified MBP-Raf-1(51-131) or MBP-Raf-1(48-206). The binding condition was the same as described above, except that the bound proteins were eluted with 10 mM glutathione in buffer A. RESULTS Rap1A Has Enhanced Binding Activity to Raf-1 CRR-In the previous study, we have shown that Ha-Ras binds to immobilized MBP-Raf-1(132-206), representing CRR, demonstrating that Raf-1 CRR acts as another Ras-binding domain independently of RBD (11). This binding was GTP-independent, in contrast to the GTP-dependent binding to MBP-Raf-1(50-131), representing RBD. Ha-Ras mutations N26G and V45E abolished binding to CRR without affecting binding to RBD. The fact that these two mutations replaced Ha-Ras residues with those of Rap1A prompted us to examine if Rap1A lacks the ability to bind to Raf-1 CRR. In the same in vitro binding assay, Rap1A bound to RBD in a GTP-dependent manner (Fig. 1A, lanes 1 and 2), although the GTP dependence of this binding was less clear compared with that of Ha-Ras (11). Unexpectedly, Rap1A bound efficiently to CRR (lanes 3 and 4). As observed for Ha-Ras, this binding was GTP-independent (lanes 3 and 4) and required an intact zinc finger structure of CRR, because Rap1A bound very poorly to MBP-Raf-1(132-206, C168S) (Fig. 1B). Binding of Rap1A to MBP-Raf-1(48-206), containing both RBD and CRR, was severalfold stronger and less GTP-dependent than that to RBD (Fig. 1A, lanes 5 and 6). These results suggested that CRR is necessary for efficient association between Rap1A and Raf-1 and that CRR rather than RBD plays a major role in this association. Comparison of the binding properties between Ha-Ras and Rap1A is shown in Fig. 1C. Ha-Ras bound to CRR yielded roughly 10-fold less signal than that to RBD even though the amounts of CRR in the binding reaction were doubled, indicating that the ability of Ha-Ras to bind to CRR was roughly 20-fold less than that to RBD. This estimation by immunoblot was consistent with our kinetic measurement of the affinity of Ha-Ras for RBD and CRR using competitive inhibition of Ha-Ras-dependent activation of Saccharomyces cerevisiae adenylyl cyclase (15,16). In striking contrast, Rap1A bound to CRR yielded a stronger signal than that to RBD under the same condition, indicating that it has the ability to bind equally to RBD and to CRR (see also Fig. 1A). After taking account of the observation that Rap1A yielded twice as much signal as Ha-Ras on an equimolar basis (Fig. 1D), the ability of Rap1A to bind to CRR was roughly 10-fold greater than that of Ha-Ras. In addition, the ability of Rap1A to bind to RBD was about one-half of that of Ha-Ras after the same normalization. These results indicated that Rap1A has greatly enhanced rather than impaired activity to bind to CRR. In our previous report, we found that the binding of Ha-Ras to CRR required post-translational modification (11). To test if binding of Rap1A to CRR was also dependent on its post-translational modification, we incubated RBD and CRR with a lysate of Sf9 cells expressing Rap1A. As shown in Fig.
1E, RBD bound both modified and unmodified Rap1A, represented by the faster and slower migrating bands, respectively, on the Western immunoblot as described before (17), whereas CRR bound only the modified form, indicating that binding of Rap1A to CRR also requires its modification. Substitution of Lysine for Glutamate at Position 31 Makes Ras a Typical Rap1A-type Mutant-The above data are somewhat contradictory to our previous finding that binding to CRR is necessary for Raf-1 activation, because Rap1A can bind to CRR better than Ha-Ras and still cannot activate Raf-1. One possible explanation is that enhanced binding to CRR is detrimental to the activation of Raf-1. To test this possibility, we first screened for a Ha-Ras mutant whose binding to CRR was abnormally enhanced. After testing more than 40 mutants described previously (16), the E31K mutant was found to possess roughly 10-fold higher activity to bind to CRR compared with wild type (Fig. 1C). Next, we co-expressed Raf-1 with either Rap1A V12 or the Ha-Ras V12(E31K) mutant in Sf9 cells. The Raf-1 immunoprecipitates were examined for the activity to induce phosphorylation of KNERK2 in the presence of MEK (Fig. 2C). The results showed that Rap1A V12 and Ha-Ras V12(E31K) indeed could not activate Raf-1 (lanes 1-4). Further, triple expression of either of them along with Ha-Ras V12 and Raf-1 was found to suppress the activation of Raf-1 by Ha-Ras V12 (lanes 5 and 6). These data suggested that the enhanced binding to CRR is indeed detrimental to the activation of Raf-1. They also suggested the involvement of this enhanced binding in the suppressive action of Rap1A on Ras-dependent Raf-1 activation, which is examined in the following section. Rap1A and Ha-Ras Co-associate with Raf-1 N-terminal Region Containing Both RBD and CRR-We reasoned that the greatly enhanced ability of Rap1A to bind to CRR found here may also be involved in competitive inhibition of the Ras-Raf-1 association by Rap1A. Rap1A would inhibit association of Ras with a Raf-1 N-terminal region containing both RBD and CRR much more efficiently than that with RBD alone. To test this idea, we first incubated MBP-Raf-1(50-131) with a fixed amount of Ha-Ras and increasing amounts of Rap1A. As expected, the amount of bound Ha-Ras was found to be reduced by increasing amounts of Rap1A, whereas that of bound Rap1A was increased (Fig. 3A). We then tested MBP-Raf-1(48-206) in the same experiment. To our surprise, the binding of Ha-Ras to MBP-Raf-1(48-206) was found enhanced to some extent even when the amounts of bound Rap1A were increased (Fig. 3B). The only possible explanation for these results would be that Rap1A and Ha-Ras co-associate with the same Raf-1 N-terminal molecule through their independent binding to CRR and RBD, respectively, and that these associations mutually stabilize each other. To test this possibility, we immobilized GST-Ha-Ras fusion protein onto glutathione-Sepharose and incubated it with Rap1A in the absence or the presence of MBP-Raf-1 fusion proteins. As shown in Fig. 3C, no Rap1A was found to be associated with GST-Ha-Ras when they were incubated in the absence of the MBP-Raf-1 fusion proteins. DISCUSSION The current model of the Ras-suppressive action of Rap1A involves competitive inhibition by Rap1A of the interaction between Ras and Raf-1 RBD (10). However, we found in this study that Rap1A possessed greatly enhanced ability to bind to CRR, another Ras-binding domain identified by us (11) and others (18-20).
This strong binding even resulted in the co-association of Rap1A and Ha-Ras with the Raf-1 N-terminal region through their independent binding to CRR and RBD, respectively, and these bindings might mutually stabilize each other. Because we have previously shown that binding of Ras to both CRR and RBD is necessary for Raf-1 activation (11), it is very likely that this triple complex formation will result in impairment of Raf-1 activation due to the failure of Ha-Ras to bind to CRR. The previous observation that unmodified Rap1A cannot suppress Ras function (5,6) is also consistent with this, because we found that modification of Rap1A is essential for its binding to CRR. A study with Ras/Rap1A chimeras indicated that the transforming potential of Ras requires both of the two regions, residues 21-31 and 45-54. On the other hand, Rap1A anti-oncogenicity requires mainly residues 21-31, although residues 45-54 are also required for full activity to suppress transformation (21). Disregarding residues that are changed conservatively, that are variable among the Ras family, and that are not exposed on the protein surface, three residues, 26, 31, and 45, have been postulated to determine whether the protein is oncogenic or anti-oncogenic (3, 21-24). In fact, replacements of residues 26 (or 26 plus 27), 31 (or 30 plus 31), or 45 of activated Ras with those of Rap1A resulted in attenuation of transforming activity (22,24). However, it remained unclear which of these residues plays the most critical role. Residues 26-28 (including 26) and 42-49 (including 45) have been proposed to constitute a contiguous domain on the surface of the Ras protein (3,25). The domain, termed the "activator domain," was suggested to play an important role in activation of effectors through some physical interaction with them. In the previous study, we found that Ha-Ras proteins carrying mutations N26G and V45E failed to bind to Raf-1 CRR (11). These mutants were also incapable of activating Raf-1 in Sf9 cells. Based on these findings, we proposed that the activator domain interacts with Raf-1 CRR and that this interaction is essential for Raf-1 activation. This predicts that the inability of Rap1A to activate Raf-1 may result from its failure to bind to CRR, because Rap1A contains residues Gly26 and Glu45, which abolished binding to CRR in the context of Ha-Ras. Contrary to this expectation, we found here that Rap1A exhibited greatly enhanced rather than reduced binding to CRR. One explanation for this apparent discrepancy is that side chains of the activator domain residues such as 26 and 45, which are divergent between Ras and Rap1A, might not interact directly with CRR. Instead, the activator domain residues such as Phe28 and Lys42, which are conserved between Ras and Rap1A, might participate in direct interaction with CRR. Consistent with this idea, F28A and K42A mutations have been shown to impair activities of Ras (3,25). In the context of Ha-Ras, both Asn26 and Val45 might be necessary for these interacting residues to assume their functional conformation, which is destroyed by the N26G and V45E mutations. On the other hand, in the context of Rap1A, such effects of Gly26 and Glu45 might be masked by conformational effects of other flanking residues that are not conserved with Ha-Ras. In this regard, residue Lys31 of Rap1A might play a critical role, because we found here that the E31K mutation alone enhanced CRR binding in the context of Ha-Ras.
Support for the conformational role of residue 31 over the activator domain also comes from a study of Rap1A mutants. Nassar et al. recently solved the structure of the Rap1A(E30D,K31E) double mutant complexed with RBD by x-ray crystallography (26). Comparison of this structure with that of wild-type Rap1A complexed with RBD indicated that the mutation led to a movement by more than 1.2 Å of a loop containing residues 44-50, which overlapped with the activator domain. Thus, it is likely that residues 30 and 31 of Ras influence the conformation of the activator domain residues. In addition, Glu31 of Rap1A(E30D,K31E) was found to interact directly with Lys84 of Raf-1 RBD, suggesting that Glu31 of Ras takes part in the same ionic interaction. Nassar et al. also showed that Rap1A(E30D,K31E) and Rap1A(K31E) acquired an activity to stimulate transcription from a Ras-dependent promoter in vivo, albeit to a small extent, and argued that this is accounted for by a large increase in the affinity of Rap1A for RBD due to the newly created ionic interaction. However, the result is also consistent with our finding that the identity of residue 31 affects CRR binding. The observed conformational change of the activator domain of Rap1A might have brought about a reduction in the affinity of the Rap1A activator domain for CRR to a level appropriate for Raf-1 activation, contributing to the acquisition of the Ras-like activity. Although the solution structure of CRR has been solved (27), further understanding of the mechanism of CRR binding should await the structural analysis of the complex between CRR and Ras or Rap1A.
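The relative binding estimates above (roughly 20-fold weaker CRR than RBD binding for Ha-Ras, and roughly 10-fold stronger CRR binding for Rap1A than for Ha-Ras) follow from simple normalization of blot signals. Below is a minimal sketch in Python of that bookkeeping, assuming signals scale linearly with the amount of bound protein; the numeric signal values are hypothetical placeholders chosen only to reproduce the stated ratios, not data from the figures.

def per_protein_binding(signal, fragment_amount, antibody_factor=1.0):
    # Normalize an immunoblot signal by the amount of immobilized
    # Raf-1 fragment in the reaction and by antibody reactivity.
    return signal / (fragment_amount * antibody_factor)

# Ha-Ras gave ~10-fold less signal on CRR than on RBD even though the
# amount of CRR in the reaction was doubled (hypothetical units).
ha_ras_rbd = per_protein_binding(signal=100.0, fragment_amount=1.0)
ha_ras_crr = per_protein_binding(signal=10.0, fragment_amount=2.0)
print(ha_ras_rbd / ha_ras_crr)  # 20.0 -> ~20-fold weaker CRR binding

# Rap1A yielded twice the signal of Ha-Ras on an equimolar basis
# (Fig. 1D), so its signals are corrected by an antibody factor of 2.
rap1a_crr = per_protein_binding(signal=200.0, fragment_amount=2.0,
                                antibody_factor=2.0)
print(rap1a_crr / ha_ras_crr)  # 10.0 -> ~10-fold stronger CRR binding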
2018-04-03T02:42:50.695Z
1997-05-02T00:00:00.000
{ "year": 1997, "sha1": "f67bb530fc50c2f47ae6a97ef328ed90e994ef78", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/272/18/11702.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "384eadb6113e78866c047cedb3599869d7db35b3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
257090088
pes2o/s2orc
v3-fos-license
Bacteria-specific pro-photosensitizer kills multidrug-resistant Staphylococcus aureus and Pseudomonas aeruginosa The emergence of multidrug-resistant bacteria has become a real threat and we are fast running out of treatment options. A combinatory strategy is explored here to eradicate multidrug-resistant Staphylococcus aureus and Pseudomonas aeruginosa, including planktonic cells, established biofilms, and persisters, at loads as high as 7.5 log bacteria in less than 30 min. Blue-laser and thymol together rapidly sterilized acutely infected or biofilm-associated wounds and successfully prevented systemic dissemination in mice. Mechanistically, blue-laser and thymol instigated oxidative bursts exclusively in bacteria owing to the abundant proporphyrin-like compounds produced in bacteria over mammalian cells, which transformed harmless thymol into the blue-laser sensitizers thymoquinone and thymohydroquinone. Photo-excitations of thymoquinone and thymohydroquinone augmented reactive oxygen species production and initiated a torrent of cytotoxic events in bacteria while completely sparing the host tissue. The investigation unravels a previously unappreciated property of thymol as a pro-photosensitizer, analogous to a prodrug, that is activated only in bacteria. Multidrug-resistant bacteria are a real threat to human health. Here, the authors investigate a combinatory strategy using blue-laser and thymol against Staphylococcus aureus and Pseudomonas aeruginosa. Blue-laser and thymol successfully sterilized acutely infected or biofilm-associated wounds and prevented systemic dissemination in mice. Compared with mammalian cells, bacteria contain abundant proporphyrin-like compounds that transform harmless thymol into the blue-laser sensitizers thymoquinone and thymohydroquinone. Photo-excitation of thymoquinone and thymohydroquinone augmented reactive oxygen species production in bacteria while completely sparing the host tissue. Multidrug-resistant (MDR) superbugs have been emerging as a real threat rather than a looming crisis 1 . In particular, the Gram-positive methicillin-resistant Staphylococcus aureus (MRSA) and Gram-negative Pseudomonas aeruginosa (Pa) are listed by the WHO as prioritized nosocomial pathogens, which require much research and development of new antimicrobial therapeutics 2 . MRSA and Pa are strong biofilm producers, generating biofilms with poor permeability and a slow metabolic rate of the encased bacteria, further challenging antibiotic treatments 3 . Moreover, dormant persisters spontaneously form in the biofilm population. The recalcitrant persisters can dodge antibiotic attacks and lead to recurrent or chronic infections 4 . The unmet need for anti-biofilm and anti-persister therapeutics has urged research on non-antibiotic alternatives 5 . MRSA and Pa are the top two MDR species frequently found at open skin wounds in association with sepsis, a lethal disease, if the infection is left unchecked. Open wound beds are desirable for biofilm establishment and provide portals for bacterial invasion and systemic dissemination. Treatment options are limited should MDR bacteria contaminate the wounds 6 . The nephrotoxic and neurotoxic colistin appears to be the last resort for treating MDR biofilms at burns 7 . Essential oils from aromatic plants have been used as folkloric medicines for wound healing, and their antimicrobial effects have been appreciated throughout history and around the world.
In nature, essential oils protect plants from phytopathogens, and they reduce bacterial burdens of MDR pathogens in trauma-associated wounds by targeting the membrane. As mutations in membrane synthesis likely affect bacterial fitness, resistance to essential oils hardly develops 8 . Blue light (BL) at 400-495 nm is another rising antimicrobial approach, particularly for skin infection. Many studies have validated BL's efficacy in killing bacteria, regardless of their resistance profiles 9-11 . The exact mechanism of action for BL remains unclear. However, it is generally accepted that BL excites endogenous proporphyrin-like derivatives and stimulates a chain of reactive oxygen species (ROS) production. The toxic ROS react with multiple components of a bacterium and eventually rupture the cell. The nonselective and fast-acting characters of ROS minimize the chance of bacterial resistance to BL. We herein report an antimicrobial synergy between BL and thymol. Thymol is a phenolic monoterpenoid that is commonly present in edible essential oils. The combined thymol and BL readily and safely inactivated all forms of bacteria, including planktonic cells, mature biofilms, and persisters of MDR MRSA and Pa strains in vivo and in vitro. In the combinatory therapy, thymol acted as a "pro-photosensitizer" and was oxidized to thymoquinone (TQ) and thymohydroquinone (THQ) exclusively in bacteria by BL. The resultant TQ and THQ acted as photosensitizers, magnified ROS production exponentially, and rapidly killed the pathogens while completely sparing the host tissue. Results Screening bactericidal activity of BL combined with different constituents of essential oils. All bacterial strains used in this study, including four MRSA clinical isolates, four Pa clinical isolates, one luminescent strain of USA300, and one standard strain of Pa ATCC19660, were confirmed MDR by microbiological tests (Fig. 1a and Supplementary Table 1). Two dozen compounds were screened and seven of them are shown in Fig. 1a, in which thymol exhibited the most potent bactericidal activities against the MDR strains, with a minimal inhibitory concentration (MIC) ranging from 0.3 mg/mL to 0.8 mg/mL depending on the bacterial strain tested. BL alone at 30 J/cm2 was ineffective in killing the MDR strains; only marginal reductions of 0.2- and 0.6-log CFU/mL were observed for Pa ATCC19660 and Pa RJ0002, respectively. Combinations of BL (30 J/cm2) with each of the seven compounds at 1/4 MIC enhanced the antimicrobial activity for 4 of the compounds against MRSA and for all compounds against Pa, to varying extents (Fig. 1a). Among the combinations, thymol and BL showed the most distinct antimicrobial effect, with >2-log CFU/mL reductions for all strains tested (Fig. 1a, outlined in red). We next evaluated any risk of drug resistance induced by the combined approach with two representative strains: MRSA HS0182 and Pa HS0028. The strains remained susceptible to the combined treatment after 20 cycles of sub-lethal selections (Fig. 1b). Conversely, exposure of MRSA HS0182 to penicillin (PEN) or Pa HS0028 to ampicillin (AMP) substantially increased the MICs, with a 64- or 40-fold MIC elevation for PEN or AMP, respectively, after the 20th passage (Fig. 1c). BL and thymol synergistically killed planktonic bacteria, biofilms, and persisters. Planktonic MRSA HS0182 (Fig. 2a) and Pa HS0028 (Fig. 2b) were synergistically inactivated by BL and thymol, as indicated by the positive S-values in the heat maps.
S-values were calculated according to the Bliss Independence model 12 . The synergies were dose-related. As BL or thymol increased in dose, the S-values rose and displayed a dark patch toward the upper right corner, where the most effective combinations clustered (Fig. 2a, b; right). Planktonic bacteria at 7.5 log CFU/mL were promptly eradicated by a 4-min BL exposure (13 J/cm2) in the presence of 1× MIC thymol. In contrast, BL (200 J/cm2) or thymol (at sub-MICs) by itself did not severely affect the survival of planktonic cells (Fig. 2a, b; left). The remaining species of the MDR panel were also tested and shown to be comparably susceptible to the combined BL (30 J/cm2) and thymol (1× MIC) treatment, whereas monotherapies were ineffective (Supplementary Fig. S1a). BL and thymol together impaired bacterial envelope integrity, as suggested by propidium iodide (PI) staining (Supplementary Fig. S2). PI+ MRSA HS0182 and PI+ Pa HS0028 surged to 88% and 95%, respectively, after the duo treatment, whereas fewer than 5% PI+ cells were seen in monotherapies (P < 0.0001). Mature biofilm formed by MRSA HS0182 (Fig. 2c) or Pa HS0028 (Fig. 2d) was also eradicated synergistically by the duo, again with positive S-values, though requiring high doses of thymol and/or BL. Thymol (1× MIC) combined with 75-J/cm2 or 100-J/cm2 BL completely removed viable biofilms of MRSA HS0182 or Pa HS0028 (7.0 log CFU/well), whereas BL alone at 100 J/cm2 or thymol alone at 1× MIC hardly affected the biofilms (Fig. 2c, d; left). Similar anti-biofilm activities of the combined therapy were also confirmed in the other MDR strains listed in Fig. 1a (Supplementary Fig. S1b). Scanning electron microscopy (SEM) revealed a detrimental effect of the combined therapy on biofilm structures (Supplementary Fig. S3). The biofilm of MRSA HS0182 was patchy and thick in the control group but effaced by the duo treatment. The effacement of MRSA clusters at the upper layers exposed many void matrices and some scattered cells (Supplementary Fig. S3a vs. 3b). For Pa HS0028, the orderly polymeric matrices seen in the control group were utterly disrupted by the duo treatment. The remaining bacteria rested sparsely on the bare surface; collapses (arrows) and bacterial wreckage (stars) were evident throughout (Supplementary Fig. S3c vs. 3d). Biofilms of MRSA HS0182 or Pa HS0028 were next treated with 100× MICs of rifampicin and ciprofloxacin, respectively, for the selection of persisters. It was found that the combined therapy remained effective in killing the dormant persisters. The anti-persister synergies were comparable to the anti-biofilm synergies, as shown by the similar patterns on the heat maps (Fig. 2e, f; right). The MRSA HS0182 or Pa HS0028 persisters were entirely eradicated by thymol at 1× MIC combined with 50- or 75-J/cm2 BL, respectively. Conversely, the persister reductions in monotherapies were <1 log CFU/well. Topical application of BL and thymol sterilized wound contamination. Full-thickness 3rd-degree burns were inflicted and infected with the luminescent USA300 for 30 min, after which the acutely contaminated wounds received sham therapy, monotherapies, or combined BL and thymol at 100 µg in 50 µL. The bacterial luminescence signal, as a surrogate for viability, remained stable in the control group. The BL or thymol monotherapy slightly attenuated the luminescence over time, yet complete elimination of the signal was only achieved by the combined therapy, starting at 9-min irradiation (30 J/cm2) (Fig. 3a; red dashed box).
After quantifying the luminescence signal, a significant reduction was seen in the combination group (Fig. 3b; P < 0.0001). The combined therapy's sterilization effect was proven synergistic in all combinations per the Bliss Independence model (Fig. 3c). The combined therapy completely sterilized the wound and successfully prevented bacteremia on day 7. In contrast, the monotherapies reduced bacterial loads at the burns only by 2- to 3-log CFU and failed to halt bacterial invasion (Fig. 3d, e). Likewise, the burns were acutely infected with Pa HS0028 and treated with thymol and/or BL (33 or 66 J/cm2) as above (Fig. 3f-i). On day 1, 75% of mice in the combined 66 J/cm2 BL and thymol group carried a completely sterile wound (Fig. 3f). Moreover, a higher dose of BL showed more potent synergy with thymol (Fig. 3f, g). The combined groups showed the best survival rates over the course of the 15-day infection. The duo with 66 J/cm2 or 33 J/cm2 BL exposure gave rise to 87.5% and 50% survival, which was significantly higher than the 0%, 0%, 12.5%, and 25% survival of mice treated with sham, thymol alone, 33 J/cm2 BL alone, and 66 J/cm2 BL alone, respectively (Fig. 3h; P < 0.0001). On day 15, the percentages of mice with a sterile wound were 12.5%, 25%, 50%, and 87.5% after treatment with 33 J/cm2 BL, 66 J/cm2 BL, 33 J/cm2 BL + Th, and 66 J/cm2 BL + Th, respectively (Fig. 3i), which correlated well with the descending CFUs in the corresponding groups on day 1. Therefore, the host may better handle an infection if timely interventions are provided to minimize acute bacterial loads. BL together with thymol rescued mice from lethal USA300 biofilm-associated infection. The burns were infected with USA300 for 72 h to allow biofilm establishment. The biofilm-associated wounds were then treated with sham therapy, monotherapies, or combined BL and thymol as above. Monotherapies slightly suppressed the luminescence over time, yet virtual elimination of the bioluminescent signal was only attained by the combined therapy at 24-min irradiation (80 J/cm2) (Fig. 4a; red dashed box). The combined therapy's anti-biofilm activity at 24 min was 750- or 140-fold more potent than thymol alone or BL alone, respectively (Fig. 4b; P < 0.0001). The synergy between BL and thymol against USA300 biofilm-associated infection was confirmed by the Bliss Independence model (Fig. 4c). We traced the bacterial luminescence on wounds for 7 days after treatments (days 4 to 11) (Fig. 4d). No rebound of bacterial growth was found at the wounds in the combined group (Fig. 4e). Moreover, the combined therapy saved 87.5% of mice by day 15, which was significantly higher than the 0%, 0%, or 25% survival of mice treated with sham, thymol alone, or BL alone, respectively (Fig. 4f; P < 0.0001). Most importantly, the combined therapy prevented systemic dissemination of USA300, whereas the monotherapies failed. In support of this, notably fewer bacteria were recovered from the blood and vital organs (i.e., lung, spleen, liver, and kidney) in the combined group (Fig. 4g). The combined therapy exhibited no adverse effects on fibroblasts and mouse skin. The combined therapy induced ROS production in MRSA HS0182 (upper) and Pa HS0028 (middle), but not in human fibroblasts (bottom), as shown by DCF staining (Fig. 5a). In co-culture experiments, where bacteria and fibroblasts received treatments in the same Petri dish, the duo induced an 8-fold or 19-fold increase of DCF fluorescence in MRSA HS0182 or Pa HS0028, respectively.
No notable changes in DCF fluorescence were observed in co-cultured fibroblasts, suggesting that ROS were produced and well trapped within bacterial cells (Fig. 5b). L-cys, a well-recognized ROS scavenger, dose-dependently abrogated the combined therapy's bactericidal activity in both MRSA HS0182 and Pa HS0028, arguing strongly for ROS-dependent bactericide (Fig. 5c). Bacteria-specific killing in the co-culture by BL and thymol was visualized via calcein-AM/PI staining under a confocal microscope (Fig. 5d). PI+ MRSA significantly increased to 81% after the combined therapy, whereas PI+ fibroblasts remained at a low level (<10%) before and after treatment (Fig. 5e; P < 0.01). After murine skin received five consecutive doses of concentrated thymol (20 mg/mL) and intense BL irradiation (100 J/cm2), its structure and integrity were well preserved (Fig. 5f), confirming that the duo treatment did not adversely affect host cells. The lining of the epidermis and dermis after treatment was clear and complete, resembling that of the control group. TUNEL-positive cells were undetectable, indicating the absence of DNA breaks or apoptotic cells in the treated skins (Fig. 5f). Photooxidation of thymol into photosensitizers amplifies bactericidal ROS generation. [Displaced legend for Fig. 2: Checkerboards in the corresponding right panels show S-values for different combinations of BL and thymol as assessed by the Bliss Independence model according to the following formula: S-value = (logCFU/mL_BL / logCFU/mL_Control)(logCFU/mL_Th / logCFU/mL_Control) − (logCFU/mL_BL+Th / logCFU/mL_Control). LogCFU/mL_BL, logCFU/mL_Th, logCFU/mL_BL+Th, and logCFU/mL_Control are the numbers of viable bacteria remaining after treatment with BL alone, thymol alone, the combination of BL and thymol, or sham light, respectively. 0 < S < 1 indicates a synergistic interaction, whereas S < 0 indicates an antagonistic interaction. Results are presented as mean ± SD of four to six replicates from five independent experiments. ****P < 0.0001; ***P < 0.001; **P < 0.01; *P < 0.05; and ns, no significance.] The selectivity of the combined therapy and the bacteria-specific production of ROS directed us to investigate the fate of thymol in the presence of BL. As shown by ultra-performance liquid chromatography-VION-ion mobility spectrometry-quadrupole time-of-flight-tandem mass spectrometry (UPLC-VION-IMS-QTOF-MS/MS), thymol was oxidized to TQ and THQ in viable MRSA HS0182 (upper) and Pa HS0028 (middle) or their extracts after BL exposure, whereas fibroblasts (bottom) or their extracts failed to convert thymol in the presence of BL (50 J/cm2) (Fig. 6a). Chromatograms and mass spectra of thymol, TQ, and THQ standards were run in parallel to confirm the chemical compounds (Supplementary Fig. S4). Next, we compared the excitation and emission spectra of MRSA HS0182, Pa HS0028, and fibroblast extracts with those of protoporphyrin IX (PPIX), an endogenous blue-laser sensitizer. All bacterial extracts had an excitation at 405 nm like PPIX, while such an excitation peak was absent in the fibroblast extract (Fig. 6b). In response to 405-nm BL, the MRSA HS0182 extract emitted predominantly at 632 nm, and the Pa HS0028 extract at 676 nm, similar to the PPIX emission peaks at 632 nm, 668 nm, and 702 nm, whereas no emission was seen in the fibroblast extract (Fig. 6c). Production of singlet oxygen (1O2) is a hallmark of PPIX photo-excitation.
After BL exposure, 1O2 was detectable in MRSA HS0182, Pa HS0028, and PPIX solution, but not in fibroblasts, in agreement with the negligible BL sensitizers in mammalian cells (Fig. 6d). Of note, the generation of 1O2 in bacterial cells or PPIX solution was canceled by the 1O2 quencher NaN3 (Fig. 6d). In line with this, the oxidative transformation of thymol into TQ and THQ in bacterial cells (upper) and their extracts (bottom) was also abrogated by NaN3 (Fig. 6e). These results confirmed that proporphyrin-like compounds in bacteria and the subsequent 1O2 generation were vital for the transformation of thymol into a photosensitizer. [Displaced legend for Fig. 3: Synergistic disinfection of murine burns by topical application of BL and thymol. Full-thickness 3rd-degree burn wounds were infected with USA300 (a-e) or Pa HS0028 (f-i) at 5 × 10^6 CFU in 50 μL of PBS for 30 min as acute infection. a-e The USA300-infected wounds were exposed to sham (control), 50 µL of thymol at 2 mg/mL (Th), indicated times of BL exposure (BL), or both (BL + Th). a Bacterial luminescence images of representative wounds were acquired at indicated times after various treatments. b Mean luminescence is presented as logarithmic relative luminescence units (log RLU) per model relative to time zero. c S-values are calculated by the Bliss Independence model as in Fig. 2. d and e Bacterial burdens in the wounds (d) and blood (e) were assayed 7 days after acute infection and are shown as log CFU per model. f-i The Pa HS0028-infected burns were treated with sham (control), 50 µL of thymol at 10 mg/mL (Th), BL exposure at 33 J/cm2 or 66 J/cm2, or both, and log CFU per wound were determined on day 1 after the indicated treatments. g S-values confirm the synergy between BL and thymol against acute Pa HS0028 infection. h Kaplan-Meier survival curves of Pa HS0028-infected mice. i Bacterial loads in the burns were quantified either prior to death or at the end of the experiments. All results are presented as mean ± SD of eight mice. Zero in d, e, f, and i was below the detection limit (40 CFU per wound and 20 CFU per mL blood). ****P < 0.0001; ***P < 0.001; **P < 0.01; and ns, no significance.] We further discovered that the oxidized products, TQ and THQ, were by themselves BL-sensitizing agents. Both TQ and THQ, particularly TQ, exhibited an excitation peak at around 410 nm, while thymol showed no such excitation (Fig. 6f; left). TQ and THQ also exhibited similar emission peaks at about 630 nm and 670 nm in response to 405-nm BL, measured by a fluorescence spectrometer (Fig. 6f; right). Accordingly, TQ and, to a lesser extent, THQ, rather than thymol, generated substantial amounts of H2O2 and •HO upon BL irradiation (Fig. 6g). We suspected that TQ and THQ, most likely TQ, contributed notably to the ROS source during the combined therapy. In support, at low doses of BL (20 J/cm2) and compounds (0.05-0.1 mg/mL), TQ and THQ were significantly more potent than thymol in killing planktonic or biofilm bacteria (Fig. 6h; P < 0.0001). Furthermore, •HO-producing planktonic/biofilm bacteria (green; HPF+) overlapped perfectly with dying bacteria (red; PI+), which validated that the combined therapy inactivated bacteria through the action of ROS, particularly the detrimental •HO (Fig. 6i). Discussion This study unravels a previously unappreciated "pro-photosensitizer" function of thymol that is activated exclusively in bacteria upon BL illumination.
The unexpected finding paves the way for an innovative strategy of seeking bacteria-specific pro-photosensitizers, with which more effective and specific non-antibiotic modalities can be developed to combat the crisis of MDR superbugs. MDR microbes are commonly colonized on wound surfaces, especially on chronic skin wounds, which frequently contaminate the healthcare environment and readily spread the bacteria to vulnerable patients in hospitals because these wounds are openly exposed to the atmosphere. When body surfaces are disinfected quickly, safely, and repeatedly by a modality, it not only benefits patients greatly but also effectively eliminates the sources of nosocomial infections, making the healthcare environment safe for vulnerable patients. Moreover, the modality is not limited to skin wounds and can potentially be extended to disinfect other tissues such as the urinary tract, throat and mouth, surgical sites, teeth, and so on. [Displaced legend fragment for Fig. 4: ... as in Fig. 2. d and e Mean luminescence was acquired from days 4 to 11 after an indicated treatment and the mean areas under the luminescence curves are summarized in e. f Kaplan-Meier survival curves of USA300 biofilm-associated mice in response to an indicated treatment. g Bacterial loads in the blood, wounds, lungs, spleens, livers, and kidneys were quantified just prior to death or on day 15 after bacterial inoculation. All results are presented as mean ± SD of eight biological replicates. Zero in g was below the detection limit (40 CFU per model for murine wounds or organs and 20 CFU per mL blood). ****P < 0.0001; ***P < 0.001; **P < 0.01; and ns, no significance.] Antimicrobial BL and essential oils both possess a multi-target mode of action and broad antimicrobial spectra. BL at 400 to 495 nm has been shown to be bactericidal to an array of pathogens through the generation of ROS. ROS indiscriminately damage cellular components (e.g., lipids, proteins, plasma membrane, and nucleic acids). Likewise, essential oils act on multiple bacterial targets, particularly the bacterial envelopes 8,13,14 . Thus, repeated exposures to BL and essential oils are unlikely to induce bacterial tolerance 8,9,15-17 . Essential oils are composed of volatile constituents produced by aromatic plants/herbs. Among 3000 essential oils, about 300 are generally recognized as safe (GRAS) for humans by the United States Food and Drug Administration (U.S. FDA) and have had broad applications in food preservation, additives, flavors, perfumes, cosmetics, antiseptic oral solutions, toothpastes, cleaners, and air fresheners for centuries 18,19 . In an attempt to develop non-antibiotic alternatives, we have screened a few dozen essential oils for their ability to kill MDR bacteria 8 . We found potent antimicrobial activity in the volatile oil prepared from Thymus vulgaris, which was also confirmed by other groups 20,21 . [Displaced legend for Fig. 5: MRSA HS0182 was exposed to thymol at 0.15 mg/mL, BL at 50 J/cm2, or both, and Pa HS0028 to thymol at 0.3 mg/mL, BL at 25 J/cm2, or both. Alternatively, fibroblasts alone (a, bottom) or the cells co-cultured with MRSA HS0182 (b, upper) or Pa HS0028 (c, bottom) were treated with thymol at 0.5 mg/mL, BL at 50 J/cm2, or both. DCF mean fluorescence intensity (MFI) on the gate of bacteria or fibroblasts is presented in a, and fold changes of DCF MFI relative to sham-treated controls are shown in b.
c Dose-dependent effects of the antioxidant L-cys on the bactericidal activity of the combined therapy: 50 J/cm2 BL and 0.15 mg/mL thymol for MRSA HS0182, and 25 J/cm2 BL and 0.3 mg/mL thymol for Pa HS0028. d Representative fluorescence images of the co-culture of MRSA HS0182 and fibroblasts are shown, in which the dead and viable cells were visualized by PI (red) and calcein-AM staining (green), respectively. Scale bars, 20 µm. An area in the middle panel (control PI) was enlarged to show a few PI-stained bacteria (scale bars, 2 µm). e PI+ MRSA HS0182 and PI+ fibroblasts in co-cultures were also counted manually and presented as percentages relative to total cells. f No adverse effect of topical application of BL and thymol on murine skin. The dorsal skin was topically treated with sham (control) or 100 J/cm2 BL and 50 µL of thymol at 20 mg/mL (BL + Th) once a day for 5 consecutive days. On day 6, the skins were processed for H&E histological examination and TUNEL assay. DNase I-treated skins were TUNEL stained in parallel as positive-staining controls. All results are presented as mean ± SD of at least five biological replicates. Images in d and f are representative of five independent experiments. ****P < 0.0001; ***P < 0.001; **P < 0.01; and ns, no significance.] The predominant component of the volatile oil is thymol (47.59%), which exhibited strong synergy with BL, whereas compounds isolated from other essential oils showed weak or no such synergy, including eugenol, cinnamaldehyde, cuminaldehyde, citral-a, terpinen-4-ol, and menthol, although these essential oil compounds possess antimicrobial activity similar to thymol (Fig. 1a). Intriguingly, thymol and BL selectively targeted bacteria while sparing the surrounding host tissue, as confirmed in vivo histologically, as well as by DNA damage assays and by cell viability and ROS production in co-cultures of bacteria and fibroblasts (Fig. 5). The bacteria-specific photochemical reaction is likely triggered by BL excitation of endogenous proporphyrin-like compounds in bacterial cells. The excited triplet-state proporphyrin-like compounds collide with molecular oxygen (3O2), generating highly reactive 1O2 (refs. 22-24) (Fig. 7). This photochemical reaction occurred in MRSA or Pa cells or their extracts but not in mammalian cells or their extracts (Fig. 6b-d), in agreement with the relatively abundant proporphyrin-like compounds generated in bacteria over mammalian cells. Bacteria spontaneously accumulate tetrapyrrole macrocycles, such as protoporphyrin, uroporphyrinogen III, coproporphyrinogen III, coproporphyrin III, etc., which are BL-sensitive photosensitizers on the basis of their absorbance and
Photooxidation of the phenolic hydroxyl group to para-benzoquinone by 1 O 2 was previously confirmed 28 . Photo-conversion of thymol to TQ in the presence of porphyrins was also described elsewhere 29 . The resulting TQ and to a lesser extent, THQ were BL-sensitizers; they produced substantial amounts of H 2 O 2 and •HO once irradiated (Fig. 6g). Apparently, in this combined treatment, thymol functions as a "pro-photosensitizer" analogous to a prodrug, which is harmless at sub-MICs for the bacteria until BL converts it to TQ and THQ. Like other quinones, TQ can be reversibly transformed into semiquinone and THQ via redox cycling 30 , forming an autoxidative cycle. TQ is subsequently excited by BL and generates either superoxide anion (O 2 • − ) or 1 O 2 via Type I or Type II photo-oxidation. 1 O 2 continuously reacts with thymol and maintains the pool of TQ and THQ, giving rise to another autoxidation cycle. The two autoxidation cycles interacted with each other exponentially amplying ROS production over time as long as BL is presented. O 2 • − undergoes dismutation and forms H 2 O 2 and O 2 . H 2 O 2 may be rapidly converted into the most detrimental •HO either in a Fenton reaction or by photolysis [31][32][33] . The deleterious •HO initiates a torrent of cytotoxic events that oxidatively damage the fatty acids, lipids, amino acids, and nucleobases (Fig. 7). As 1 O 2 and •HO are highly reactive and short-lived, they cause oxidative damages to proximal biomolecules before they can escape the bacterial cells. Of note, none of the above events occur in mammalian cells due to insufficient metal-free proporphyrins-like substances to generate sufficient 1 O 2 (Fig. 6d). TQ has been widely tested in various medinces as antioxidant 34 and it is harmless in the absence of BL, but it is toxic in the presence of BL as it is a photosensitizer. Various photosensitizers are used for antimicrobial photodynamic therapy (aPDT), in which a photosensitizer enters not only bacteria but also mammalian cells and generates ROS similarly in these two types of cells so that killing bacteria over mammalian cells depends on relative susceptibility to ROS between bacteria and mammalian cells and finding a safe and effective window of aPDT could be challenging. In sharp contrast, thymol behaves as a pro-photosensitizer and is converted into an active photosensitizer by BL exclusively in bacteria. Conceivably, pro-photosensitizers like prodrugs ensure specificity and safety and represent notable advantages over traditional aPDT. Limitation of the modality lies in poor tissue penetration of BL and inability to decomtaminate deep tissue. Moreover, some bacteria like Escherichia coli produce latively low amounts of BLsensitive tetrapyrrole macrocycles and may not be susceptible to the combinatory treatment 35,36 . Thus, further studies are needed to enhance bacteria-specific ROS generation, which can then convert harmless pro-photosensitizers like thymol into photosensitizers exclusively in most of bacterial pathogens. In summary, the combined BL and thymol induced oxidative burst exclusively within bacteria. The confined ROS rapidly and selectively inactivated all forms of bacteria, including planktonic cells, biofilms, and persisters, regardless of their antibiotic-resistance profiles. 
In accordance with this, the combined topical therapy synergistically sterilized acutely infected or biofilm-associated wounds and effectively prevented subsequent bacterial invasion or dissemination in mice while incurring no adverse effects on host cells. This highly selective modality is less likely to develop resistance and can serve as an alternative to antibiotics to frequently and repeatedly disinfect skin wounds or body surfaces and prevent sepsis, being particularly useful for treating chronic skin wounds. A combination of BL and thymol thus holds promise as a safe, attractive, non-antibiotic therapeutic in fighting MDR bacteria. Methods Light source, compounds, microorganisms, and cell lines. A light-emitting diode (LED, Thorlabs) with peak emission at 405 nm and a full width at half maximum of 12.5 nm was used. The irradiance was fixed at 55 mW/cm2 by altering the distance between the light source aperture and the target surface with the use of a PM100D power/energy meter (Thorlabs). A small soft-white LED bulb (3 W, A15) from General Electric was used as sham light at a similar intensity. Phytochemical compounds (>98% purity) listed in Fig. 1 were purchased from Sigma-Aldrich. The stock of every compound was prepared at 50 mg/mL in N,N-dimethylformamide. The clinical strains of MRSA (RJ0021, RJ0056, HS0006, and HS0182) and Pa (RJ0002, RJ0006, HS0001, and HS0028) were isolated from patients with burn infections at Huashan Hospital and Ruijin Hospital. A luminescent strain of USA300 was used for real-time monitoring of skin infection via bioluminescence imaging 9 . The bacteria were routinely cultured overnight at 37°C on brain heart infusion (BHI) agar plates supplemented with 5% sheep blood, followed by an additional 3- or 20-h culture at 37°C at 180 rpm in BHI broth to obtain mid-logarithmic or stationary growth-phase cultures, respectively. The fibroblasts were purchased from ATCC (ATCC PCS-201-012) and cultured for 2-3 days at 37°C with 5% CO2 in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 units/mL penicillin, and 100 µg/mL streptomycin. A mid-logarithmic growth-phase culture was prepared at 1 × 10^6 CFU/mL in trypticase soy broth (TSB) containing 0.1% glucose, 1 mM MgSO4, 0.15 M ammonium sulfate, and 34 mM citrate 38 . The inocula were seeded into 96-well plates at 100 μL/well and incubated for 72 h to form biofilms. The formed biofilms were washed twice with PBS and then immersed in 200 µL of thymol solution at a final concentration of 1/4, 1/2, or 1× MIC, followed by BL exposure at 25, 50, 75, or 100 J/cm2. Bactericidal effects of BL alone, thymol alone, or TQ and THQ in the presence or absence of 20 J/cm2 BL were investigated similarly. The biofilm-encased bacteria were dislodged in 200 µL of PBS by 5-min sonication for CFU assays as described above; sonication itself did not affect bacterial viability. To prepare bacterial persisters, a mid-logarithmic growth-phase culture was prepared in the above-modified TSB medium at 1 × 10^6 CFU/mL and added to 96-well plates at 200 µL/well. After 72 h of incubation and two washes with PBS, the biofilms were treated with 200 µL of BHI broth containing 100× MICs of rifampicin (1500 µg/mL) for MRSA and ciprofloxacin (800 µg/mL) for Pa 39 . The antibiotic was washed out after 24 h of incubation and adherent bacteria were dislodged into 200 µL of PBS.
The bacteria surviving the antibiotic treatment were collected as persisters and treated with thymol alone at a final concentration of 1/4, 1/2, or 1× MIC, BL alone at 25, 50, 75, or 100 J/cm2, or both, followed by CFU quantification as above. Treatment of murine burns infected with USA300 or Pa HS0028. All animal protocols were approved by the Shanghai Jiao Tong University Animal Study Committee. BALB/c mice at 8 weeks of age were anesthetized with an intraperitoneal injection of a ketamine-xylazine cocktail and shaved on the lower dorsal skin. A full-thickness 3rd-degree burn was generated by a brass block (1 cm2) heated to thermal equilibration with boiling water prior to the application of its extremity onto the skin for 7 s 26 . Sterile saline was intraperitoneally administered at 0.5 mL/mouse to sustain fluid balance during recovery. Ten minutes after the injury, a stationary growth-phase culture of USA300 or Pa HS0028 at 5 × 10^6 CFU in 50 µL of PBS was inoculated onto the burns for 30 min as an acute burn infection. Subsequently, the USA300- or Pa HS0028-infected burns were smeared evenly with 50 µL of thymol at 2 mg/mL or 10 mg/mL, respectively, and exposed immediately to BL for the indicated times. The bactericidal effects were evaluated similarly with BL alone, thymol alone, or vehicle PBS combined with sham light for comparison. In addition, the burns were infected with USA300 at 5 × 10^7 CFU in 50 µL of PBS for 72 h to form biofilms in the burns as a lethal biofilm-associated burn infection 26 . Then, the infected burns were treated with sham light, 50 µL of thymol at 10 mg/mL, an indicated time of BL, or both, as above. The bioluminescence imaging of bacteria in wounds was performed by using a Lumina II In Vivo Imaging System (IVIS, PerkinElmer). During imaging, mice were anesthetized in the chamber supplemented with 2.0% isoflurane inhalant mixed with oxygen via an IVIS manifold placed within the imaging chamber. Bioluminescence was quantified with the Living Image software (Xenogen). For measurement of bacterial burden, the mice were euthanized at the end of the experiment and perfused with PBS via the heart. The burned skins, lungs, spleens, livers, and kidneys of mice were excised and homogenized in 2 mL of PBS. The resultant homogenate of each tissue was spotted onto BHI agar plates containing Skirrow's supplements after serial dilutions and processed for bacterial enumeration. At least eight mice were used for each test group. Measurements of intracellular ROS in bacteria and fibroblasts. A stationary growth-phase culture of MRSA HS0182 or Pa HS0028 at 5 × 10^7 CFU/mL was treated with a lethal dose of the combined treatment. MRSA HS0182 was treated with thymol at 0.15 mg/mL, BL at 50 J/cm2, or both; Pa HS0028 was exposed to thymol at 0.3 mg/mL, BL at 25 J/cm2, or both. To determine intracellular ROS in fibroblasts or the cells co-cultured with the bacteria, fibroblasts were cultured for 2-3 days at 37°C with 5% CO2 in DMEM supplemented with 10% FBS and antibiotics. The fibroblasts were mixed with bacteria at a cell-to-bacteria ratio of 1:10, and the co-cultures and fibroblasts alone were treated with thymol at 0.5 mg/mL, BL at 50 J/cm2, or both. After the various treatments, the bacteria and/or fibroblasts were stained with 10 µM DCFH-DA solution per the manufacturer's instructions and analyzed by flow cytometry (BD Biosciences). In some experiments, L-cysteine (L-cys) was added to the bacterial cultures at the indicated concentrations for 30 min prior to treatment.
All flow cytometric data were analyzed with FlowJo software. Cell viability. Co-cultures of fibroblasts and bacteria were treated with sham light or a lethal dose of the combined modality (50 J/cm2 BL and 0.5 mg/mL thymol). Then, the co-cultures were stained with 10 µM calcein-AM solution to identify viable cells and 10 µM PI solution to mark dead cells per the manufacturer's instructions. The cells were imaged with a FluoView FV1000-MPE confocal microscope (Olympus) via the green/red fluorescence channels. Determination of excitation and emission spectra of extracts prepared from fibroblasts and bacteria. Fibroblasts were cultured as above, after which the confluent cells were collected, washed, and resuspended in PBS. MRSA HS0182 and Pa HS0028 were incubated for 20 h at 37°C in BHI broth, then washed twice with PBS and resuspended in PBS. Total protein levels in fibroblasts or bacteria were quantified with the BCA method and normalized to equal amounts for the tests. The fibroblasts or bacteria in PBS were centrifuged and pelleted. The pellets were resuspended in an extraction solvent composed of ethanol:dimethyl sulfoxide:acetic acid at a ratio of 80:20:1 (vol:vol:vol) and stored at −80°C for another 48 h. The extracts were centrifuged, and the supernatants were collected and filtered through a Sep-Pak C18 cartridge. Excitation and emission spectra of these extracts were scanned from 350 to 650 nm and from 500 to 800 nm (with an excitation wavelength of 405 nm), respectively, and measured by an FLS1000 Photoluminescence Spectrometer (Edinburgh Instruments). PPIX solution at 10 μM served as a positive control. In addition, excitation and emission spectra of thymol, TQ, and THQ at 0.5 mg/mL were determined similarly. UPLC-VION-IMS-QTOF-MS/MS analyses of thymol photo-oxidation in viable bacterial cells and fibroblasts or their extracts. Bacterial suspension at 5 × 10^7 CFU/mL or fibroblasts at 5 × 10^5 cells/mL in PBS were added to 35-mm Petri dishes, along with thymol at a final concentration of 0.5 mg/mL. A 100-μL sample was taken and centrifuged, and the supernatants were collected and analyzed by UPLC-VION-IMS-QTOF-MS/MS. To determine thymol photo-oxidation, the samples were irradiated with 50 J/cm2 BL. The BL-irradiated sample was collected and analyzed with UPLC-VION-IMS-QTOF-MS/MS. In addition, the extracts prepared above were freeze-dried using an SC250 concentrator (Thermo Fisher). The resultant powder was re-dissolved in 100 μL of absolute ethanol, diluted 20× in PBS, and then supplemented with thymol solution at a final concentration of 0.5 mg/mL. About 100 μL was taken from each extract for UPLC-VION-IMS-QTOF-MS/MS analyses. The above extracts were irradiated with 50 J/cm2 BL. After BL irradiation, the samples were collected and analyzed with UPLC-VION-IMS-QTOF-MS/MS. UPLC was performed with an ACQUITY UPLC I-Class system (Waters), equipped with a binary solvent delivery system, an autosampler, and a PDA detector. The separation was achieved on a Waters ACQUITY UPLC BEH C18 column (100 × 2.1 mm, 1.7 µm). The mobile phases consisted of (A) water containing 0.1% (v/v) formic acid and (B) acetonitrile containing 0.1% (v/v) formic acid. The elution condition was optimized as follows: 0-2 min, 5% B; 2-5 min, linear gradient from 5 to 25% B; 5-7 min, 50% B; 7-9 min, 50-100% B; 9-11 min, 100% B; then restoration of the initial conditions for 3 min to equilibrate the column. The flow rate was 0.4 mL/min. The injection sample volume was 1 µL.
Mass spectrometry analysis was performed on a Vion IMS QTOF MS (Waters) equipped with an atmospheric pressure chemical ionization (APCI) source operating in positive ionization mode. The capillary voltage and cone voltage were set at 1500 V and 40 V, respectively. The source temperature was 115°C. The QTOF acquisition rate was 0.2 s and the inter-scan delay was 0.1 s. The mass range was scanned from 50 to 1000 m/z. The data were collected and acquired in the UNIFI Scientific Information System software (Waters).

Statistical analyses. Data are presented as means ± standard deviations (SDs). Statistical significance was assessed with a two-tailed Student's t-test between two groups or one-way ANOVA for multiple-group comparisons. The Kaplan-Meier method was applied for the comparison of survival curves. A P-value < 0.05 was considered statistically significant. All statistical analyses were performed using GraphPad Prism 7.0 (GraphPad Software).
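The analyses above were run in GraphPad Prism; purely as a hedged illustration, equivalent tests can be sketched in Python (all data below are made up; `lifelines` supplies the log-rank test commonly paired with Kaplan-Meier curves):

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

# Hypothetical replicate measurements (e.g., log10 CFU), illustration only.
control     = np.array([7.1, 7.3, 7.0, 7.2])
combined    = np.array([3.2, 3.5, 2.9, 3.1])
bl_only     = np.array([6.5, 6.7, 6.4, 6.6])
thymol_only = np.array([6.0, 6.2, 5.9, 6.1])

# Two-tailed Student's t-test between two groups.
t_stat, p_two_group = stats.ttest_ind(control, combined)

# One-way ANOVA across multiple groups (sham, BL, thymol, BL + thymol).
f_stat, p_anova = stats.f_oneway(control, bl_only, thymol_only, combined)

# Log-rank comparison of two survival curves (Kaplan-Meier setting);
# 1 = death observed, 0 = censored at the end of follow-up.
days_a, events_a = [3, 4, 4, 5, 7, 7], [1, 1, 1, 1, 1, 0]
days_b, events_b = [7, 9, 10, 12, 14, 14], [1, 1, 0, 1, 0, 0]
res = logrank_test(days_a, days_b,
                   event_observed_A=events_a, event_observed_B=events_b)

print(p_two_group, p_anova, res.p_value)  # significant if < 0.05
```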
Utilizing Connection Usage Characteristics for Faster Web Data Transport
Moktan, Gautam Raj; Varis, Nutti; Manner, Jukka

Introduction

The data transfer mechanism of the World Wide Web has not evolved at the same pace as the services it carries. Information in various formats is generated in web servers, and a browser fetches it through the network and renders it for the user's viewing. The diversity in the types and amounts of such objects has grown significantly, and many mechanisms have been developed to transfer and render them faster and more efficiently. The Hypertext Transfer Protocol (HTTP) [1] enables the client-server model through which information is transferred on the web between browsers and servers. The transport protocol commonly used by HTTP is the Transmission Control Protocol (TCP). Thus, a browser and a server first establish a TCP connection over which the HTTP messages are communicated in order to fetch the various objects required to display a web page. Setting up a TCP connection entails a three-way handshake, i.e., three message exchanges between the browser and the server. This overhead is becoming inefficient considering that a browser generates about 100 requests, on average, to fetch a web page [2]. Thus, to improve efficiency, modern browsers allow reusing the same TCP connection for fetching multiple web objects through HTTP pipelining [3]. HTTP/1.0 uses keep-alive headers for this, and HTTP/1.1 considers connections persistent unless declared otherwise. With the advent of HTTP/2.0, multiplexing is done inside a single connection to a server to fetch multiple objects. Since users' machines process information faster than the rate of fetching files from the network, parallel TCP connections enable faster web page loading. HTTP/1.x allows such a mechanism, and all major browsers nowadays establish up to 6-8 connections per server. HTTP/2.0 has built-in multiplexing to fetch multiple objects in parallel. Yet, in the ongoing quest for faster web data transport to improve user experience, attempts are made on various fronts to realize it. Compression, caching [4], TCP tuning [5], and so on are implemented at servers as well as middle-boxes. Domain sharding (overcoming the browser's limit of maximum simultaneous connections per domain by downloading resources from multiple domains), DNS prefetching [6], content inlining [7], prioritizing, and so on are some other techniques. The focus of this paper is to investigate the characteristics of data transport connections on the web and to analyze the effect of using an enhanced transport protocol that capitalizes on the connection usage characteristics. We study the amounts and types of files fetched through established connections. We also analyze the connection usage and reusage patterns for web content download using the home pages of the Alexa top 100 sites [8]. We validate our sample against the whole list maintained at httparchive [2].
We implement an enhanced TCP, FLD_TCP, which pushes short flows (flows smaller than 2 MB) through the network faster than traditional TCPs do. In order to measure the benefits for downloads of the Alexa top sites to a user's browser, we implement the transport protocol in a proxy server (to emulate the content servers) and measure the performance in a Chrome browser. We show the benefits such a system can bring as compared to standard TCP, informed by an understanding of connection-level patterns. We find that this transport protocol can have a positive effect on connection idle times as well as object fetching times. The results show significant improvement of page load times through the use of FLD_TCP in both HTTP/1.x and HTTP/2.0 cases. We discuss the different application scenarios and the implications of deploying such a system for accelerating web transfer.

Background and Related Works

As part of an ongoing effort to improve page load speed on the web, many methods have been proposed, and several studies examine the interactions during a page load among the browser, the servers, proxy middle-boxes, and the network. A web page can be made to load faster either by bringing the content closer to the browser or by reducing the number of fetches required to load the page. It can also be achieved by optimizing the underlying transport mechanism. Content caching [4] has been used primarily to reduce the number of requests for web pages. Depending upon its implementation at different nodes of the page load interaction, it brings the content closer to the browser. The use of cloudlets [9] to bring the content closer to the users is also gaining momentum. This has been driven by the advent of 5G, Software Defined Networking (SDN), and advancing Content Distribution Network (CDN) technologies. Web page bundling has also been proposed to offload much of the computation task to the cloud. WebPro [10], Cumulus (MahiMahi) [11], and PARCEL [12] are some examples where page content is fetched and bundled in the cloud and the browser only needs to render the received bundle. Techniques such as content inlining [7] and CSS spriting [13] are used in the design of today's web pages. These techniques reduce the number of object fetches the browser has to perform to load the pages. Compression is applied to objects to reduce the number of bytes sent through the network. Google Flywheel [14] and Opera Turbo [15] are examples of proxies that compress content and apply other latency-reducing techniques to improve web page load performance. DNS prefetching [16] eliminates the DNS resolution time for previously accessed domains. Optimizing the order in which objects are loaded in the browser also brings benefits. Polaris [17] loads web pages according to the dependency tracking done by the dependency graph generator, SCOUT, in offline mode. Klotski [18] also capitalizes on the dependency graph to evaluate the optimal prioritization of the page's resources. WebGage [19] prioritizes the loading of web page sections that catch the user's attention more. Prophesy [20] uses SCOUT to recompute the JavaScript heap and DOM tree for web pages so that browsers can render the page faster. Optimizing the transport mechanism itself also contributes to faster page loading. Enabling multiple HTTP requests to utilize the same transport connection has reduced the delay caused by TCP connection establishment overhead.
Also, with HTTP/2, multiplexing of requests inside a single TCP connection has allowed better utilization of the connection. HTTP/2 also provides a Server Push mechanism whereby the server can preemptively send objects needed to fulfill the page load to the browser, thus saving network transfer time. The servers can also apply TCP tuning techniques [5] to better adapt to various network conditions. TCP WISE [21] demonstrates that by relaxing the constant initial window size, HTTP latency can be significantly improved. Analysis of the interactions among the browser, the servers, proxy middle-boxes, and the network during page load has been done from different perspectives. Mahimahi [11] compared the performance of different application-level protocols under different emulated network conditions. Gangadhara Rao et al. [22] analyze the load placed on web servers by the TCP connection establishment phase. Qian et al. [23] provide interesting insights on the interactions of caching, content type, timing characteristics, and connection management. Konorski and Lis [24] analyze through simulation the effect of aggressive TCP configurations in networks. Similar work using game-theoretic tools is done by Zhang et al. [25]. To supplement the research in this field, our work quantitatively provides a real measurement analysis of the interactions of TCP connections during web download. We also analyze the interaction when the transport protocol of the connections is enhanced to be more aggressive for short flows, which most web resources create. We begin by describing the functionality of the enhanced transport protocol, FLD_TCP.

FLD_TCP: An Enhanced TCP Variant. Flow-Length Dependent TCP (FLD_TCP) is an experimental modification to the Transmission Control Protocol that prioritizes short flows so that they finish faster compared to long flows. It tries to attain a higher share of the network bandwidth than other TCP flows as long as the flows are short, and becomes TCP-friendly once the amount of transmitted data exceeds a threshold value. With the objective of enabling short flows to finish faster, it also starts with a higher initial congestion window of 10 segments, as done in the newer Linux TCPs. In the Slow-start phase, FLD_TCP is more aggressive than traditional TCP: while traditional TCP doubles its congestion window every round trip in Slow-start, FLD_TCP triples its congestion window. Figure 1 demonstrates how this modification of the congestion window growth factor in Slow-start saves the round trips required to finish flows. We compare the cumulative amount of bytes transferred over each round-trip time (RTT) between traditional (RENO) Slow-start and FLD_TCP's Slow-start. For simplicity, we assume that flows in both cases start with an initial congestion window of 10 segments, that each segment holds 1500 bytes, and that they do not hit the bottleneck, so congestion avoidance mode does not kick in. We see that after the 1st RTT, both transfer 15 kB, and by the 2nd RTT, FLD_TCP transfers 60 kB while RENO transfers just 45 kB. This difference increases as the RTTs progress. Note that the Y-axis is on a logarithmic scale, so the difference between the two mechanisms is quite significant. If we consider a flow of size 500 kB, FLD_TCP would finish the flow in 4 RTTs while RENO's Slow-start would require 6 RTTs.
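This round-trip arithmetic is easy to reproduce. The sketch below (ours, not the authors' code) counts the RTTs needed to deliver a flow under the idealized, loss-free Slow-start just described:

```python
SEG_BYTES = 1500   # segment size assumed above
INIT_CWND = 10     # initial congestion window, in segments

def rtts_to_finish(flow_bytes: int, growth: int) -> int:
    """RTTs needed to deliver flow_bytes when the congestion window is
    multiplied by `growth` every RTT (2 = RENO doubling, 3 = FLD_TCP)."""
    cwnd_bytes = INIT_CWND * SEG_BYTES
    sent, rtts = 0, 0
    while sent < flow_bytes:
        sent += cwnd_bytes
        cwnd_bytes *= growth
        rtts += 1
    return rtts

print(rtts_to_finish(500_000, 2))  # RENO-style doubling -> 6 RTTs
print(rtts_to_finish(500_000, 3))  # FLD_TCP tripling    -> 4 RTTs
```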
Also in the congestion avoidance phase, as long as the amount of transmitted data is within the specified threshold value, FLD_TCP functions in Relentless TCP [26] mode. Instead of a multiplicative decrease upon packet loss, this TCP reduces the congestion window by the amount of lost segments only. An advantage of this approach is that it abides by Van Jacobson's packet conservation principle while being aggressive at the same time. After the TCP flow exceeds the threshold value, FLD_TCP reverts to traditional RENO behavior, thus acting in a TCP-friendly manner. The threshold value is selected to be 2 MB, which is enough to accommodate the small objects of most connections, as shown in Figure 3. Thus, for flows of less than 2 MB, FLD_TCP provides an aggressive transport enabling them to gain a bigger share of network bandwidth in the presence of cross traffic at the bottlenecks, allowing them to finish faster. In sum, this enhanced TCP, with its aggressive components in Slow-start as well as in the congestion avoidance phase, allows short flows to gain a bigger share of network bandwidth and hence finish faster. The workings of the protocol and its performance evaluation in bulk file transfers are described in [27,28].
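A toy sketch of the loss response just described (our simplified reading, in units of segments, not the authors' kernel implementation):

```python
THRESHOLD_BYTES = 2 * 1024 * 1024  # 2 MB flow-length threshold

def cwnd_after_loss(cwnd: int, lost_segments: int, bytes_sent: int) -> int:
    """Congestion window (in segments) after a loss event."""
    if bytes_sent < THRESHOLD_BYTES:
        # Relentless mode: shrink the window only by what was actually lost,
        # honoring packet conservation while staying aggressive.
        return max(cwnd - lost_segments, 1)
    # Past the threshold: fall back to TCP-friendly multiplicative decrease.
    return max(cwnd // 2, 1)

print(cwnd_after_loss(100, 3, 500_000))    # short flow -> 97
print(cwnd_after_loss(100, 3, 5_000_000))  # long flow  -> 50
```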
In the following sections, we describe the implications of using such a transport protocol for web data transport.

Measurement and Analysis Setup

The measurements are done with the Google Chrome browser. Through the Chrome remote debugging protocol, various statistics are collected during the loading of the target web pages, including connection characteristics. The setup for the experiments run in this paper is laid out in Figure 2. Since we compare the performance of an enhanced TCP (the enhancement is a sender-side-only modification) against the commonly used Cubic TCP, squid proxies are placed on the path between browser and servers to emulate the servers. This implementation enables us to examine the effect of using a different transport protocol for sending web data through the network. Also, we use a WAN emulator in the path and create background traffic as required using IPerf [29] to study the performance under realistic network conditions. The WAN network emulator is set up to control the bandwidth and delay parameters of the link between the proxy and the browser. We use LTE and 3G network settings for our measurements. The maximum bandwidth is set at 100 Mbps for LTE and 21 Mbps for 3G. To obtain realistic values of delay and bandwidth for the wireless links, we analyzed data obtained from the measurement platform Netradar [30]. We observed that for LTE, the effective average bandwidth is 25 Mbps and the delay is 13 ms; for 3G, they were 7 Mbps and 25 ms, respectively. Thus, we introduce background traffic flows using IPerf to bring the effective bandwidth down to values similar to the real-world ones. For the test measurement runs, each web page is downloaded 11 times, and the aggregate measures are analyzed to mitigate the effect of network variance.

Connection Characteristics on the World Wide Web. In order to characterize the TCP connections in web data transport, we conducted measurements with 100 sites from the Alexa top sites list. The home pages of the sites were loaded in the browser, and the statistics were collected. To validate our sample dataset, we contrasted the distribution of the volume of data transferred through TCP connections across the web pages in our list against all sites maintained at httparchive.org. The httparchive dataset was obtained from Google BigQuery. Figure 3 shows similar distributions for the two datasets. We see that the data transferred through each connection have a long-tail distribution: almost 85% of connections transport less than 50 kB of data and 99% of connections transport less than 500 kB. TCP connections are increasingly becoming encrypted on the web, and proxies are typically unable to cache the web objects served from such encrypted TCP connections. For our measurements, where we emulate the end server with a proxy, we want to know how much of the web page data is being served from the proxy cache and how much is encrypted or served from elsewhere. Since we install the two different TCPs on the squid proxies to compare them, we need to be aware of the effect of the servers' sending rates on our measurements. The objects that are cached at the proxies have their sending rate fully controlled by the transport protocol of the proxy. But for objects that cannot be cached, the sending rate of the actual content servers and the link condition between the proxy and the content servers can also have an effect, albeit a minimal one. Figure 4 shows our observations on how many data objects are served from different caches as well as how many connections terminate at the proxy. We see that 59.42% of the web objects are served from TCP connections terminating at the squid proxy (where 50.42% return with a cache HIT while 9% of objects return with a cache miss), and 40.58% of objects are served from TCP connections terminating elsewhere (11.27% have an X-Cache header value, which indicates whether the HTTP response was served from the proxy or not, that differs from our squid proxy's, and 29.31% have no X-Cache header). We also classified the page types based on whether the connections are encrypted or not. As illustrated in Figure 5, we observed that 37.561% of web pages have more than 90% of their TCP connections encrypted, while 33.358% of web pages had more than 90% unencrypted connections serving the pages; 29.081% of pages, however, were served by both encrypted and unencrypted TCP connections. We later compare our analysis across these page types (a minimal sketch of the classification rule follows). Based on this information, we now proceed to measure the effect of the enhanced transport protocol on the page load time of the web pages in our sample list.
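The page-type classification used above, sketched for concreteness (our formalization; the input format is an assumption):

```python
def classify_page(connections: list[bool]) -> str:
    """connections: one bool per TCP connection serving the page,
    True if the connection is encrypted (TLS)."""
    share = sum(connections) / len(connections)
    if share > 0.9:
        return "encrypted"
    if share < 0.1:
        return "unencrypted"
    return "mixed"

print(classify_page([True] * 19 + [False]))  # 95% encrypted -> "encrypted"
print(classify_page([True, False, True]))    # -> "mixed"
```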
Results and Analysis

In this section, we present the results of using the enhanced transport protocol FLD_TCP, as compared to Cubic TCP, for downloading web pages. Page load duration is obtained from the Chrome browser's loadEventFired event. There are idle times during the page load, during which there is no ongoing data transfer from the network to the browser. Thus, we calculate the page net transfer time as the difference between the page load duration and the idle duration during the page load (a sketch of these page-level computations is given at the end of this subsection). Figure 6 shows the effect on the page net transfer time of the three different page types under different network characteristics when using the two transport protocols, Cubic and FLD_TCP. We see that FLD_TCP reduces the page net transfer time for all page types (encrypted, unencrypted, and mixed). The reduction in page load time is larger in the 3G network setup than in LTE. For the rest of the analysis, we will look into the LTE network setup only, for the sake of conciseness. Now we look deeper into the page load process and analyze the distribution of the idle time (mentioned above) during page load. The idle time consists of the intervals during which the page is not receiving any objects, since we are concerned with network-level downstream activity. Figure 7 shows the CDF and histogram of the idle time during the page load, indexed against the total page load duration. In general, there is significant idle time during a page load, with the median page having around 50% of its load time idle from the network data reception perspective. Comparing the FLD_TCP and Cubic graphs in the figure, we observe that FLD_TCP increases the idle time proportion in receiving objects for the page. Since FLD_TCP finishes object transfers faster than Cubic, an increase in the proportion of idle time during page load occurs for FLD_TCP. Next, we look into the concurrency of data transfer between the browser and the servers during a page load. We create a page-level effective concurrency metric for each page load. By tracking the number of active connections (objects being received through them) and their cumulative duration, we obtain a weighted value for a page that denotes the effective number of connections seen throughout the page load duration. A concurrency level of 1.5 would imply that, without any idle time, the page could be served the same amount of data through 1.5 connections. Figure 8 shows that most pages effectively see a concurrency level of less than 2. It also shows that FLD_TCP decreases the effective concurrency of the page, implying that fewer TCP connections would be required to serve a page, leading to improved connection efficiency. Next, we conduct a connection-level analysis of the two transport protocols. As established earlier, connections are either encrypted or unencrypted. We measure the effects on the duration of both connection types when using the two protocols. From Figure 9, we observe that FLD_TCP reduces the connection duration for both encrypted and unencrypted connections. Since the HTTP/2 protocol is gaining momentum, we also analyzed the effect of FLD_TCP on HTTP/2 connections. Figure 10 shows that the connection durations for HTTP/2 connections are also reduced by FLD_TCP. For the object-level analysis, Figure 11 shows the amount of data transferred in each round-trip time. A one-way delay of 13 ms was used in our LTE setup, which makes the RTT at least 26 ms; thus, the figure shows the maximum size of objects transferred in each RTT bin. In this analysis, we only consider the first object in the connection, which allows us to see the window increase function in action. We see that in the first RTT of the connection, the maximum data size is 14 KB for both FLD_TCP and Cubic, in accordance with the 10-segment initial window size. In the second RTT, Cubic's window size doubles while FLD_TCP's window size triples, which allows bigger web objects' transfers to also finish within the second RTT. Similarly, in each of the succeeding RTTs, FLD_TCP is able to finish the transfer of larger web objects than Cubic.
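As promised above, a sketch of the page-level computations under our reading of the definitions (ours, not the paper's tooling): each connection contributes a (start, end) activity interval; idle time is the page load duration minus the union of those intervals, net transfer time is the busy remainder, and effective concurrency is the summed per-connection active time divided by the busy time.

```python
def busy_time(intervals: list[tuple[float, float]]) -> float:
    """Union length of (start, end) activity intervals, in seconds."""
    total, cur_start, cur_end = 0.0, None, None
    for s, e in sorted(intervals):
        if cur_end is None or s > cur_end:      # disjoint interval begins
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = s, e
        else:                                   # overlapping: extend
            cur_end = max(cur_end, e)
    if cur_end is not None:
        total += cur_end - cur_start
    return total

def page_metrics(intervals, page_load_duration):
    busy = busy_time(intervals)
    idle = page_load_duration - busy
    net_transfer = page_load_duration - idle    # equals busy time
    active = sum(e - s for s, e in intervals)   # summed per-connection time
    concurrency = active / busy                 # effective connections
    return net_transfer, idle, concurrency

ivals = [(0.0, 1.0), (0.5, 1.5), (2.0, 3.0)]
print(page_metrics(ivals, 4.0))  # -> (2.5, 1.5, 1.2)
```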
Thus, the effects of using FLD_TCP at the sender side of TCP connections are seen at three levels of granularity. At the object level, RTTs are saved, that is, web objects are fetched faster. At the connection level, the durations of TCP connections become shorter for both encrypted (including HTTP/2) and unencrypted connections. These effects accumulate at the page level, making web pages load faster and thus improving user experience on the web. For service providers, even a marginal improvement in web latency translates to a significant increase in user retention rates, so using these methods to improve their services can have a high economic impact. For modern wireless devices, the energy cost per bit of wireless transmission is 1000 times the energy cost of a single computation on that bit [31]; by reducing the data transmission duration, there lies a huge energy-saving potential as well.

Conclusion

In this paper, we analyzed the connection characteristics of web data transport and the effect of using an aggressive transport protocol for web data transfer. We used the transport protocol at a proxy, but it can also be used at the content servers to boost the data transport of short flows. We found that an aggressive transport protocol such as FLD_TCP reduces the load time of web pages by saving RTTs while fetching web objects and by reducing connection durations, as compared to Cubic TCP. It also decreases the concurrency of flows, thereby requiring fewer TCP connections, each of which carries connection setup overhead, to serve a page. Improving the web user experience requires optimization of both browsers and the data transport. Networks are evolving towards 5G, SDN, and cloudlet architectures; such alternate transport protocol mechanisms will help realize low-latency services for improved user experience in general.
De novo gonad transcriptome analysis of the common littoral shrimp Palaemon serratus: novel insights into sex-related genes

Background: The common littoral shrimp Palaemon serratus is an economically important decapod resource in some European communities. Aquaculture practices prevent the genetic deterioration of wild stocks caused by overfishing and at the same time enhance production. The biotechnological manipulation of sex-related genes has proven potential to improve aquaculture production, but the scarcity of genomic data about P. serratus hinders such applications. RNA-Seq analysis was performed on ovary and testis samples to generate a reference gonadal transcriptome. Differential expression analyses were conducted between three ovary and three testis samples sequenced by Illumina HiSeq 4000 PE100 to reveal sex-related genes with sex-biased or sex-specific expression patterns.

Results: A total of 244.5 and 281.1 million paired-end reads were produced from ovary and testis samples, respectively. De novo assembly of the trimmed ovary and testis reads yielded a transcriptome with 39,186 transcripts. Of the transcriptome, 29.57% retrieved at least one annotation, and 11,087 differentially expressed genes (DEGs) were detected between ovary and testis replicates: 6207 genes were up-regulated in ovaries while 4880 genes were up-regulated in testes. Candidate genes involved in sexual development and gonadal development processes were retrieved from the transcriptome. These sex-related genes are discussed according to whether they were up-regulated in ovary, up-regulated in testis, or not differentially expressed between gonads, and in the framework of previous findings in other crustacean species.

Conclusions: This is the first transcriptome analysis of P. serratus gonads using RNA-Seq technology. Interesting findings about sex-related genes from an evolutionary perspective (such as Dmrt1) and for putative future aquaculture applications (Iag or vitellogenesis genes) are reported here. We provide a valuable dataset that will facilitate further research into the reproductive biology of this shrimp.

Background

The common littoral shrimp Palaemon serratus (Pennant, 1777) is a decapod crustacean with a geographical distribution spanning the Atlantic Ocean (from Scotland and Denmark to Mauritania, including the Azores, Madeira and the Canary Islands), the entire Mediterranean Sea, and the Black Sea [1]. This species inhabits the intertidal and subtidal soft sediments of estuaries in the reproductive season, and rocky bottoms covered with seagrass and algae [2,3]. Palaemon serratus fishing activity is crucial in some European communities, mainly around the British Isles, France and northern Spain [4]. In Galicia (NW Spain) the volume of catches varies from 47.6 to 90.7 tons traded per year, which equals a value of around 2 million euros per year in this region (data obtained from https://www.pescadegalicia.gal/ on 26 Sep 2018, Xunta de Galicia). The high commercial value of this species could possibly lead to overfishing [4]. Implementation of proper management measures that ensure sustainable exploitation will prevent the depletion or genetic deterioration of wild fisheries [5,6]. Aquaculture practices might improve P. serratus production and at the same time reduce the fishing pressure on the wild populations. In the field of aquaculture, reproductive traits are considered economically significant.
Hence, understanding sexual and reproductive development is necessary to obtain successful and sustainable cultures, to increase seed quality, and to breed genetically improved lines [7,8]. For instance, as sex dimorphism in growth is common in crustaceans, monosex aquaculture of commercially relevant species is especially interesting. In monosex populations the yield is increased because energy from reproduction is invested in growth, resulting in larger individuals than in mixed-sex cultures [9,10]. A better knowledge of the genetics of crustacean sexual development facilitates the application of biotechnological strategies, such as sex-change induction, benefiting productivity [7]. Sexual development includes sex determination and sex differentiation processes. In Decapoda, sex is determined by the initiation of a genetic cascade triggered by a master sex-determining gene. Downstream genes in this cascade act as sex-regulator genes, leading to the sex differentiation pathway, which in turn results in sex-specific phenotype development [11]. Due to the lack of genomic information on decapod crustaceans, sex-determining genes have not been identified and even sex-related genes have rarely been reported [9]. Several genes are considered preliminary candidates to be implicated in decapod sex determination, but since these genes have been identified through homology screening, the list is heavily biased towards genes characterized in model species such as Drosophila melanogaster, Caenorhabditis elegans and Mammalia (e.g. Sxl, Tra, Tra-2, Dsx, Fem-1 or Sry; see review in [11]). Among these candidates, it is noteworthy that the Dmrt genes (doublesex and male abnormal-3-related transcription factors) have been noted as the only gene family with a conserved function in sex determination across metazoans [11,12], so they are particularly intriguing. Regarding sex differentiation, the insulin-like androgenic factor (IAG) is a well-characterized hormone with a conserved central role in Malacostraca. Iag expression in the androgenic gland (AG) leads sexual differentiation to maleness by governing the onset of testicular development and secondary sex characteristics in males. Upstream in the sex differentiation pathway, an array of neuropeptides secreted by the eyestalk regulates Iag expression [11]. It has been shown that AG-implanted females become males and, conversely, AG-ablated males turn into females (see review in [13]). As this surgical procedure implies a high mortality rate, it has recently been achieved that Iag-silenced males shift phenotypically into females in the prawn Macrobrachium rosenbergii [14]. Thus, the biotechnological manipulation of the expression of sex determination or sex differentiation genes has the potential to improve aquaculture production, for instance by creating monosex populations, among other possibilities. Nevertheless, prior to the implementation of any genetic manipulation technique, an in-depth understanding of the genetic factors underlying sexual development in P. serratus is necessary. RNA-Seq greatly enhances the capability for gene discovery in non-model organisms where genomic data are not available. Transcriptome profiling using high-throughput sequencing allows the identification of transcripts involved in biological processes [15]. Comparative transcriptomics and differential expression analyses (DEA) between female and male reproductive tissues enable the detection of transcripts with sex-biased and sex-specific expression.
In fact, sex determination and sex differentiation genes have been identified in several commercial decapod species thanks to transcriptomic analyses of certain tissues, including gonads [9, 16-18]. Genetic studies in P. serratus are scarce and mainly focused on its population genetics and cytogenetics. Only one transcriptomic work is available for this species, providing data to study larval development and metamorphosis [19]. Regarding sex determination in P. serratus, it is only known that heteromorphic sex chromosomes are absent [20]. No sex determination or sex differentiation genes or pathways have been reported for this shrimp. Accordingly, the aim of the present study was to identify candidate genes involved in the sexual development of P. serratus. For this purpose, an ovary and testis transcriptome was assembled and annotated from Illumina high-throughput sequencing reads, and genes with differential expression between ovarian and testicular tissues were studied. To the best of our knowledge, this is the first work that addresses the transcriptome profile of both male and female P. serratus gonads, providing new data about sex-related genes in this species.

Results

Quality control, trimming, de novo assembly and mapping

Illumina paired-end sequencing generated 525,605,992 raw reads (244,543,276 and 281,062,716 raw reads from ovary and testis samples, respectively), corresponding to 26.30 GB of sequence data (12.10 GB and 14.20 GB from ovary and testis, respectively). Raw reads are stored in the NCBI Sequence Read Archive database under accession numbers SRR8631955-SRR8631960. After the trimming and filtering steps, a total of 196,658,624 cleaned reads were recovered from the female samples and 245,684,318 from the male samples, for a total of 442,342,942 reads surviving the processing. De novo Trinity assembly from the ovary and testis reads together produced 328,495 assembled transcripts. BUSCO revealed 97.6% transcriptome completeness, indicating a high-quality de novo assembly. As the proportion of duplicates was high (64%), the EvidentialGene tr2aacds pipeline was used to reduce the transcriptome redundancy. The non-redundant transcriptome consisted of 79,796 transcripts, also with a high level of completeness (97%) and only 4.6% of duplicates remaining. De novo assembly statistics are summarised in Table 1. All female and male reads were mapped back together onto the non-redundant transcriptome and only transcripts with an expression value of TPM ≥ 1 were included in the final set of transcripts. After removing ribosomal- and mitochondrial-related transcripts, the final ovary and testis non-redundant transcriptome consisted of 39,186 transcripts.
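The TPM ≥ 1 filter relies on the standard Transcripts-Per-Million normalization; a minimal sketch (ours; the numbers are illustrative only):

```python
import numpy as np

def tpm(counts: np.ndarray, lengths_kb: np.ndarray) -> np.ndarray:
    """Transcripts Per Million: length-normalize read counts, then
    rescale so the values sum to one million."""
    rpk = counts / lengths_kb        # reads per kilobase of transcript
    return rpk / rpk.sum() * 1e6

counts = np.array([100.0, 500.0, 20.0])     # mapped reads per transcript
lengths_kb = np.array([1.0, 2.5, 0.4])      # transcript lengths in kb
values = tpm(counts, lengths_kb)
keep = values >= 1.0                        # the TPM >= 1 inclusion filter
print(values, keep)
```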
Functional annotation

A total of 11,586 assembled transcripts (29.57%) were annotated against at least one of the databases used (Additional file 1: Table S1); therefore, 70.43% of the ovary and testis transcriptome remained unannotated. The number of transcripts matching each annotation category is listed in Table 2. BLASTx searches against the UniProtKB/Swiss-Prot database identified 17,285 transcripts from 437 species. As UniProtKB/Swiss-Prot is a reviewed database, the BLASTx top-hit species were model species, e.g. Homo sapiens (2547 sequence hits), Mus musculus (1749 hits), Drosophila melanogaster (1619 hits), Rattus norvegicus (581 hits) and Bos taurus (540 hits). TransDecoder found 18,312 putative ORFs (Open Reading Frames) in the assembled transcripts, about 59% of which were full-length. A total of 15,961 of these protein-coding candidate transcripts returned a BLASTp hit when searched against the known proteins of the UniProtKB/Swiss-Prot database. Searches against the PFAM protein domain database retrieved 182,498 hits on the putative peptides. Of these ORFs, 1492 were predicted to contain a secretion domain and 2790 were predicted to contain at least one transmembrane helix domain. The assembled transcripts were also annotated with Gene Ontology (GO) terms according to the three major GO categories: cellular component, molecular function, and biological process. A total of 13,960 GO terms were assigned and 6846 transcripts (17.47% of the transcriptome) were associated with at least one term. Specifically, 1809 transcripts were assigned to a cellular component category, 5952 to a molecular function category, and 3155 to a biological process category.

Differential expression and enrichment analyses

Clean reads from each sample were mapped back onto the non-redundant ovary and testis transcriptome. The percentage of mapped reads ranged from 65.66 to 73.34% among samples. At the same time, larvae and muscle reads downloaded from the SRA (SRR4341161-2 and SRR4341163-4, respectively) were cleaned and also mapped separately onto the ovary and testis transcriptome in order to identify genes with gonad-biased expression. The percentage of mapping ranged from 27.83% mapped reads for larvae to 32.68% for muscle. Gene expression of the assembled transcripts in the larvae and muscle samples was calculated, and pairwise differential expression analyses (DEAs) were performed between gonad and non-gonad samples (ovaries vs. larvae, ovaries vs. muscle, testes vs. larvae and testes vs. muscle). Genes with a False Discovery Rate (FDR) p-value ≤0.01 and a fold-change > 2 were considered significantly up-regulated in the gonad tissues with respect to the non-gonad tissues. A total of 1961 and 774 genes were identified as up-regulated in ovaries with regard to larvae and to muscle, respectively. As for testes, 1338 and 1118 genes were detected as up-regulated with regard to larvae and to muscle, respectively. After removing duplicated up-regulated genes shared by both female and male gonads, 3646 genes were considered up-regulated in both gonads and were therefore tagged with a 'G' (gonad up-regulated) in the transcriptome annotation table (Additional file 1: Table S1). A DEA was carried out between the ovary and testis libraries to identify DEGs between sexes and, therefore, putative sex-related genes. Transcripts with an FDR p-value ≤0.01 and an absolute value of fold-change > 2 were considered significant DEGs. Overall, 11,087 transcripts were identified as DEGs (Additional file 2: Figure S1). Among these DEGs, 39.09% had an annotation from at least one of the databases used; 6207 genes were up-regulated in ovaries while 4880 genes were up-regulated in testes (Additional file 1: Table S2 and Additional file 1: Table S3). A gene was considered ovary- or testis-specifically expressed when its TPM value was less than 1 in the three testis or ovary samples, respectively. 78.22% of the genes previously identified as gonad up-regulated genes (G-genes) matched DEGs between ovaries and testes. Fine-scale comparative results from the different DEAs are shown in Table 3.
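As a hedged illustration of the DEG and tissue-specificity calls just described (the table layout and column names below are our assumptions, not the study's actual files; note that |fold-change| > 2 corresponds to |log2 fold-change| > 1):

```python
import pandas as pd

df = pd.DataFrame({
    "gene":       ["g1", "g2", "g3"],
    "log2FC":     [3.2, -4.1, 0.5],     # ovary vs. testis
    "FDR":        [0.001, 0.0005, 0.2],
    "ovary_TPM":  [[50.1, 60.3, 55.0], [0.2, 0.1, 0.6], [5.0, 4.2, 6.1]],
    "testis_TPM": [[0.3, 0.9, 0.4], [30.2, 25.7, 28.9], [4.8, 5.5, 5.0]],
})

# Significant DEG: FDR <= 0.01 and |fold-change| > 2.
deg = (df["FDR"] <= 0.01) & (df["log2FC"].abs() > 1)

# Tissue-specific: TPM < 1 in all three replicates of the other gonad.
ovary_specific = deg & df["testis_TPM"].apply(lambda v: all(x < 1 for x in v))
testis_specific = deg & df["ovary_TPM"].apply(lambda v: all(x < 1 for x in v))

print(df.loc[deg, "gene"].tolist())              # -> ['g1', 'g2']
print(df.loc[ovary_specific, "gene"].tolist())   # -> ['g1']
print(df.loc[testis_specific, "gene"].tolist())  # -> ['g2']
```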
Comparative GO classification distribution of the annotated genes showed no large differences between the DEGs and the entire gonadal transcriptome (Fig. 1a). Statistically, GO enrichment analyses were performed and a p-value ≤0.01 was considered the threshold to identify putative functional differences between the DEGs and the ovary and testis transcriptome. Based on these GO analyses, 3 GO terms were found to be significantly enriched in DEGs in the cellular component category: 'extracellular region', 'integral component of membrane' and 'membrane'. There were 18 significantly enriched GO terms in DEGs in the molecular function category, the most enriched terms being 'ionotropic glutamate receptor activity', 'chitin binding' and 'G protein-coupled receptor activity'. At the biological process level, 8 GO terms showed significant enrichment in DEGs, the most enriched assignations corresponding to 'chitin metabolic process', 'transmembrane transport' and 'DNA integration'. Complete results from the GO enrichment analyses are included as Additional file 1: Table S4. No remarkable differences between ovary and testis up-regulated genes were detected when their GO term distributions were plotted (Fig. 1b).
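The specific enrichment tool is not named above; the standard approach is a hypergeometric (one-tailed Fisher) test per GO term, sketched here with numbers partly grounded in the counts reported (6846 annotated transcripts overall; 39.09% of the 11,087 DEGs annotated, i.e. about 4334; the per-term counts are made up):

```python
from scipy.stats import hypergeom

N_background = 6846   # annotated transcripts in the whole transcriptome
K_term = 120          # background transcripts carrying the GO term (made up)
n_degs = 4334         # annotated DEGs (the drawn sample)
k_hits = 110          # DEGs carrying that GO term (made up)

# P(X >= k_hits): chance of drawing that many term-bearing genes at random.
p = hypergeom.sf(k_hits - 1, N_background, K_term, n_degs)
print(p, p <= 0.01)   # enriched if p is at or below the study's threshold
```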
Candidate DEGs involved in sexual development

We aimed to reveal genes involved in the sex determination, sex differentiation and gonadal development pathways. To achieve this purpose, it is crucial to explore the DEGs between sexes. Thus, the up-regulated genes in ovaries and in testes were mined, according to the transcriptome annotation and the published literature, in search of putative sex-related genes. When different transcripts matched the same gene annotation, their BLAST hits were manually curated, and then the full-length transcript was chosen or, in the case that all of them were complete ORFs, the transcript with the highest expression was selected.

Up-regulated genes in ovary

The most expressed up-regulated genes in the ovary libraries generally retrieved no annotations that allowed us to identify them. These top expressed genes mostly matched genes tagged as 'gonad up-regulated' when they were compared with non-gonad tissues (Additional file 1: Table S2). The most highly expressed up-regulated genes that could be annotated were cytochrome c oxidase subunit 3 (Cox3), cellular retinoic acid-binding protein 2 (Crabp2), ferritin, death-associated inhibitor of apoptosis 1 (Diap1), ankyrin-like, annexin and NPC intracellular cholesterol transporter 2 homolog a (Npc2). However, our focus was to examine candidate genes involved in female sexual development within the 6207 up-regulated genes in ovary. A total of 15 sex-related DEGs were found to be up-regulated in the ovary samples, 10 of which were also gonad up-regulated genes and 3 of which were considered ovary-specific genes (Table 4). Genes associated with vitellogenesis and ovary development were the DEGs that showed the highest expression levels: vitellogenin (Vg), vitellogenin receptor (VgR), cathepsin D, chorion peroxidase (Pxt) and profilin. Among the up-regulated DEGs, genes related to prostaglandin metabolism were also present, prostaglandin D synthase (Hpgds) and prostaglandin E synthase 2 (Pges2), along with farnesoic acid-O-methyltransferase (FAOMeT), heat shock cognate 70 (Hsc70), mothers against decapentaplegic 4 (Smad4), gonadotropin-releasing hormone receptor (Gnrhr), and progestin membrane receptor component 1 (Pgmrc1). Feminization-1b (Fem-1b), disrupted meiotic cDNA (Dmc1) and the transcription factor Sox8 were detected as up-regulated genes in the ovary even though they have traditionally been associated with male sexual development (see Discussion).

Up-regulated genes in testis

The up-regulated genes in testis that showed the highest expression values also remained unannotated, but they were identified as gonad up-regulated genes (Additional file 1: Table S3). The most expressed up-regulated annotated genes in testis were the Kazal-like protease inhibitor MCPI, tenascin-X (TnxB), histone H1-delta (H1D), myosin light chain kinase (Mylk), histone H2A (H2A), RNA-directed RNA polymerase L and an innexin-like protein. Regarding DEGs associated with male sexual development, within the 4880 up-regulated genes in testis we identified 6 sex-related genes, 2 of which were gonad up-regulated genes and 5 of which were considered testis-specific genes (Table 5). Doublesex- and mab-3-related transcription factor 1 (Dmrt1) was the sex-related up-regulated gene with the highest expression. Like Dmrt1, three transcription factors belonging to the SOX (Sry-related HMG box) family showed testis-specific expression: Sox5, Sox14 and Sox15. There were also up-regulated genes in testis that did not agree with the traditional literature about male sexual development, namely vitelline membrane outer layer protein 1 (Vmo1) and heat shock protein 90 (Hsp90) (see Discussion).

Not differentially expressed genes

Several genes that have been widely treated as sex-related genes in crustaceans were investigated even though they were not DEGs in the present study (Table 6). The lack of differential expression of these genes might mean that they are not involved in sexual development, or that the differential expression took place in the gonads before dissection. In detail, these comprised genes reported to act either by triggering female or male determination or sex differentiation (e.g. sex-lethal, feminization-1a and -1c or forkhead box L2) or by taking part in specific processes later on in ovary (e.g. estrogen-related receptor or follistatin) or testis (e.g. kinesin-like protein KIFC1) development and maturation. Because the expression of some sex-related genes was TPM < 1, they were not included in the final transcriptome. These genes with extremely low expression were doublesex and mab-3-related transcription factor 11E (Dmrt11E), the insulin-like androgenic factor (Iag) and the Wnt4 transcription factor. Other genes of interest in reproduction with a documented preferential expression in non-gonad tissues were detected in our gonadal transcriptome: two members of the crustacean hyperglycemic hormone superfamily (Chh and Mih) and the ecdysone receptor (EcR) gene. Sequences of the sex-related genes listed in Table 4, Table 5 and Table 6 can be easily accessed in Additional file 3: File S1. Finally, there were sex-related genes whose expression was detected neither in the non-redundant transcriptome nor in the redundant one. This could be because the expression of these genes occurs in other tissues, because they were expressed in the gonads but not at the stage when the animals were dissected, or because they are not actually sex-regulators in P. serratus. Relevant non-expressed sex-related genes were doublesex (Dsx), fruitless (Fru), sex-determining region Y (Sry), the transcription factors Sox9 and Sox10, cytochrome P450 aromatase, R-spondin-1 (Rspo1), steroidogenic factor 1 (Sf-1) and fibroblast growth factor 9 (Fgf9).

Discussion

Palaemon serratus is a relevant commercial species in some countries such as the UK, Ireland, France, Spain and Portugal [21]. The lack of genomic information about P.
serratus hinders the application of potential aquaculture techniques, especially those focused on reproductive traits related to sex dimorphism. Therefore, in the present work we attempted to unravel sex-related genes featuring in sex determination, sex differentiation and/or gonadal development using an RNA-Seq approach. This is the first transcriptome analysis of the gonads of a Palaemon species and the first work that provides data about sex-related genes in P. serratus. Within the Palaemonidae, sex-related genes have been studied only in two Macrobrachium species [8,9,22]. Here, a reference gonadal transcriptome was obtained using ovary and testis reads. Statistics indicated a high-quality de novo assembly, but only 29.57% of the transcriptome could be annotated due to the scarcity of crustacean genomic sequences. Notably, the most expressed up-regulated genes, both in ovary and in testis, were not annotated. As the non-annotated transcripts correspond to unknown novel transcripts or to unreviewed transcripts, these unannotated differentially expressed sequences deserve attention in future functional analyses of sex-related genes in this shrimp. Additionally, the complete transcriptome annotation is provided, largely increasing the sequence resources available for this species. Sexual development includes several processes orchestrated by a variety of regulators. Overall, sex determination and sex differentiation are intricate processes that are not always clearly distinguishable, because their signalling cascades can be integrated [11]. Sex determination mechanisms are widely divergent in animals, even between closely related species. This variability is due to the rapid evolution that sex-biased genes experience [11,23] and extends to crustaceans, where it is not uncommon for findings on sex-related genes to differ among species. This can be linked to the absence of a conserved sex determination pathway in decapods, which likely evolved independently several times, making it difficult to trace master sex-regulators [24]. Returning to P. serratus, heteromorphic sex chromosomes are absent [20] and no sex determination system has been documented. It has been suggested that there is a sex chromosome dosage compensation mechanism involving the Msl3 gene in the tissues of the palaemonid Macrobrachium nipponense [25], as in Drosophila. Unfortunately, Msl3 was not a DEG between females and males in P. serratus, giving no hint as to whether a heterogametic sex exists in this shrimp and, if so, which one it is. Hence, we focused our efforts on studying genes described as 'sex-related', mainly in crustaceans. Orthologs of sex determination genes of the model arthropod Drosophila were found in our transcriptome database: Sxl and Tra-2 orthologs were detected without sex-differential expression, while Tra, Dsx and Fru were absent. In Drosophila, sex is ruled by the genetic pathway Sxl-Tra/Tra-2-Dsx/Fru, with Sxl being the master sex-determining gene [12,26]. It has been proposed that crustaceans may adopt the Drosophila sex determination pathway given the findings reported in some species such as Penaeus monodon [27], Macrobrachium nipponense [28], Penaeus chinensis [29], Penaeus vannamei [30] and Eriocheir sinensis [31]. However, our data are in line with those maintaining that these genes do not act in decapods as they do in insects (see review in [7]), as suggested for the lobster Sagmariasus verreauxi [32] and for the crab Scylla paramamosain [33].
In the nematode Caenorhabditis elegans, Fem-1 is a component of the sex determination signalling pathway that promotes the male phenotype [34,35]. There are studies in decapods that have pointed out the putative role of Fem-1 genes in male sex determination [18,36]. Orthologs of the three members of the Fem-1 family were found in P. serratus, but none of them was up-regulated in the testis: Fem-1a and Fem-1c were not DEGs between sexes, whilst Fem-1b was found to be slightly up-regulated in the ovary. Particularly for Fem-1b, a higher expression of this gene was detected in the testes than in the ovaries of the prawn Macrobrachium nipponense [37], which is the opposite of what we found in P. serratus. Also in M. nipponense, an ovary-specific Fem-1 gene was reported that could be involved in sex determination or differentiation and in ovarian maturation in this species [38]. The Fem-1b ortholog was up-regulated in the ovary of P. serratus, but it was not an ovary-specific gene given that it also exhibits considerable expression in the testis. We conclude, similarly to what was reported for Scylla paramamosain [17] and Penaeus vannamei [39], that whether Fem-1 genes are involved in the sexual development of P. serratus has yet to be established, given that they are expressed in both gonads. The male-determining gene in most mammals is the Y-chromosome gene Sry [40,41]. SRY, along with SF-1, induces testicular development through the activation of the transcription factor SOX9 [42]. SOX9 up-regulates, via FGF9, the expression of the Dmrt1 gene, which is the major male sex differentiation gene, promoting testis development and maintenance [43]. Expression of Sry, Sf-1, Sox9 and Fgf9 was not detected in the gonads of P. serratus. Some members of the Sox family were DEGs: Sox8 was up-regulated in ovaries, and Sox5, Sox14 and Sox15 were specifically expressed in testes. The up-regulation of Sox8 in females was unexpected, as this gene has been related to testis development [12,32]. Both Sox5 and Sox14 were previously identified as genes involved in male sex differentiation with expression in testis tissues [15, 44-46], but Sox15 has never been described as a testis gene before. Nevertheless, the most relevant finding in the P. serratus gonadal transcriptome is the testis-specific expression of the Dmrt1 gene. In some vertebrate species Dmrt1 has been qualified as a sex-determining gene [47,48]. Moreover, Dmrt1 paralogs are the master sex-determining genes in medaka fish [49] and frogs [50] and, recently, in Sagmariasus verreauxi [11], the first invertebrate species in which Dmrt1 determines sex. DMRTs are transcription factors characterized by the presence of a DM-domain DNA-binding motif. The relationship between DM-domain genes and sex has been deeply investigated, and it is thought that their ancestral function was likely to determine gonadal sex, subsequently expanding to control sexual dimorphism in other tissues [43]. Dmrt is the only gene family with a conserved function in sex determination across Animalia [11], and orthologs have been identified with testis-restricted expression in the transcriptomes of a few decapod species, such as the crabs Eriocheir sinensis [16] and Scylla paramamosain [17]. Keeping this in mind, the testis-specific Dmrt1 ortholog found in P. serratus should be considered the best candidate gene to be involved in the sex determination of this species.
Future efforts should be directed at functionally characterizing the Dmrt1 gene and at pursuing its upstream regulators and downstream targets. Another Dmrt gene was found in the transcriptome with an extremely low expression (TPM < 1), the Dmrt11E gene, which has previously been detected in some decapods. This gene exhibited a testis-biased expression in Macrobrachium rosenbergii [51] and an androgenic gland-biased expression in Sagmariasus verreauxi [32], and in both cases it was suggested that Dmrt11E is a male differentiation regulator acting via IAG. As Dmrt11E expression in P. serratus gonads is very low, another tissue, likely the AG, should be the primary site of expression instead of the testis. Owing to its proven relationship with IAG in other species, the expression of Dmrt11E should be studied in different organs of P. serratus. IAG is the key regulator of male sex differentiation in the members of Malacostraca, and its expression takes place exclusively in the androgenic gland (AG) of males. The Iag gene has been characterized in several crustaceans, e.g. in the prawn Penaeus monodon [52], in the shrimps Penaeus vannamei [53], Macrobrachium lar, Palaemon paucidens and P. pacificus [54], and in the spiny lobsters Sagmariasus verreauxi and Jasus edwardsii [32]. The expression level of the Iag gene in our gonadal transcriptome was very low (TPM = 0.84), likely because in Palaemon species the AG is located along the sperm ducts and not in the testis [54], so it should not have been dissected along with the testis. Better knowledge about Iag is crucial because biotechnology based on its genetic manipulation has the potential to dramatically transform the entire aquaculture industry [55]. Monosex culture has the potential to enhance production because energy for reproduction is allocated to growth, so the individuals reach larger sizes [54]. As female shrimps grow larger and faster, all-female population cultures are preferred for P. serratus. It has been proved in different decapod species that AG removal feminizes males [13], but this surgical procedure frequently entails mortality, so obtaining monosex cultures by genetic manipulation is highly attractive. All-male populations were achieved for Macrobrachium rosenbergii by silencing Iag using RNA interference [14]. To obtain all-female populations in P. serratus, we suggest exploring the manipulation of the Iag gene to induce female sex reversal. We provide the first Iag sequence for P. serratus and, even though it is a partial sequence, it has the potential to pave the way for further biotechnological approaches enabling the production of female monosex cultures. These aquaculture strategies may enhance P. serratus production and at the same time prevent the genetic deterioration of the wild stocks caused by overfishing. Genes referred to in the literature as 'testis development' genes were also investigated. KIFC1 is a C-terminal kinesin motor protein that participates in acrosome biogenesis and nuclear reshaping during spermiogenesis in the palaemonids Macrobrachium nipponense [56] and Palaemon modestus [57], among other crustacean species. The Kifc1 gene showed a high expression in the testis of M. nipponense and P. modestus, but in both species this gene was also expressed in other tissues, likely taking part in vesicle transportation processes [56], so it is not strange that Kifc1 was not a DEG between ovary and testis in P. serratus. The temporal and spatial expression of Kifc1 during spermiogenesis in P.
serratus could be addressed, since its protein is vital in the formation of the acroframosome, an exclusive structure of caridean shrimp spermatids. Concerning DMC1, it plays a major role in meiotic recombination and has been associated with spermatogenesis in crustaceans [27]. Unlike in the crawfish Procambarus clarkii [18] and the crab Portunus trituberculatus [58], Dmc1 was not up-regulated in the testis but in the ovary of Palaemon serratus. It is important to highlight that Dmc1 was a gonad up-regulated gene, most likely because it is expressed in meiotic germ cells [59], but its role in the spermatogenesis of P. serratus should not be directly assumed without further testing. Another gene related to germ cell development is vasa, an ATP-dependent RNA helicase. Since vasa plays a role in both oogenesis and spermatogenesis, its expression was exclusively detected in the gonads of the shrimp Penaeus vannamei [60] and of the crab Scylla paramamosain [61]. In this sense, it was not surprising that vasa was a gonad up-regulated gene with respect to non-gonad tissues but not a DEG between the ovary and the testis of P. serratus. Regarding female sex determination, the Foxl2 gene encodes a conserved forkhead transcription factor preferentially expressed in the ovary of vertebrates, controlling ovarian differentiation and maintenance by the repression of testis-specific genes [62,63]. If Dmrt1 is present, Foxl2 expression is repressed, but in the absence of Dmrt1, Foxl2 inhibits the male developmental pathway and promotes the female one. The expression of Foxl2 shows a variable pattern among crustacean species, i.e. up-regulated in ovary [18], not a DEG between sexes [31] or even up-regulated in testis [64]. Foxl2 was not a DEG between ovary and testis in P. serratus, which supports the view that the role of Foxl2 in sex determination in invertebrates remains unclear [65,66]. Dmrt1 also acts by repressing the RSPO1-WNT4-β-catenin signalling pathway, another female sex determination cascade that promotes ovary development in vertebrates, independently of and complementary to the Foxl2-led pathway [67-69]. Given that Rspo1 was not found in the transcriptome of P. serratus and that Wnt4 and β-catenin were detected as non-DEGs, the existence and function of this pathway are unknown in this shrimp, as already suggested for other decapods in which orthologs were found [31,33]. Thereby, in the light of our data, the vertebrate pathways led by Foxl2 and Rspo1 do not seem to determine female development in P. serratus. However, further experiments should confirm this lack of function in sexual development. Vitellogenesis, the production and accumulation of yolk, is crucial to oogenesis and ovarian maturation. In oviparous vertebrates vitellogenin synthesis is enhanced by 17β-estradiol (E2), with the estrogen receptor (ER) and HSP90 acting as mediators [70-73]. The elements of the E2-ER-HSP90 pathway were found in the transcriptomes of decapod species [7,64,74], but only Hsp90 showed a higher expression in ovaries than in testes in the crab Scylla paramamosain [17]. An estrogen-related receptor gene (Err) was found in P. serratus without differential expression, and Hsp90 was up-regulated in the testicular tissue, so our results agree that more studies are necessary to clarify whether the E2-ER/ERR-HSP90 pathway exists in crustaceans and whether vitellogenesis is regulated by estrogen-like hormones as it is in vertebrates.
Another regulatory pathway that stimulates ovarian development and vitellogenesis in some decapods involves methyl farnesoate (MF), a crustacean juvenile hormone analogue [75,76]. Farnesoic acid O-methyltransferase (FAOMeT) encodes the enzyme that catalyzes the formation of MF, and it was up-regulated in the ovaries of P. serratus with respect to testes and non-gonad tissues, suggesting a putative role of this hormone in the ovarian maturation of the species. Vitellogenin (Vg) and the vitellogenin receptor (VgR), the main vitellogenesis genes, were up-regulated in the ovary of P. serratus as in multiple decapod species (e.g. [24] or [58]). The expression of VgR was higher than the expression of Vg, likely because the hepatopancreas, rather than the ovary, is considered the primary site of VG production, while VGR allows VG uptake from the hemolymph by oocytes. The vitelline membrane outer layer protein 1-like gene (Vmo1) was unexpectedly up-regulated in males, a finding also reported for the crab Eriocheir sinensis [31]. Other 'ovary development' genes were also explored in the gonadal transcriptome of P. serratus. Since prostaglandins (PGs) have been described as factors that promote ovary development in crustaceans [77], PG genes were studied in P. serratus. HPGDS and PGES2 showed an ovary-biased expression, so they might have an implication in female gonad development [78,79]. Although PTGR1 is involved in ovary development in the shrimp Penaeus monodon [80], it was not preferentially expressed in the ovaries of Palaemon serratus. Cathepsins have also been linked to ovarian development in some crustaceans [22,81,82]. Cathepsin D is a protein required for yolk formation in vertebrates [83] and its gene was the only cathepsin gene up-regulated in the ovary of P. serratus. Other genes up-regulated in the ovary of P. serratus that are required for ovary maturation in other crustaceans were chorion peroxidase [31], profilin [84], Smad4 [85], Gnrhr [86] and Pgmrc1 [87]. Expression of Hsc70 was also up-regulated in the ovary of P. serratus, suggesting a putative role in reproductive events as in Macrobrachium rosenbergii, where Hsc70 was enriched in the ovary [88]. Fst is known to be involved in folliculogenesis and ovary development in vertebrates [89], but it was not a DEG between the gonads of P. serratus, similarly to Eriocheir sinensis [31]. The expression of Fst should be examined throughout the stages of ovarian development to evaluate whether it participates in this process. Cytochrome P450 aromatase (Cyp19a) is also essential in female gonadal development in vertebrates, converting androgens into estrogens. Several genes belonging to the cytochrome P450 superfamily were detected in the transcriptome of P. serratus, but none of them was aromatase, an absence also found in the palaemonid Macrobrachium rosenbergii [90]. Lastly, two genes encoding two members of the CHH superfamily were detected in the gonadal transcriptome with very low and non-differential expression between sexes: the crustacean hyperglycemic hormone (Chh) and the molt inhibiting hormone (Mih). CHH neuropeptides are multifunctional hormones with roles in reproduction, regulating AG proliferation or MF production among other activities (see review in [77]). The eyestalk is the preferential site of production of these neurohormones, but it has been repeatedly demonstrated that these genes are also expressed in multiple tissues [91][92][93], including the gonads in some species, as in P. serratus.
Likewise, as recent studies have demonstrated that ecdysteroids regulate vitellogenesis, ovarian maturation and spermatogenesis in decapods (see review in [78]), it is also interesting to highlight the expression of the ecdysone receptor (EcR) gene in both female and male P. serratus gonads. Conclusions This study encompasses the first large-scale RNA-Seq and comprehensive transcriptome analysis of Palaemon serratus gonads. More than 442 million clean reads were obtained, 39,186 transcripts were assembled and annotated, and 11,087 of them were found to be DEGs between ovary and testis. Sex-related genes were identified and their expression between sexes was studied. A wide inventory of sex-related genes is provided and thoroughly discussed in the framework of previous findings in other crustacean species. This is the first time that sex-related genes have been addressed in a Palaemon species, so this transcriptomic analysis will facilitate further experimental research aimed at delving into the sex determination, sex differentiation and gonadal development mechanisms of P. serratus and closely related species. The candidate genes involved in sexual development might also shed light on the evolution of sex regulators in crustaceans. Furthermore, we report some genes of particular interest for future aquaculture applications of P. serratus. Methods Specimens of P. serratus used in this study were collected inshore from the Ártabro Gulf (43°22′00″N, 8°28′00″W) in the northwest of Spain using a fish trap. Animals were carried alive to the Aquarium Finisterrae facilities (A Coruña, Spain), where they were kept at 18°C in an aerated aquarium while being sorted by sex. According to [3], sex was determined by the presence (in males) or absence (in females) of the appendix masculina on the endopodite of the second pleopod. Shrimps (3-4.5 g body weight) were anesthetized on ice for 5 min prior to being sacrificed by dissection, and then gonads from three adult females and three adult males were quickly removed and directly immersed in liquid nitrogen. The gonads of all individuals were at the fully mature developmental stage. RNA isolation and library construction were carried out at AllGenetics & Biology SL (A Coruña, Spain). Total RNA was extracted from the six samples by grinding them with a mortar and pestle under liquid nitrogen. The resulting powder was used for the extraction using NZYTech's Total RNA isolation kit (NZYTech). Pure RNA was eluted in a final volume of 30 μL and then quantified and quality-checked in an Agilent 2100 Bioanalyzer (Agilent) using the Agilent RNA 6000 Kit (Agilent). Illumina's TruSeq Stranded mRNA Library Prep Kit (Illumina) was used to prepare six cDNA libraries following the manufacturer's instructions. One library per sample was prepared, i.e. three libraries from ovary tissue and three from testis tissue, giving three biological replicates per sex. The fragment size distribution and concentration of the libraries were checked in the Agilent 2100 Bioanalyzer using the Agilent DNA1000 Kit (Agilent). Libraries were quantified using the Qubit dsDNA BR Assay Kit (ThermoFisher Scientific). All libraries were pooled in equimolar amounts, according to the quantification data. The pool was sequenced in two different lanes of a HiSeq 4000 PE100 platform (Illumina). Raw read quality control was performed using FastQC v0.11.5 [94]. Trimmomatic v0.35 [95] was used for raw read trimming.
Primer/adaptor sequences were removed and the first 15 bp of the reads were cut. Trimmed reads shorter than 40 bp were discarded. Trimmed reads from both female and male gonads were assembled together to obtain a single transcriptome that included ovary and testis transcripts. Trinity software v2.4.0 [96] was used for de novo assembly of these high-quality short reads using default parameter settings (k-mer = 25). Assembled transcriptome completeness was assessed with BUSCO v3.0.2 [97] using the Arthropoda database as reference. The EvidentialGene tr2aacds pipeline [98] was used to reduce the transcriptome redundancy. Then BUSCO was run again to check the duplication level. Gene expression, given as Transcripts Per Million (TPM), was calculated by mapping all ovary and testis trimmed reads back together onto the assembled gonad transcriptome (parameters: minimum allowed length fraction = 0.75, similarity fraction = 0.95 and maximum number of matching contigs = 4) using the RNA-Seq tool of the CLC Genomics Workbench v11.0 (Qiagen). Only transcripts with a TPM ≥ 1 were included in the definitive transcriptome. Ribosomal and mitochondrial contigs were identified by BLASTn [99] against the NCBI non-redundant (nr) database and they were also excluded from the final transcriptome. Functional annotation was carried out using the Trinotate v3.1.1 workflow (https://trinotate.github.io). In detail, sequence similarity of the assembled transcripts was evaluated using BLASTx [99] against the UniProtKB/Swiss-Prot database (E-value cutoff of 1e-5). TransDecoder v5.3.0 [100] was used to identify putative protein coding regions, including homology options as retention criteria for the candidate ORFs. Predicted ORFs were identified by BLASTp [99] queries against the UniProtKB/Swiss-Prot database (E-value cutoff of 1e-5). Protein functional domains were identified using HMMER3 [101] against the PFAM domain database. Signal peptide and transmembrane domain prediction was performed with SignalP v4.1 [102] and TMHMM v2.0 [103], respectively. WEGO [104] was used to plot the Gene Ontology (GO) functional classification and distribution of the annotated transcripts. Trimmed reads of each gonad sample were then mapped back separately onto this reduced high-quality set of ovary and testis transcripts (parameters: minimum allowed length fraction = 0.75, similarity fraction = 0.95 and maximum number of matching contigs = 4) using the RNA-Seq tool of the CLC Genomics Workbench v11.0 (Qiagen). Thus, the expression of each transcript in each sample was calculated as Transcripts Per Million (TPM). Subsequently, a DEA between ovary and testis samples was carried out in order to identify DEGs between ovaries and testes using the Differential Expression for RNA-Seq tool of the CLC Genomics Workbench v11.0 (Qiagen). GO enrichment analysis was conducted to reveal GO terms significantly enriched in DEGs using the CLC Genomics Workbench v11.0 (Qiagen). Transcriptomic SRA data (NCBI Sequence Read Archive) from other P. serratus tissues, larvae (SRR4341161-2) and muscle (SRR4341163-4), were used to identify differentially expressed genes (DEGs) between gonad tissues (ovary and testis) and non-gonad tissues (larvae and muscle). First, SRA read quality was checked with FastQC v0.11.5 [94] and Trimmomatic v0.35 [95] was used to trim the reads as follows: HEADCROP:15 TRAILING:25 MINLEN:40.
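Expression values throughout the study are given in TPM. As a minimal sketch of how TPM is derived from mapped-read counts and transcript lengths (the study itself used the CLC Genomics Workbench RNA-Seq tool, not custom code; the counts and lengths below are hypothetical placeholders):

```python
# Minimal sketch of the TPM (Transcripts Per Million) calculation.
# Not the CLC Genomics Workbench implementation used in the study;
# the counts and lengths below are hypothetical placeholders.

def tpm(counts, lengths_bp):
    """counts: reads mapped per transcript; lengths_bp: transcript lengths in bp."""
    # Reads per kilobase for each transcript (length normalisation first).
    rpk = [c / (l / 1000.0) for c, l in zip(counts, lengths_bp)]
    scale = sum(rpk) / 1_000_000  # per-million scaling factor (depth normalisation)
    return [r / scale for r in rpk]

counts = [150, 30, 0, 1200]       # hypothetical mapped-read counts
lengths = [1500, 800, 2000, 600]  # hypothetical transcript lengths (bp)
print([round(x, 2) for x in tpm(counts, lengths)])
```

Because the per-million scaling is applied after length normalisation, TPM values sum to one million within each sample, which is what makes thresholds such as TPM ≥ 1 comparable across libraries.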
Trimmed SRA reads were mapped onto the non-redundant, reduced gonad transcriptome to calculate the gene expression (TPM) of the assembled transcripts in the larval and muscle samples. Pairwise differential expression analyses (DEAs) were performed between gonad and non-gonad samples (ovaries vs. larvae, ovaries vs. muscle, testes vs. larvae and testes vs. muscle) using the CLC Genomics Workbench v11.0 (Qiagen). Genes up-regulated in gonad tissues were tagged with a 'G' (gonad up-regulated) in the transcriptome annotation table. Additional file 1: Table S1. Annotation table of the non-redundant P. serratus ovary and testis transcriptome. Table S2. Full list of up-regulated genes in ovary. Table S3. Full list of up-regulated genes in testis. Table S4. Results of GO enrichment analyses. Additional file 2: Figure S1. Volcano plot of differentially expressed genes (DEGs) between ovary and testis samples. Genes that are not differentially expressed are shown as black dots, while DEGs are depicted as red dots. Additional file 3: File S1. Sequences of the discussed sex-related genes.
Persistence and change in behavioural problems during early childhood Background Behavioural problems and psychopathology can present from as early as the preschool period. However, there is evidence that behavioural difficulties may not be stable over this period. Therefore, the current study was interested in evaluating the persistence and change in clinically relevant behavioural problems during early childhood in a population-based New Zealand birth cohort. Methods Behaviour was assessed in 5896 children when they were aged 2 and 4.5 years using the Strengths and Difficulties Questionnaire (SDQ). Correlations and mean differences in subscale and total difficulties scores were examined. Scores were then dichotomised into normal/borderline and abnormal ranges to evaluate the persistence and change in significant behavioural problems. Chi-square analyses and ANOVAs were used to determine the association between sociodemographic and birth variables, and preschool behavioural stability. Results Raw scores at ages 2 and 4.5 years were moderately correlated, with most measures showing a small but significant decrease in mean scores over time. The majority of children who showed abnormal behaviour at 2 years improved at 4.5 years (57.9% for total difficulties). However, a notable proportion persisted in their difficulties from 2 to 4.5 years (42.1% for total difficulties). There was a small percentage of children who were categorised as abnormal only at 4.5 years. Children with difficulties at one or both time points included a greater proportion who were the result of an unplanned pregnancy, lived in highly deprived urban areas, and had mothers who were younger, of Māori or Pacific ethnicity, and less educated. Conclusions Not all children who show early behavioural difficulties persist in these difficulties. Those whose difficulties persist were more likely to experience risk factors for vulnerability relative to children with no difficulties. Results suggest that repeated screening for early childhood behavioural difficulties is important. Electronic supplementary material The online version of this article (10.1186/s12887-019-1631-3) contains supplementary material, which is available to authorized users. Background Clinically significant psychiatric disorders can present as early as preschool age, with the following rates reported in children aged 2 to 5 years: 2 to 5.7% for ADHD; 4 to 16.8% for ODD; 0 to 4.6% for CD; 0 to 2.1% for depression; 0.3 to 9.4% for anxiety disorder [1]. Within New Zealand, it is estimated that approximately 10% of children aged 2 to 4 years show clinically significant total behavioural difficulties [2,3]. Furthermore, there is evidence that behavioural difficulties identified in children can persist and increase a child's risk of later adverse outcomes. For example, children who show behavioural problems during childhood are at an increased risk of ongoing mental health difficulties [4][5][6][7][8], a greater physical health burden [8], relationship and parenting problems [4,9], poor academic outcomes [10], criminal behaviour [4,11], substance abuse [4,12], as well as teen pregnancy and sexual risk-taking [4,13]. These studies typically focus their initial assessments on children around school age or older. However, it has recently been demonstrated that difficulties that persist throughout childhood can be measured in children from as early as their second year of life [14,15].
The studies mentioned above illustrate a shift from viewing clinically significant behavioural problems as distinct episodes to considering them as recurrent or persistent issues instead. Existing research typically investigates behavioural stability using continuous measures [15][16][17][18]. However, few studies focusing solely on early childhood have evaluated the persistence or change in clinically significant preschool behavioural problems. Given the developmental changes that occur during early childhood, it is common to believe that problem behaviours are transient and likely to reduce as the child grows older. However, this belief may prevent children with genuine behavioural difficulties from receiving the assistance and intervention they need. It is particularly important to address these behavioural concerns during the preschool period, so that the child is well prepared and adjusted for the demands of school. The few studies that have been conducted suggest that behavioural problems in early childhood can persist for a proportion of children. Mathiesen and Sanson evaluated social, internalising and externalising behavioural problems in a Norwegian community sample when children were 18 months and 30 months [14]. As most children scored close to the norm when behaviours were evaluated continuously (using the Behaviour Checklist [19]), children were categorised as showing either problematic or non-problematic behaviour at the two time points. Children were categorised as problematic if they scored at or above 1.5 standard deviations above the mean. While 2.5-3.9% of the overall sample showed persistence in behavioural problems, the authors found that approximately 37% of children with problems at 18 months persisted in their difficulties at 30 months. When looking at the association between the continuous measures of behaviour at each time point, the authors found moderate correlations. A separate study by Briggs-Gowan, Carter, Bosson-Heenan, Guyer and Horwitz investigated whether preschool problem behaviour persisted in children from a Connecticut birth cohort [15]. Children were 12 to 40 months when initially assessed for behavioural problems and followed up a year later when they were aged 23 to 48 months. Using the Infant-Toddler Social and Emotional Assessment to measure internalising, externalising, dysregulation and total problems, children were categorised as having behavioural problems if they scored at or above the 90th percentile. The study found that 49.9% of children persisted from time 1 to time 2 in total and externalising problems, with lower persistence rates observed for the internalising (37.8%) and dysregulation (38.7%) domains. The studies by Mathiesen and Sanson [14] and by Briggs-Gowan et al. [15] indicate that a substantial proportion of children who are initially identified as showing behavioural difficulties do improve over the early childhood period, but a notable proportion still persist in these difficulties. This suggests that repeated screening from early in childhood is important for identifying these children with persistent behavioural difficulties. In New Zealand (NZ), health and development checks are conducted on all children registered with a primary care practitioner at several time points in early childhood, starting from birth through to when the children are 4 years old [20]. The aim of the check is to identify any difficulties the child may have, so that their needs are met and they are given the opportunity for optimal development.
Currently, behavioural difficulties are only assessed at the 4 year health and developmental check, known as the B4 School Check, using the Strengths and Difficulties Questionnaire (SDQ) [21]. However, if similar patterns of persistence and change in behavioural problems occur in the NZ population, it may be beneficial to also conduct behavioural screening at prior health and development checks, so that intervention can occur earlier and the needs of children with persistent difficulties are adequately addressed. It is also important to explore the characteristics of children who show different behavioural development profiles, as this will indicate whether certain sociodemographic populations are more at risk of persistent behavioural problems. The Growing Up in New Zealand study is a longitudinal, prospective study consisting of a large population-based birth cohort. The study assessed child behaviour when children in the cohort were aged 2 and 4.5 years using the SDQ, the same measure that is used in the B4 School Check. The assessment of behavioural difficulties at age 2 was a unique feature of this study, as this was the first time the SDQ was administered and validated in a sample as young as 2 years [3]. While we were unable to evaluate the sensitivity and specificity of the SDQ at this age, the questionnaire showed satisfactory reliability and structural validity at 2 years [3]. Furthermore, the questionnaire is meant to be used as a screening tool to identify children who likely show significant behavioural problems and are in need of further assessment, rather than as a diagnostic tool. As the SDQ showed good psychometric properties in our cohort at 2 years and has been extensively validated in children aged 4 to 12 years [22], this enables us to investigate whether persistence and change in preschool behavioural problems are also observed in a NZ population, using the same screening instrument that is formally used by NZ healthcare professionals. Using data from the Growing Up in New Zealand cohort, the current study firstly aimed to evaluate whether measures of behaviour at two different time points in preschool are closely correlated, and whether there are any developmental changes in behavioural scores as children move from the early preschool period (2 years) to the late preschool period (4.5 years). We hypothesised that behavioural scores at the two time points would be at least moderately correlated, but that there would be a slight decrease in externalising behaviour, peer problems and total difficulties, as a result of developmental changes and increased social interaction as children get older. Secondly, the study was interested in calculating the rates of persistence or change in the categorisation of behavioural difficulties during this early childhood period (i.e. 2 to 4.5 years). We hypothesised that the majority of children identified as showing behavioural difficulties at 2 years would improve at 4.5 years, but that a notable proportion would persist in their difficulties. Finally, we were interested in evaluating the sociodemographic characteristics of each of the apparent behavioural development profiles. Design and participants Participants were members of the Growing Up in New Zealand study. Details of the study's design and recruitment procedure can be found elsewhere [23,24].
In brief, the study's cohort consists of a socioeconomically and ethnically diverse sample of children, recruited via 6822 pregnant women who had expected delivery dates between 25th April 2009 and 25th March 2010. Pregnant women were recruited from a geographical area that contains approximately one third of the NZ birth population, and covers three contiguous District Health Board regions [23]. Recruited mothers were found to be comparable to NZ parents on key demographic measures, such as maternal age, ethnicity, parity and area-level deprivation [23]. Children in the study were not significantly different from national births on sex and singleton births, though fewer children in the cohort were born at low birth weight or preterm [25]. However, these latter statistically significant differences reflect small absolute differences, and are in part due to the cohort recruitment requirement that children survive to 6 weeks [25]. To ensure adequate representation of major ethnic groups in the study, the cohort is more ethnically diverse than national births [25]. Major data collection waves (DCWs) have occurred during late pregnancy, and when children were aged 9 months, 2 years, and 4.5 years. Information gathered at each DCW relates to six inter-connected domains of child development: health and wellbeing; cognitive and psychosocial; education; family and whānau (extended family); culture and identity; and neighbourhoods and societal context. Children were included in the analyses only if their behaviour was measured at both ages 2 and 4.5 years. The final sample consisted of 5896 children (86% of the original sample). There were 348 children lost to follow-up from the 2 year DCW to the 4.5 year DCW; however, 171 children who were not assessed at 2 years were followed up at 4.5 years. Children lost to follow-up from age 2 to 4.5 years were more likely to have mothers who were younger, less educated and non-European, more likely to be part of an unplanned pregnancy, more likely to come from highly deprived areas at the 2 year DCW, and less likely to live in rural regions at age 2 (ps < .05). Further, children lost to follow-up were also more likely to be categorised as abnormal on all SDQ scores at age 2 (ps < .05). Children from the original, recruited sample who were not included in the current study were more likely to have mothers who were non-European, less educated and younger (ps < .001). Children not included were also more likely to be first born, part of an unplanned pregnancy, from an area of high deprivation, and from an urban area (ps < .05). Strengths and difficulties questionnaire Behavioural difficulties were measured at 2 and 4.5 years using the mother-reported SDQ [26]. At 2 years, the preschool SDQ was used, while at 4.5 years the standard SDQ was administered. Each difficulties subscale and its corresponding items at ages 2 and 4.5 years are provided in Additional file 1: Table S1. Details of the minor differences between the preschool and standard SDQ are apparent in that table and can also be found on the SDQ website [27]. The current study focuses on the difficulties subscales (emotional symptoms, peer problems, hyperactivity-inattention and conduct problems) as well as the total difficulties score. Generally, each subscale is measured by five items, rated on a 3-point Likert scale as either not true, somewhat true, or certainly true.
However, in the current study, an item ('often fights with other children or bullies them') corresponding to the conduct problems subscale was missing from the 4.5-year questionnaire (due to an administrative error); therefore, the subscale score was prorated to account for this missing item (a minimal scoring sketch is given at the end of this section). Prorating was used to calculate scores for all subscales, though individuals were excluded if data were missing for more than two items for a subscale (or a single item in the case of the conduct problems subscale). The total difficulties score was calculated by summing the scores of the difficulties subscales. We have previously found that the preschool SDQ shows generally acceptable psychometric properties at age 2 [3]. Consistent with our work on the structural validity of the SDQ at age 2, we found superior and acceptable model fit at age 4.5 years with a modified five-factor model that accounts for a positive construal method effect (χ2(237) = 3164.34; CFI = .926; TLI = .914; RMSEA = .046; for more information, see D'Souza et al. [3]). However, we found poor Cronbach's alpha coefficients for both peer (α = .55) and conduct problems (α = .47). As estimates of Cronbach's alpha can be affected by the number of scale items, it is possible that this low alpha for conduct problems is due to the reduced number of items [28,29]. Cronbach's alpha coefficients were acceptable for all other SDQ measures (α > .60). SDQ subscales range from 0 to 10, and total difficulties ranges from 0 to 40. These scores were also categorised into normal, borderline and abnormal bands based on previously determined cut-offs [3,26]. The abnormal band is typically used to identify children in need of further assessment and intervention, and is the method used by the B4 School Check to screen for children with social and emotional challenges [30][31][32]. SDQ measures were dichotomised into normal/borderline and abnormal in the current study, as we were primarily interested in movement into and out of the clinically significant abnormal range. Sociodemographic and birth variables Variables relating to the child or family's social structure included mother's ethnicity, mother's education, mother's age, child's gender, parity, planned pregnancy, area-level deprivation, and rurality. Birthweight and gestational age were also of interest in the current study. Information on all variables except area-level deprivation and rurality was collected during the antenatal data collection wave. Information on area-level deprivation and rurality was collected during the 4.5 year DCW. Mother's self-prioritised ethnicity was categorised into four Level 1 Statistics New Zealand categories: European, Māori, Pacific, and Asian/Other [33]. If an individual identifies with multiple ethnicities, the self-prioritised ethnicity is what they consider to be their main ethnicity. In cases of mothers with multiple ethnic identifications who did not provide a self-prioritised ethnicity, external prioritisation was used. As utilised by Statistics New Zealand, external prioritisation gives precedence to responses in the following order: Māori, Pacific, Asian/Other, European [34]. Mother's highest education was categorised into the following three levels: no secondary school; secondary school/diploma/trade certificate; Bachelor's degree or higher. Mother's age during pregnancy was categorised as less than 20 years, 20-29 years, and 30 years and over.
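A minimal sketch of the prorated subscale scoring and dichotomisation described above. Assumptions are flagged in the comments: item ratings are coded 0-2, and the abnormal cut-off value used here is a hypothetical placeholder rather than one of the published cut-offs [3,26]:

```python
# Minimal sketch of prorated SDQ subscale scoring and banding, as
# described above. Item ratings are assumed coded 0-2 (not true /
# somewhat true / certainly true); the cut-off below is hypothetical.

def subscale_score(item_scores, n_items=5, max_missing=2):
    """Prorate a subscale from 0-2 item ratings, tolerating missing items."""
    answered = [s for s in item_scores if s is not None]
    if (n_items - len(answered)) > max_missing:
        return None  # too many missing items: exclude this individual
    # Scale the mean of answered items up to the full item count, then round.
    return round(sum(answered) / len(answered) * n_items)

def band(score, abnormal_cutoff):
    """Dichotomise into normal/borderline vs abnormal, as in the study."""
    if score is None:
        return None
    return "abnormal" if score >= abnormal_cutoff else "normal/borderline"

# Conduct problems at 4.5 years: four items by design (one was missing),
# so at most one further missing item is tolerated (max_missing=1).
conduct_items = [2, 1, 0, 2]
score = subscale_score(conduct_items, n_items=5, max_missing=1)
print(score, band(score, abnormal_cutoff=6))  # cut-off value is hypothetical
```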
Area-level deprivation was measured using the NZDep2013, based on indicators of socioeconomic deprivation from the 2013 NZ census. Deprivation areas received a deprivation score from 1 (least deprived) to 10 (most deprived). Deprivation was categorised into high (deciles 8-10), medium (deciles 4-7), and low (deciles 1-3) deprivation. Data analysis Correlations between SDQ measures at 2 and 4.5 years were calculated using Pearson correlation coefficients. Mean differences in SDQ scores were investigated using paired sample t-tests, with effect sizes calculated using Cohen's d [35]. A contingency table was used to demonstrate the persistence and change in SDQ categorisation from 2 to 4.5 years. A composite measure of behavioural stability was also created using the 2 year and 4.5 year SDQ total difficulties scores. Children were categorised as showing no difficulties (normal/borderline scores at 2 and 4.5 years), improved (abnormal score at 2 years only), later difficulties (abnormal score at 4.5 years only), and persistent difficulties (abnormal scores at 2 and 4.5 years). Chi-square analyses were used to evaluate the association between sociodemographic variables and behavioural stability, and to determine sociodemographic characteristics for each group. For continuous birth variables (i.e. birthweight, gestational age), ANOVAs were conducted. Due to the large number of bivariate analyses conducted, all p-values displayed have been adjusted for multiple comparisons using the Bonferroni correction. Correlation and differences in SDQ scores from 2 to 4.5 years The correlations between SDQ measures at 2 and 4.5 years are presented in Table 1, as well as the t-value and effect size from the paired t-test comparing the mean scores at the two time points. Significant moderate correlations were found for all SDQ measures, Pearson r > 0.30, ps < .001. There were also significant differences in scores for all SDQ measures from 2 to 4.5 years, ps < .001. On average, all scores decreased from 2 to 4.5 years, except for emotional symptoms, which showed a negligible increase. Table 1 also presents the normal/borderline and abnormal frequencies for each SDQ measure at ages 2 and 4.5 years. At age 2, abnormal total difficulties scores were observed for 9.5% of the cohort, 6.7% of children had abnormal scores for emotional symptoms, 9.5% had abnormal scores for peer problems, 7.9% had abnormal hyperactivity-inattention scores, and 12.2% had abnormal conduct problems. At 4.5 years, total difficulties were in the abnormal range for 11.3% of children; 9.7% of children had abnormal emotional symptoms, approximately 13% had abnormal scores for peer problems and hyperactivity-inattention, and 11.1% had abnormal conduct problems. SDQ categorisations at 2 and 4.5 years Persistence and change in behaviour from 2 to 4.5 years Table 2 presents the frequency distribution of behavioural categorisations for all SDQ measures cross-tabulated across ages 2 and 4.5 years. Of those who scored in the normal/borderline range at 2 years, approximately 90% remained in this range at 4.5 years (92% total difficulties; 92.4% emotional symptoms; 89.1% peer problems; 89.1% hyperactivity-inattention; 91.1% conduct problems). A small percentage of children who scored in the normal/borderline range at 2 years showed an increase into the abnormal range at 4.5 years (8% total difficulties; 7.6% emotional symptoms; 10.9% peer problems; 10.9% hyperactivity-inattention; 8.9% conduct problems).
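As a minimal sketch of the steps listed in the Data analysis subsection above (Pearson correlation, paired t-test with Cohen's d, and a Bonferroni adjustment), and not the study's actual analysis code, the following uses hypothetical score arrays:

```python
# Minimal sketch of the analysis steps described above: Pearson
# correlation, paired t-test with Cohen's d, Bonferroni correction.
# Not the study's actual analysis code; the scores are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sdq_age2 = rng.integers(0, 11, size=200).astype(float)        # hypothetical scores
sdq_age45 = np.clip(sdq_age2 + rng.normal(-0.5, 2, 200), 0, 10)

r, _ = stats.pearsonr(sdq_age2, sdq_age45)                    # stability over time
t, p = stats.ttest_rel(sdq_age2, sdq_age45)                   # paired t-test
diff = sdq_age2 - sdq_age45
cohens_d = diff.mean() / diff.std(ddof=1)                     # d for paired data

n_tests = 5                                                   # e.g. one test per SDQ measure
p_bonferroni = min(p * n_tests, 1.0)                          # Bonferroni adjustment
print(f"r={r:.2f} t={t:.2f} p={p:.4g} d={cohens_d:.2f} p_adj={p_bonferroni:.4g}")
```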
For children who scored in the abnormal range at 2 years, approximately 60-70% improved to score in the normal/borderline range for most SDQ measures (57.9% total difficulties; 61.2% emotional symptoms; 65.6% peer problems; 62.2% hyperactivity-inattention; 72.5% conduct problems). A notable percentage of children who scored in the abnormal range at 2 years showed persistence in abnormal scores at 4.5 years (42.1% total difficulties; 38.8% emotional symptoms; 34.4% peer problems; 37.8% hyperactivity-inattention; 27.5% conduct problems). These results indicate four separate behavioural development profiles: children who showed no difficulties (i.e. remained in the normal/borderline range from 2 to 4.5 years), children who improved (i.e. moved from the abnormal range at 2 years to normal/borderline at 4.5 years), children who showed later difficulties (i.e. only showed abnormal scores at 4.5 years), and children who showed persistent difficulties (i.e. scored in the abnormal range at both 2 and 4.5 years). When looking at the proportions of each of these behavioural development profiles within the full study sample, approximately 80% of children showed no difficulties (83.2% total difficulties; 86.2% emotional symptoms; 80.6% peer problems; 82% hyperactivity-inattention; 80.1% conduct problems). Approximately 4-8% of the total cohort improved from 2 to 4.5 years (5.5% total difficulties; 4.1% emotional symptoms; 6.2% peer problems; 4.9% hyperactivity-inattention; 8.8% conduct problems). Of the total cohort, 7-10% showed later difficulties (7.3% total difficulties; 7.1% emotional symptoms; 9.9% peer problems; 10.1% hyperactivity-inattention; 7.9% conduct problems). Finally, approximately 3% of the overall cohort showed persistence in abnormal scores from 2 to 4.5 years (4% total difficulties; 2.6% emotional symptoms; 3.3% peer problems; 3% hyperactivity-inattention; 3.3% conduct problems). Association between behavioural stability, and sociodemographic and birth variables Refer to Table 3 for results from the chi-square tests and for proportions discussed below. All sociodemographic variables were significantly associated with SDQ stability (ps < .05), except child's gender and parity. Within the groups that showed behavioural difficulties during at least one time point (i.e. improved, later difficulties, and persistent difficulties), there was a greater proportion of children born to Māori or Pacific mothers relative to children showing no difficulties. Children with persistent difficulties had the greatest proportion of Māori and Pacific mothers. Relative to children with no difficulties, those who showed difficulties during at least one time point also had a greater proportion of mothers who were younger and less educated. Relative to children showing no difficulties, the other groups had a greater percentage of children born from unplanned pregnancies (particularly those with persistent difficulties). Children within any of the groups showing difficulties during at least one time point had a notably greater percentage of children living in highly deprived areas, relative to those with no difficulties. Children with persistent difficulties in particular had the greatest proportion living in high deprivation areas relative to other groups. Those with persistent difficulties also had a greater proportion of children living in urban areas relative to children with no difficulties.
The results from the ANOVAs showed that there was no significant difference between behavioural stability groups in either birthweight or gestational age (Table 3, ps > .05; behavioural stability profiles were based on SDQ total difficulties categorisations at 2 and 4.5 years; table note: **p < .001, *p < .05). Discussion The current study was interested in evaluating the association between behaviour at two different time points in early childhood, and examining the persistence and change in the categorisation of behavioural difficulties during early childhood. We also examined the sociodemographic characteristics of the observed behavioural development profiles. Consistent with our first hypothesis, we found that continuous measures of behaviour were moderately correlated, with all scores except emotional symptoms showing a slight but significant decrease as the children got older. In contrast, emotional symptoms showed a slight but significant increase over time. The moderate correlation between behaviour at the two time points is consistent with the work by Mathiesen and Sanson, who observed a correlation coefficient for total problems (r = 0.53) that is almost identical to the correlation coefficient for total difficulties observed in the current study. The decrease in externalising behaviour (conduct problems and hyperactivity-inattention), peer problems and total difficulties is also consistent with developmental changes associated with early childhood, and likely reflects the transient "terrible twos" [36,37]. However, while these behaviours are likely normative and temporary for most children, we were also interested in children at the extreme ends of these behavioural distributions at both time points; that is, children who likely show serious behavioural problems relative to other children of the same age. We were specifically interested in evaluating the rates of persistence and change for this categorisation of behavioural problems from ages 2 to 4.5 years. We observed that approximately 90% of those who scored in the normal/borderline range at 2 years remained within this range at 4.5 years. A small percentage of children who scored in the normal/borderline range at 2 years showed a later onset of behavioural problems by transitioning into the abnormal range at 4.5 years (7.7-11%). A notable percentage of children showed movement out of the abnormal range of behaviour at 4.5 years; 57.9% of children with abnormal scores improved their total difficulties by moving out of the abnormal range at 4.5 years. Similar percentages were found for most subscales (61-65.7%) except for conduct problems, where 72.5% of children with age 2 abnormal scores improved at 4.5 years. A higher percentage of improvement for the conduct problems subscale, relative to the other SDQ measures, is not surprising. Many of the behaviours measured by the conduct problems subscale (e.g. temper tantrums, disobedience) are behaviours that typically occur during early childhood and decrease in frequency with age [36,37]. Therefore, this improvement in conduct problems from 2 to 4.5 years may simply reflect age-related changes in behaviour. While it is encouraging to see that a substantial proportion of children showing serious early behavioural problems improved over the early childhood period, our results also indicate that many of the children displaying early behavioural problems persisted in these difficulties.
Over 40% of children with abnormal total difficulties at age 2 continued to show abnormal total difficulties at 4.5 years, with slightly lower percentages observed for most subscales (27.4-39%). These percentages are similar to what has been reported in previous research. For example, Mathiesen and Sanson found that 37% of children with behavioural problems at 18 months were also classified as having problems at 30 months [14], and Briggs-Gowan et al. found that 49.9% of children aged 12-40 months who initially showed total behaviour problems persisted in these behavioural problems 1 year later [15]. It is important to note that these children with persistent difficulties make up only approximately 3-4% of our total sample, though this is similar to the proportions of children with persistent difficulties in the studies mentioned above. Further, this proportion is to be expected, given that approximately 10% of children are categorised as showing serious behavioural difficulties at a single time point [3]. We also examined the association between preschool behavioural stability and sociodemographic factors. Our descriptive analyses indicated that, relative to children with no serious behavioural difficulties during early childhood, children who showed behavioural difficulties during at least one time point had a greater proportion whose mothers were younger, Māori and Pacific, and less educated, were more likely to live in highly deprived and urban areas, and were also more likely to be the result of an unplanned pregnancy. Those with persistent difficulties had a particularly high proportion of the aforementioned characteristics relative to other groups. Teen parenting, lack of secondary school education and high area-level deprivation have previously been identified as risk factors for vulnerability in our cohort, with greater exposure to multiple risk factors being associated with poorer health outcomes from the immediate postnatal period through to 2 years [38]. NZ studies examining ethnic disparities in preschool behavioural problems or psychosocial wellbeing are lacking, though studies with adolescent NZ samples have reported that Māori and Pacific children were more likely than NZ European children to experience behavioural difficulties (Noel et al., 2013). However, it is important to acknowledge that these results are purely descriptive in nature, and we therefore cannot make claims about ethnic differences in behavioural difficulties based on the results of this study. The association between ethnicity and behavioural difficulties is likely to be complex; however, Gillies et al. (2014) found that these ethnic disparities may be due to the influence of an accumulation of vulnerability risk factors earlier in life, including socioeconomic disadvantage and childhood trauma. In support of this, it was found that Māori and Pacific children within our cohort were more likely to be exposed to a greater number of antenatal vulnerability risk factors [38]. Therefore, the greater proportion of children with Māori and Pacific mothers among those showing behavioural difficulties likely reflects the greater exposure these ethnic groups have to socioeconomic disadvantage and early adversity. As such, results regarding ethnic differences should be interpreted with caution, and should consider the broader social and historical factors likely contributing to these differences.
The results from the current study indicate that the persistence in early childhood behavioural problems observed in American and European samples is also apparent in a NZ cohort. These results support the need for repeated screening for behavioural problems, beginning from as early as 2 years. This could be implemented by including the SDQ in the earlier health and development checks conducted in NZ, such as the 2-3 year check that is conducted just prior to the B4 School Check [20]. This would enable even earlier intervention, or at least the identification of children who show persistent difficulties at both the 2-3 year check and the B4 School Check. To inform any intervention efforts, future research should investigate family and environmental factors influencing the persistence and change in early childhood behavioural problems. It is important to note that the SDQ is appropriate as a screening instrument for behavioural difficulties, and not as a diagnostic tool. We were unable to evaluate the sensitivity or specificity of the clinical cut-offs used in this study against more formal clinical diagnoses. While we had some diagnostic information at age 2 years, very few children had received any diagnoses at this age and we do not have this information at 4.5 years. However, we may be able to investigate this in future by linking with administrative health records. The current study was also somewhat limited in its investigation of persistence and change in childhood behavioural difficulties, as we could only track the stability in behavioural problems over two DCWs. As such, change across two time points may also be due to measurement error, rather than true change. However, Growing Up in New Zealand is currently collecting data for its 8 year DCW, which includes the SDQ. Future investigation will examine the trajectories of behavioural difficulties across the 2 year, 4.5 year and 8 year DCWs, which will provide us with more insight into the stability of childhood behavioural difficulties. The work from this study will be useful in informing this future research. An additional limitation was the reduction in the representativeness of the sample as a result of attrition. Compared to the broadly generalisable original sample recruited by Growing Up in New Zealand, the children in the current study were more likely to have mothers who were European, more educated and older, more likely to come from less deprived areas and less likely to come from urban areas. However, while there is a statistically significant difference in the sociodemographic characteristics of children included and not included in the current study, the study's analytic sample still showed considerable diversity on these key sociodemographic measures and is therefore still an important and relevant resource. Conclusions Our results ultimately show that the majority of children who presented with abnormal behavioural scores at age 2 improved by 4.5 years. However, there was still a significant proportion of children with an abnormal categorisation at age 2 who persisted in their difficulties at 4.5 years. There was also a small percentage of children who initially did not show behavioural problems but were classified as having abnormal scores at 4.5 years. Further, each of these groups, but particularly those with persistent difficulties, had a larger proportion of children experiencing risk factors for vulnerability relative to children with no difficulties.
This study was intended to be descriptive in nature, and therefore does not address the complex associations between preschool behavioural stability and sociodemographic factors, though Growing Up in New Zealand aims to address this in future studies. Future research will also aim to identify proximal and distal family and environmental factors that may contribute to this persistence or change in problem behaviours. Nevertheless, findings from the current study are novel, given that, to our knowledge, we are the first to utilise the SDQ at multiple time points in the preschool period. As our results indicate that some, but not all, children who show serious behavioural difficulties persist in these difficulties across early childhood, repeated screening for behavioural problems is important. Additional file Additional file 1: Table S1. SDQ difficulties subscales and the corresponding items at ages 2 and 4.5 years.
A New Algorithm for Finding MAP Assignments to Belief Networks We present a new algorithm for finding maximum a-posteriori (MAP) assignments of values to belief networks. The belief network is compiled into a network consisting only of nodes with boolean (i.e. only 0 or 1) conditional probabilities. The MAP assignment is then found using a best-first search on the resulting network. We argue that, as one would anticipate, the algorithm is exponential for the general case, but only linear in the size of the network for poly trees. (This work has been supported in part by the National Science Foundation under grant IRI-8911122 and by the Office of Naval Research under grant N00014-88-K-0589. We wish to thank Robert Goldman for many helpful comments.) Introduction Algorithms for belief networks (Bayesian networks) are the cornerstone of many applications for probabilistic reasoning. Effective algorithms exist for calculating posterior probabilities of nodes given the evidence, even in the case of undirected cycles in the network. Some of these are based on Pearl's message passing algorithms (see [Pearl, 1988]), where some preprocessing is needed, such as clustering or conditioning. While much has been written about finding posterior probabilities of nodes, not much has been done about finding maximum probability assignments (MAPs) for belief networks. (MAP stands for "Maximum A-posteriori Probability", and we use it to refer to a complete assignment, unless specified otherwise.) One algorithm to compute MAPs is given by Pearl in [Pearl, 1988]. That algorithm, however, is rather complicated, and finding the next-best assignments with that algorithm is not as simple as with the algorithm we present. Cooper, in his PhD thesis (see [Cooper, 1984], or [Neapolitan, 1990]), performs a best-first search for a most probable set of "diseases", or causes, given the evidence. That, however, is not equivalent to calculating a complete MAP, as he assumes mutual independence of all causes (i.e. they all have to be root nodes). Peng and Reggia, in [Peng and Reggia, 1987], have defined a diagnostic problem that uses a 2-level belief network, and designed a best-first algorithm that finds hypotheses in decreasing order of probability. It is not clear, however, how their methods would extend to a general belief network, given that one of their assumptions is that all symptoms have causes (thus root nodes cannot be evidence). We propose an algorithm that transforms the belief net into a weighted boolean-function DAG, and then performs a best-first search to find the least cost assignment, which induces the MAP assignment on the belief network (it can find any next-best assignments as a natural extension). In the next section we define our transformation, and show that a minimum cost assignment for the cost-based DAG induces a MAP assignment on the belief network. In sections following that, we describe the algorithm and discuss complexity issues. We then present some experimental results of using two variants of the algorithm for limited belief networks, and conclude with a summary of our results and a discussion of future research. Belief Nets as Weighted DAGs In this section, we define weighted boolean function DAGs (WBFDAGs), and show how to represent any given Bayesian net as a WBFDAG. We assume that the Bayesian network uses only discrete random variables. We also assume, without loss of generality, that all nodes take on the same values, i.e.
values from the domain D = {L1, L2, ..., Lm}. (If this is not the case, we simply take D to be the union of all node domains. This need not be done in practice, but we use it for simplicity of presentation.) A WBFDAG is a DAG where nodes can be assigned values from some domain D'. Nodes have labels, which are functions in F, the set of all functions with domain D'^k for some k, and range D'. Formally, we define such DAGs as follows. Definition 1. A WBFDAG is a 4-tuple (G, c, r, e), where: 1. G is a connected directed acyclic graph, G = (V, E). 2. r is a function from V to F, called the label. If a node v has k immediate predecessors, then the domain of r(v) is D'^k. We use the notation r_v to denote the function r(v). 3. c is a function from V x D' to the non-negative reals, called the cost function. 4. e is a pair (s, d), the evidence; s is a sink node in G and d is the value in D' assigned to s. Definition 2. An assignment for a WBFDAG is a function f from V to D'. An assignment is a (possibly partial) model iff the following condition holds: if v is a non-root node, with immediate predecessors {v1, ..., vk}, then f(v) = r_v(f(v1), ..., f(vk)). Intuitively, an assignment is a model if the node functional constraints are obeyed everywhere in the WBFDAG. That is, each node can only assume a value dictated by the values of its parents and its label. Definition 3. A model for a WBFDAG is satisfying iff f(s) = d. Definition 4. The cost of an assignment A for a WBFDAG is the sum cost(A) = Σ_v c(v, A(v)), taken over the nodes v to which A assigns values. The Best Selection Problem (BSP) is the problem of finding a minimal cost (not necessarily unique) satisfying model for a given WBFDAG. We examine the BSP in [Charniak and Shimony, 1990]. In that paper, we proved that BSP is NP-hard. We noted there, however, that using standard best-first search, we have found minimal cost satisfying (partial) models relatively efficiently. We now show how to construct a WBFDAG from a Bayesian network, where we make the assumption that only one sink node is an evidence node, and the evidence is of the form "node assumes single value". We then show that the solution to the BSP on the WBFDAG provides us with a MAP assignment for the Bayesian network, and vice versa. Later, the above limitation on evidence nodes is relaxed. We construct the WBFDAG from the Bayesian network via a local operator on nodes and their immediate predecessors. (Henceforth, we will use the term "parents" to denote "immediate predecessors".) The domain we use for the WBFDAG is D' = D ∪ {T, F, U}. For each root node u, construct a node u' (the image of u) with |D| parents u'_i (see figure 1), with costs c(u'_i, T) set to the negative logarithm of P(u = L_i) and c(u'_i, F) = 0. The label r_{u'} of u' is defined as in figure 1; intuitively, u' takes the value L_i just in case the single parent u'_i is T and the rest are F. For non-root nodes, the construction is more complicated (consider the belief network segment of figure 2, and the corresponding WBFDAG segment of figure 3, as we describe the construction). (We use the assignment function, f, for nodes in the belief net as well as for the WBFDAG; its meaning in this case should be obvious.) For each non-root node v with in-degree k and parents U = {u1, ..., uk} in the Bayesian network, do the following: for each joint value assignment to v and its parents, construct a root "cost" node u, and a node w with parents U' (i.e. the images of the nodes in U) and u, with label function r_w such that w is T just in case u is T and the images of the parents take the corresponding values. Then construct a node v' whose parents are the nodes w constructed above. Define r_{v'}, the label of v', so that, intuitively, r_{v'} gets a non-U value just in case exactly one of the parents w is T.
We call the node v' constructed in this step the image of v (thus, we call v the inverse image of v'). In our example, the belief network segment, with nodes u1, u2, v, all 2-state nodes, as shown in figure 2, is transformed into the network of figure 3, where the probabilities used to determine the costs of the new root nodes are shown (the actual costs are the negative logarithms of the probabilities shown). (As v is a 2-state node, we do not really need all the nodes in figure 3, but we show them anyway, so that the generalization to m-state nodes is self-evident.) The evidence node in the belief network is treated as follows: set s to be the node which is the image of the evidence node, and d to the value of the evidence node. Theorem 1. All minimal cost satisfying models for the WBFDAG induce MAP assignments given the evidence on the Bayesian network. Proof outline: we show that any satisfying model for the WBFDAG induces a unique assignment to the nodes of the Bayesian network. We then show that a minimal cost satisfying model for the WBFDAG induces a maximum probability assignment for the Bayesian network. • The node s can only get a value equal to d if exactly one of its parents, w, has value T, and all others have value F. This can happen only if all the parents of each w are assigned values different from U. These parents of w are exactly the images of the parents of the inverse image of s (with one new "cost" node constructed in step 1a). Proceeding in this manner to the roots, all image nodes are assigned values in D in any satisfying model for the WBFDAG. Using exactly these values for the inverse image nodes, we get a unique assignment for the belief network. • The cost of a satisfying model is exactly the negative logarithm of the probability of the assignment it induces on the Bayesian network. To see this, consider the following property of Bayesian networks (see Pearl's book, [Pearl, 1988]): the probability distribution of a Bayesian network can be written as P(x1, ..., xn) = Π_i P(x_i | parents(x_i)). But in each layer of image nodes, we select exactly one "cost" node to be T. The cost of this node is the negative logarithm of the conditional probability of the node state of node v given the state of its parents in the model. Now since summing costs is equivalent to multiplying probabilities, the overall cost of the model is the negative logarithm of the overall probability of the induced assignment. • Finding the MAP is finding the satisfying model A that maximizes P(A | evidence). By the definition of conditional probabilities, the latter is P(evidence | A) P(A) / P(evidence), where P(evidence) is a constant (we are considering a particular evidence instance). Thus, it is sufficient to maximize the numerator. But P(evidence | A) is exactly e^{-c}, where c is the cost of the node selected in the level of the "grandparents" of the evidence node (in figure 3, if v' were the evidence node, we refer to the level of root nodes labeled with P(F | FT), etc.). The latter is true because P(evidence | A) is equal to P(evidence | A'), where A' is a partial assignment of A, which only assigns values to the parents of the original evidence node (the same values assigned to them by A). Likewise P(A) is the exponential of the negative cumulative cost selected in the rest of the WBFDAG. Since e^x is monotonically increasing in x, minimizing the cost of the assignment for the WBFDAG is equivalent to maximizing the probability of the assignment to the Bayesian network. Q.E.D.
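A minimal sketch of the cost bookkeeping this construction relies on: conditional probabilities become costs via the negative logarithm, a probability of 0 gives infinite cost (so the corresponding cost node can be omitted), and a probability of 1 gives cost 0 (so the cost node is effectively free). The CPT entries below are hypothetical, and this is not the authors' implementation:

```python
# Minimal sketch of turning belief-network probabilities into WBFDAG
# costs (negative log-probabilities), as described above. The CPT
# entries are hypothetical; this is not the authors' implementation.
import math

def prob_to_cost(p):
    """Cost of selecting the 'cost' node corresponding to probability p."""
    if p == 0.0:
        return math.inf   # never part of a minimal model: node can be omitted
    return -math.log(p)   # p == 1.0 gives cost 0, so the node is 'free'

# Hypothetical CPT for a 2-state node v with 2-state parents (u1, u2):
# keys are (u1, u2, v) value triples, values are P(v | u1, u2).
cpt = {("T", "T", "T"): 0.9, ("T", "T", "F"): 0.1,
       ("T", "F", "T"): 1.0, ("T", "F", "F"): 0.0}
costs = {assign: prob_to_cost(p) for assign, p in cpt.items()}
print(costs)

# Summing costs along a model multiplies the probabilities, because
# -log(p1 * p2 * ...) = -log(p1) - log(p2) - ..., which is why a
# minimal cost satisfying model induces a MAP assignment.
```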
We now relax the constraint on the evidence, so that the evidence can consist of any partial assignment to the nodes of the Bayesian network. Given such a presentation of evidence, we construct an extra node s (in the WBFDAG), with parents exactly the nodes assigned values in the evidence, and assign it the following label function: the node s gets value T just in case its parents are assigned values exactly as in the evidence, and value F otherwise^6. We now require that s get value T for a satisfying model (the original constraints on the values of the original evidence nodes can be removed). If the evidence is more than just an assignment of one value to each evidence node, we use the method suggested by Pearl before constructing the WBFDAG (see [Pearl, 1988]).

6 Essentially, s is now an AND node, used for AND'ing all the evidence.

Computing MAPs with WBFDAGs

In the previous section we showed how to construct a WBFDAG from a Bayesian network and evidence such that a minimal cost satisfying model for the WBFDAG induces a maximum probability given the evidence model on the Bayesian network. We now discuss an algorithm for computing MAPs using this construction, and determine its complexity relative to the complexity of the Bayesian network.

Algorithm: compute MAP given evidence e. 1. Construct the WBFDAG as in the previous section, where an extra node e is constructed with parents all the nodes in e if the evidence involves more than just one sink node. 2. Run the best-first search algorithm on the WBFDAG, where the termination condition is a satisfying model^7. 3. Determine the MAP assignment from the model from the roots down.

7 At this point we will apply standard best-first search on AND-OR trees to our WBFDAG to find the minimum cost. Since our WBFDAG is, however, an AND-XOR DAG, not an AND-OR tree, it is perhaps worth describing the best-first search technique to show why it still applies. Best-first search on AND-OR trees works by starting at the sink and constructing alternative partial solutions. Whenever an OR node with k parents is encountered, we split our partial solution into k, each one of which will contain the previous partial solution but now extended to include one of the OR possibilities. Whenever an AND node is encountered, all of its predecessors are added as things we must now handle. If we have a DAG then we must simply check, whenever a new node is added to the partial solution, that it has not been added before. If it has, it is simply not added the second time. As for the XOR nodes, in fact, best-first search is commonly used in exclusive-or situations (e.g., graph coloring, where the choice of color for a region is exclusive). Using the technique in the XOR case is simply a matter of making sure that a variable (region, or random variable) gets only one value (color, or value of the random variable). In our case this is complicated by the seeming possibility that we assign random variables v1 = T and v2 = F, whereas in our distribution we have it that v1 => v2. In fact, this cannot occur, but we omit the proof.

By letting the best-first search continue after finding the MAP, we can enumerate the assignments to the belief network in order of decreasing probability. We can see that the best-first search algorithm has to run on a graph larger than the Bayesian network, but the size of the WBFDAG is still linear in the size of the Bayesian network. It is, however, exponential in the in-degree of the nodes of the Bayesian network. If the given Bayesian network has mostly boolean valued nodes, or most of the conditional probabilities are 0 or 1, we can omit most of the construction described above, and save on the size of the WBFDAG. The savings occur because whenever we have a conditional probability of 0, the relevant u and w nodes can be omitted (as the u node has infinite cost). Whenever we have a conditional probability of 1, we can essentially omit the u node (and modify the w node label accordingly). In the extreme case, where all non-root nodes in the belief network have only boolean conditional probabilities, the construction essentially reduces to that of [Charniak and Shimony, 1990].

The best-first search will run in linear time on poly trees, assuming that the correct bookkeeping operations are made (i.e. the best assignment cost for the ancestors of a node is kept at every node, for every possible value assigned to the node). This is true because once we have these least-cost values for a node, there is no need to expand its ancestors again. In fact, Pearl's algorithm for computing MAPs relies on this property (see [Pearl, 1988]). Thus, if the poly tree belief network has only boolean distributions for all nodes, then, because the WBFDAG constructed is also a poly tree, we have an algorithm that runs in time linear in the size of the network^8. When finding next-best MAPs, however, we can no longer rely on the above property, and thus can no longer guarantee linear time.

8 Pearl's algorithm for finding MAP is also efficient (time linear in the size of the network) for poly trees. In some cases (i.e. if local best assignments also happen to be global best assignments) our algorithm will avoid many operations that Pearl's algorithm has to perform, but in general the running times will be equal.

Unfortunately, for general poly tree belief networks, once we construct the WBFDAG, we no longer have a poly tree! We can show, however, that we still have an algorithm with running time linear in the size of the network. Note that the WBFDAG is still separable into components, where the separating nodes are the images of the nodes of the original poly tree. Also, from the "cost" nodes constructed for a certain node v, only one is selected to be assigned T. Using these constraints, the algorithm still runs in time linear in the size of the belief network. Finally, our algorithm can be easily modified to compute certain partial MAPs. If we are only interested in assignments to some subset of root nodes in the belief network (the root nodes could represent diseases in medical diagnosis, for instance), all we need to do is set to 0 the costs of all root nodes in the WBFDAG that are not parents of images of root nodes.

Implementation

The algorithm has been implemented for the belief networks generated by WIMP (see [Charniak and Goldman, 1988]), where most nodes have only two states and many conditional probabilities are either 0 or 1. The results are rather optimistic, as partial MAPs were computed faster than evaluating posterior probabilities for the nodes of the same network given the same evidence. For that experiment, a very trivial admissible heuristic was used^9, and it is certainly reasonable to hope that a better admissible heuristic will improve performance even further. No conclusive timing tests have been conducted, however.

9 Whereby the cost of the complete assignment is evaluated at the cost collected until now.
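As an illustration of the best-first search of step 2 above, here is a minimal skeleton in Python (ours, not the paper's). The node-expansion rule is left abstract, because its details depend on the label functions; expand is a hypothetical helper that splits partial solutions on XOR choices, adds all parents of AND-style nodes, and skips nodes that were already assigned, i.e. the DAG bookkeeping described in the footnote above.

```python
import heapq
import itertools

def best_first_map(wbf, expand):
    """Best-first search for a minimal-cost satisfying model of a WBFDAG.

    `wbf` is a WBFDAG as sketched earlier; `expand(partial)` must yield
    (accumulated_cost, successor_partial_solution) pairs.
    """
    def closed(f):
        # every assigned node has all of its parents assigned, so the
        # label constraints can be verified all the way to the roots
        return all(p in f for v in f for p in wbf.parents.get(v, []))

    s, d = wbf.evidence
    tie = itertools.count()  # tie-breaker so the heap never compares dicts
    queue = [(0.0, next(tie), {s: d})]
    while queue:
        cost, _, partial = heapq.heappop(queue)
        if closed(partial) and wbf.is_satisfying(partial):
            return cost, partial  # minimal-cost satisfying (partial) model
        for new_cost, successor in expand(partial):
            heapq.heappush(queue, (new_cost, next(tie), successor))
    return None
```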
In WIMP, only one set of evidence is used per belief network, as networks are constructed on the fly as new evidence comes in. If we need to use the same belief net with different evidence, however, the WBFDAG can be used again (with minor changes to cater for the different evidence). It is possible that many of the best-first search computations are also re-usable, but we did not try to do that, because it was not useful for our domain. We have an improved implementation of the algorithm, where the assumption that 0 and 1 conditional probabilities abound is dropped. The implementation avoids the actual construction of the extra nodes, even though conceptually the nodes are still there. This version of the algorithm exploits cases where many adjacent entries in the conditional distribution array are equal, but not necessarily 0 or 1. Using this property, many of the (virtual) w nodes are collapsed together, and likewise the u nodes. Advantages of this method over the method described earlier in this section are that it facilitates treatment of noisy ORs and ANDs (and many other types of nodes), as well as pure ORs and ANDs. Detailed discussion of the modified algorithm is outside the scope of this paper, but see [Shimony, 1990].
2013-03-27T06:55:36.000Z
1990-07-27T00:00:00.000
{ "year": 2013, "sha1": "f92046c9b45768143885024e671ae25773fb6a58", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f92046c9b45768143885024e671ae25773fb6a58", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
38696893
pes2o/s2orc
v3-fos-license
Diagnosis of Giardia infections by PCR-based methods in children of an endemic area The present study was designed to estimate the prevalence of Giardia infection in preschool- and school-aged children living in an endemic area. Fecal samples from 573 children were processed by zinc sulfate centrifugal flotation, centrifugal sedimentation (using a commercial device for fecal concentration, the TF-Test kit®) and polymerase chain reaction (PCR)-based methods. Of the stool samples assessed, 277 (48.3%) were positive for intestinal parasites and/or commensal protozoa. Centrifugal flotation presented the highest diagnostic sensitivity for Giardia infections. The kappa index revealed that both coproparasitological techniques closely agreed on the Giardia diagnosis (86%) versus satisfactory (72%) and poor (35%) concordances for commensal protozoan and helminth infections, respectively. Concerning Giardia molecular diagnosis, from the 71 microscopy-positive samples, specific amplification of gdh and tpi fragments was noted in 68 (95.7%) and 64 (90%) samples, respectively. For 144 microscopy-negative samples, gdh and tpi gene amplification products were obtained from 8.3% and 35.9% of samples, respectively. The agreement between these genes was about 40%. The centrifuge-flotation-based method was the most suitable means of Giardia diagnosis assessed in the present study by combining accuracy and low cost.

INTRODUCTION

The protozoan Giardia duodenalis (syn. Giardia lamblia and Giardia intestinalis) stands out as the most frequent enteroparasite found in coproparasitological surveys conducted in developed and developing countries. In different populations, giardiasis is one of the most common nonviral causes of diarrhea among children, which, in turn, gives rise to such problems as malabsorption and weight loss, leading to delayed growth and development (1,2).

Until today, Giardia infection has usually been diagnosed under light microscopy by identifying either trophozoites or cysts in fecal samples. In general, the examination for Giardia is performed on a single stool sample; however, as this parasite presents a variable pattern of excretion, misdiagnoses have been common and the actual prevalence may be underestimated. In view of this limitation, repeated samplings may be necessary, preferably a minimum of three samples on three alternate days. Concentration techniques such as formalin-ether or zinc sulfate flotation have been used routinely to diagnose this infection, but, even under ideal conditions, cysts are identified in a single stool sample in only 50-70% of samples (3). Commercially available concentration assays have been included in the laboratory routine to improve the coprological diagnosis, but they still show some limitations. In Brazil, the TF-Test® (Immunoassay, Brazil) is a newly available commercial kit designed for the diagnosis of human intestinal parasites in which the three collected fecal samples are pooled, double filtered and concentrated by centrifugation (4).
Despite the wide use of microscopic examination of stool in epidemiological surveys, the actual prevalence of Giardia infection may be underestimated in communities. Therefore, the need for alternative methods has led some researchers to assess new procedures and techniques that can be efficient for this purpose. Recently, polymerase chain reaction (PCR)-based methods that present sensitivity better than or similar to microscopy for directly detecting Giardia in stool have also been described (5). In this context, the present study was designed to estimate the prevalence of Giardia infection in children at preschool and school ages in an endemic area, using as diagnostic procedures the microscopy of fecal samples processed by centrifuge-flotation and by the TF-Test assay, and PCR-based methods.

Study Population and Sample Collection

From September 2007 to April 2009, fecal samples were obtained from 573 children aged zero to 14 years in the city of Pratânia, São Paulo state, Brazil. The study was approved by the Research Ethics Committee of the Botucatu Medical School, UNESP, under the protocol number 492/2009 CEPE.

For fecal sample collection, each child received a package containing three TF-Test kit® (Immunoassay, Brazil) collection tubes (patient kit) filled with a solution of potassium dichromate (2.5%). This kit allowed the separate collection of three fecal specimens on alternate days.

Coproparasitological Analysis

Prior to the processing of samples, the three TF-Test kit® (Immunoassay, Brazil) collection tubes were vigorously agitated to achieve homogenization; from each sample, a part of the fecal material was transferred to a test tube. After this, the collection tubes were processed according to the TF-Test kit® manufacturer's instructions.

The fecal material transferred to the test tubes was washed three times with water (350 g for one minute) to remove potassium dichromate. After the washes, an aliquot of fecal sediment was submitted to concentration analysis by the conventional coprological technique of centrifuge flotation in zinc sulfate, according to the method of Sloss et al. (6). The remaining sediment was used to prepare smears stained by a modified Ziehl-Neelsen method (7) for detecting Cryptosporidium spp. and for DNA extraction.

Slides loaded with the material obtained by either the TF-Test kit® or centrifuge-flotation in zinc sulfate were examined under an optical microscope to screen for the presence of Giardia and other intestinal parasites.

DNA Extraction

Total DNA was extracted from each Giardia-positive sample using the QIAamp® Stool mini kit (Qiagen, Germany) following the manufacturer's instructions. To optimize disruption of the cysts, prior to DNA extraction, the samples were subjected to three cycles of freezing and thawing by the following steps: two cycles alternating incubation in liquid nitrogen for five minutes and thawing in a water bath at 70°C for five minutes, concluding with a cycle of freezing in liquid nitrogen for five minutes and thawing at 95°C for five minutes.

Considering the possibility of false-negative results by coprological methods, Giardia-negative samples were also processed for DNA extraction. The number of negative samples was calculated based on the prevalence data of Giardia infection in children in the municipality of Pratânia.
PCR Assays

Molecular diagnosis of Giardia was performed using two loci, the glutamate dehydrogenase (gdh) and triose phosphate isomerase (tpi) genes. The eluted DNA was submitted to a semi-nested procedure for amplification of a 432-bp region from the gdh gene according to Read et al. (8). A nested PCR reaction for amplification of a 530-bp fragment from the tpi gene was performed using a protocol described by Sulaiman et al. (9). In each reaction, negative (mix + water) and positive (DNA from axenic G. duodenalis trophozoites) controls were added. The PCR products were submitted to 1.5% agarose gel electrophoresis, stained with ethidium bromide, and the gel image was recorded under transilluminator UV light.

Statistical Analysis

The frequencies of each parasite detected by centrifuge-flotation and the TF-Test kit® were compared by the chi-square test. The diagnostic sensitivities of the techniques were individually calculated for each parasite by using the formula S(%) = (a / c) x 100, where "a" represents the number of positive cases detected by the method and "c" is the number of true positive cases. The kappa index (k) was calculated to determine the agreement among diagnoses obtained from each of the three methods. It is calculated based on the observed and expected frequencies displayed diagonally on a square table of frequencies (10), as follows: k = (P_o - P_e) / (1 - P_e), where "P_o" and "P_e" represent the observed and expected proportions of agreement, respectively. All the analyses were done using Excel and PopTools (Microsoft Co., USA).

RESULTS

Of the 573 stool samples assessed, 277 (48.3%) were positive for at least one intestinal parasite and/or commensal. The most frequent parasite detected was Cryptosporidium spp., found in 79 (13.8%) stool samples. In 49 samples, Cryptosporidium was detected as a single infection (62%) and in 30 (38%) it was found associated with a commensal or with another intestinal parasite.

Of 198 positive samples, 165 were diagnosed by centrifugal flotation and 154 by the TF-Test® kit. Most of the samples positive for Giardia (98.6%), E. vermicularis (81.8%) and E. coli (81.1%) were detected by centrifugal flotation, whereas the TF-Test® kit detected most of the E. nana (87%), B. hominis (85.7%) and T. trichiura (75%) infections, but statistical analysis revealed that the frequency of each individual parasite or commensal was similar regardless of the diagnostic method employed (P > 0.05). In light of the low prevalence of some parasites and commensals, the analytical sensitivity and agreement analyses are summarized in Table 2.

As to Giardia molecular diagnosis, out of the 71 microscopy-positive samples, specific amplification of gdh and tpi fragments was observed in 68 (95.7%) and 64 (90.0%) samples, respectively (Table 3). Not all of these samples showed PCR products for both genes. Of the 144 Giardia-negative samples by microscopy that were submitted to DNA amplification, PCR products of the expected size were generated in 12 (8.3%) and 44 (35.9%) samples, respectively, for the gdh and tpi genes (Table 3). In relation to the agreement indexes (Table 4), the lowest concordance rate (47%) was found between microscopic examination and PCR of the tpi gene. The gdh gene results revealed a kappa index of 0.79, which is interpreted as a satisfactory concordance (79%). The agreement between the two genes was only 37%. As to the correlation between microscopic examination (EM) and the molecular techniques (TM) used, the k value obtained (0.46) reflects a weak level of agreement.
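For concreteness, the two statistics defined in the Statistical Analysis section can be computed as follows. This is an illustrative sketch (our code, not the authors'), and the example numbers are hypothetical.

```python
def sensitivity(a: int, c: int) -> float:
    """S(%) = (a / c) * 100, with a = positives detected by the method
    and c = true positive cases."""
    return 100.0 * a / c

def kappa(p_o: float, p_e: float) -> float:
    """Kappa index from observed (p_o) and expected (p_e) proportions
    of agreement: k = (p_o - p_e) / (1 - p_e)."""
    return (p_o - p_e) / (1.0 - p_e)

# Illustrative only: a kappa of about 0.86 corresponds to the ~86%
# agreement reported for the Giardia diagnosis by the two methods.
print(sensitivity(65, 71))   # e.g. 65 of 71 true positives detected
print(kappa(0.95, 0.64))     # hypothetical p_o and p_e -> ~0.86
```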
DISCUSSION

The present study compared PCR-based methods with microscopy of concentrated fecal samples to assess their performance in diagnosing Giardia infection in children during an epidemiological survey in an endemic area. Giardia and other intestinal parasite infections were detected in this population by comparing the coprological diagnostic methods of centrifugal flotation and the TF-Test® assay. Centrifugal flotation provided higher diagnostic sensitivity for Giardia infections than the TF-Test® kit, a difference probably related to the principle of each method. Given that the centrifugal flotation technique is based on concentration by flotation, it is suitable for detecting lighter parasitic forms such as protozoan cysts. Despite the difference in their sensitivity for detecting Giardia, both techniques enabled the diagnosis of intestinal protozoa in the population with a high level of concordance, namely, 86% for Giardia and 72% for the commensal protozoans. The low prevalence of intestinal helminths in this population precludes further discussion on the reliability of these two methods.

As to the molecular diagnosis of Giardia infection, not all positive stool samples showed PCR products for gdh and tpi. Even though molecular techniques offer advantages over conventional methods due to their higher sensitivity and specificity, problems in PCR performance have been reported by other researchers in Giardia studies (11-15). There is a consensus among the authors that the occurrence of false-negative cases is associated with factors that may determine the absence or low concentrations of DNA in the sample to be submitted to extraction and amplification of genetic material.

The presence of inhibitors in feces appears to be one of the main factors that affect the efficiency of PCR-based techniques, resulting in non-amplification of the gene fragments. Among these inhibitors we highlight substances such as the polysaccharide complex, bile salts, bilirubins and hemoglobin degradation products (16). Besides the diversity of inhibitors in feces, the concentrations of these substances vary according to the stool, diet, gut flora and health condition of the host (16). To minimize the effect of DNA inhibitors, one alternative is using commercial kits that include extraction columns (spin columns) for purification of DNA. Extraction by these kits can reduce the amount of PCR inhibitors, as well as enable greater efficiency in obtaining the DNA sample. In the present study, despite using such a kit, the occurrence of false negatives was not eliminated, and thus the amplification of fragments corresponding to the gdh and tpi genes was observed in only 95.7% and 90% of the samples, respectively. A similar situation has been found in recent studies in which the authors report target-fragment amplification in only 70% of Giardia-positive samples (11,12).
In addition to PCR inhibitors, the low number of cysts in the stool can directly affect the concentration of the extracted DNA sample. With respect to Giardia, as the excretion of cysts is intermittent, cysts are not passed or are passed sporadically during a period of 15 to 20 days. The examination of at least three stool specimens collected on alternate days can be an alternative for minimizing the interference of this biological factor, but many times it does not avoid the problem when cysts are passed in very low numbers. Although this alternative was adopted in the present study, it was not sufficient to eliminate the false-negative results.

Besides the limitations due to the low concentration of cysts in feces, we may infer that the lysis of these forms is also a determining factor for obtaining the genomic DNA. Thus, to ensure a better yield in obtaining DNA, samples were subjected to a heat shock procedure which consists of alternating cycles of freezing in liquid nitrogen (-196°C) and thawing in a water bath at temperatures up to 95°C. This procedure produces satisfactory results, but it does not mean that 100% of cysts can be disrupted. The extent to which the goal of total cyst disruption is achieved depends on the number of cycles, the temperature for thawing, the duration of each cycle and also the number of cysts present in the sample.

The high sensitivity of PCR-based techniques requires attention in relation to the increased possibility of false-positive results. In 144 microscopy-negative samples, amplification products for the gdh and tpi genes were obtained from 8.3% and 35.9% of samples, respectively. Given that the occurrence of false-positives is frequently related to contamination by amplicons, preventive measures were adopted. Thus, the false-positive cases observed in our study could be attributed to the low concentration of cysts in stool samples, which caused the infection to be underestimated by microscopic examination.

With regard to amplification of DNA fragments by gdh and tpi, the correlation between these genes was about 40%, since not all samples were amplified by both molecular markers. Of the 214 DNA samples, amplification of gdh and tpi gene fragments was observed in 80 (37.2%) and 108 (50.2%) samples, respectively, while only in 61 samples (28.5%) was it possible to obtain amplification products of fragments of both genes. Our observations corroborate recent studies in which the authors observed differences in the performance of the commonly used markers of the gdh, tpi and 18S rRNA genes in PCR amplification (12,14). The cause of this difference is not yet known, but according to Lalle et al. (14), despite the fact that gene primers are designed to bind "conserved" regions in genes, it is possible that mismatches in the primer-binding sequences of some fecal isolates prevent successful PCR analysis.

Although molecular PCR has clearly enabled further improvements in parasite diagnosis and epidemiology, the successful application of PCR-based methods in epidemiological surveys depends on understanding the limitations and assumptions of the techniques. According to the present observations, the centrifugal flotation-based method remains the most suitable for Giardia diagnosis. Indeed, given that most endemic areas are distributed in developing countries, the diagnostic methods employed during an epidemiological survey should combine accuracy and low cost of diagnosis.

Table 1. Frequency of intestinal parasites and commensals detected by centrifugal flotation (CF) and TF-Test kit® (TF) in fecal samples from 573 children in São Paulo state, Brazil

Table 2. Sensitivity and agreement (kappa index) analyses of centrifuge-flotation (CF) and TF-Test® (TF) techniques utilized to diagnose intestinal parasites in 573 children in São Paulo state, Brazil

Table 3. Amplification of gdh and tpi Giardia genes from 215 stool samples, microscopically positive (71) and negative (144), obtained from children in São Paulo state, Brazil

Table 4. Agreement analysis (kappa index) between microscopic and molecular methods used for the diagnosis of Giardia infections
2017-09-12T18:48:28.038Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "1ce4e8f67c3ae8330d8ba0443df5a92616216bef", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.br/pdf/jvatitd/v17n2/12.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1ce4e8f67c3ae8330d8ba0443df5a92616216bef", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
261621569
pes2o/s2orc
v3-fos-license
Overexpression of glutathione synthetase gene improving redox homeostasis and chicken infectious bursal disease virus propagation in chicken embryo fibroblast DF-1 Infectious bursal disease (IBD) of chickens is an acute, high-contact, lytic infectious disease caused by infectious bursal disease virus (IBDV). The attenuated inactivated vaccine produced by DF-1 cells is an effective control method, but the epidemic protection demands of the world poultry industry remain unfulfilled. To improve IBDV vaccine production capacity and reduce the economic losses caused by IBDV in chickens, cellular metabolic engineering was performed on host cells. In this study, when analyzing the metabolome after IBDV infection of DF-1 cells and the exogenous addition of reduced glutathione (GSH), we found that glutathione metabolism plays an important role in the propagation of IBDV in DF-1 cells, and that the glutathione synthetase gene (gss) could be a limiting regulator in glutathione metabolism. Therefore, three stable recombinant cell lines, GSS-L, GSS-M, and GSS-H (gss gene overexpression with low, medium, and high mRNA levels), were screened. We found that the recombinant GSS-M cell line had the optimal regulatory effect, with a 7.19 ± 0.93-fold increase in IBDV titer. We performed oxidative stress and redox status analysis on the different recombinant cell lines, and found that overexpression of the gss gene significantly enhanced the ability of host cells to resist the oxidative stress caused by IBDV infection. This study established a high-efficiency DF-1 cell system for IBDV vaccine production by regulating glutathione metabolism, and underscored the importance of moderate gene expression regulation for virus reproduction, providing a way for rational and precise cell engineering. Supplementary Information: The online version contains supplementary material available at 10.1186/s40643-023-00665-0.

Introduction

Infectious bursal disease in chickens has been found in flocks for more than 50 years and is an acute, high-contact, lytic infectious disease caused by the infectious bursal disease virus (IBDV), a small, non-enveloped virus belonging to the genus Avibirnavirus of the family Birnaviridae (Leong 2000). IBDV also exacerbates other viral infections and causes huge economic losses to the world poultry industry (Müller et al. 2003). An attenuated inactivated vaccine is an effective control method for this disease, and DF-1 cells are useful for IBDV vaccine production (Choi et al. 2020). We found that the process of IBDV propagation in DF-1 cells affects the metabolism of the host cells from the onset of infection (Lin et al. 2020). There were mainly eight cell metabolic pathways that changed (Rodrigues et al. 2013): amino acid catabolism, carbohydrate catabolism and the integration of energy metabolism, nucleotide metabolism, the pentose phosphate pathway, polyamine biosynthesis, lipid metabolism, and glutathione metabolism. Here, we found that the changes in glutathione metabolism in DF-1 cells caused by IBDV invasion play an important role in virus reproduction.
The altered glutathione metabolism of host cells is associated with intense oxidative stress during virus infection, mainly manifested as an imbalance between oxidative and antioxidant effects in vivo. Oxidants and hyperoxia radicals, especially reactive oxygen species (ROS), are the main agents acting in oxidative stress and have an important role in the pathogenesis of various infectious diseases. There are two types of antioxidant systems in cells. One is the enzymatic antioxidant system, with superoxide dismutase (SOD) as the main marker (He et al. 2016), and the other is the non-enzymatic antioxidant system, with glutathione as the main marker (Espinosa-Diez et al. 2015). Glutathione is available in both reduced (GSH) and oxidized (GSSG) forms. The GSSG/GSH ratio can reflect the redox state, since GSH is converted into GSSG when cells undergo oxidative stress. A controlled intracellular glutathione redox cycle is a guarantee for maintaining a favorable intracellular redox state (Schafer and Buettner 2001). NADP+/NADPH is a cofactor pair that provides active sources of protons and electrons and is closely linked with GSSG/GSH. GSH, together with NADPH and related enzymes, forms a complex antioxidant network that is involved in maintaining the redox state of the organism (Ouyang et al. 2018; Ye et al. 2015; Lu and Holmgren 2014).

Glutathione, the main substrate of cellular resistance to oxidative stress, is metabolized by several enzymes (Tsugawa et al. 2019), among which glutathione synthase (GSS) catalyzes the synthesis of reduced glutathione from γ-glutamylcysteine and glycine in an ATP-dependent manner (Njalsson et al. 2001); the activity of GSS is linearly correlated with the intracellular content of GSH (Dickinson and Forman 2002). Moore et al. recombinantly expressed GSS and γ-glutamylcysteine synthetase (γ-GCS) proteins in E. coli, which significantly promoted glutathione concentrations, and hypothesized that GSH synthesis in mammals facilitates cellular resistance to toxic substances (Moore et al. 1989). Volohonsky et al. extracted GSS and γ-GCS proteins from organs or cells such as murine liver and kidney and found that γ-GCS was feedback-inhibited by GSH, while GSS was a non-restricted enzyme, not feedback-inhibited by GSH (Volohonsky et al. 2002). GSS can modulate the GSH redox system by increasing GSH synthesis to resist oxidative stress in harsh environments (Zhu et al. 1999; Li et al. 2006). Despite the low amino acid sequence homology of GSS among different species, these proteins have important roles in cell growth (Jez 2019). Therefore, overexpression of GSS has the potential to regulate intracellular GSH concentration to promote cell growth and viral productivity.

In this study, we analyzed the metabolome of DF-1 cells after IBDV infection, and explored the importance of the glutathione pathway by exogenous addition of GSH. We then overexpressed the gss gene, screened recombinant DF-1 cell lines with different gene expression levels, and further evaluated the virus reproduction capacity and redox status of the recombinant cell lines after IBDV infection.
DF-1 cells and IBDV culture

Routine culture of adherent DF-1 cells was performed in Nunc EasyFlask 25 cm² flasks (Thermo Scientific) with 5 ml DMEM/F12 (1:1) supplemented with 5% fetal bovine serum (Biological Industries) in a humidified incubator at 37°C with 5% CO2. Cell number and viability were determined using Countstar (ALIT Life Science), an automated trypan blue cell counter. The specific growth rate μ (h⁻¹) was calculated as μ = (ln X2 - ln X1) / (t2 - t1), where t1 and t2 were the culture times (h), and X1 and X2 were the corresponding cell concentrations (cells/ml).

IBDV multiplied in DF-1 cell culture provided by our lab was used throughout this study. IBDV was inoculated into DF-1 flasks when cells reached a confluence of 90% (about 36 h of DF-1 growth) and harvested when 80% of cells showed lesions (about 36 hpi after IBDV infection), and virulence was then determined using TCID50 as stated in the previous report (Lin et al. 2020). The relative titer was calculated as relative titer = TCID50,sample / TCID50,control, where TCID50,control is the average virus titer at 36 hpi in the control group, and TCID50,sample is that of the recombinant group.

Construction of gss-overexpressing DF-1 cell line

EcoR I and Xba I were the digestion sites at the 5' and 3' ends of the chicken gss gene (NCBI number XM_425692.5) in the pCI-neo vector (Additional file 1: Fig. S1A). DF-1 cells were seeded in 24-well plates at a concentration of 3 × 10⁵ cells/well 16 h prior to transfection so as to be 70-90% confluent. Plasmid DNA (1 μg) expressing the gss gene was transfected into DF-1 cells mediated by Lipofectamine 3000 (Invitrogen) according to the manual. Fresh medium with 800 μg/ml G418 (Sigma) was replaced every 48 h until cell growth was stable and the cells without transfection were dead. Cell clones were screened in 96-well plates. DF-1 cells expressing lacZ were set as the control.

RNA isolation and quantitative RT-PCR

Total RNA was isolated from DF-1 cells using the TRIzol extraction method as described previously. Purified RNAs were eluted using 20 μl RNase-free water and stored in a -80°C freezer. The quality and quantity of RNA were evaluated using a spectrophotometer (NanoDrop 2000, Thermo Scientific). For cDNA synthesis, RNA was reverse-transcribed using the First Strand cDNA Synthesis kit (Thermo Scientific) according to the manufacturer's instructions with Oligo(dT)18 primers after RNase-free DNase treatment. For gene expression analysis, the sequences of the forward and reverse primers used to amplify chicken gss, gsr, ggt, sod2, and the housekeeping gene β-actin were designed by NCBI blast (Table 1). The cDNA samples were amplified in triplicate by real-time qPCR using TB Green Premix Ex Taq II (Takara) and the CFX96 Touch Real-time PCR Detection System (Bio-Rad Laboratories, Inc.). Gene expression levels were estimated based on PCR efficiency and the threshold cycle (Ct) deviation of an unknown sample vs. a control.

Table 1 The primers used in this study
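As a worked illustration of the three calculations just described (specific growth rate, relative titer, and Ct-based expression), the sketch below is ours, not the authors'. The relative-titer ratio is our reading of the definition above, and the 2^-ΔΔCt function is the simple special case assuming ~100% PCR efficiency, whereas the authors use an efficiency-aware estimate; all numbers are hypothetical.

```python
import math

def specific_growth_rate(x1: float, x2: float, t1: float, t2: float) -> float:
    """mu (h^-1) = (ln X2 - ln X1) / (t2 - t1), as defined above."""
    return (math.log(x2) - math.log(x1)) / (t2 - t1)

def relative_titer(tcid50_sample: float, tcid50_control: float) -> float:
    """Fold change of the sample titer over the 36-hpi control titer."""
    return tcid50_sample / tcid50_control

def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """2^-ddCt estimate of relative mRNA level vs. the control sample,
    with beta-actin as the reference gene and ~100% efficiency assumed."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical values for illustration:
print(specific_growth_rate(3e5, 1.2e6, 0.0, 36.0))  # cells grown 0-36 h
print(relative_titer(10**7.2, 10**6.3))             # ~7.9-fold increase
print(relative_expression(22.1, 18.0, 25.3, 18.1))  # ~9-fold upregulation
```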
Western blot

Cell protein extracts (10 μg) from control DF-1 cells and gss-overexpressing cells were prepared with RIPA Lysis Buffer (Beyotime), and their total protein concentrations were determined using the Pierce™ BCA Protein Assay Kit (Thermo Scientific). Samples were subsequently subjected to SDS/PAGE on 7.5% (w/v) polyacrylamide gels. Proteins were transferred onto a nitrocellulose membrane. After blocking in QuickBlock™ Blocking Buffer for Western Blot (Beyotime), membranes were probed with anti-GSS antibodies (ProteinTech; diluted 1:1000) and detected with HRP-labeled goat anti-rabbit IgG (Abcam; diluted 1:2000) after washing. GAPDH was used as a control.

Redox-state analysis

DF-1 cells were lysed by the freeze-thaw method, and reduced/oxidized glutathione (GSH/GSSG) measurement was performed using a GSH and GSSG Assay Kit (Beyotime) according to the manufacturer's protocol. NADP+ and NADPH levels of cells were detected with the EnzyChrom™ NADP+/NADPH Assay Kit (BioAssay Systems) according to the manufacturer's protocol. Intracellular ROS levels were measured according to the manufacturer's protocol of the Reactive Oxygen Species Assay Kit (Nanjing Jiancheng Bioengineering Institute).

Statistical analysis

All experiments were repeated at least three times. The statistical significance of variables was evaluated by applying analysis of variance (ANOVA) using Student's t test. A p-value less than 0.05 was considered statistically significant and was indicated by an asterisk in the figures. Data were reported as mean ± standard deviation.

Effect of glutathione metabolic pathway on IBDV replication based on metabolomics analysis

In the metabolomic study after IBDV infection of DF-1 cells, intracellular metabolite intensities of DF-1 cells were examined at 0, 6, 12, 18, and 36 hpi after IBDV inoculation, with three parallels set at each time point, covering the processes of IBDV infection, replication, assembly, and secretion (Lin et al. 2020). By metabolic pathway topology analysis on MetaboAnalyst, significant changes in the glutathione metabolic pathway were observed during viral multiplication (Fig. 1A). The intensities of intracellular metabolites associated with the glutathione metabolic pathway in DF-1 cells inoculated with IBDV were generally higher than those in DF-1 cells without IBDV inoculation (Fig. 1B-F). It is hypothesized that the upregulation of the glutathione metabolic pathway may facilitate the intracellular propagation of IBDV in DF-1 cells.

Effect of exogenous addition of GSH on IBDV multiplication in DF-1 cells

To investigate the effects of glutathione, 0 mM, 0.3 mM, 0.6 mM, and 1.2 mM of GSH were added at 0 h and 24 h during DF-1 cell growth, and then DF-1 cell growth and IBDV multiplication were examined (Fig. 2). The results showed that GSH could inhibit the early growth of DF-1 cells. Specifically, the exogenous addition of 0.3 mM and 0.6 mM GSH had a significant promotion effect on the growth of DF-1 cells when the cells grew to 24 h and started to enter the logarithmic growth phase (Fig. 2A, B).
In addition, the addition of 0.6 mM GSH significantly promoted the yield of IBDV, 0.3 mM GSH had the second highest effect, while 1.2 mM GSH inhibited the propagation of IBDV. Moreover, the growth state of the cells at the time of GSH addition also significantly affected IBDV propagation: the IBDV titers obtained by adding GSH at the early growth stage of DF-1 cells were significantly lower than those obtained by adding GSH after the DF-1 cells entered the logarithmic growth stage (Fig. 2C). This indicates that the influence of exogenous GSH addition on IBDV propagation in DF-1 cells is not only concentration-dependent, but also time-dependent.

The limitation of glutathione metabolism in DF-1 cells infected by IBDV

To explore the limitation of intracellular GSH concentration in IBDV-infected DF-1 cells, we examined the mRNA levels of the enzymes in the related pathways, including GSH synthase (GSS) involved in GSH production, γ-glutamyltransferase (GGT) involved in GSH consumption, and glutathione reductase (GSR), which mediates the interconversion between GSH and GSSG (Fig. 3A). Compared to the control not infected by IBDV, the transcript levels of the ggt and gsr genes were significantly increased by 2.43 ± 0.16-fold and 1.82 ± 0.08-fold, respectively, while the transcript level of gss did not significantly change, indicating that the GSH utilization pathway was significantly elevated, while the GSH synthesis pathway was not significantly changed (Fig. 3B). During the antiviral oxidative stress response in mammalian cells, it is mainly GSH that plays the antioxidant role. Therefore, enhancing the synthesis pathway of GSH by overexpressing the gss gene could potentially improve the viral multiplication capacity of DF-1 cells.

Effect of GSS overexpression on DF-1 cell growth and IBDV propagation

To investigate the effect of gss gene expression on IBDV propagation in DF-1 cells, recombinant monoclonal DF-1 cell lines overexpressing the gss gene were constructed and obtained by screening. According to the gss gene expression from low to high determined by RT-qPCR and Western blot, three recombinant monoclonal DF-1 cell lines, GSS-L, GSS-M, and GSS-H, were selected for the subsequent study (Fig. 4A, B).
The gss gene transcript levels of the recombinant GSS-L, GSS-M, and GSS-H cell lines were increased by 3.00 ± 0.12, 9.29 ± 0.17, and 21.23 ± 2.05-fold, respectively, and the results of Western blot analysis were similar to the RT-qPCR results.

Fig. 2 A DF-1 cell growth curve from 0 to 36 h with 0, 0.3, 0.6, or 1.2 mM GSH added at 0 h. B DF-1 cell growth curve from 24 to 36 h with 0, 0.3, 0.6, or 1.2 mM GSH added at 24 h. C The relative IBDV titers from DF-1 cells treated with different GSH concentrations at 0 h and 24 h. The groups without GSH supplementation were set as the control. N = 3 biological replicates and error bars represent s.d. Asterisks "*" present the differences between the control group (without GSH in medium) and the experimental groups (with 0.3, 0.6, 1.2 mM GSH in media). *p < 0.05, **p < 0.01, and ***p < 0.001 as determined by two-tailed t test.

When we considered DF-1 cell growth and IBDV propagation, the results showed differences among the three cell lines. Compared to the control cell line, the growth of the recombinant GSS-L, GSS-M, and GSS-H cell lines was effectively promoted, attaining 1.50 ± 0.02, 1.18 ± 0.004, and 1.42 ± 0.06-fold higher maximum cell density and 1.33 ± 0.04, 1.76 ± 0.09, and 1.24 ± 0.01-fold higher maximum specific growth rate, respectively (Table 2; Additional file 1: Fig. S1B). And compared to control cells, the IBDV titers of the recombinant GSS-L, GSS-M, and GSS-H cell lines were increased by 1.74 ± 0.50, 7.19 ± 0.93, and 0.96 ± 0.32-fold, respectively (Fig. 4C), indicating that moderate overexpression of the gss gene contributed to DF-1 cell growth and IBDV propagation.

Effect of overexpression of gss gene on redox status in DF-1 cells

ROS is an important indicator to evaluate the cellular oxidative stress response. The intracellular ROS concentrations of the three recombinant cell lines decreased by 80.62 ± 0.96%, 94.97 ± 0.38%, and 84.12 ± 1.25%, respectively, before infection by IBDV (Fig. 5A). The ROS concentration of the GSS-M cell line decreased the most significantly, indicating that moderate overexpression of the gss gene was able to effectively reduce intracellular ROS levels. Moreover, the intracellular concentrations of the related metabolites GSH and GSSG in GSS-M cells were higher than those in control cells at 0 hpi. Therefore, glutathione metabolism in the GSS-M cell line could be more active than that in the control cell line (Fig. 5B, C). The superoxide dismutase SOD2 is required for the protection of cells from the toxicity of ROS generated during metabolism. The mRNA levels of the antioxidant gene sod2 were decreased in all three recombinant cell lines compared to the control cell line before IBDV infection, with the most pronounced decrease in the recombinant GSS-M cell line (Fig. 5E).

Effect of overexpression of gss gene on cellular redox status in DF-1 cells after IBDV infection

After IBDV infection (6 hpi), the cellular redox status of DF-1 cells changed. All the cells increased their intracellular concentrations of GSH and GSSG. The GSSG/GSH (Fig. 5D) and NADP+/NADPH ratios (Fig. 4H)
of control cells increased significantly, while in the recombinant cell lines overexpressing the gss gene, the GSSG/GSH and NADP+/NADPH ratios remained relatively stable. Moreover, the transcript level of the sod2 gene decreased by 45.12 ± 4.72% in control cells, while it decreased by only 8.59 ± 1.11% in GSS-L and increased by 26.66 ± 8.87% and 13.23 ± 3.82% in GSS-M and GSS-H, respectively, at 6 hpi (Fig. 5E). The transcript levels of the antioxidant gene sod2 in all three recombinant cell lines overexpressing the gss gene were higher than those in the control cell line. Therefore, overexpression of the gss gene strengthened both the cellular enzymatic and non-enzymatic antioxidant systems when the cells were fighting the intense oxidative stress induced by IBDV.

Discussion

The process of viral infection can induce oxidative stress in host cells. A large number of in vitro virus infection experiments showed that severe oxidative stress occurs in host cells after infection with HIV (Thangavel et al. 2018), hepatitis C virus (Ríos-Ocampo et al. 2019), herpes simplex virus type 1 (Kristen et al. 2018), Sendai virus (Han et al. 2019), and influenza virus (Cai et al. 2003). In this study, by analyzing the metabolome changes after IBDV infection of DF-1 cells, we found a significant increase in the intensities of relevant metabolites in the glutathione metabolic pathway, indicating that IBDV infection of DF-1 cells modulated the glutathione metabolic pathway to enhance cellular resistance to the oxidative stress provoked by viral invasion. Among the indicators, ROS is the main marker of oxidative stress, and the production of excessive ROS overwhelms the glutathione antioxidant regulatory system and is accompanied by a significant decrease in the intracellular NADP+/NADPH ratio (Morris et al. 2013), which unbalances the redox state. However, IBDV does not mediate lesions in DF-1 cells immediately upon infection. DF-1 cells did not show any significant changes in cell morphology at 6 hpi, while lesions started to develop after 12 hpi (Additional file 1: Fig. S2A). Raymond Hui and Frederick Leung also found that IBDV started to replicate at 6 hpi after caIBDV infection of DF-1 cells, and virus particles formed at 12 hpi (Hui and Leung 2015). Moreover, we found that cellular activity began to decline significantly at 12 hpi, when the oxidative and antioxidant effects of the cells were completely imbalanced, while the intracellular redox state was still able to maintain relative homeostasis at 6 hpi (Additional file 1: Fig. S2B).

The maintenance of temporary redox homeostasis in DF-1 cells at the time of IBDV invasion facilitated the propagation of the virus at a later stage. In another study, we found that delaying IBDV-induced DF-1 cell death ultimately resulted in higher IBDV titers (Lin et al. 2020). Zhao et al. found that the induced formation of stress granules (SGs), mRNA storage complexes that play an important role in the innate immune response of host cells, significantly promoted IBDV replication in host cells (Zhao et al. 2020).
Oxidative stress has been proven to be an inducer of SG formation, but excess ROS inhibits SG formation. Therefore, maintaining a certain concentration of intracellular GSH at the time of IBDV invasion can mitigate the damage caused by excess ROS. In addition, DF-1 cells tried to repair the imbalance of the redox state caused by the intense oxidative stress response triggered by viral infection by moderately increasing the GSH concentration in host cells, mitigating the onset of apoptosis and maintaining the replicative environment and persistence of IBDV; this phenomenon was also found in HCV infection of Huh7.5 cells (Anticoli et al. 2019; Vasallo and Gastaminza 2015).

Endogenous overexpression of GSS to an appropriate degree likewise enhances the ability of DF-1 cells to cope with oxidative stress. In this study, we obtained three recombinant cell lines, GSS-L, GSS-M, and GSS-H, with low to high levels of gss gene overexpression by screening. We found that the intracellular ROS concentrations of all three recombinant cell lines decreased significantly compared to the control cell line before IBDV infection (0 hpi), and the recombinant GSS-M cell line showed the most significant decrease, 94.97% lower than the control (Fig. 4A). Therefore, moderate overexpression of the gss gene could effectively reduce the intracellular ROS level and improve the capacity for dealing with oxidative stress, thereby contributing to cell growth to some extent. Although there was no significant correlation between cell growth status and gss gene overexpression level, the maximum cell density and maximum specific growth rate, the two main cell growth characteristics, were significantly higher in the overexpressing cell lines than in the control cell line (Table 2).

Overexpression of the gss gene facilitated the transient maintenance of intracellular redox homeostasis when DF-1 cells were subjected to IBDV infection, providing a favorable environment for viral replication. The control DF-1 cells encountered an excessive oxidative stress response and a significant imbalance in the cellular redox state, with significantly higher GSSG/GSH and NADP+/NADPH ratios at 6 hpi. The significantly elevated GSSG/GSH and NADP+/NADPH ratios of the control cell line indicated that the cells were unable to effectively regulate the excessive oxidative stress triggered by viral replication; the oxidative and antioxidant effects were imbalanced in vivo, with the cells favoring the oxidative state. In contrast, the recombinant GSS-M cells increased their intracellular GSH and GSSG concentrations significantly (Fig. 4B, C), indicating that cellular glutathione metabolism was enhanced and maintained relatively stable GSSG/GSH (Fig. 4D) and NADP+/NADPH (Fig. 4H) ratios. Meanwhile, the transcript levels of the antioxidant gene sod2 were higher in all three recombinant cell lines overexpressing the gss gene than in the control cell line at 6 hpi (Fig. 4E). Therefore, overexpressing gss enhanced both the cellular non-enzymatic and enzymatic antioxidant systems against the oxidative stress induced by IBDV, maintaining redox homeostasis.
Conclusions

In this study, by analyzing the changes in the metabolome after IBDV infection of DF-1 cells, we identified the important role of glutathione metabolism in virus multiplication and suggested that the gss gene might be a restrictive regulator in glutathione metabolism, based on the transcript levels of the key glutathione-related enzymes. By exogenous addition of GSH and endogenous overexpression of the gss gene, we demonstrated that appropriately increasing the concentration of GSH in DF-1 cells is beneficial for improving the antioxidant stress capacity of the cells and maintaining temporary redox homeostasis at the initial stage of IBDV infection, thus improving viral proliferation at the later stage. This study provides an effective method to improve IBDV vaccine production capacity, and sheds light on cell engineering for vaccine processes that benefit from enhancing host cell resistance to oxidative stress during viral infection.

Table 1 (residual entries, primer sequences):
AAG GAG CAG GGA CGT CTA CA (97 bp)
qSOD2-R: CCC ATA CAT CGA TTC CCA GCA
GSS-F (EcoR I): ATT GAA TTC TTA GCT ATT GTC CAA TCG CCG (1591 bp)
GSS-R (Xba I): GCT CTA GAG CCC AAC AAA TGC AAA ACC ATT G

Fig. 1 The relative intensity of metabolites in glutathione metabolism in DF-1 cells with IBDV incubation. A Topological analysis of the metabolic profiles in DF-1 cells induced by IBDV. B-F Show a significant increase in the intracellular intensities of GSH, GSSG, cysteinylglycine, cysteine-glutathione disulfide, and 5-oxoproline in DF-1 cells infected with IBDV (IBDV, open circle) for 12, 18 and 36 h compared with uninfected controls (no IBDV, solid square). Raw area counts from three independent experiments performed in triplicate (N = 3) were normalized to protein levels. Error bars show ± s.d. of the mean.

Fig. 3 The relative mRNA levels of the key enzymes in GSH-related pathways. A Glutathione metabolic pathway diagram according to the metabolome. B The relative mRNA levels of the ggt, gss, and gsr genes in DF-1 cells with or without IBDV infection. N = 3 biological replicates and error bars represent s.d. Asterisks "*" present the differences between the control group (without IBDV) and the experimental groups (with IBDV).

Fig. 5 (See legend on previous page.)

Table 2 Growth of DF-1 cell line overexpressing the gss gene. The calculated mean was for triplicate measurements from two independent experiments ± s.d. and compared between the experiment group and the control group. Statistical differences were calculated using the two-tailed Student's t-test in the software IBM SPSS Statistics 24.
2023-09-10T13:11:42.383Z
2023-09-09T00:00:00.000
{ "year": 2023, "sha1": "dc0bfe32ed99a128a7954472bfea22a667ebf8cb", "oa_license": "CCBY", "oa_url": "https://bioresourcesbioprocessing.springeropen.com/counter/pdf/10.1186/s40643-023-00665-0", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "f1e8ceee33dcf139adadecc8877e5acfcd6fe9e7", "s2fieldsofstudy": [ "Biology", "Medicine", "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
195797950
pes2o/s2orc
v3-fos-license
Linkage between fecal androgen and glucocorticoid metabolites, spermaturia, body weight and onset of puberty in male African lions (Panthera leo) There is limited physiological information on onset of puberty in male lions. The aim of this study was to use longitudinal non-invasive monitoring to: 1) assess changes in steroid metabolite excretory patterns as a function of age and body weight; 2) determine correlations between fecal androgen (FAM) and glucocorticoid (FGM) metabolite concentrations; and 3) confirm spermiogenesis non-invasively through urinalysis. Specifically, FAM and FGM metabolites were analyzed in samples collected twice weekly from 21 male lions at 17 institutions (0.9-16 years of age) for 3.8 months to 2.5 years to assess longitudinal hormone patterns. In addition, body weights were obtained approximately monthly from 10 individuals at five zoos (0.0-3.0 years), and urine was collected from six males at two facilities (1.2-6.3 years) and evaluated for the presence of spermatozoa. An increase in overall mean FAM occurred at 2.0 years of age, at which point concentrations remained similar throughout adulthood. The onset of puberty occurred earlier in captive-born males (<1.2 years of age) compared to wild-born counterparts (<2.5 years of age). Additionally, males in captivity gained an average of 7.3 kg/month compared to 3.9 kg/month for wild males over the first 2-2.5 years of age. Sperm (spermaturia) was observed in males as young as 1.2 years in captivity compared to 2.5 years in the wild (ejaculates). There was no difference in FAM or FGM concentrations with regards to age or season. Overall, this study demonstrates that: 1) captive male lions attain puberty at an earlier age than wild counterparts; 2) onset of puberty is influenced by body weight (growth rate); and 3) spermiogenesis can be confirmed via urinalysis. Knowledge about the linkage between body weight and onset of puberty could facilitate improved reproductive management of ex situ populations via mitigating the risk of unintended breedings in young animals.

Introduction

African lions (Panthera leo) have been kept in menageries and for public display since the Roman Era, and in the 21st century an estimated 750 lions were maintained in captivity globally [1]. Lions historically have bred well in captivity compared to other felids [2-6], limiting the incentive for conducting biological research. Most studies are based on behavioral observations of wild lions [7,8]; few included biological data [9-11]. Consequently, little is known about longitudinal gonadal and adrenal hormonal patterns or how they are associated with the onset of puberty in male lions.

Puberty is defined as the process of acquiring reproductive competence [12]. In males, puberty is characterized by the development of secondary sexual characteristics, display of mounting behaviors, appearance of sperm in urine or ejaculate, and/or ability to ejaculate [12]. In addition to genetic and environmental factors, there is increasing evidence that nutritional status during early development influences maturation of the hypothalamic-pituitary-gonadal axis, resulting in earlier onset of puberty and sexual maturity. Extensive literature is available on the onset of puberty in livestock species, including cattle [13,14].
Among wildlife species, onset of puberty has been studied in species such as raccoons (Procyon lotor) [15], baboons (genus Papio) [16], wild boars (Sus scrofa) [17], American black bears (Ursus americanus) [18], polar bears (Ursus maritimus) [19], cheetahs (Acinonyx jubatus) [20], and the European badger (Meles meles) [21].

Previous studies have characterized lions into reproductive life stages using behaviors and social dynamics. The average ages across these studies for wild male lions are: cub (0-2.2 years), subadult (2.2-4.5 years), adult (4.5-10 years), and aged (>10 years) [11,22-24]. Yet, these distinctions are more a function of social structure and morphology than reproductive capability. Using behaviors and dimorphic physiological changes, most sources agree that puberty occurs in male cubs at or after 2.2 years of age (~26 months) [10,25], which corresponds to the approximate age that they are expelled from their natal pride [7,22,26]. By contrast, in most zoos, young males are housed together and recommended for breeding when they are over 3 years of age [27]. The age span of males that have successfully bred in captivity ranges from <2 to 15.1 years [28]. Furthermore, lions in captivity appear to gain body weight faster than their wild counterparts [10], but there is no physiological or hormonal evidence to determine if the animals attain puberty earlier. Therefore, the objectives of this study were to: 1) use longitudinal non-invasive monitoring to assess changes in steroid metabolite excretory patterns as a function of age and body weight; 2) determine correlations between fecal androgen (FAM) and glucocorticoid (FGM) metabolite concentrations; and 3) confirm spermiogenesis non-invasively through urinalysis.

Ethics statement

Animal-related protocols were conducted with the approval of the Smithsonian National Zoological Park's Animal Care and Use Committee. Routine noninvasive collection of fecal and urine samples did not warrant additional institutional Animal Care and Use Committee approvals nor did it affect the daily routine of animals used in this study.

Study animals

A total of 26 male African lions located at 18 AZA-accredited institutions, ranging in age from <1 month to 16.0 years of age, were included in the study (Table 1). Twenty-one animals were used for reproductive (hormone) assessment, 11 males were used for body weight measurements, and six males were used for urinalysis. Males were divided into four reproductive life stage age groups: peripubertal, 0.91-1.99 years (n = 5); subadult, 2-2.99 years (n = 5); adult, 3-10.99 years (n = 12); and aged, >11 years (n = 5). In instances where the data for an individual spanned more than one age group, separate values were calculated for each age group. On average, animals were fasted <1 day/week and provided bones 1.5 days/week. Forty-four percent of the population was fed solely horsemeat (Nebraska Brand, Central Nebraska Packing, Inc., North Platte, NE) while the remaining animals were fed either beef or a combination of the two. All individuals had ad libitum access to water. Furthermore, all animals in the study were allowed daily outside access for at least 1 hour, as long as temperatures remained above freezing.

Sample collection and processing

Sample collection. On average, fecal samples were collected 2x/week for at least 3.8 months and as long as 2.5 years (Table 1, N = 21) and frozen within 24 hours in plastic zip-top bags.
Sample collection and processing
Sample collection. On average, fecal samples were collected 2x/week for at least 3.8 months and as long as 2.5 years (Table 1; N = 21) and frozen within 24 hours in plastic zip-top bags. Urine samples (>1 ml; N = 6) for spermatozoal assessment were aspirated opportunistically from concrete flooring for six male lions at two facilities: a 6.3-year-old proven male that served as a positive control; three males ranging in age from 1.4-1.5 years; and pooled samples from a pair of siblings, 1.2 years old. Urine samples were placed in polypropylene tubes and frozen within 12 hours of being voided.

Fecal processing and steroid hormone extraction. Fecal samples were processed and hormone metabolites extracted as previously described [29]. Briefly, samples were dried in a lyophilizer (VirTis Ultra 35XL, SP Scientific, Warminster, PA), powdered, sifted, and 0.20 ± 0.02 g was weighed into 16 x 125 mm glass tubes (Fisherbrand, Thermo Fisher, Pittsburgh, PA). Five ml of 90% ethanol:10% de-ionized water was added to each sample, along with ~20,000 dpm of 3H-cortisol tracer (NEN Radiochemicals, Perkin Elmer, Boston, MA) to determine procedural loss. Samples were boiled in a 95°C water bath for 20 minutes and maintained at a 5-ml volume with the addition of 100% ethanol as needed. Samples were then centrifuged at 500 x g for 20 minutes (Sorval RC 3C Plus, Kendro Laboratory Products, Newtown, CT), the supernatant was recovered, and 5 ml of 90% ethanol:10% de-ionized water was added to the pellet, which was vortexed (pulse rate 1/second, speed 65; Glas-Col, Terre Haute, IN) for 30 seconds. Samples were centrifuged again (15 minutes, 500 x g), and the supernatants were combined and dried down under forced air. One ml of 100% methanol was then added to the dried sample extracts, evaporated to dryness, and the residue was reconstituted in 1 ml of preservative-free buffer (0.2 M NaH2PO4, #S8282; 0.2 M Na2HPO4, #S7907, Sigma Aldrich, St. Louis, MO; 0.15 M NaCl, #S271, Fisherbrand; pH 7.0). After vortexing for 15 seconds, samples were placed in an ultrasonic cleaner water bath (Cole Parmer Instrument Company, Vernon Hills, IL) for 15 minutes. Sample extracts were further diluted in buffer as needed: 1:10-1:50 for glucocorticoids and 1:50-1:250 for androgens. All sample extracts and dilutions were stored in polypropylene tubes at -20°C until analysis. Average recovery of the 3H-cortisol tracer was 82 ± 0.26% (mean ± standard error of the mean; SE).
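The paper does not state how this tracer recovery entered downstream calculations, but a common use of a radiolabeled tracer is exactly that: divide each measured concentration by the sample-specific fraction of tracer recovered. A minimal sketch under that assumption (the function names are ours):

```python
def extraction_recovery(counted_dpm: float, added_dpm: float = 20_000.0) -> float:
    """Fraction of the 3H-cortisol tracer surviving extraction (study mean: 0.82)."""
    return counted_dpm / added_dpm

def recovery_corrected(measured_ug_per_g: float, recovery: float) -> float:
    """Scale a fecal metabolite concentration up for procedural loss."""
    if not 0.0 < recovery <= 1.0:
        raise ValueError("recovery must lie in (0, 1]")
    return measured_ug_per_g / recovery

rec = extraction_recovery(16_400.0)             # 0.82
print(round(recovery_corrected(0.29, rec), 3))  # 0.354 ug/g
```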
High pressure liquid chromatography (HPLC). High pressure liquid chromatography (Varian ProStar; Varian Analytical Instruments, Lexington, MA) of pooled fecal extracts was performed to characterize steroid hormone metabolites, similar to that described previously [30]. Briefly, six fecal samples were extracted and the supernatants pooled, dried down, reconstituted in 1 ml methanol, passed through a syringe filter (13 mm, 0.2 μm pore size, #6789-1302, Whatman, Inc., Clifton, NJ) and dried under forced air. The pooled extract was reconstituted in 0.5 ml PBS (0.03 M Na2HPO4, 0.02 M NaH2PO4, 0.15 M NaCl, 0.002 M NaN3, #S2002, Sigma Aldrich; pH 5.0), filtered through a C18 Spice cartridge (Analtech, Inc., Newark, DE), and evaporated to dryness. Approximately 14,000 dpm of radioactive tracers (3H-testosterone, 3H-cortisol and 3H-corticosterone) were added to the appropriate pooled sample as chromatographic markers, and the sample was dried again. The extract was reconstituted in 0.3 ml methanol, sonicated for 5 minutes, and 0.05 ml was loaded onto a reverse-phase C18 HPLC column (Agilent Technologies, Santa Clara, CA). For testosterone, the sample was separated using a 45% isocratic acetonitrile:water solution over 80 minutes (1 ml/minute flow rate, 0.33-ml fractions). For cortisol, the sample was separated using a 20-80% linear gradient of methanol:water over 80 minutes (1 ml/minute flow rate, 1-ml fractions). A multi-purpose β-radiation scintillation counter (LS 6500, Beckman Coulter, Brea, CA) was used to evaluate a 0.05-ml aliquot of each fraction; the remaining volume of each fraction was dried and resuspended in 0.25 ml of preservative-free phosphate buffer. Each fraction was then analyzed in singlet using the appropriate EIA, and the retention times of chromatographic standards and immunologic activity were compared.

Enzyme immunoassays were validated by demonstrating: 1) parallelism between standard curves and serially diluted fecal extracts; 2) recovery of hormone standard added to fecal extracts; and 3) correlation of hormone data with physiological events. Two-fold serial dilutions of samples were parallel to the standard curve for each EIA. For testosterone, the slopes of the standards and the sample dilutions were -12.18 and -13.82, respectively (r = 0.99), and for cortisol -11.47 and -11.66, respectively (r = 0.99). For testosterone, the slope of hormone recovery was y = 0.82x + 4.69 (r = 0.99) when exogenous steroid was added to pooled fecal extract diluted 1:50. The slope of cortisol added to pooled fecal extracts (diluted 1:10) was y = 1.13x + 2.85 (r = 0.99). Increases in FAM in two male lions (SB409 and SB248), associated with mane growth, a secondary sexual characteristic controlled by testosterone [34], demonstrated the biological validity of the testosterone EIA (S1 Fig). To validate the FGM EIA, FGM was measured in a male that exhibited lethargy and hematuria. FGM in samples collected over the 2 weeks prior to the onset of symptoms (0.29 ± 0.04 μg/g; N = 6) and the 2 weeks after symptoms resolved (0.28 ± 0.02 μg/g; N = 6) was lower than the average FGM concentration during the illness (0.41 ± 0.02 μg/g; N = 8; F2,17 = 8.75, P = 0.002; Tukey: before treatment P = 0.01, after treatment P = 0.005).

Urine processing and evaluation
Sample processing and evaluation were modified from human protocols [35,36]. Briefly, urine samples were thawed at room temperature and 1-ml aliquots were spun at 1,100 x g for 7 minutes (MiniSpin Plus, Eppendorf North America, Hauppauge, NY). After centrifugation, the supernatant was aspirated and <0.01 ml of the pellet was examined under 400x magnification (Olympus BH2 microscope, Olympus Corporation, Center Valley, PA) for the presence of spermatozoa. Each pellet was analyzed a maximum of three times for the presence of sperm, after which, if no spermatozoa were observed, the sample was categorized as sperm negative.

Body weight measurement. Body weights of captive lions from <1 to 30 months of age were obtained at least monthly for 11 individuals at six institutions for at least 12 months. Animals <4 months of age were weighed inside a pre-tared plastic tube on a platform scale; after 4 months, they were weighed either on a platform scale during training sessions or inside a squeeze cage. Weights from wild-born cubs (N = 26) ranging in age from <1 to 30 months were derived from data collected in Kruger National Park, South Africa [10,37].

Data analysis
An iterative process was used to determine baseline steroid hormone concentrations for each study animal individually [38]. Briefly, data points were removed if the concentration was greater than 2 standard deviations (SD) above the mean. This process was repeated until no further data points could be removed. The resulting mean was considered the baseline concentration for that hormone, and data points greater than 2 SD above the baseline mean were regarded as peak concentrations.
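This iterative rule is straightforward to implement. The sketch below (our illustration, not the authors' code) drops high values until the mean stabilizes, then reports the baseline and the peak samples.

```python
import statistics

def baseline_and_peaks(conc: list[float]) -> tuple[float, list[float]]:
    """Iteratively drop values > mean + 2*SD; the mean of the retained
    values is the baseline, and values above the final threshold are peaks."""
    kept = list(conc)
    while True:
        mu = statistics.mean(kept)
        sd = statistics.stdev(kept)
        trimmed = [c for c in kept if c <= mu + 2 * sd]
        if len(trimmed) == len(kept):  # nothing removed: converged
            return mu, [c for c in conc if c > mu + 2 * sd]
        kept = trimmed

fam = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 0.9, 1.1, 1.0, 1.0, 6.5, 7.2]
base, peaks = baseline_and_peaks(fam)
print(round(base, 2), peaks)  # 1.0 [6.5, 7.2]
```

Note that removing the larger spike on one pass shrinks the SD enough for the next pass to catch the smaller one, which is why the procedure must iterate rather than trim once.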
For each study animal, FAM and FGM data were summarized as overall, baseline and peak means (μg/g dry feces ± SE). Repeated-measures analysis of covariance (ANCOVA) using a compound symmetric covariance structure, with post-hoc Tukey HSD tests, was used to assess differences in FAM and FGM concentrations among age groups and across seasons. Correlations between FAM and FGM were calculated using Pearson's correlation coefficient; analyses were performed on all data, by age group, and within individuals. For seasonality, individual mean and baseline hormone concentrations were calculated for each season, and seasons were then averaged by age group to obtain a final mean. Body weights of wild lion cubs in Kruger National Park were obtained from previously published data [10,37] and averaged by month, as were the body weight measurements from zoo-born cubs. Monthly differences in body weight between locations were assessed using ANCOVA with post-hoc Tukey HSD tests. Linear regression slopes were calculated for the overall weights of wild and captive cubs. A P < 0.05 α-level was used to determine statistical significance. In analyses where individual animals were repeatedly sampled, data were either blocked by individual or analyzed as repeated measures. The Kenward-Roger adjustment for degrees of freedom was used when employing repeated-measures analyses. Analyses were conducted using SAS v. 9.3 (SAS Institute Inc., Cary, NC, USA).

Results
HPLC. All of the immunoactivity in fractions evaluated using the testosterone EIA was present in fractions 9-15, and no activity was associated with the radioactive testosterone tracer (fraction 32). Analysis of fractions using the cortisol EIA indicated that 80% of immunoactivity was at fraction 40, with a smaller peak (20%) observed at fraction 45, corresponding to the retention times of the 3H-cortisol and 3H-corticosterone tracers.

Body weight. Average monthly body weights differed between captive and wild-born cubs (Fig 2).
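The computation behind this comparison, the least-squares growth slopes described under Data analysis, can be sketched with synthetic weights for illustration only; the real analysis used the zoo records and the published Kruger data, and the reported gains were ~7.3 kg/month for captive and ~3.9 kg/month for wild cubs.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(6, 25, dtype=float)  # ages 6-24 months

# Synthetic monthly weights (kg) built around the reported gains.
captive = 15 + 7.3 * (months - 6) + rng.normal(0, 3, months.size)
wild    = 15 + 3.9 * (months - 6) + rng.normal(0, 3, months.size)

for label, w in (("captive", captive), ("wild", wild)):
    slope, _ = np.polyfit(months, w, 1)  # least-squares line: slope, intercept
    print(f"{label}: {slope:.1f} kg/month")
```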
Discussion
This study represents the first analysis of longitudinal FAM and FGM patterns in male African lions using a non-invasive approach, identifying the influence of age on testicular and adrenal steroidogenic activity. Urinalysis for the presence of spermatozoa also proved to be a novel, noninvasive method for determining puberty onset in young lions. Overall, the results indicate that captive lions mature faster than their wild counterparts, a difference related to growth rates during the first few years of life. These data provide fundamental information regarding male lion gonadal and adrenal function and how it relates to age and reproductive capacity, which can be used by the Lion Species Survival Plan (SSP) and animal care staff to improve captive animal management.

Relevant steroid hormone metabolites were reliably detected in lion fecal samples via EIA. Based on HPLC analysis, none of the immunoactivity was associated with native testosterone, but rather with presumably conjugated polar metabolites, which agrees with studies in domestic cats (Felis catus), Pallas' cats (Otocolobus manul), Eurasian lynx (Lynx lynx), and Iberian lynx (Lynx pardinus) [29]. By comparison, the cortisol EIA detected native cortisol as a major source of immunoactivity in HPLC fractions. These results differ from those in other felids, including clouded leopards (Neofelis nebulosa) and cheetahs (Acinonyx jubatus) [39], in which native cortisol is not a detectable excretory product.

The five age categories in this study (cub, peripubertal, subadult, adult and aged) were based on prior studies of wild lions [10,23], but were refined in light of our results. In wild populations, individuals are considered cubs until they are at least 2 years old, but in captivity, individuals <2 years are capable of producing offspring [28]. As a result, cubs were reclassified as <0.9 years. The age range of 0.9-1.99 years was reclassified as peripubertal because of higher body weights compared to wild lions but lower FAM than captive adults, and because of the onset of spermaturia indicative of puberty [10,12,23]. Captive males 2-2.99 years of age were classified as subadult to indicate that, while not yet at full adult weight, they are still heavier than their wild counterparts [10,23] and had FAM similar to adult males. The adult age range was shifted to include 3-year-old males because they had reached adult weight by that age, would in the wild be physically capable of gaining and holding control of a pride [26,40], and had FAM similar to males at the age typical of holding a pride. Wild males >8-9 years were still classified as adults [23]. Captive males were considered aged starting at 11 years to identify a population rarely observed in the wild and for which no wild hormone data are available [11].

Concentrations of FAM were lower in peripubertal males compared to subadult, adult and aged males, in agreement with other felid studies, although the effect of age on gonadal activity has not been studied extensively in these species. In Iberian lynx, young males (2 years old) produced the lowest and older males (>4 years) the highest FAM concentrations, and electroejaculation trials found that even 2-year-old males were spermic, albeit with a higher percentage of abnormal spermatozoa [41]. Likewise, FAM concentrations were lower in juvenile (2 years) compared with adult (3-18 years) Canada lynx (Lynx canadensis) [42].

Concentrations of FAM were consistent throughout the year, indicating that captive male lions are not seasonal. Seasonality in testosterone production is more common in felids found at temperate latitudes, such as the Pallas' cat [43], Canada lynx [42] and Iberian lynx [41], compared to cats from tropical and subtropical climates, like the margay (Leopardus wiedii) and tigrina (L. tigrinus), which do not show seasonal changes in FAM [44]. However, not all cats found closer to the equator are aseasonal; the ocelot (L. pardalis) [44] and clouded leopard [45] do exhibit changes in testosterone production associated with season.

Concentrations of FGM were similar across age groups and did not vary with season. Furthermore, there was a relationship between FAM and FGM in most individual males, and all age groups showed weak to moderate positive correlations between the two hormones. There are limited data on the relationship between testosterone and cortisol production in felids; most relate to how testosterone changes as a result of anesthesia, administration of ACTH, or other procedures (e.g. in clouded leopards and jaguars (Panthera onca) [45,46]). Experimental manipulation of cortisol by ACTH and dexamethasone suggests that corticoids can increase testosterone production in arctic ground squirrels (Spermophilus parryii plesius) [47].
It is unlikely that the observed correlation in lions is due to the cortisol EIA cross-reacting with FAM, because no increase in FGM concentrations was observed in the young males as they aged and FAM significantly increased. Longitudinal analysis of androgen production in relation to puberty onset is not well documented in felids. In domestic cats, fecal testosterone was monitored from birth to the weeks leading up to the average age of puberty, and a rise in testosterone was observed in neonates [48]. However, testosterone remained low in subsequent weeks until the individuals were castrated [48]. Recently, Maly et al. [20] reported that ex situ managed cheetahs reached adult FAM concentrations by 18-24 months of age and adult body weight by 21 months. They also concluded that, based on the increases in androgens and body weights, male cheetahs reached puberty at 18-24 months of age. Interestingly, based on the Lion SSP studbook records, males under 2 years of age have sired litters of cubs [28], and the urinalysis results in this study showed that animals with lower testosterone concentrations than average adult males are capable of supporting complete spermatogenesis. It is likely that testosterone concentrations are even lower in males <1 year of age, so that the average overall and baseline concentrations observed between 1 and 2 years represent an increase in testosterone capable of initiating spermatogenesis.

A major finding of this study was that captive male lions appear to reach puberty at a younger age than wild counterparts. Based on when males first show mounting behaviors, wild males reach sexual maturity at 2.2 years (26 months), but do not typically breed until they take over a pride at the average age of 4 [26,49] or 5 [23,50] years. Still, individuals as young as 3.3 years have been observed to control prides in the Serengeti and Ngorongoro Crater [11]. In wild lions, males 1.6-1.8 years of age were considered pre-pubertal because they produced lower serum testosterone than young adults and adults, weighed only ~88 kg, and were aspermic [11]. Histologic evaluation of testicular tissue from wild males further demonstrated that the onset of spermatogenesis begins at about 2.5 years of age (range: 2.2-2.8 years) [23]. Our study found that, although the peripubertal age group had lower FAM concentrations than older age categories, the youngest males that tested positive for spermaturia were 105 kg at 1.2 years of age.

The growth kinetics presented here indicate that captive-born lion cubs develop at a faster rate than wild-born cubs, which could account for the earlier puberty. Wild cubs are heavier for the first few months after birth, but between 3 and 5 months of age the growth patterns were the same as those of our captive cubs. Both captive and wild cubs begin to taste and consume meat a few months after birth and are usually weaned by 0.5 years [22,27,50]. However, after weaning, the plane of nutrition appears to diverge between wild- and captive-born cubs and the average daily gain (ADG) is no longer synchronous, perhaps as a result of feeding captive cubs meat daily. There is substantial variability in growth rate in wild African lions [22], and the rate at which lion cubs grow is correlated with food availability once they are weaned [50,51]. Cubs are dependent on adults for food [7,22], but when prey is scarce, young lions go without food for extended periods [50-52] and starvation is a common cause of death at that age [7,22,53].
Captive cubs likely experience an earlier onset of puberty and reach adult body weight earlier as a result of the consistent feedings they are provided [22,50,54]. For most mammals, the onset of puberty is associated with attaining a threshold body weight [55-59] and acquiring adequate fat reserves [60]. Under-nutrition can delay the onset of puberty, while over-conditioned animals often attain puberty at an earlier age. The minimum body weight for triggering onset of puberty has not been established for African lions. The lightest wild males in Kruger National Park with spermatozoa in their seminiferous tubules weighed 110 kg (n = 2) [10]. Captive males reached ≥115 kg at ~1.3 years, compared with 2.2 years of age in the wild, when they are reported to attain puberty. Males SB420 and SB422 were 105 kg at 1.2 years of age when spermatozoa were detected in their urine, corroborating Smuts' findings based on lion weight. In the wild, growth slows between 2-3 years, but weight gain continues until lions reach a maximum weight around 6 years of age [22,50]. By contrast, captive males reached an adult weight by 3 years [27]. Additionally, the onset of mane development has been reported to occur earlier in captive males, another indicator that androgen production increases at a younger age than in wild males [61].

The observation of spermaturia in captive lions at 14 months of age (1.2 years) indicates that spermatogenesis and spermiogenesis had begun before that age. Many zoos have programs that include training various species to urinate on command. Being able to identify when a young male has reached puberty through urinalysis would allow zoos to manage animals better (e.g. by separating males and females) and avoid inbreeding.

In summary, this study provides the first extensive analysis of androgen and glucocorticoid production in male lions. With the addition of urinalysis for the onset of spermaturia and body weight measurements, we are now better able to identify the onset of puberty in young males. These approaches could serve as a model for understanding the onset of puberty in other felid species. It may also be informative to examine the reproductive hormone profile as well as urine samples before 1 year of age to monitor the increase in FAM prior to the onset of spermatogenesis and spermiogenesis. While spermaturia implies the onset of spermatogenesis and spermiogenesis, electroejaculation would provide more information on ejaculate and spermatozoal quality. Over 20 years ago, Brown et al. [11] remarked on the dearth of knowledge on pubertal changes in semen quality in felids; until now, little new information has been added. Future studies could show how felid ejaculate traits are affected by age or body weight approaching puberty, which might improve management of young felids to avoid inbreeding. Because of declining wild lion populations [62,63], the possibility exists that captive insurance populations may be necessary to conserve the species [64,65]. Overall, the results of this study reconfirm a lack of male reproductive seasonality, demonstrate a link between body weight and onset of puberty, and validate the utility of urinalysis for assessing reproductive status in male lions. Furthermore, the findings add to the information that animal managers can use to improve the reproductive management of lions in captivity.
Large variation in timing of follow-up visits after hip replacement: a review of the literature

The study investigated the existing guidelines on the quality and frequency of follow-up visits after total hip replacement surgery and assessed the level of evidence behind these recommendations. The review process was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Additional works were retrieved by direct investigation of the available guidelines of the most important orthopedic societies and regulatory agencies. The systematic review of the literature yielded zero original papers, four guidelines for routine follow-up and three guidelines for special cases. Concerning the quality of the evidence behind them, these guidelines were not evidence based but were drafted from expert consensus. The most important finding of this review is the large variation in recommended follow-up schedules after total hip arthroplasty and the lack of evidence-based indications. Indeed, all the reported guidelines are the result of a consensus among experts in the field (level of recommendation class D, 'very low') and are not based on clinical studies.

Introduction
Total hip arthroplasty (THA) is one of the most frequent and successful surgeries performed in the orthopedic field; nevertheless, a clear consensus on post-surgical management is still lacking (1). The need to define a clear protocol for managing patients after THA stems from several considerations, such as the early identification of complications and the assessment of the right timing for possible revision surgery. The latter arises because prosthetic hip implants have a limited lifespan, which a recent review by Evans et al. estimated at around 20 years for 75% of patients and 25 years for 56% (2). The gap in knowledge that the present review attempts to fill is the lack of clear indications regarding the schedule of follow-up visits after THA. Indeed, the timing, number and nature of the visits following discharge from the hospital are still not aligned with clear, evidence-based indications (3). The main aim of follow-up visits is to detect asymptomatic failure of the hip prosthesis. The diagnosis of asymptomatic failure can prevent extensive surgery, such as full revision of the acetabular component instead of a liner exchange to manage wear, and complications such as periprosthetic fractures due to severe bone resorption and/or gross loosening. If THA failure presents symptomatically, the patient either self-refers (45%) or is referred by the general practitioner (19%), from other hospitals (16%) or from the emergency room (7.5%) (4). On the other hand, only routine follow-up is able to identify asymptomatic failures, and these account for 9% of all failures (5). According to these data, the vast majority of current revisions are late surgeries. However, early THA revision surgeries (e.g. revision of only a worn liner) can provide better outcomes with lower complication rates because they can be less extensive, non-acute procedures. In fact, complex THA revisions have been found to consume up to 1.5 times more hospital and physician resources than routine revisions (5).
Another reason for performing routine follow-up is that it can identify not only asymptomatic failure but also mildly symptomatic patients, whose symptoms are often not promptly attributed to the prosthetic implant. In addition, traditional follow-up with scheduled outpatient visits is problematic not only from the cost-effectiveness point of view but also in terms of patient compliance. Indeed, only 61% of patients show up at follow-up visits 1 year after surgery, and that number drops even further to 36% at 2 years (6). This balance between the need to identify asymptomatic (radiographic) failures of THA (i.e. preventing more extensive revision surgery) and cost-effective medical practice results in vast heterogeneity in the proper schedule of follow-up visits after THA.

Materials and methods
The review process was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (flow chart in Fig. 1) (7). The literature search was carried out by two independent authors (M L and F M G) through August 2020 on the PubMed, Google Scholar and Scopus databases with the following Medical Subject Headings: follow-up and total hip replacement. Additional information was retrieved from the most recent publicly available guidelines of orthopedic societies and regulatory agencies such as the Food and Drug Administration (FDA). In order to judge the relevance of a study, the following inclusion criteria were adopted: information from original papers, an orthopedic society guideline or a regulatory agency recommendation; the inclusion of information on the duration and frequency of follow-up visits after THA; and text in English, German or Italian. As the systematic review of the literature did not find any original paper, no quantitative or qualitative assessment could be performed. Therefore, only a qualitative analysis of guidelines retrieved from the websites of orthopedic societies and regulatory agencies was carried out. The latter was performed by means of the Grading of Recommendations Assessment, Development and Evaluation (GRADE).

Results
The systematic review of the literature resulted in zero original papers, four guidelines for routine follow-up (8,9,10,11) and three guidelines for special cases such as metal-on-metal (MoM) THA or small head size (5,12,13). Concerning the quality of the evidence behind them, these guidelines were not evidence based but drafted from expert consensus. Therefore, the level of recommendation according to GRADE was Class D (i.e. 'very low') (14).

Definition and content of follow-up
The typical surveillance program for THA includes follow-up visits composed of an interview with an orthopedic surgeon, who performs a clinical assessment and, by means of an imaging tool, a radiological assessment. The inclusion of radiographic imaging during a routine follow-up visit after THA has been a matter of debate, since it adds cost to the surveillance program. On the one hand, since patient-reported outcomes alone cannot assess the state of a hip prosthesis during a routine follow-up visit, a hip X-ray is suggested (15). On the other hand, concerns have been raised about the ability of conventional radiographic imaging to effectively recognize THA failure (16).
Even if plain radiography has some intrinsic limitations for the diagnosis of THA failure, it remains the first-step imaging technique; when inconclusive or doubtful, it can be followed by a more accurate tool such as a CT scan (17). A further aspect to be acknowledged concerns the first visit after a THA procedure: some guidelines (11) define the latter as the first meeting between patient and surgeon after the procedure, typically occurring after a few weeks, when the wound check and a general assessment are performed, whereas other guidelines (9,10) do not include this meeting as part of the follow-up schedule.

Current guidelines for routine follow-up
The systematic review of the literature and the content of orthopedic societies' websites demonstrated only five clearly described recommended schedules of THA follow-up visits. There is large variability in the recommended frequency and duration of follow-up (Table 1). For that matter, some guidelines only state that regular follow-up visits are important but do not specify their frequency and duration: the NIH consensus 1997 (18) and the AAOS guidelines 2017 (19). Furthermore, three orthopedic societies recommend a follow-up schedule based on a first visit within the first year after the operation, followed by a second visit around the seventh year and then a visit every 3-5 years. These recommendations are from the BOA guidelines 2012 (10), the Netherlands Orthopaedic Association 2018 (9) and the Arthroplasty Society of Australia 2019 (8). The BOA guidelines are justified by the observation that the majority of revisions occur 7 years after the first implant and that early detection of aseptic loosening may prevent periprosthetic fracture; the latter carries increased mortality and costs associated with revision surgery in an acute situation (20). The Netherlands Orthopaedic Association guidelines present a similar rationale, underlining the risk of missing asymptomatic silent osteolysis or loss of function, which increases the risk of periprosthetic fracture after an in-house fall, with devastating consequences. Finally, the Arthroplasty Society of Australia gives a similar justification, warning orthopedic surgeons that, although most aseptic loosening is symptomatic, some may present insidiously; hence, clinical and radiological review of all THAs in an attempt to identify these 'silent problems' allows timely intervention. The AAHKS 2019 (11) suggests a protocol similar to the three mentioned above, with an additional recommended visit at the fifth year after surgery.
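To make the routine schedules concrete, the BOA/Netherlands Orthopaedic Association/ASA pattern can be reduced to a simple rule, sketched below for illustration; the fixed 4-year interval is our arbitrary midpoint of the recommended 3-5 year range, not a value any guideline specifies.

```python
def routine_followup_years(horizon: int = 20) -> list[float]:
    """Visit years under the BOA/NOV/ASA-style routine schedule."""
    visits = [1.0, 7.0]           # within the first year, then around year 7
    year = 7.0
    while year + 4.0 <= horizon:  # thereafter every 3-5 years; 4 used here
        year += 4.0
        visits.append(year)
    return visits

print(routine_followup_years())  # [1.0, 7.0, 11.0, 15.0, 19.0]
```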
Current guidelines for follow-up in special cases
In some guidelines, a general schedule of follow-up visits (both frequency and duration) is missing, although precise recommendations on radiographic follow-up exist for high-risk patients (Table 2). This risk assessment is based on both patient-specific and implant-specific factors. For example, the FDA guidelines (21) suggest regular follow-up visits (i.e. every 1-2 years) for MoM hip implants with certain risk factors: bilateral implants, small femoral heads (≤44 mm), female sex, high-dose corticosteroid therapy, evidence of renal insufficiency, immunosuppression, suboptimal alignment of device components, suspected metal sensitivity, BMI >40 and high levels of physical activity. SCENIHR, in 2014 (12), released a statement suggesting yearly follow-up visits for all patients with MoM prostheses, small femoral head size or female sex; in addition, it recommends performing blood cobalt measurements (normal range 2-7 μg/L) at follow-up visits. In the United Kingdom, the 2017 annual report of the MHRA (13) recommends a more stringent follow-up schedule for MoM implants, younger patients and more active patients. Specifically, for these patients an annual follow-up is recommended for the first 5 years, then every 2 years until the tenth year, and every 3 years thereafter. As per the ASA guidelines (8), high-risk patients are defined as all patients with newly designed implants with limited long-term clinical results, younger patients, those with MoM articulation, and those with total hip implants with small head sizes (≤36 mm) (22). For these patients, follow-up with radiographs is recommended at yearly intervals. Concerning new prosthetic implants, most guidelines likewise suggest a more stringent schedule of follow-up visits: the BOA recommends yearly radiographic follow-up until the fifth year, then every 2 years until the tenth year, and every 3 years thereafter.

Discussion
The most important finding of this review is the large variation in recommendations on the follow-up schedule after THA, as well as the lack of evidence-based recommendations for these follow-ups. Indeed, all reported guidelines are the result of a consensus among experts in the field (level of recommendation class D, 'very low'), with a rationale for each recommendation but without support from clinical studies. Current guidelines do not recommend more than one follow-up visit (including radiographs) within the first year and one follow-up visit (including radiographs) between 2 and 10 years after surgery. Nevertheless, the assessment of a temporal sequence of radiographs plays a critical role in the early (asymptomatic) detection of implant failure. Although the pathophysiology of aseptic loosening is not completely understood, the main underlying mechanism, visible on radiographs, is periprosthetic osteolysis induced by implant particles (e.g. liner wear debris). These particles usually have a diameter ranging from 0.2 to 10 μm (23) and induce an inflammatory process involving a variety of cells, eventually leading to aseptic loosening of the implant. This process results in visible radiological signs that a trained orthopedic surgeon can promptly identify on a radiograph. The identification of these radiological signs is facilitated when a temporal sequence of the patient's radiographs is available. Hence, a schedule of regular follow-ups including radiological imaging is needed to detect subtle radiological changes. In particular, the temporal sequence of radiographs is most important during the first 2 years after hip prosthesis implantation, since most implant migration occurs in this time window (24).
This concept is supported by Mjöberg, who in his 'theory of early loosening of hip prostheses' states that loosening is likely to begin at an early stage due to either insufficient initial fixation or an early loss of fixation (25). It should be noted that migration on plain radiographs is measured with an accuracy of only 4-12 mm. For that matter, radiostereometric analysis (RSA) is a highly accurate method for determining migration and wear of the prosthetic implant, with an accuracy of 0.1 mm in three dimensions (26,27). The advantage of the highly accurate RSA technique is that implants at risk for late failure can be detected within 1-2 years of follow-up (28,29,30). Data from these RSA studies on prosthesis migration within the first 2 years may support performing sequential radiographs during this time window in order to detect early aseptic loosening. Nevertheless, further studies evaluating the use of plain radiographs, preferably using machine learning algorithms, are needed to support the importance of sequential series of hip radiographs for the early detection of implant fixation problems. Another interesting finding of this review is that more stringent follow-up was recommended in high-risk patients, although each guideline defined 'high-risk' patients differently, making comparison difficult. The latter may be responsible for some of the large variation in the recommendations on follow-up visits after THA. Patient-related variables that determine to some extent the timing of follow-up visits are younger age, female sex and a high level of sporting activity. Indeed, according to the ASA and MHRA guidelines (8,13), younger patients require a more stringent follow-up, consisting of a yearly visit. Implant-specific variables associated with the timing of follow-up are the use of MoM prostheses, the use of new prostheses and a small femoral head size. The large variation in recommended follow-up schedules after THA observed in the current study is reflected by the lack of recommendations among the most relevant worldwide regulatory agencies in the medical field. Indeed, the FDA (21), the European Medicines Agency (31) and the National Institute for Health and Care Excellence (32) only stress the importance of follow-up after THA without specifying its exact duration and frequency. In addition, the frequency of follow-up after a THA intervention is a matter that concerns the medical field as well as the socio-economic one. Indeed, in order to improve the efficiency of national healthcare systems, a cost-effectiveness analysis strictly depends on regional, economic and social aspects (33), thereby contributing to the heterogeneity observed in the current study. Already in the late 1990s, an attempt was made to improve the cost-effectiveness of radiographic follow-up visits for patients who had undergone hip replacement surgery: a system was theorized in which trained medical staff would review routine radiographs in order to decide whether a face-to-face visit was needed. This system would have allocated outpatient follow-up visits only to patients at risk of THA failure. More recently, this concept has been further developed into what has been termed the 'virtual clinic'. This system determines who should be offered a face-to-face appointment based on routine radiographs and questionnaires (Oxford hip or knee score), reviewed by a consultant orthopedic surgeon (34).
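A toy formalization of such a triage rule is sketched below; it is not the published algorithm. The Oxford hip score runs 0-48 (higher is better), and the cutoff of 27 is a hypothetical threshold chosen here purely for illustration.

```python
def needs_face_to_face(radiograph_flagged: bool, oxford_hip_score: int,
                       score_cutoff: int = 27) -> bool:
    """Offer an outpatient visit when the reviewing surgeon flags the
    radiograph or the patient-reported score falls below the cutoff."""
    return radiograph_flagged or oxford_hip_score < score_cutoff

print(needs_face_to_face(False, 40))  # False: stay on remote follow-up
print(needs_face_to_face(True, 40))   # True: book a face-to-face review
```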
To investigate the efficacy of the virtual clinic in detecting potential implant failure, a recent study compared traditional outpatient visits with radiographs and revision-symptom questionnaires administered without patient contact. The results showed substantial agreement between the two, especially for total knee arthroplasty (TKA) (81%) and, to a lower extent, for THA (69%), suggesting that the virtual clinic is a valid alternative to face-to-face visits (35). A similar study, which randomized THA patients to either the traditional follow-up system based on routine outpatient visits (including radiographs) or a questionnaire- and radiograph-based remote follow-up, found that no patients with a potential failure were missed by the remote follow-up (36). Recently, during the coronavirus disease 2019 pandemic, some BOA surgeons employed virtual follow-up using telephone consultations for patients unable to attend their routine THA postoperative visits. Although 63% of patients were satisfied with the 'virtual' appointment, 75% would prefer to have their next appointments face-to-face. The latter may be related to the population being 70 years and older and unfamiliar with technology such as electronic questionnaires. Although this may also be related to accessibility and internet density, which can differ between countries (37), it may equally reflect the confidence that a physical examination and a face-to-face explanation give to a patient. This also stresses the importance of general guidelines that nevertheless have to be patient-specific. The main limitations of this review are the limited number of guidelines, the absence of clinical studies reporting on recommendations for radiographic follow-up, and the ambiguity of the definition of post-surgical follow-up. For that matter, most guidelines do not include the first visit after surgery as part of the schedule of visits, which is, in our opinion, important in order to compare subsequent radiographs. Another limitation stems from the study design of the current review: after performing a systematic review of the literature and retrieving zero original papers, we could only analyze guidelines from orthopedic societies and regulatory agencies.

Conclusions
• The follow-up schedule after THA is nowadays arbitrarily organized, based on consensus among experts and not on evidence.
• Current guidelines do not recommend more than two radiographs within the first 10 years after surgery.
• In certain guidelines, more stringent follow-up was recommended in high-risk patients, but the definition of 'high risk' was very heterogeneous among them.
• There is a clear need to develop data-based recommendations for clinical and radiographic follow-up after hip replacement.

ICMJE Conflict of Interest Statement
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.

Funding Statement
This work did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.
Production of anti-Candida antibodies in mice with gut colonization of Candida albicans

BACKGROUND: Production of antibodies that are specific for allergens is an important pathological process in inflammatory allergic diseases. These include antibodies against antigens of Candida albicans, a member of the normal microbial flora of the intestinal tract. We studied the effects of prednisolone administration on the production of anti-Candida antibodies in gastrointestinally C. albicans-colonized mice.
METHODS AND MATERIALS: BALB/c mice, treated with antibacterial antibiotics to decontaminate the indigenous intestinal bacterial flora, were inoculated intragastrically with C. albicans. The mice, in which C. albicans grew intestinally, were administered prednisolone to induce temporary immunosuppression. Candida growth in the intestinal tract and the antibody response to Candida were examined.
RESULTS: Antibiotic treatment allowed establishment of C. albicans gastrointestinal colonization but did not cause subsequent systemic dissemination of C. albicans in any of the animals. When these animals received additional treatment with prednisolone, they showed a significantly higher population of C. albicans in their feces than animals treated with antibiotics alone, and the organisms were recovered even from their kidneys. This systemic dissemination of C. albicans appeared to be transient, because all the mice survived without any symptoms for more than 2 months. Examination of the serum titers of total immunoglobulin (Ig)E antibodies and of specific IgE and IgG antibodies against Candida antigens demonstrated that titers of total IgE increased, partially by day 14 and clearly by day 27, in prednisolone-treated Candida-colonized mice. Without prednisolone treatment, an increase in the serum titer was scarcely observed. By day 27, corresponding to the increase in total IgE, the anti-Candida IgE and IgG titers increased in mice of the prednisolone-treated group.
CONCLUSION: Administration of prednisolone to Candida-colonized mice can induce production of IgG and IgE antibodies against Candida antigens, perhaps through transient systemic dissemination of Candida from the intestinal tract.

Introduction
Candida albicans is known to be part of the intestinal microbial flora of healthy persons. 1 Adults elicit cellular and/or humoral immune responses to this microbe that are postulated to have some pathogenetic relevance to allergic diseases such as atopic dermatitis (AD) 2-7 and food allergy. 8 Savolainen et al. reported that AD patients frequently have a high titer of anti-C. albicans antibodies in their sera. 9 Candida organisms colonize the intestinal tracts of these patients and are harbored at high frequency in the nasal cavity and buccal cavity in a saprophytic manner, and a high titer of immunoglobulin (Ig)E antibody reacting with Candida is frequently detectable. 9 Moreover, several recent reports have shown that oral administration of antifungal agents to AD patients displays therapeutic efficacy, with improvement of the dermatitis. 4,10,11 To study the pathogenetic roles of C. albicans in these allergic diseases, it is important to analyze the induction process of anti-Candida IgE and/or IgG antibodies in Candida-colonized individuals. However, it is not yet known how Candida-specific IgE antibodies are produced in modern human life.
We assumed that intestinal translocation of Candida organisms might provide the immunogenic stimulation needed to produce specific IgE and/or IgG antibodies. We reported previously that application of antibiotics and anti-inflammatory corticosteroids induced an overgrowth of C. albicans in the intestinal tract of mice. 12 Here, we find that prednisolone treatment of mice intestinally colonized by C. albicans induced production of specific IgE and IgG antibodies, perhaps through systemic dissemination of this fungus from the gut.

Materials and methods
Preparation of Candida inoculum for challenge
C. albicans TIMM 0239 12,13 was grown in Sabouraud dextrose broth in an L-tube. After growth at 37°C overnight, cells were harvested by centrifugation, washed three times with saline, and adjusted to a cell density appropriate for inoculation of mice.

Animals and inoculation
All animal experiments were performed according to the guidelines for the care and use of animals approved by Teikyo University. To produce intestinally C. albicans-colonized mice, we used a modified method of Uchida et al. 13 Specific pathogen-free female BALB/c mice, 6-8 weeks old (Japan SLC, Inc., Shizuoka, Japan), were given potable water containing 1 mg/ml of ampicillin (Meiji Seika Co., Tokyo, Japan) and 0.2 mg/ml of kanamycin (Meiji Seika Co.) until the end of the experiments. Each mouse was challenged intragastrically with 1 × 10^6 cells of C. albicans in a volume of 0.1 ml using a gastric gavage. For immunosuppression, animals were given 100 mg/kg body weight of prednisolone (Mitaka Seiyaku, Tokyo, Japan) subcutaneously 7 and 9 days after the Candida challenge, as described elsewhere. 12 Intestinal colonization by C. albicans was monitored by counting viable C. albicans cells in the stools as follows. Every stool sample was homogenized in a volume of sterile saline, and serial 10-fold dilutions in saline were made. One hundred microliters of each dilution was inoculated onto Candida GS agar (Tanabe Seiyaku Co. Ltd, Osaka, Japan) and the cultures were incubated at 37°C for 24 h, at which time quantitation of fungal colonies was performed. In some experiments, Candida-challenged mice were killed for microbial examination; kidneys were excised aseptically and homogenized in a glass tissue grinder with 1 ml of saline. Viable Candida cells were counted as already described, and the results were expressed as the mean ± standard deviation of colony-forming units (CFU) per kidney for the five mice in each group.
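The viable-count arithmetic behind these CFU values is the standard dilution back-calculation: multiply colonies by the dilution factor and divide by the volume plated and the mass homogenized. A minimal sketch, with the 1 g sample mass as an assumption for illustration (the text does not specify the homogenate proportions):

```python
def cfu_per_gram(colonies: int, tenfold_dilutions: int,
                 plated_ml: float = 0.1, sample_g: float = 1.0) -> float:
    """Back-calculate viable counts from a plated serial dilution."""
    return colonies * (10 ** tenfold_dilutions) / plated_ml / sample_g

# e.g. 52 colonies on the 10^-4 plate of a 1 g stool homogenate:
print(f"{cfu_per_gram(52, 4):.1e} CFU/g")  # 5.2e+06 CFU/g
```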
Enzyme-linked immunosorbent assay
The total IgE level was measured by a sandwich enzyme-linked immunosorbent assay (ELISA) using two rat anti-mouse IgE monoclonal antibodies (mAbs) (6HD5 and HMK12), according to the instructions of the manufacturer (Yamasa Shoyu Co., Ltd., Choshi, Japan). 14 These mAbs recognize different epitopes on the Fc fragment of murine IgE. Briefly, the wells of 96-well immunomicroplates (Nunc) were coated with 50 μl of 6HD5 mAb (5 μg/ml) and blocked with phosphate-buffered saline supplemented with 1% bovine serum albumin (Sigma Chemical Co., St Louis, MO, USA). Collected samples or standard mouse IgE (SPE7; Seikagaku Kogyo, Tokyo, Japan) were added to the wells and incubated for 1 h at room temperature. Each well was washed with phosphate-buffered saline containing 0.05% Tween-20 and received 50 μl of biotinylated HMK12 mAb (1 mg/ml), and the plates were then incubated for 1 h. After 50 μl of peroxidase-conjugated avidin (1/2000; Dakopatts, Glostrup, Denmark) was added to each well, another 1 h incubation was carried out. Finally, the reaction products were visualized with 0.4 mg/ml of orthophenylenediamine (Sigma) and 0.012% H2O2. Thirty minutes after addition of the substrate, the reaction was stopped with 0.05 ml of 0.2 M H2SO4, and the absorbance at 490 nm was measured with an Immunoreader (NJ-2300; Nippon Intermed, Tokyo, Japan). In this assay, the minimal detectable concentration of IgE was 40 ng/ml. Specific IgE against Candida antigens was measured by ELISA as described previously 14 with slight modifications. A Candida antigen preparation was obtained from C. albicans cells by incubation at 37°C for 2 h in 0.05 M citrate buffer (pH 7), as described elsewhere. 15-17 ELISA immunomicroplates (Nunc) were coated with the Candida antigen preparation (0.75 mg protein/ml). Serum samples were incubated in the Candida antigen-coated microwells for 1 h. Binding of IgE antibodies to the Candida antigens was then detected with rat anti-mouse IgE mAb (biotinylated HMK12) and peroxidase-conjugated avidin, as already described.

Statistical analysis
Statistical differences in survival rates were examined by the Wilcoxon test. Other statistical analyses were performed with Student's t-test.

Results
Effects of treatment with antibiotics and/or prednisolone on intestinal colonization by C. albicans in mice
The effects of treating mice with two antibacterial agents, ampicillin and kanamycin, and/or prednisolone on the intestinal colonization by C. albicans were examined. BALB/c mice taking antibacterial agent-supplemented or unsupplemented drinking water were orally infected with C. albicans. As shown in Fig. 1, viable Candida cells were recovered from the feces of the Candida-inoculated mice from the day following oral inoculation (day 1). On this day, the mean CFU count of Candida in the stools of mice given the supplemented drinking water was about 1 × 10^5 CFU/g of stool. The viable Candida population in the stools of mice given the unsupplemented drinking water rapidly decreased to a level of less than 1 × 10^4 CFU/g by day 3. On the other hand, antibiotic-treated mice maintained a high concentration of C. albicans, about 1 × 10^7 CFU/g, for more than 2 weeks. Figure 1 also shows that two subcutaneous administrations of prednisolone increased the CFU level about 10-fold in the antibiotic-treated mice. To check for systemic dissemination of C. albicans from the intestinal colonization in these mice, kidney homogenates were cultured on Candida GS agar. As shown in Fig. 2, in mice treated with prednisolone on days 7 and 9, Candida cells were detected on day 11, and the CFU count in the kidney increased to 5 × 10^5 CFU/g on day 13, yet all the mice tested survived for more than 1 month. This indicates that the prednisolone treatment caused transient systemic dissemination of C. albicans 2 weeks after the infection but did not result in a Candida infection fatal to the hosts.

Antibody formation in orally Candida-infected mice with or without prednisolone treatment
Total IgE concentrations in the sera of these Candida-infected mice at 2 and 4 weeks after Candida infection were measured. As shown in Fig. 3, in comparison with normal mice, a low but significant level of IgE (less than 100 ng/ml) could be detected in the sera of the antibiotic-treated, Candida-colonized mice at 2 weeks after the challenge.
Prednisolone treatment of such mice clearly increased the serum IgE level of the animals, to about 120 ng/ml and 200-300 ng/ml at 2 and 4 weeks after the challenge, respectively. In further experiments, the titers of IgE and IgG antibodies against Candida antigen in the sera of the C. albicans-colonized mice at 4 weeks after the challenge were examined. Figures 4 and 5 show that IgE and IgG, respectively, against Candida antigens increased in the sera of animals that had been treated with antibiotics, infected orally with C. albicans, and given prednisolone twice. Without treatment with both prednisolone and antibiotics, the infected mice had no sera showing antibody levels against Candida antigens detectably higher than those of control mice. These results indicate that prednisolone treatment caused a significant increase in Candida-specific IgE and IgG antibodies in the sera of Candida-colonized mice.

Discussion
We have shown that prednisolone administration to mice intestinally colonized with C. albicans induced IgG and IgE antibodies in their sera within 4 weeks after infection. As far as we know, this is the first report that administration of anti-inflammatory steroidal compounds causes production of anti-Candida antibodies in Candida-colonized animals. Antibody production augmented by prednisolone treatment is not surprising, because the immunosuppressive activity of prednisolone given subcutaneously in depot-suspension form is known to disappear within 1 week after administration. The mechanism of the augmentation of antibody production by prednisolone remains to be clarified. We believe that prednisolone treatment must have a profound effect on the immunological condition of the mice, since the concentration of total IgE in the sera clearly increased to 200-300 ng/ml in prednisolone-treated, Candida-infected mice. We speculate that the augmentation of anti-Candida IgE production by prednisolone treatment may result from in vivo antigen stimulation accompanying transient systemic dissemination of Candida from the intestinal tract, since Candida dissemination to the kidney was observed in the Candida-infected, prednisolone-treated mice 6 days after prednisolone treatment, as shown in Fig. 2. This speculation is supported by the finding that no augmented production of IgE or IgG against Candida antigens was observed in either antibiotic-treated or prednisolone-treated mice unless they had been orally inoculated with C. albicans (Figs 4 and 5).

The findings presented here may bear on the question, raised among clinical dermatologists, of why steroidal anti-inflammatory drugs sometimes have a negative effect on the pathogenesis of AD. AD shows chronic symptoms, with clinical manifestations of repeatedly aggravating itching and a high concentration of IgE in sera.

FIG. 4. Enhanced production of anti-Candida antibody (IgE) in sera of C. albicans-infected mice treated with prednisolone. Serum antibody levels were tested 4 weeks after the Candida infections. Specific anti-Candida IgE was measured using a specific ELISA as described in Materials and methods. Each value represents the mean ± standard error of five mice per group. *p < 0.05, compared with the control (normal mouse serum).

FIG. 5. Enhanced production of anti-Candida antibody (IgG) in sera of C. albicans-infected mice treated with prednisolone. Serum antibody levels were tested 4 weeks after the Candida infections. Specific anti-Candida IgG was measured using a specific ELISA as described in Materials and methods. Each value represents the mean ± standard error of five mice per group. *p < 0.05, compared with the control (normal mouse serum).

In AD patients, dermal mast cells are bound with IgE; when these
IgE are bound with antigens, they release the chemical mediator histamine, which induces the pathological symptoms of edema and itching. Severe itching makes patients scratch their skin, disturbing the epidermal organization and destroying the skin barrier so that it is invaded by many allergens, worsening the dermatitis. These observations suggest that a high concentration of IgE in patient sera plays a critical role in the pathogenesis of AD. Our results therefore imply that a steroidal anti-inflammatory drug may have an adverse pathogenic effect in AD patients because of the augmented production of IgE. However, at the present time, we cannot precisely explain the relationship among administration of a steroid drug, intestinal translocation of Candida, and the pathogenesis of AD. To clarify this relationship, we hope that our model, in which anti-Candida IgE production can be induced by prednisolone treatment, will be used as a tool to analyze the relationships between anti-inflammatory agents and IgE allergy associated with Candida infection.
A National Case-Control Study Identifies Human Socio-Economic Status and Activities as Risk Factors for Tick-Borne Encephalitis in Poland

Background

Tick-borne encephalitis (TBE) is endemic to Europe and medically highly significant. This study, focused on Poland, investigated individual risk factors for symptomatic TBE infection.

Methods and Findings

In a nation-wide population-based case-control study, of the 351 TBE cases reported to local health departments in Poland in 2009, 178 were included in the analysis. For controls, of 2704 subjects (matched to cases by age, sex, and district of residence) selected at random from the national population register, two were interviewed for each case and a total of 327 were suitable for the analysis. Questionnaires yielded information on potential exposure to ticks during the six weeks (maximum incubation period) preceding disease onset in each case. Independent associations between disease and socio-economic factors and occupational or recreational exposure were assessed by conditional logistic regression, stratified according to residence in known endemic and non-endemic areas. Adjusted population attributable fractions (PAF) were computed for significant variables. In endemic areas, the highest TBE risk was associated with spending ≥10 hours/week in mixed forests and harvesting forest foods (adjusted odds ratio 19.19 [95% CI: 1.72–214.32]; PAF 0.127 [0.064–0.193]), being unemployed (11.51 [2.84–46.59]; 0.109 [0.046–0.174]), or being employed as a forester (8.96 [1.58–50.77]; 0.053 [0.011–0.100]) or non-specialized worker (5.39 [2.21–13.16]; 0.202 [0.090–0.282]). Other activities (swimming, camping and travel to non-endemic regions) reduced risk. Outside TBE-endemic areas, risk was greater for those who spent ≥10 hours/week on recreation in mixed forests (7.18 [1.90–27.08]; 0.191 [0.065–0.304]) and visited known TBE-endemic areas (4.65 [0.59–36.50]; 0.058 [−0.007–0.144]), while travel to other non-endemic areas reduced risk.

Conclusions

These socio-economic factors and associated human activities identified as risk factors for symptomatic TBE in Poland are consistent with results from previous correlational studies across eastern Europe, and allow public health interventions to be targeted at particularly vulnerable sections of the population.

Introduction

Tick-borne encephalitis (TBE) is the most significant vector-borne viral infection in Europe, with clinical symptoms that commonly involve the central nervous system, leading to a high percentage of neurological sequelae (c. 25%), psychiatric problems (c. 45%), and fatality in c. 1% of the 3000–4000 annual cases [1]. Its focal distribution across much of Europe, from eastern France to the Baltic countries (and through much of Russia) and from Sweden to the Balkans [2,3], is related to persistent natural enzootic cycles vectored by ticks (principally Ixodes ricinus and also I. persulcatus in the east) amongst transmission-competent rodents (principally Apodemus species [4]), for which specific environmental conditions are required. As Ixodes ticks are very sensitive to desiccation, humidity must remain high through the summer for good tick survival and questing activity [5,6]. Furthermore, a relatively rapid rate of increase in spring temperatures is necessary to allow maximal synchrony in the activity of larval and nymphal ticks, and thereby a high degree of co-feeding by these stages on rodents, essential for TBEV transmission [7,8,9].
In addition to rodents, large hosts such as deer are essential to support tick populations, feeding significant numbers of both immature life stages as well as adults [6], although locally very high deer densities appear to reduce TBE prevalence in rodents, perhaps because deer divert ticks from feeding on rodents [10]. These abiotic and biotic constraints make forests the principal habitat for infected ticks, which has important consequences for risk factors. Human infections arise principally through tick bites, to which people are exposed as they enter the forests for occupation and recreation.

Geographically variable patterns of increase in TBE incidence have occurred in most parts of Europe: gradual but significant increases, including the emergence of new foci, have occurred in western and northern countries over the past two to three decades [11,12,13,14], in contrast to abrupt upsurges in erstwhile communist countries in the early 1990s [15]. The latter was particularly marked in Poland, where annual case numbers increased by an order of magnitude from 1992 to 1993 and have been maintained at this high level ever since (mean ± st. dev. annual cases: 1975–1992, 21 ± 14; 1993–2010, 229 ± 69) (see Fig. 1 in [15]).

Recent studies to assess the factors associated with the occurrence and upsurge of TBE have mostly been of an ecologic design, identifying correlates in time and space within a biologically and epidemiologically plausible framework. Some factors act directly on the enzootic cycle, but those that act on the degree of human exposure to infected ticks can cause more abrupt, spatially differential changes [16]. In the Czech Republic, any effect of socio-economic factors on exposure has been denied [17,18], despite the largest proportional increase in incidence post-1992 occurring in people aged over 65 years (see Fig. 5 in [17]). Instead, climate change has been emphasized as the sole causal factor [18], although marked heterogeneities at regional and even very fine geographical scales make this explanation untenable [19]. Although it is likely that increased incidence at higher altitudes in Austria, Slovakia and the Czech Republic [20,21,22] reflects warmer temperatures under limiting conditions along the distributional boundaries, the case numbers at these mountainous sites cannot account for either the full amount or the geographical pattern of the increases in incidence across central Europe. Instead, changes in specific climatic factors [23], in landscape resources and their utilization [9,24], and, most markedly, in socio-economic conditions that accompanied the transition to free-market economies [15,25] have all been identified as part of a network of independent but synergistic factors significantly correlated with TBE incidence. Each factor will operate with differential force and on different time-scales depending on the cultural, societal and political contexts characteristic of each country. Gradual increases in TBE incidence in western countries that have not experienced extreme political changes do not, of course, deny the role of slower socio-economic evolution in those countries (e.g. more outdoor recreation by retired people [26]) or of abrupt socio-economic transitions in ex-communist countries, despite assertions to the contrary [27]. Furthermore, short-term changes in the weather in one case (2006) and the recent economic crisis in another (2009) have been shown to explain annual spikes in incidence via their effects on human behaviour [28,29,30].
Changes in public health services, however, have been discounted as a sufficient explanation [31].

The aim of the present study is to test the credibility of the emergent correlation-based explanations by applying a more rigorous analytical epidemiological study at the individual level to assess associations between specific risk factors and disease. This was achieved by conducting a nationwide case-control study for Poland, the first such study for TBE, to compare the socio-economic status, residence characteristics, travel history and outdoor exposure to tick bites between TBE cases diagnosed during 2009 and randomly selected members of the population. An additional aim was to differentiate risk arising from exposure incurred through occupation or recreation, including travel-related risk. Knowledge of individual risk factors is particularly important for TBE because, in the absence of any specific antiviral treatment [1], prophylactic interventions are the only means of limiting human transmission. These include landscape management to control tick abundance, education about personal protective measures to reduce exposure to ticks, and vaccination using one of the two highly effective vaccines (produced by Baxter and Novartis) [32]. Public health interventions can be much better targeted if high-quality information exists on the risk factors likely to make some sections of the population particularly vulnerable.

Ethics Statement

The study protocol received written approval from the Ethical Committee of the National Institute of Public Health – National Institute of Hygiene. Written consent was obtained from each adult subject, and written consent of the legal guardian was obtained for each minor (person under 18 years of age). All consent forms are stored at the Department of Epidemiology of the National Institute of Public Health in Warsaw.

Study Design

The population-based, national case-control study to assess TBE risk factors covered ten of the 16 Polish provinces. The decision to set up the study in any particular province, and to recruit a network of interviewers with regional coordinators, was based on the expected occurrence of TBE cases (at least five TBE cases reported annually during the previous five years, or prior inclusion in a parallel screening study in which all patients with aseptic CNS infection were tested for TBE). The study was performed by a team of national coordinators, with two regional coordinators in each province, and 90 trained interviewers. Face-to-face interviews were performed with all eligible subjects.

Case Subjects

Attempts were made to recruit each diagnosed TBE case reported to the surveillance system. The Polish surveillance system has national coverage and is based on mandatory passive reporting of cases that develop symptoms of meningo-encephalitis. The system has fair sensitivity overall (48%), but diagnosis of TBE may differ between known endemic regions and the remaining parts of the country [33].
A standardized case definition is used to classify each reported case [34], as follows: a possible case is one that presents with symptoms of meningo-encephalitis and had visited an endemic area during April–November; a probable case is one that presents with symptoms of meningo-encephalitis and either the presence of an epidemiological link (consumption of raw dairy products) or detection of IgM in serum by enzyme-linked immunosorbent assay (ELISA); a confirmed case is one that presents with symptoms of meningo-encephalitis and laboratory confirmation (IgM and IgG detection in serum, or detection of antibodies in CSF, or confirmation by neutralization test independently of other test results). All eligible cases in this study met the surveillance definition of a probable or confirmed TBE case, were not vaccinated against TBE according to the recommended schedule in the previous 5 years, had disease onset between January 1, 2009 and December 31, 2009, and gave informed consent to participate in the study. Each case was interviewed either in the hospital or at home after discharge, using a 4-page questionnaire on exposure.

Control Subjects

Two control subjects were selected for each case, matched by sex, age (±5 years), and district of residence. To allow prospective selection of controls, a stratified random sample of 500 inhabitants from each studied district was obtained from the national population register prior to the recruitment of cases. The district samples were weighted using the age and gender distribution typical of TBE cases reported to surveillance during the previous 20 years. After a case was notified, seven subjects meeting the matching criteria were selected at random, and contact information from the population register was updated. The regional coordinators appointed interviewers, taking into account their availability and logistical constraints related to the subject's residence. For each case, the aim was to interview two of the selected controls who met the eligibility criteria. If a subject declined to participate in the study, another control subject was selected from the list.
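Both the surveillance case definition and the matched-control draw described above are mechanical rules, so their logic can be sketched compactly. A minimal Python sketch follows; the field names and register structure are hypothetical, not taken from the study's materials:

```python
# Sketch of (1) the surveillance case classification and (2) the random draw of
# up to seven matched control candidates per case. Data structures are invented.
import random

def classify_tbe_case(meningoencephalitis, visited_endemic_apr_nov,
                      epi_link_raw_dairy, igm_serum_elisa, lab_confirmed):
    """lab_confirmed = IgM and IgG in serum, antibodies in CSF, or a
    neutralization test; returns the surveillance category."""
    if not meningoencephalitis:
        return "not a case"
    if lab_confirmed:
        return "confirmed"
    if epi_link_raw_dairy or igm_serum_elisa:
        return "probable"
    if visited_endemic_apr_nov:
        return "possible"
    return "not a case"

def candidate_controls(case, register, k=7):
    """Random draw of k register entries matching sex, age within 5 years,
    and district of residence."""
    pool = [p for p in register
            if p["sex"] == case["sex"]
            and abs(p["age"] - case["age"]) <= 5
            and p["district"] == case["district"]]
    return random.sample(pool, min(k, len(pool)))

# Only probable or confirmed cases were eligible for the study:
status = classify_tbe_case(True, True, False, True, False)
print(status, status in ("probable", "confirmed"))   # probable True
```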
Interviews

One questionnaire was used in the interviews with adults and adolescents, and a separate questionnaire was used in the interviews with children of 12 years and younger, in the presence of their parents or legal guardians. Interviews of adult subjects comprised approximately 30 questions and took about 30 minutes. Interviews of children were shorter (approximately 20 questions) but lasted longer, because both the child and its parent or legal guardian were questioned. Interviewers had received 5-hour training sessions from the study coordinator, including an introduction to the study procedures and interview techniques. In addition to basic demographic data, information was sought specifically on exposure to ticks (i.e. time spent within various habitats) related to occupational and recreational outdoor activities. Interviewers were equipped with regional maps to mark geographic locations of exposure. Both cases and matched controls were asked about exposure that had occurred during a six-week period (the maximum disease incubation time) preceding the onset of disease in the respective case subject. This 'matching by exposure period' created the potential for differential recall bias, as the recall period for control subjects was delayed by the time needed for their recruitment and the arrangement of their interview. To address this issue, interviewers used a calendar marked with important national and local events, anniversaries and festivals, and asked about important dates from the respondents' lives to help them recall diverse activities over the relevant six-week period.

Data Management

For the analysis, pairs were excluded if the recall period for the control covered less than 50% of the actual six-week exposure period for the case, or if controls were not adequately matched to the cases on other variables (i.e. gender, age, region of residence). Information on occupation was collected as free text, which was then re-coded according to ISCO-08 major groups (Table 1), except for forestry workers, who were retained as a separate group as they are at higher risk of exposure to ticks [35]. Children aged <16 years, the unemployed, the retired and students were originally separate groups. Due to limited sample sizes, some occupational groups were later further amalgamated if odds ratios did not differ significantly in preliminary univariate analyses. Place of residence was classified as endemic or non-endemic, according to the official definition of whether or not the average incidence in the administrative district exceeded 1 case per 100,000 inhabitants in the preceding 5-year period (for more information, see Supplementary Table S1 and Supplementary Figures S1 and S2, online material). We stratified the analysis according to the endemic or non-endemic status of study subjects' residence for two reasons. First, residence in an endemic region modified the effects of other variables on TBE risk in preliminary analyses of the entire dataset. Secondly, the existence of infected ticks arises from persistent enzootic cycles, due to environmental and biological, rather than human, factors. This stratification was therefore considered appropriate because many of the factors potentially associated with infection within endemic regions would not necessarily predispose people to infection in non-endemic regions.

Multivariate Model

Conditional logistic regression was used to account for the matched study design. A stepwise and backwards selection model-building strategy was first used to create intermediate models for each of the following groups of factors: socio-economic factors, residence characteristics, travel history, and outdoor exposures. In the case of travel history, destinations within TBE-endemic or non-endemic regions were distinguished, and the duration of travel during the exposure period was determined. Initially, the factors significant at the p ≤ 0.1 level in the univariate analysis were considered, and then factors significant in the intermediate models were further included in an initial full multivariate model. In the multivariate model we assessed confounding by each of the candidate variables by inspecting the impact of its inclusion/exclusion on the estimates of the effects of the remaining variables. If time spent at different outdoor locations was identified as a significant risk factor (p ≤ 0.05), the relative importance of occupational or recreational exposures was examined and related to specific activities. We considered two-way interactions between spending ≥10 hours/week of recreational time in locations significantly associated with TBE risk and specific recreational activities. Education and occupation were considered only in adults. We checked the adequacy of the model using the Pregibon goodness-of-link test.
This test re-runs the conditional logistic regression on the predicted logit score and its square, and the interpretation is based on the significance of the squared term. As a sensitivity test, we also re-ran the model with and without children and major occupational groups. The effect of each ordinal variable (education category, income category, distance of the residence from the woods, duration of exposure time) was considered as a categorical as well as a scored variable, including linear and higher-order terms. Categories that showed <20% effect, and were not significantly different by the Wald test, were grouped. The most meaningful variable form was selected based on information criteria (Bayesian (BIC) and Akaike (AIC); see Supplementary Tables S4 and S11) and transparency of interpretation. The robustness of model parameters was assessed by their sensitivity to excluding defined population groups (children, occupational groups). No significant impact of these procedures on the model parameters was noted.

Adjusted population attributable fractions (PAF) were estimated for selected variables by the method of Bruzzi et al. (1985) [36], for which the primary underlying assumption is that the cases can be considered a random sample of those in the population. The variables were selected according to their significance in univariate and multivariate analyses and their relevance to public health. Bootstrap standard errors and bias-corrected (BC) confidence intervals were estimated for the adjusted population attributable fractions. The bootstrap program first repeated the conditional logistic regression on each sample, and then estimated PAFs on that sample. We generated 5,024 completed bootstrap samples for the endemic-area analysis and 9,984 samples for the non-endemic analysis. All analyses were conducted in STATA versions 10 and 12 (StataCorp, College Station, Texas, USA).
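The two core calculations here, the conditional logistic fit on matched sets and the adjusted PAF of Bruzzi et al. (PAF = 1 − Σ_j ρ_j/OR_j, with ρ_j the fraction of cases at exposure level j and OR_j the adjusted odds ratio, OR = 1 for the reference level), can be sketched in a few lines. The sketch below uses statsmodels' ConditionalLogit with a hypothetical input file and 0/1 indicator columns; it is a simplification of the study's STATA workflow, and the bootstrap step is omitted:

```python
# Sketch: conditional logistic regression on matched case-control sets, then an
# adjusted PAF for one binary exposure following Bruzzi et al. (1985).
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.read_csv("tbe_pairs.csv")       # assumed file: one row per subject
y = df["case"]                           # 1 = case, 0 = matched control
X = df[["forest_10h_week", "unemployed", "forester"]]   # 0/1 indicators
fit = ConditionalLogit(y, X, groups=df["pair_id"]).fit()
print(fit.summary())

def paf_bruzzi(cases_exposed, cases_total, adj_or):
    """Adjusted PAF for a binary exposure (Bruzzi et al., 1985)."""
    rho1 = cases_exposed / cases_total   # fraction of cases exposed
    rho0 = 1.0 - rho1                    # fraction of cases at reference level
    return 1.0 - (rho0 / 1.0 + rho1 / adj_or)

or_forest = np.exp(fit.params["forest_10h_week"])
paf = paf_bruzzi(cases_exposed=int((df["case"] & df["forest_10h_week"]).sum()),
                 cases_total=int(df["case"].sum()),
                 adj_or=or_forest)
print(f"adjusted PAF for forest exposure: {paf:.3f}")
```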
Study Population Characteristics

The outcome of the recruitment process, including validation of the matching procedures, is summarized in Figure 1. In total, 178 matched pairs were used for the analysis, including one (33 pairs), two (142 pairs), three (2 pairs), and four (1 pair) controls per case, making a total of 505 valid interviews. Of the 178 cases, 145 (81%) met the criteria for confirmed cases, and the remaining 33 cases were confirmed by high concentrations of IgM anti-TBEV antibodies in serum. The comparative characteristics of the two study subpopulations (Table 2) confirm the good match between cases and controls. The mean period between TBE onset and interview among cases was 28.9 days (SE 2.0 days); the equivalent among the respective controls was 58.3 days (SE 2.6 days).

Risk Factors in TBE Endemic Areas

The univariate associations between all the studied factors and TBE risk among inhabitants of endemic areas (summarized in Table 3, with complete information in Supplementary Table S2) indicate the importance of socio-economic characteristics. First, risk of TBE decreased with increasing education level (see Supplementary Table S7). Certain aspects of human activities had significant impacts on TBE risk (Table 3). Spending ≥10 hours per week in mixed forests in relation to either occupational or recreational activities was associated with increased risk of TBE (OR 2.21 and 3.11, respectively). As expected, lengthy occupational exposure in mixed forests was strongly associated with being a forester (data not shown), already identified as a significant risk factor. Somewhat paradoxically, occupational exposure of ≥10 hours per week at forest edges substantially decreased TBE risk. The particular types of recreational activity also proved to be relevant: spending time camping or swimming was associated with significantly reduced risk of TBE, whereas collecting forest foods and sailing were associated with increased risk (Table 3).

Based on the univariate analysis and intermediate models (Supplementary Tables S5, S6, S7, S8, S9), the following candidate variables were considered in the final model: education (high school or higher vs. primary/vocational) and occupation (technicians, craftsmen and elementary occupations; forestry or fishery workers) (see Supplementary Table S10). Of the socio-economic factors, occupation remained a strong predictor of TBE, with the unemployed, foresters, and non-specialized occupations the most affected (aOR 11.51, 8.96 and 5.39, respectively) (Table 3). The effect of working at forest edges ≥10 h/week was significantly protective in the final model (aOR 0.14), but this term may include being outside the forest and highlights the much lesser risk compared with working within deciduous or mixed forests. After adjusting for socio-demographic and outdoor exposures, camping and swimming remained protective (aOR 0.17 and 0.24, respectively). Neither recreation for ≥10 hours per week in mixed forests nor collecting forest foods (mushrooms or berries) per se was a high-risk activity, but the combination of these two activities conferred the highest risk of TBE (aOR 19.19; see also Figure 2).

Risk Factors in TBE Non-endemic Areas

No socio-economic factors predicted TBE risk among inhabitants of non-endemic areas, although there was a hint of a protective effect of higher education in the univariate analysis (Table 4; complete information in Supplementary Table S3). Curiously, residence at a greater distance from the nearest forest was associated with increasing risk, whereas travel to (other) non-endemic areas reduced the risk. Amongst outdoor recreational activities, exposure of at least 10 hours per week in mixed forests was a significant risk factor, while equivalent time spent in cottage gardens was a strong protective factor. No specific recreational activity was associated with TBE risk. Based on the univariate analysis and the intermediate models (Supplementary Tables S12, S13, S14, S15, S16), the following candidate variables were included in the initial full multivariate model: education (per one level increase), occupation (forester vs. other), residence distance from the forest (≤500 m vs. >500 m), travel to a non-endemic area (yes/no), travel to an endemic area (yes/no), ≥10 h/week in mixed forest during leisure time (yes/no), and ≥10 h/week in cottage gardens (yes/no) (Supplementary Table S17). In the final model (Table 4), spending ≥10 h/week in mixed forest during leisure time was the single most important predictor of TBE risk (aOR 7.18). After adjusting for socio-demographic variables and outdoor exposures, the effect of increasing distance between residence and forests remained significant (aOR 4.00). A history of travel by inhabitants of non-endemic areas to endemic areas returned a high adjusted odds ratio (aOR 4.65), but this was non-significant. Conversely, travel to non-endemic areas was significantly associated with decreased risk, even after adjusting for other factors (aOR 0.33).
Estimation of Population Attributable Fraction

Among inhabitants of endemic areas, population attributable fractions (PAF) were established for persons living within 500 m of a forest (0.312), the occupational groups of technicians, craftsmen and elementary workers (0.202), the unemployed (0.109) and foresters (0.053), and persons who spent ≥10 hours of recreation per week collecting forest foods in mixed forests (0.127) (Figure 3A). All effects, apart from distance from home to the nearest forest, were statistically significant. Among inhabitants of non-endemic areas, PAFs were established for persons spending ≥10 hours of recreation per week in mixed forests (0.191) and travelling to endemic areas (0.058) (Figure 3B). Only the former effect, however, was statistically significant in non-endemic areas.

Discussion

Despite many constraints in ascertaining behavioural exposure of humans to ticks, and in measuring many factors that have important influences on TBE risk (such as weather conditions and populations of wild animals and ticks within the disease foci), this first case-control study of individual TBE risk factors allows deeper insight into the human behaviour and characteristics that increase the risk of contacting ticks infected with TBEV. In endemic areas, the highest TBE risk was associated with recreation of ≥10 hours/week in mixed forests and harvesting forest foods, being unemployed, or being employed as a forester or non-specialized worker. Outside TBE-endemic areas, risk was greater for those who spent ≥10 hours/week on recreation in mixed forests and visited known TBE-endemic areas. This result, derived from the first rigorous epidemiological study of TBE in Europe, establishes the principle that human factors do play a role in determining risk of infection, and therefore could have been instrumental in driving the recent increases in incidence, despite assertions to the contrary [17,18]. The particular patterns of these effects will vary between countries.

Public Health Implications of Main Results

The findings identify certain sections of the population at highest risk of TBE infection, allowing public health interventions to be targeted more effectively and efficiently. Two methods were applied: using conditional logistic regression, we identified risk factors amongst the (sampled) population as a whole; then, based on the PAF calculation, we assessed the proportion of cases that would be avoided if the risk factor were eliminated from the population (for example by immunization of the risk groups). The combined results enable prioritization of possible interventions that could have the highest impact on TBE incidence in Poland. The importance of lower socio-economic status in determining risk highlights the mismatch between greatest need and least capacity to implement protection without financial assistance.

First, there is the environmental context of zoonotic risk. As expected, mixed forests were identified as significant places of human exposure associated with TBE risk, as these habitats provide the most favourable abiotic conditions for ticks [37] and house abundant tick hosts. Secondly, there is human exposure to the zoonotic hazard. It is well known that forestry work poses a high risk, most probably related not only to the time spent in forested ecosystems, but also to the types of activity, for example frequently leaving paths and moving amongst the vegetation, and so enhancing contact with ticks.
Even so, due to the universal vaccination of forestry workers provided freely by forestry departments during the previous decade, the effect of forestry-related occupational exposure is likely to be underestimated. The limited system of recording vaccine use in Poland does not take into account the type of vaccination (primary or booster), and therefore does not permit a valid estimation of vaccination coverage in the general population or in groups of foresters [34]. According to information obtained from the State Forest Directorate, no-cost vaccination is offered to all employees, but its use is not recorded at national level. The estimated national immunization coverage for Poland in 2007 was 0.8% [34].

With respect to recreational exposure associated with mixed forests, in the case of residents of non-endemic regions travel to endemic regions was a necessary additional risk factor, while staying within non-endemic regions (or travel there by residents of endemic regions) not surprisingly reduced risk. Within endemic regions, residence close to forests has a high positive impact on TBE risk, as was also found for Lyme borreliosis in Pennsylvania, USA [38], presumably simply reflecting the probability of entering tick-infested forests. The finding that, conversely, risk of symptomatic TBE infection is higher for residents living further from forests in non-endemic regions is hard to explain without invoking the possibility of reduced protection (e.g. barrier clothing, vaccination) due to lower awareness of risk, for which we have no evidence. Basing vaccination policies solely on the propinquity of homes to forests, therefore, would be neither specific nor sensitive enough, given the other risk factors and the contrast between residents of endemic and non-endemic areas.

Amongst the range of outdoor activities examined, collecting forest foods (mushrooms or berries) per se did not increase risk unless it occurred in mixed forests, when it became the highest identified risk factor. This finding concurs with individual responses to questionnaires in a survey in Latvia [35,39], which revealed that collecting forest foods was the commonest reason for frequent visits to forests (more often than once a month) and also more than doubled the odds of suffering a tick bite, second only to forestry work. In contrast, camping and swimming in Poland were strongly negatively associated with TBE risk, presumably because such activities kept people away from tick habitats, as apparently did prolonged recreation in cottage gardens in non-endemic areas. To conclude, TBE risk seems to be related not to time spent outdoors per se, but to specific activities that lead people to maximum exposure to the specific vegetation where TBEV-infected ticks are present.

Compared with previous ecological studies that identified socio-economic correlates of behaviour associated with TBE risk (e.g. frequent visits to forests principally for food harvest in Latvia) [35], the socio-economic factors examined here can be related directly to individual TBE risk. There was no statistically significant protective effect of increasing income and education level, but occupation appears to be a particularly important risk determinant, although only in endemic areas, as would be expected. In addition to forestry, unemployment and the group of non-specialized occupations are unambiguously associated with higher risk.
This strong empirical evidence for unemployment and relatively low-paid work as important contributing factors to public health problems (see also [40,41,42,43]) is backed by several plausible mechanisms in relation to this particular infectious disease, which have already been substantiated with respect to high-risk behaviour in Latvia [35] and the unemployment-triggered spike in TBE cases in Lithuania, Latvia and Poland in 2009 [30]. First, harvesting food from forests, although by no means practiced only by people of low economic status, was the major reason given for frequent visits to forests by the unemployed in Latvia. Poland is Europe's leading exporter of wild fungi. A nation-wide survey performed in Poland in 2004 found that the harvest of these and other forest foods to generate additional family income is associated with low income, and worsening of the financial situation was given as a major reason for increased harvest by less wealthy families [44,45]. Podlaskie is the most productive province for mushrooms, followed by Warminsko-Mazurskie [46], both of which suffer particularly high TBE incidence. Officially recorded annual harvests in Poland were more variable for forest foods than for game animals, as would be expected from weather effects on productivity. Harvests in 2009 were typical of the past decade: mushrooms (principally chanterelle, boletus and king boletus), 4,176 tonnes (range of annual harvests 2,379–6,922); fruits and nuts (principally bilberry, elder, dog rose and mountain ash), 12,244 tonnes (range 8,374–19,138); game, 7,147 tonnes (range 6,549–9,546) (http://www.stat.gov.pl/cps/rde/xbcr/gus/PUBL_sy_statistical_yearbook_agriculture_2011.pdf). In Russia, of the workers who moved out of employment on the traditional collective and state farms, and then out of the corporate farms that succeeded them after 1990, more than half shifted to individual employment on household plots and peasant farms; sale of mushrooms and forest fruits made up two-thirds of the income from non-farm self-employment amongst these rural people [45]. In Lithuania in 2009, when unemployment increased after the downward trend of previous years, the official market in wild fungi doubled (http://www.stat.gov.lt).

Secondly, unemployment may render people unable to cover the cost of the vaccine, or even the cost of tick repellents. Indeed, increasing costs and decreasing uptake of vaccination were recorded in Lithuania during the recent recession [30]. Thirdly, if unemployment were associated with a lower standard of living, including lower levels of nutrition, protective immune responses against infection might be compromised, leading to more severe clinical symptoms and thus a higher proportion of infections progressing to recorded neuro-invasive disease (see below), as stressful life events can have an impact on the health of an individual, including immunological health, acting through stress hormones [47,48]. It should be noted, however, that improved wealth and the funding of relatively high-cost leisure activities in rural settings may also increase the risk of TBE, as appears more likely to apply in the Czech Republic [19]. This conforms to the conceptual model that both poverty and wealth affect zoonotic risk [25], but asymmetrically, due to differential constraints and opportunities for amelioration [49]. The case-control study reported here allows appropriate responses by national public health agencies to geographically variable risk factors, both within and between countries.
A full relative cost-benefit analysis is needed, including all realistic logistical and practical aspects, to decide between the strategies of encouraging the lower-cost but less secure use of tick repellents and protective clothing versus the higher-cost but much more certain protection of vaccination. Individual perception of risk and personal attitudes towards vaccination, depending on geographical and social contexts, also need further systematic study.

Study Limitations

As with all observational studies, our study has several limitations. In Poland, testing for TBE is limited to cases with symptoms of meningo-encephalitis, representing approximately 5% of persons exposed to TBE virus, because most infections remain asymptomatic and 70% of symptomatic infections are limited to the first, flu-like phase without progressing to CNS involvement [3]. This study therefore does not reflect risk factors for TBEV infection, but rather for the development of severe neuro-invasive disease. Non-compliance with study participation might introduce bias, but only if it were differential with respect to disease status. This disease carries no stigma in Poland, but controls and less debilitated patients, more occupied by work, might have been less available or willing to devote time to the interview. Interviewers, however, were trained to accommodate this in the times at which they sought contact with subjects and arranged interviews. The possibility of having included as controls persons who had recently suffered an asymptomatic TBE infection could have added noise to the results. This effect could be more pronounced in endemic compared to non-endemic regions, due to the higher prevalence of TBE-infected ticks. TBE infections, however, are relatively rare even if, in reality, there are 20 infections per reported case. Our study does, in any case, conform to the case-cohort study design by having selected members of the control group at random from the source population [50]. A potential problem of over-matching cases and controls with respect to socio-economic class arises if socially deprived and relatively wealthy people occupy spatially distinct areas. To minimize this effect, the selected geographical units within which cases and controls were matched were relatively large, inhabited on average by 100,000 persons (NUTS-4 administrative area). To accommodate the low incidence but extensive distribution of TBE in Poland, 90 interviewers had to be recruited, but they were drawn as much as possible from amongst health department surveillance epidemiologists with extensive experience of interviewing communicable disease patients. They were trained and equipped to maximize the accuracy of subjects' recall of events up to six weeks prior to the interview (see Methods). The number of questions that required interpretation by the interviewer was limited, and the use of aide-memoires followed a strict protocol. Finally, the problem of confounding variables of known and unknown origin was minimized as far as possible by careful handling of the data. Case and control subjects were matched on potentially strong confounders (age, gender and district of residence), and potential confounders were included in the multivariable analysis. The variable concerning time spent travelling to non-endemic areas (i.e. while not in endemic areas), for example, corrected for time that did not contribute to the relevant exposure period.
Conclusions

Despite the potential for bias and confounding, our study design allowed a more accurate insight into individual-level risk factors for TBE in Poland than recent ecologic-type studies. Its methodological strength lies in the random selection of control subjects from the general population and in rigorous procedures to avoid recall bias. Gratifyingly, the results from both study types were largely concordant, thereby validating many of the substantive conclusions on determinants of TBE risk in central and eastern European countries. It is increasingly clear that human factors must be taken into account in assessing, and therefore combating, emerging zoonotic risk. Such factors can change adversely more rapidly than environmental conditions, but are also more amenable to public health measures. There is no reason to think that these general conclusions would not apply to other countries, but the specific risk factors are likely to vary with differing national cultural and socio-economic contexts and can only be identified with certainty by focused case-control studies. In wealthier countries, for example, or those where harvest of forest foods is not a strong cultural tradition, there is unlikely to be such a strong association of unemployment or low-paid work with exposure through activities in tick-infested forests. Instead, the scaling of risk with economic hardship is likely to be reversed [49].
Relationship Status between Vancomycin Loading Dose and Treatment Failure in Patients with MRSA Bacteremia: It's Complicated

Introduction

A one-time vancomycin loading dose of 25–30 mg/kg is recommended in the current iteration of the vancomycin consensus guidelines in order to more rapidly achieve target serum concentrations and hasten clinical improvement. However, there are few clinical data to support this practice, and the extent of its benefits is largely unknown.

Methods

A multicenter, retrospective, cohort study was performed to assess the impact of a vancomycin loading dose (≥ 20 mg/kg) on clinical outcomes and rates of nephrotoxicity in patients with methicillin-resistant Staphylococcus aureus (MRSA) bacteremia. The study matched patients in a 1:1 fashion based on age, Pitt bacteremia score, and bacteremia source. The primary outcome was composite treatment failure (30-day mortality, bacteremia duration ≥ 7 days after vancomycin initiation, persistent signs and symptoms of infection ≥ 7 days after vancomycin initiation, or switch to an alternative antimicrobial agent). Secondary outcomes included duration of bacteremia, length of stay post-bacteremia onset, and nephrotoxicity.

Results

A total of 316 patients with MRSA bacteremia were included. Median first doses in the loading dose and non-loading dose groups were 23.0 mg/kg and 14.3 mg/kg, respectively (P < 0.001). No difference was found in composite failure rates between the non-loading dose and loading dose groups (40.5% vs. 36.7%; P = 0.488) or in the incidence of nephrotoxicity (12.7% vs. 16.5%; P = 0.347). While multivariable regression modeling showed that receipt of a vancomycin loading dose on a mg/kg basis was not significantly associated with composite failure [aOR 0.612, 95% CI (0.368–1.019)], post hoc analyses demonstrated that initial doses ≥ 1750 mg were independently protective against failure [aOR 0.506, 95% CI (0.284–0.902)] without increasing the risk of nephrotoxicity [aOR 0.909, 95% CI (0.432–1.911)].

Conclusion

These findings suggest that initial vancomycin doses above a certain threshold may decrease clinical failures without increasing toxicity, and that weight-based dosing might not be the optimal strategy.

Electronic supplementary material: the online version of this article (10.1007/s40121-019-00268-3) contains supplementary material, which is available to authorized users.

INTRODUCTION

Various guidelines have suggested different vancomycin dosing and monitoring strategies, and it was not until 2009 that the first consensus guideline for the therapeutic monitoring of vancomycin was published [1][2][3][4]. The vancomycin guidelines recommend targeting trough concentrations of 15–20 mg/L for patients with Staphylococcus aureus bacteremia, endocarditis, osteomyelitis, meningitis, or hospital-acquired pneumonia, and dosing regimens are designed to achieve these target serum exposures at steady-state.
Depending on a patient's renal function, it may take anywhere from 24 to 72 h, or longer, to reach steady-state. To facilitate rapid attainment of goal concentrations, the guidelines recommend a one-time loading dose of 25–30 mg/kg based on total body weight (TBW) for seriously ill patients [1]. By increasing the likelihood of pharmacokinetic/pharmacodynamic (PK/PD) target attainment early in therapy, this would theoretically improve outcomes in those patients at highest risk of mortality. Although published data demonstrate that achievement of PD targets during the first 48 h of infection improves outcomes, clinical data showing a direct benefit of vancomycin loading doses are lacking [5,6]. Conversely, given previous findings of higher total daily doses being correlated with a higher incidence of nephrotoxicity, hypothetical concerns of increased vancomycin-associated nephrotoxicity persist with the weight-based loading dose approach, especially in obese patients [7]. However, this is likely due to resultant supratherapeutic vancomycin exposures in these patients, and there are currently few data to demonstrate an association between vancomycin loading doses and nephrotoxicity [8,9].

Due to the relative paucity of evidence demonstrating advantages of vancomycin loading doses, combined with the concern for increasing the risk of toxic events, continued evaluation of this practice is necessary. The primary objective of this study was to evaluate the effect of administering a one-time, weight-based vancomycin loading dose on clinical outcomes in patients with MRSA bacteremia.
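As a concrete illustration of the weight-based approach discussed above, a loading dose can be computed directly from total body weight. A minimal sketch follows; the 25 mg/kg figure is the lower bound of the guideline range, while rounding to 250 mg increments and the 3000 mg cap are common practice conventions assumed here, not details of this study:

```python
# Sketch: one-time weight-based vancomycin loading dose from TBW.
def vanco_loading_dose(tbw_kg, mg_per_kg=25.0, round_to_mg=250, cap_mg=3000):
    raw = tbw_kg * mg_per_kg                      # guideline mg/kg times TBW
    rounded = round(raw / round_to_mg) * round_to_mg
    return int(min(rounded, cap_mg))              # assumed practical cap

print(vanco_loading_dose(70))    # 1750 mg
print(vanco_loading_dose(120))   # 3000 mg (capped)
```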
METHODS

The study was conducted in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This was a retrospective, matched cohort study conducted at two academic health systems in Southeastern Michigan comprising 5 acute care hospitals. Patients at least 18 years of age who received vancomycin for treatment of a documented MRSA bacteremia between 2007 and 2013 were eligible for inclusion. Patients were excluded if they received vancomycin for less than 72 h, were pregnant, or had end-stage renal disease or unstable renal function that precluded them from receiving a scheduled vancomycin maintenance dose.

Data Collection

Data collected included demographics, comorbid conditions, antimicrobial treatment regimens, source of MRSA bacteremia, serum creatinine, Pitt bacteremia score at the time of vancomycin initiation, duration of bacteremia, length of stay, length of vancomycin therapy, vancomycin dosing and trough concentrations, microbiological and clinical cure data, concomitant nephrotoxins, and in-hospital mortality.

Outcome Data and Definitions

The primary outcome of this study was composite treatment failure, defined as the presence of at least one of the following: 30-day mortality (from index culture), bacteremia duration ≥ 7 days after vancomycin initiation, persistent signs and symptoms of infection (temperature > 38 °C, white blood cells > 12,000/µL) ≥ 7 days after vancomycin initiation, or switch to an alternative anti-MRSA antimicrobial agent due to treatment failure as determined by treating physician documentation. Patients not meeting criteria for composite failure were considered to be a treatment success. Secondary outcomes included duration of bacteremia, length of stay post-bacteremia onset, and nephrotoxicity. Nephrotoxicity was defined as an increase in serum creatinine (SCr) of greater than 0.5 mg/dL, or of at least 50% from baseline, on two consecutive measurements, as per the vancomycin dosing and monitoring guidelines, and was assessed from the first dose of vancomycin to 72 h after the final dose [1]. Baseline SCr was the creatinine value immediately preceding the first dose of vancomycin. Vancomycin trough concentration assessment included only initial trough concentrations drawn at steady-state of the maintenance regimen (prior to the 4th or 5th dose). Concomitant nephrotoxins assessed included aminoglycosides, colistin, acyclovir, intravenous (IV) contrast dye, amphotericin, tacrolimus, loop diuretics, non-steroidal anti-inflammatory drugs, angiotensin-converting enzyme inhibitors and angiotensin II receptor blockers.

Statistical Analysis

A sample size of 272 patients (136 matched pairs) was required to detect a 15% difference in the primary endpoint using an alpha of 0.05 and power of 80%. For all analyses, a P value ≤ 0.05 was considered statistically significant. All statistical analyses were performed using SPSS v.24.0 (Armonk, NY, USA).

In the primary analysis, a series of bivariate analyses were performed to compare outcomes between exposure groups, determine factors associated with the primary outcome of composite failure, and determine factors associated with nephrotoxicity. Categorical variables were compared using the χ² or Fisher's exact test, while continuous variables were compared using Student's t test or the Mann-Whitney U test. Multivariable regression analyses were then performed to examine the independent association between loading dose and composite failure, as well as between loading dose and nephrotoxicity. Loading dose, along with all variables associated with the outcome of interest at a P value < 0.2 with biologic plausibility, was entered into conditional logistic regression models simultaneously and removed in a backward, stepwise fashion, with variables retained in the logistic regression model if the P value for the likelihood ratio test for their removal was < 0.1. Because loading dose was the exposure of interest, it was forced to remain in the final step of the regression models even if no statistical association was observed. Model fit was assessed with the Hosmer-Lemeshow goodness-of-fit test; models with a non-significant result were considered adequate. Multicollinearity of candidate regression models was assessed via the variance inflation factor, with values between 1 and 5 considered acceptable.
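Two calculations from these Methods lend themselves to short sketches: the nephrotoxicity rule (an SCr rise of > 0.5 mg/dL or ≥ 50% over baseline on two consecutive measurements) and the sample-size statement. The power check below assumes illustrative baseline failure rates, since the study does not report which rates it used, so the result will not reproduce the published 136 pairs exactly:

```python
# Sketch: (1) the nephrotoxicity definition and (2) a per-group sample-size
# check for a 15% absolute difference in proportions. Assumed inputs are noted.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def nephrotoxic(baseline_scr, serial_scr):
    flags = [scr > baseline_scr + 0.5 or scr >= 1.5 * baseline_scr
             for scr in serial_scr]
    # The definition requires two *consecutive* qualifying measurements.
    return any(a and b for a, b in zip(flags, flags[1:]))

print(nephrotoxic(1.0, [1.2, 1.6, 1.7]))   # True: 1.6 and 1.7 both qualify

# 15% absolute difference (here 40% vs. 25% is assumed), alpha 0.05, power 0.80:
es = proportion_effectsize(0.40, 0.25)
n_per_group = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.80)
print(round(n_per_group))   # per-group n under these assumed baseline rates
```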
Post-Hoc Analyses

Based on the unequal distribution of TBW between the loading dose and non-loading dose groups, the lack of association between first dose measured in mg/kg and outcome, and the mild association between first dose in mg and outcome (P = 0.12) in the primary analysis, post hoc exploratory analyses were performed to further examine the association between initial vancomycin dose, measured in mg, and outcome. Classification and regression tree (CART) analysis was performed to derive a threshold in the distribution of initial vancomycin dose, modeled continuously, where the incidence of composite failure was most disproportionate. After identifying this threshold, it was entered into the regression analysis in place of loading dose to examine its independent association with composite failure.

Furthermore, given both the obesity imbalance between treatment groups and the unexpected finding of obesity being protective against treatment failure, further analyses were performed to ensure that the lack of association between a weight-based loading dose strategy and outcome was not an artifact of obese patients being less likely to receive first doses ≥ 20 mg/kg. This was accomplished by two separate methods. First, failure rates were compared between patients receiving loading doses and those who did not as a function of body mass index (BMI) classification (i.e., underweight, normal/overweight, and obese). This same analysis was also performed across the BMI classifications for the CART-defined, milligram-based (non-weight-based) first-dose cutoff for success. Secondly, the multivariate models for independent predictors of failure were run excluding the obese patient population (n = 62) to assess the impact on the association between loading dose, or the CART-defined cutoff, and treatment failure in the rest of the cohort.
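The CART step that produced the dose threshold can be illustrated with a depth-1 classification tree over a single feature, which is exactly a one-split CART. The sketch below uses simulated doses and outcomes, so the recovered split falls between the simulated dose levels (near the 1500/1750 mg midpoint) rather than exactly at 1750 mg:

```python
# Sketch: one-split CART on initial dose (mg) vs. composite failure.
# Data are simulated; in the study the split fell at >= 1750 mg.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
dose_mg = rng.choice([1000, 1250, 1500, 1750, 2000, 2250], size=316)
# Simulated outcome: failure less likely at or above 1750 mg (29% vs. 42%).
failure = (rng.random(316) < np.where(dose_mg >= 1750, 0.29, 0.42)).astype(int)

tree = DecisionTreeClassifier(max_depth=1).fit(dose_mg.reshape(-1, 1), failure)
print("split at dose >", tree.tree_.threshold[0], "mg")
```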
Patient Population

A total of 316 patients constituting 158 matched pairs were included in the final analysis. The baseline demographics of the patients were similar in each group, although patients who did not receive loading doses had a significantly higher TBW and prevalence of obesity (Table 1). The most common source of MRSA bacteremia was skin and soft tissue infection. Over one-third of the patients in each group required admission to an intensive care unit (ICU) at some point during admission, but overall Pitt bacteremia scores remained low in both groups. Vancomycin minimum inhibitory concentration (MIC) was available for 292 of the isolates, with an MIC50 of 1 mg/L (range 0.5–2 mg/L). Among patients in the loading dose group, the median (IQR) initial dose was 23.0 mg/kg (21.4–25.0), equating to 1500 mg (IQR 1500–2000). This was significantly greater than the initial dose of 14.3 mg/kg (IQR 12.2–17.1), or 1000 mg (IQR 1000–1250), received by patients in the non-loading dose group (P < 0.001 for both comparisons).

Post-Hoc Analyses

As described above, post hoc CART analysis on initial vancomycin dose (mg) was performed to determine whether a milligram-based cutoff predicting success could be identified. This analysis unveiled a threshold of ≥ 1750 mg, above which the proportion of patients experiencing composite failure (Supplemental Table 1) was significantly lower [25/86 (29.1%) receiving ≥ 1750 mg vs. 97/230 (42.2%) receiving < 1750 mg, P = 0.033]. CART analysis was unable to determine a mg/kg-based cutoff. In multivariable regression analyses including initial dose ≥ 1750 mg in place of vancomycin loading dose, doses ≥ 1750 mg were independently protective against failure [aOR 0.506 (0.284–0.902)] and obesity was no longer independently protective against failure (Table 2).

When treatment failure rates were assessed for both exposure cutoffs (presence/absence of a loading dose of ≥ 20 mg/kg and presence/absence of a first dose of ≥ 1750 mg) as a function of BMI category, failure rates were lowest for obese patients and there was no association between first dose and outcome in obese patients (Table 4). Initial dose ≥ 1750 mg was associated with decreased failure in the normal/overweight cohort (31.0% vs. 47.5%; P = 0.032). No such association was seen with loading doses ≥ 20 mg/kg in normal/overweight patients, with failure seen in 42.4% and 44.1%, respectively (P = 0.89). Furthermore, when all obese patients were removed from the cohort, the magnitudes of the adjusted odds ratios for treatment failure in the multivariate models for loading dose [aOR 0.697 (0.406–1.196)] and first dose ≥ 1750 mg [aOR 0.561 (0.287–1.094)] were similar to those of the overall cohort, and failed to reach significance for first dose ≥ 1750 mg only because of wider confidence intervals resulting from the smaller sample size. Importantly, when initial dose ≥ 1750 mg was placed in the model for nephrotoxicity instead of vancomycin loading dose, no association between this dose and toxicity was demonstrated [aOR 0.909 (0.432–1.911)].

DISCUSSION

In the present study, there was no significant correlation between vancomycin loading dose and clinical success when the loading dose was assessed in the traditional (mg/kg) sense. It is noteworthy, however, that, when controlling for other factors, there was a signal between a mg/kg-based first dose and improved outcome that failed to reach statistical significance. Further investigation revealed that first dose, when considered on a milligram basis alone, did have a significant impact on clinical outcomes. While first doses ≥ 1750 mg being predictive of success and loading doses ≥ 20 mg/kg having no association with failure is an interesting and important finding, a potential confounder of this dataset was that obesity was found to be protective against composite failure. One possible explanation for this finding is that obese patients may have received higher first doses (in mg) despite the dose not meeting the arbitrary mg/kg definition of a loading dose. To ensure this patient group did not drive the lack of association between mg/kg-based loading dose and outcomes, multiple additional analyses were performed, the results of which demonstrate that obesity itself is unlikely to have obscured any relationship, and that the association between milligram-based (flat) first doses and outcomes is truly stronger than that of mg/kg-based loading doses.

First, Table 4 clearly demonstrates that obese patients were less likely to experience composite failure compared to other patients in the cohort, regardless of either the mg/kg or flat first-dose cutoff (20 mg/kg or 1750 mg). One possible explanation for this finding is that obese patients were less likely to have a "high-risk" source of MRSA bacteremia than non-obese patients (30% vs.
18%; P = 0.08; data not shown). Secondly, to further ensure that obesity was not confounding an association with weight-based dosing, we performed the same regression analyses with obese patients removed from the cohort. Without this group of patients, the adjusted odds ratios for composite failure and first doses in mg/kg and ≥ 1750 mg were similar to those for the entire cohort. Importantly, the adjusted odds ratio for the mg/kg-based loading dose and treatment failure actually increased slightly when obese patients were removed from the cohort. If these patients were truly obscuring an association, it would be expected that the adjusted odds ratios would decrease (or at least stay the same) when these patients were removed from the cohort, even if they failed to reach statistical significance due to sample size.

[Table footnotes: ICU, intensive care unit. (a) Hosmer-Lemeshow goodness-of-fit test P = 0.310; (b) Hosmer-Lemeshow goodness-of-fit test P = 0.977; variance inflation factor 1–5 for all variables included at model entry.]

Additional stratified analyses further support the association between initial doses of 1750 mg or greater, rather than weight-based loading doses, as the true driver of clinical success in our cohort. The benefit of a first dose ≥ 1750 mg was primarily observed in normal/overweight individuals, which was the predominant weight category of the patients in this study. Conversely, when assessing weight-based doses in this same cohort of patients, no signal of an association was identified with first doses ≥ 20 mg/kg. Interestingly, the only weight category that suggested a potential benefit from a weight-based loading dose was those who were underweight. In this cohort of patients, failure was seen in 4/20 (20%) of patients who received a weight-based loading dose compared to 7/14 (50%) of those who did not. While this association failed to reach significance due to small numbers, it is logical that this would be the cohort in which a weight-based dose might show the most benefit, as it would allow patients in this group to receive a dose closer to the threshold milligram dose. However, given the small numbers, and the fact that no patient in this weight category received a dose of at least 1750 mg, we were unable to fully assess the threshold in this patient population. Finally, while the CART analysis was able to identify 1750 mg as a flat-dose threshold, it was unsuccessful at identifying a mg/kg cutoff value associated with composite failure. Taken together, these data support the finding that a flat, milligram-based first dose, rather than a mg/kg-based one, may improve patient outcomes.
It is important to note that the finding that doses ≥ 1750 mg decreased treatment failure should not be interpreted as a threshold for what a loading dose should be, but rather as a proof of concept that there is an association between initial vancomycin dose and clinical outcome, and that that dose might not be best determined by a patient's weight. Finding an association between first dose and outcome is not surprising given the wealth of evidence demonstrating the importance of attaining adequate vancomycin exposure on days 1 and 2 of therapy for improving outcomes [8,9,11].

[Table 4 footnotes: (a) underweight defined as a body mass index < 18.5 kg/m²; (b) normal/overweight defined as a body mass index 18.5-29.9 kg/m²; (c) obese defined as a body mass index ≥ 30 kg/m².]

While this study assessed both first-dose and maintenance-dose regimens, it did not assess the timing between those doses and thus cannot assess the day 1 area under the time-versus-concentration curve (AUC). Therefore, the 1750-mg dose identified by CART analysis in this study cannot be extrapolated to the greater population, as the total exposure on day 1 associated with this value could not be ascertained. MIC values were available for the majority of the MRSA isolates; however, this information was not included in any of the analyses given the known inaccuracies of the various testing methodologies. Vancomycin MIC values for MRSA determined by automated susceptibility testing have been shown to vary from the Clinical Laboratory Standards Institute broth microdilution method by ±1 dilution, whereas MIC testing via Etest methodology tends to produce MIC values 1-2 dilutions higher than broth microdilution [12]. As the vast majority of MRSA isolates have an MIC value of 1 mg/L, this variable likely had little impact on the outcomes of this study [13].

While the association of clinical failure with a flat, milligram-based first dose, and not with a mg/kg-based dose, was novel and unexpected, it should not be a surprise. The only pharmacokinetic parameter impacted by the first dose is the peak serum concentration (Cmax), which depends not only on the dose but also on the volume of distribution (Vd). It is well established that Vd is lower (0.26-0.56 L/kg of total body weight) in obese patients than the 0.7 L/kg cited for normal-weight individuals [14-17]. Given this information, it makes sense that, while obese patients may need a higher first dose due to a higher overall Vd, that dose does not need to increase proportionally with weight because Vd is not increasing proportionally. This finding is further supported by Reynolds et al. [18], who reported that obese patients who received vancomycin dosed at 10 mg/kg/dose achieved more therapeutic concentrations and fewer supratherapeutic troughs than those who received 15 mg/kg/dose. Similarly, AUC depends on the initial dose as well as on drug clearance. Vancomycin clearance is best estimated using the adjusted body weight of an obese patient [19]. Using this information, a mg/kg vancomycin dose based on total body weight would likely overshoot the AUC target, given the disproportionate increase in clearance. This provides further support that a flat dose may be a more appropriate dosing strategy in this patient population. As previously stated, we were unable to validate this assumption, given the absence of maintenance dosing timing as well as the lack of day 1 AUC data.
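A back-of-envelope sketch can make the pharmacokinetic argument above concrete: Cmax scales approximately as dose/Vd for a one-compartment bolus, and Vd per kilogram is lower in obese patients. The Vd values below are taken from the ranges cited in the text (0.7 L/kg for normal weight; a value within the 0.26-0.56 L/kg range for obesity), while the body weights are illustrative assumptions, not study data.

# Back-of-envelope Cmax comparison: why a flat first dose can behave
# like a weight-based one across body habitus.
def cmax(dose_mg: float, weight_kg: float, vd_l_per_kg: float) -> float:
    """Approximate peak concentration (mg/L) from a one-compartment bolus."""
    return dose_mg / (vd_l_per_kg * weight_kg)

patients = [
    ("normal weight", 70, 0.7),   # ~0.7 L/kg cited for normal weight
    ("obese", 120, 0.4),          # within the 0.26-0.56 L/kg range cited
]
for label, wt, vd in patients:
    flat = cmax(1750, wt, vd)
    weight_based = cmax(20 * wt, wt, vd)
    print(f"{label}: flat 1750 mg -> {flat:.1f} mg/L; "
          f"20 mg/kg ({20 * wt:.0f} mg) -> {weight_based:.1f} mg/L")

Under these assumptions, the flat 1750-mg dose yields similar peak concentrations in both patients, whereas the 20 mg/kg dose overshoots in the obese patient, consistent with the reasoning above.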
The clinical failure rate of around 40% in each group is consistent with failure rates documented in other studies using trough-based vancomycin dosing for the treatment of MRSA bacteremia [20-22]. Although a few analyses have assessed the impact of PK-determined loading doses on day 1 AUC or trough target attainment, only one study has assessed the impact of loading doses on outcomes [23-26]. Wesolek and colleagues [26] performed a retrospective cohort study to evaluate the impact of initial vancomycin doses on resolution of systemic inflammatory response syndrome (SIRS) criteria in patients with sepsis secondary to MRSA bacteremia. Patients who received a first dose of vancomycin ≥ 20 mg/kg (n = 37) experienced resolution of SIRS criteria within 67 h, on average, compared to 109 h in patients receiving first doses of vancomycin < 20 mg/kg (n = 87), and Cox proportional hazards modeling showed faster resolution of SIRS in the first dose ≥ 20 mg/kg group [HR = 1.72 (1.09-2.73)]. It was hypothesized that this was likely due to more rapid achievement of therapeutic serum vancomycin concentrations among patients receiving the higher first dose; however, day 1 exposures were not reported and other dosing strategies were not assessed.

A common cause for hesitation with higher first doses is a perceived risk of increased rates of acute kidney injury. In this regard, the data presented in this analysis are extremely encouraging, as neither a first dose ≥ 20 mg/kg nor a first dose ≥ 1750 mg was a risk factor for the development of nephrotoxicity. These findings are further supported by a study performed by Rosini et al. [27], which compared rates of nephrotoxicity (two serial SCr values ≥ 0.5 mg/dL above baseline or an increase of ≥ 50%) and acute kidney injury (AKI; a single SCr increase of ≥ 0.5 mg/dL or ≥ 50% increase from baseline) among patients receiving vancomycin with first doses > 20 mg/kg and ≤ 20 mg/kg. In that analysis, nephrotoxicity and AKI actually occurred less frequently in patients who received a first dose > 20 mg/kg than in patients receiving ≤ 20 mg/kg (5.8% vs. 11.1%, P < 0.001, for nephrotoxicity; 7.5% vs. 12.8%, P < 0.001, for AKI). Taken together, these data support the safety of vancomycin loading doses to optimize patient outcomes.
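For readers who want to reproduce the kind of two-proportion comparison behind the nephrotoxicity rates quoted from Rosini et al. (5.8% vs. 11.1%), a minimal Python sketch follows. The group sizes are hypothetical, since only the percentages are reported above; `proportions_ztest` is a standard statsmodels routine.

from statsmodels.stats.proportion import proportions_ztest

n_high, n_low = 1500, 1500  # hypothetical group sizes (not reported above)
events = [round(0.058 * n_high), round(0.111 * n_low)]
stat, pval = proportions_ztest(events, [n_high, n_low])
print(f"nephrotoxicity: {events[0]}/{n_high} vs {events[1]}/{n_low}, "
      f"z = {stat:.2f}, P = {pval:.4g}")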
The findings of this study are not without limitations. First, this study was retrospective in nature, which could lead to information bias. Although incomplete documentation in the medical record can make it difficult to measure outcomes accurately in retrospect, we constructed a primary composite failure outcome based largely on readily available objective criteria, such as mortality and bacteremia duration, which should limit the impact of this bias on the outcomes assessed. Secondly, while the guideline-recommended loading dose is 25-30 mg/kg, all first doses above 20 mg/kg were considered a loading dose in this study. This was done to capture patients intended to receive a loading dose but who may have received slightly less than the guideline recommendation due to the common practice of dose-rounding to the nearest 250-mg increment. Additionally, as one component of the failure definition was a switch from vancomycin to alternative agents, prescribing bias in therapeutic preference could come into play. Encouragingly, there was no difference in this outcome between any of our groups, and it did not drive the differences in composite failure seen in this study. Although the study included two health systems, the vancomycin dosing practices at these institutions may not be reflective of the diverse range of practices employed elsewhere. In particular, this study included a large proportion of patients who were not critically ill, the population in which loading doses are theorized to provide the greatest benefit. As such, the results of this study may not fully capture the impact of administering a loading dose in that population. Finally, as previously discussed, while an association between vancomycin first doses and clinical failure was observed, evaluation of the maintenance dose was not performed, which carries multiple implications. While maintenance doses and steady-state vancomycin troughs were similar between the cohorts, the timing of initiation of these maintenance regimens, and the resulting AUC0-24 exposures, were not assessed. An inappropriately timed maintenance regimen (i.e., too great an interval between administration of the first dose and the maintenance regimen) has the potential to derail any theoretical benefit gained by administering a loading dose.

CONCLUSION

To date, this is one of the only studies to examine the association between vancomycin loading doses, clinical outcomes, and nephrotoxicity in patients with MRSA bacteremia. No significant difference in efficacy or toxicity was seen between those patients who received loading doses ≥ 20 mg/kg TBW and those who received a smaller initial dose. This study found that initial doses ≥ 1750 mg were associated with clinical success; however, due to the aforementioned limitations, this should not be interpreted as the definitive first-dose threshold. Rather, these findings highlight that there is an association between first dose and clinical outcome and that, contrary to previous belief, this first dose may not need to be a mg/kg-based dose. Additional studies combining first-dose data, day 1 exposures, and clinical outcomes are needed to fully discern the impact of vancomycin first doses.

[Table captions: Table 2, Logistic regression for factors associated with composite failure; Table 3, Logistic regression for factors associated with nephrotoxicity (receipt of a vancomycin loading dose was not associated with risk of nephrotoxicity [aOR 1.295 (0.657-2.553)]); Table 4, Association between first dose and composite failure stratified by body mass index category.]
2019-10-21T14:14:59.222Z
2019-10-21T00:00:00.000
{ "year": 2019, "sha1": "80c900330e69b6c69912cb4cc8f1cbb146aa7518", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40121-019-00268-3.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9a4d7f77fe89f4426e388224a97931ac4a1fc71", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
128684604
pes2o/s2orc
v3-fos-license
10.1007/s11434-011-4592-y
Humid Medieval Warm Period recorded by magnetic characteristics of sediments from Gonghai Lake, Shanxi, North China
2011

Variations in monsoon strength, moisture or precipitation in eastern China during the MWP reflected by different climatic records have shown apparent discrepancies. Here, detailed environmental magnetic investigations and mineralogical analyses were conducted on lacustrine sediments of core GH09B1 (2.8 m long) from Gonghai Lake, Shanxi, North China, concerning the monsoon history during the MWP. The results demonstrate that the main magnetic mineral is magnetite. The sediments with relatively high magnetic mineral concentrations were characterized by relatively fine magnetic grain sizes, which were formed in a period of relatively strong pedogenesis and high precipitation. In contrast, the sediments with low magnetic mineral concentrations reflected the opposite process. The variations of the magnetic parameters in the Gonghai Lake sediments were mainly controlled by the degree of pedogenesis in the lake drainage basin, which in turn indicates the strength of the Asian summer monsoon. The variations in the χ and S−300 parameters of the core clearly reveal the Asian summer monsoon history over the last 1200 years in the study area, suggesting generally abundant precipitation and a strong summer monsoon during the Medieval Warm Period (MWP, AD 910–1220), which is supported by pollen evidence. Furthermore, this 3–6-year resolution environmental magnetic record indicates a dry event around AD 980–1050, interrupting the generally humid MWP. The summer monsoon evolution over the last millennium recorded by magnetic parameters in sediments from Gonghai Lake correlates well with historical documentation (North China) and speleothem oxygen isotopes (Wanxiang Cave), as well as precipitation modeling results (extratropical East Asia), all of which indicate a generally humid MWP within which centennial-scale moisture variability existed. It is thus demonstrated that environmental magnetic parameters can be used as an effective proxy for monsoon climate variations in this region.

Magnetic measurements provide a rapid, cheap and nondestructive way to characterize high-resolution mineralogical variations, and they are widely used in sequence correlation, paleoclimate and provenance studies of marine [1,2], lacustrine [3] and aeolian sediments [4,5]. They have become one of the most important proxies in paleoenvironmental and paleoclimatic reconstructions [6,7]. However, owing to the low magnetic mineral concentrations in lacustrine sediments and to in situ authigenesis/diagenesis and biogenesis after deposition, which may result in the formation of new magnetic minerals [8,9], which magnetic parameters can be used to reveal sensitive short-scale (decadal- to centennial-scale) climatic and environmental events remains undetermined and is a problem requiring urgent attention.

Global climate evolution over the last millennium is known to have varied dramatically. It can be roughly divided into three major episodes: the Medieval Warm Period (MWP), the Little Ice Age (LIA), and post-industrial warming [10-14]. The MWP in the Northern Hemisphere was about 0.5-0.8°C warmer on average than the LIA [13,14]. In eastern China, the winter half-year temperatures of two centennial warm periods within the MWP were even higher than those observed in the 20th century [15].
In recent years, considerable research interest in MWP climate variability has arisen [13,16-18], because the centennial-scale warm intervals of the MWP are the closest analogs to those of the modern age and can thus provide a paleoclimatic reference for long-term climate prediction. During this interval, many regions of the globe experienced hydroclimate anomalies [19,20]. However, compared with the relatively uniform changes in temperature, moisture and precipitation showed significant regional differences during the MWP. For example, the central and western parts of America experienced mega-drought during the MWP [21], while periods of high precipitation dominated the eastern part of the United Kingdom at the same time [22,23]; in other regions, no obvious anomaly was seen [24]. In China, regional differences in hydroclimatic characteristics during the MWP also clearly existed. In arid northwestern China, proxy records derived from lacustrine sediments [25,26], aeolian deposits [27] and tree rings [28] all show, in general, a dry MWP and a wet LIA [29].

The Asian summer monsoon is an important component of the global climate system, and its variability is always of great interest in paleoclimatological studies over a wide range of time scales [30,31]. As far as the monsoon climate of the last millennium is concerned, the Asian summer monsoon has been found to be relatively weak, and the climate relatively dry, in eastern China during the LIA [32-34]. However, no consensus has been reached about monsoon climate conditions during the MWP. For example, a pollen record from northeastern China suggests that the Asian summer monsoon was strong during the MWP [33], which is supported by a recently published high-resolution speleothem oxygen isotope record from the summer monsoon margin [35]. Conversely, speleothem oxygen isotope records from some other regions show that the Asian summer monsoon was not strong during the MWP [36,37]. Furthermore, a stacked speleothem oxygen isotope record has demonstrated that the summer monsoon was not significantly anomalous during the MWP, and was even slightly weaker from the 11th to the early 13th century [38]. Two historical document-based humidity reconstructions indicate a relatively wet period in North China during the MWP [39,40], but another document-based record suggests a dry MWP over the whole of eastern China [41]. Therefore, variations in monsoon strength, moisture or precipitation in eastern China during the MWP reflected by different climatic records show apparent discrepancies. This could be due to the fact that different proxies may have different paleoclimatic implications, or that the strength of the Asian monsoon may affect precipitation differently in different regions of eastern China. According to conventional concepts, the anomalous northward extension of the southerlies into North China, and the associated increase in precipitation there, can be regarded as signs of an intensified Asian summer monsoon [42,43]. It is therefore particularly important to obtain high-resolution climate records of the last millennium from this critical region. Here, we choose a paleolimnological site, Gonghai Lake in northern Shanxi, located at the northern boundary of the modern Asian summer monsoon, to carry out a high-resolution, multi-parameter investigation of the environmental magnetism of a sediment core.
On the basis of the environmental magnetism results, the aims of this study are to explore the magnetic characteristics of the Gonghai Lake sediments in combination with other proxies and to reconstruct regional climate and environmental change over the last millennium, with particular attention to the evolution of the Asian summer monsoon and the associated precipitation during the MWP.

Study site and sampling

Gonghai Lake (38°54′N, 112°14′E, elevation 1860 m a.s.l.) is located in Dongzhuang, Ningwu County, Shanxi Province, in the northern part of the Lüliang Mountains. It is a lake formed on a planation surface of the watershed between the Sanggan and Fenhe rivers. Eleven small lakes, including Gonghai Lake, are distributed on the planation surface (average altitude 1800 m a.s.l.), making up a rare group of high-mountain lakes in North China [44]. Among these lakes, Gonghai Lake lies at the highest altitude and has a maximum water depth of around 10 m and a surface area of 0.36 km². Gonghai Lake is hydrologically closed and has a flat lakebed; its main water source is precipitation (Figure 1). The Ningwu area is close to the sandy deserts and sandy lands of central China to the north and west (e.g., the Maowusu Sandy Land, Kubuqi Desert, Badain Jaran Desert and Tengger Desert), falling within the typical fringe of the modern Asian summer monsoon (Figure 1). The mean annual precipitation in Ningwu (1500 m a.s.l.) is about 468 mm, with about 65% of the annual precipitation occurring in summer (June to August). Regional vegetation in the mountains (including the drainage basin of Gonghai Lake) is dominated by mixed coniferous and broad-leaved forests, whereas in the mountains at elevations lower than the planation surface, coniferous forests are widely distributed. Zonal soil types consist mainly of mountain-meadow soil, brown soil, cinnamon soil, chestnut soil and meadow soil, of which cinnamon soil is the major zonal soil type. The exposed bedrock of the zonal ground surfaces was formed mainly during the Archeozoic to the Cenozoic and comprises mainly weakly magnetic rocks such as dolomite, limestone, sandstone, sandy shale, and glutenite [46]. This provides good conditions for conducting environmental magnetism research using lacustrine sediments.

In January 2009, we drilled at Gonghai Lake using a piston corer on a UWITEC platform (UWITEC, Mondsee, Austria). A long core of 7.68 m, GH09B, was retrieved at the center of the lake in a water depth of 8.96 m (Figure 1). In addition, 15 bedrock samples and 3 soil samples were taken from the ground surface around the lake drainage basin. The samples were frozen on-site and taken back to the laboratory, where they were stored below 4°C. The 2.8-m-long first tube of core GH09B (named GH09B1) provides the material for this study. There is no significant change in the lithology of core GH09B1, which consists mainly of silty clay. The sediments are coarser at depths of around 1.0 and 2.0 m, and from 2.5 to 2.8 m depth. In the laboratory, core GH09B1 was sliced at 1-cm intervals for magnetic analysis, and the samples were freeze-dried and then ground.

Laboratory methods

All environmental magnetic parameters were measured and calculated following the procedure of Dearing et al. [47]. Low-frequency (470 Hz) and high-frequency (4700 Hz) magnetic susceptibility (χlf and χhf) were measured using a Bartington MS2 magnetometer (Bartington Instruments Ltd, Witney, UK).
Anhysteretic remanent magnetization (ARM) was imparted using a DTECH AF demagnetizer (2G Enterprises, California, USA) with a peak AF field of 50 mT and a DC bias field of 0.05 mT. Stepwise demagnetization of the saturation isothermal remanent magnetization (SIRM) acquired at 1 T was carried out using three reverse fields (−20, −100, and −300 mT). IRMs were imparted using an MMPM 5 pulse magnetizer (2G Enterprises, California, USA). All remanence measurements were made using a Minispin magnetometer (Advanced Geoscience Instruments Company, Brno, Czech Republic). Magnetic parameters are expressed on both mass-specific and quotient bases to give quantitative and qualitative information, including χlf (×10⁻⁸ m³/kg), SIRM, and the S-ratio (S−300 = IRM−300/SIRM). A subset of 50 representative samples was selected for further magnetic measurements: magnetic hysteresis loops and Curie temperatures (Tc) were determined using a variable field translation balance (Petersen Instruments, Muenchen, Germany). Five representative samples were selected for X-ray diffraction analysis using D/max-3BX equipment (Rigaku, Tokyo, Japan); scans were run from 4° to 80° 2θ at a step width of 0.02°. Eight representative samples were also selected for pollen analysis following the procedure of Moore et al. [48]. The pollen analysis was conducted in the laboratory of Hebei Normal University. All other experiments were conducted in the Key Laboratory of Western China's Environmental Systems, Lanzhou University.

Chronology

In core GH09B1, all AMS 14C dates were generated from samples of terrestrial plant macrofossils, which avoids the carbon reservoir effects that commonly occur in lacustrine sediments. The AMS 14C samples were first prepared with the standard pretreatment (alkali-acid-alkali) and then measured at the AMS Dating Laboratory of the Institute of Earth Environment, Chinese Academy of Sciences, Xi'an, China. All dates were calibrated to calendar years with OxCal 4.1 software [49] using the IntCal04 calibration data set [50]; the ages determined were 415±83 cal a BP at 0.63 m depth, 617±52 cal a BP at 1.16 m depth, 884±80 cal a BP at 1.99 m depth and 1011±43 cal a BP at 2.45 m depth. These ages are precise enough to constrain the date range of the MWP. The dates of the other samples were obtained by interpolation, and the date at the bottom of core GH09B1 is AD 840. The time resolution between samples is about 4 to 8 years after AD 1300, and 3 to 6 years before AD 1300.

Compositions and types of magnetic minerals

The thermomagnetic curves for all samples from core GH09B1 are similar to each other (Figure 2(a),(b)). With increasing temperature, the magnetization first decreased slightly, then increased rapidly above 450°C, reaching a peak at around 500°C. This may reflect the transformation of paramagnetic minerals, such as iron silicates or clay minerals, into new magnetic minerals [51]. The cooling curves of the samples lie above the heating curves, further supporting the formation of many new magnetic minerals during the heating treatment. With a further increase in temperature, the magnetization gradually decreases and approaches zero at 580°C, the Curie point of magnetite. This behavior indicates that magnetite is the major magnetic mineral in the samples. The hysteresis loops were reversible below a field of 300 mT (Figure 2(e),(f)), indicating that magnetite was also the main magnetic mineral contributing to the hysteresis loops. The magnetization continues to increase and is not saturated even when the magnetic field reaches 1000 mT, further indicating that a large fraction of paramagnetic minerals exists in the GH09B1 core samples [52].
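Returning to the chronology described above, the age-depth model can be illustrated with a minimal Python sketch. It assumes simple linear interpolation between the four calibrated AMS 14C control points listed in the Chronology section (the original interpolation scheme is not specified in detail), and extrapolates below the deepest date; with these assumptions, the core base at 2.80 m reproduces the stated basal age of about AD 840.

import numpy as np

depths_m = np.array([0.63, 1.16, 1.99, 2.45])      # control-point depths
ages_bp = np.array([415.0, 617.0, 884.0, 1011.0])  # calibrated ages, cal a BP

def age_at(depth_m: float) -> float:
    """Linearly interpolated (or extrapolated) calibrated age at a depth."""
    if depth_m <= depths_m[-1]:
        return float(np.interp(depth_m, depths_m, ages_bp))
    # Extrapolate below the deepest date using the last two control points.
    slope = (ages_bp[-1] - ages_bp[-2]) / (depths_m[-1] - depths_m[-2])
    return float(ages_bp[-1] + slope * (depth_m - depths_m[-1]))

# Convert cal a BP (relative to AD 1950) to a calendar year.
print(f"core base (2.80 m) ~ AD {1950 - age_at(2.80):.0f}")  # ~AD 840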
The main peaks from the X-ray diffraction analysis are very similar (Figure 3(a),(b)), suggesting that the mineral compositions of the GH09B1 core samples are the same. The samples consist mainly of quartz, albite, plagioclase, orthoclase, amphibole, illite, mica, magnetite, and carbonate (mainly calcite), as well as iron-rich chlorite clay minerals. This further supports the result, obtained from the thermomagnetic curves and hysteresis loops, that the magnetic assemblage in the GH09B1 core samples is dominated by magnetite and is accompanied by a considerable fraction of paramagnetic minerals such as iron silicates or clay minerals.

The ratio of SIRM to susceptibility is well suited to assessing mineralogy. For example, the value of this ratio for pyrrhotite and greigite is around 100 kA/m, and hematite has a high ratio (higher than 200 kA/m), whereas magnetite has a low SIRM-to-susceptibility ratio, lower than 30 kA/m and usually around 10 kA/m [53,54]. The average ratio of saturation isothermal remanent magnetization to susceptibility for all samples in core GH09B1 is 13.25 kA/m (Figure 6), which also supports magnetite as the dominant magnetic carrier in the GH09B1 core samples.

Granularity of magnetic minerals

The granularity of magnetite in sediments can be identified from the Day plot of the magnetization ratio (Mrs/Ms) against the coercivity ratio (Bcr/Bc) [55-57]. The critical granularity values follow those provided by Dunlop et al. [56,57]: single-domain (SD) magnetic grains are characterized by Mrs/Ms > 0.5 and Bcr/Bc < 2.0, multi-domain (MD) magnetic grains are characterized by Mrs/Ms < 0.02 and Bcr/Bc > 5, and pseudo-single-domain (PSD) grains lie between the MD and SD grains. Figure 4 shows that the GH09B1 core samples all fall in the PSD region, indicating that most of the magnetic grains in the Gonghai lacustrine sediments are PSD. However, mixtures of MD and SD magnetic grains in sediments may also scatter into the PSD region of the Day plot [58]. Dearing et al. [59] demonstrated that magnetic grain size in sediments can be effectively portrayed by combining χfd% and χARM/SIRM data in a quantitative mixing model. Almost all of the GH09B1 core samples fall in the coarse stable single-domain (SSD) region, with only a few samples in the MD+PSD region (Figure 5), further indicating that most of the magnetic grains in the lacustrine sediments from Gonghai Lake are coarse SSD. The surface samples from the top of the core contain MD+PSD magnetic grains, indicating that the MD+PSD magnetic grains in the GH09B1 lake samples may have blown in as atmospheric pollutants, because MD+PSD grains are the main grain type of atmospheric pollutants found in North China [60]. The susceptibility of ARM (χARM), which reflects the concentration of SSD magnetic grains, is positively correlated with magnetic susceptibility [6], also supporting the conclusion that the granularity of the magnetic minerals in the GH09B1 core samples is dominated by SSD grains, except in the uppermost part of the core representing the most recent 100 years.

Variations in magnetic parameters with depth

Based on magnetic susceptibility (χ), core GH09B1 can be divided into four sections. In the first section (2.80-2.55 m, AD 840-910), the magnetic susceptibilities are the lowest observed along the whole core.
In the second section (2.55-1.50 m, AD 910-1220), the magnetic susceptibilities are generally high and reach the peak values for the core, indicating that the concentration of magnetite in the lacustrine sediments is high. However, during AD 980-1050 the magnetic susceptibility shows relatively low values, forming a trough within this section. In the third section (1.50-0.21 m, AD 1220-1850), the magnetic susceptibility decreases gradually and is relatively low compared with the core in general; the lowest values appear between depths of 1.35 and 0.80 m (AD 1270-1450). In the fourth section (0.21-0 m, AD 1850 to the present), the magnetic susceptibility increases rapidly with increasing magnetic mineral concentration. However, the variation in this section is large, and the values are generally lower than in the second section.

S−300 is a proxy for the proportion of low-coercivity ferrimagnetic grains to high-coercivity antiferromagnetic grains [61]. Values above 80% reflect the dominance of magnetite (maghemite), whereas values below this percentage represent increasing contributions from antiferromagnetic minerals (hematite or goethite) [62]. The values of S−300 in core GH09B1 are all above 80%, which identifies magnetite as the dominant magnetic mineral (Figure 6). This result is in accordance with the analysis of the thermomagnetic curves and hysteresis loops, i.e., that the magnetic assemblage in the GH09B1 core samples is dominated by magnetite. The changes in S−300 and magnetic susceptibility along the whole core show a consistent trend (Figure 6), which indicates that the magnetic susceptibility in the GH09B1 core samples mainly reflects the content of magnetite. The trend of SIRM along the whole core is basically consistent with magnetic susceptibility and S−300 (Figure 6), which also indicates that superparamagnetic (SP) grains make only a small contribution to the magnetic intensity [6]. In addition, most of the magnetic grains in the samples are SSD. Therefore, the magnetic susceptibility and S−300 mainly reflect the concentration of magnetite in the samples.

The variation in χARM/χ is basically consistent with that of χARM/SIRM. They show troughs at the bottom of the core (below a depth of 2.55 m), at a depth of ~2.2 m (during the middle of the MWP, AD 980-1050) and at the top of the core (in the last 50 years) (Figure 6). The ratios χARM/χ and χARM/SIRM reflect the grain sizes of ferrimagnetic minerals, especially fine-grained magnetite: when the magnetic grains in a sample are smaller (or larger), the values of χARM/χ and χARM/SIRM are higher (or lower) [63,64]. The similar trends of χARM/χ and χARM/SIRM suggest that the magnetic grains formed during the MWP were generally fine but became coarse during AD 980-1050. The SIRM values, which reflect magnetic mineral grain size, also become lower during AD 980-1050. This suggests that the low values of χ and S−300 during AD 980-1050 result from the change in magnetic grain sizes. Since AD 1850, the ratios χARM/χ and χARM/SIRM have evidently decreased, indicating that the magnetic grain sizes have become larger. However, during this period the values of χ and S−300 remain consistent with a high-value interval (Figure 6), indicating that the magnetic intensity of the samples is strong. Obviously, this differs from the situation during the middle of the MWP (AD 980-1050), when low values of χARM/χ and χARM/SIRM accompanied low values of χ and S−300.
This difference may indicate that the coarse magnetic minerals in the lake since AD 1850 were sourced from air pollution from the city of Ningwu near Gonghai Lake.

Origin of magnetic minerals and their paleoclimatic significance

Magnetic minerals in lacustrine sediments can originate from detrital or authigenic sources [6]. In Gonghai Lake, the magnetic assemblage in the sediments is dominated by SSD or PSD magnetite, either of which could have a detrital or authigenic origin. The ratios χARM/χ, χARM/χfd and ARM/SIRM reflect the grain sizes of the magnetic minerals. The mean value of χARM/χ is 6.69 (the maximum does not exceed 25; Figure 6), the average value of χARM/χfd is 65.14, and ARM/SIRM ranges between 0.016 and 0.058. These ratio values are all quite low, falling within the range characteristic of a detrital origin [65,66]. The magnetite in the Gonghai Lake sediments therefore primarily represents detrital input rather than authigenic or biogenic magnetite produced in situ after deposition. The detrital input of magnetic minerals may originate either from atmospheric dust or from the lake drainage via ground-surface runoff. Except in the section above 0.14 m in the core, the changes in χ and S−300 are positively correlated with those of χARM/χ and χARM/SIRM, indicating that sediments with higher magnetic mineral concentrations are characterized by smaller magnetic grain sizes (i.e., the higher the χ, the finer the magnetic grain size; Figure 6). This is usually associated with pedogenic processes. Detailed mineral magnetic measurements were also conducted on the bedrock and soil from the drainage area of Gonghai Lake. In the bedrock samples, the average value of χ is only 9.12 (×10⁻⁸ m³/kg) and the highest value is 13.2 (×10⁻⁸ m³/kg). The values of S−300 are also very low (between 58.5% and 73.9%). The features of the thermomagnetic curves (Figure 2(c)) are evidently different from those of the core samples (Figure 2(a),(b)), indicating that the dominant magnetic mineral is hematite. The average χ of the soil samples is 28.9 (×10⁻⁸ m³/kg) and the lowest value exceeds 22 (×10⁻⁸ m³/kg). The average value of S−300 is 85.2% and all values are higher than 80%. The features of the thermomagnetic curves (Figure 2(d)) are similar to those of the core samples (Figure 2(a),(b)), indicating that the dominant magnetic mineral in this case is magnetite. The dominant magnetic carriers in atmospheric dust are ferromagnetic minerals with mainly PSD grain sizes [60,67]. Therefore, the magnetite in the GH09B1 core samples was formed during pedogenesis on the ground surface around the lake drainage area, rather than through atmospheric dust accumulation. The magnetic characteristics of core GH09B1 and of the soil in the lake drainage area are the same (e.g., the characteristics of the thermomagnetic curves), which further suggests that the effect of diagenesis on the magnetic minerals in the lacustrine sediments is limited. As a result, the concentration of magnetite in Gonghai Lake can reflect the degree of pedogenesis in the lake catchment. On the Chinese Loess Plateau, abundant fine magnetic grains have been formed by pedogenic processes [68], and the changes in magnetic susceptibility in modern surface soils are positively correlated with changes in precipitation [69].
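To make the proxy definitions used above concrete, the following minimal Python sketch computes the standard concentration- and grain-size parameters from dual-frequency susceptibility and remanence measurements. The formulas (χfd% and the S-ratio) are the conventional definitions implied by the text; the measurement values themselves are hypothetical placeholders, not GH09B1 data.

def chi_fd_percent(chi_lf: float, chi_hf: float) -> float:
    """Frequency-dependent susceptibility (%), sensitive to fine SP grains."""
    return 100.0 * (chi_lf - chi_hf) / chi_lf

def s_minus_300(irm_minus_300: float, sirm: float) -> float:
    """S-ratio (%): share of low-coercivity (ferrimagnetic) remanence."""
    return 100.0 * abs(irm_minus_300) / sirm

# Hypothetical sample: chi in 1e-8 m3/kg, remanences in 1e-5 Am2/kg.
chi_lf, chi_hf = 45.0, 43.8
sirm, irm_m300, chi_arm = 600.0, -540.0, 300.0

s = s_minus_300(irm_m300, sirm)
print(f"chi_fd% = {chi_fd_percent(chi_lf, chi_hf):.2f}")
print(f"S-300   = {s:.1f}% ->",
      "magnetite-dominated" if s > 80 else "antiferromagnetic contribution")
print(f"chi_ARM/SIRM = {chi_arm / sirm:.3f} (higher -> finer SSD grains)")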
The χ parameter and other environmental magnetic parameters have been used successfully as indicators of the strength of the Asian summer monsoon [5], enabling reconstruction of the evolutionary history of the Quaternary monsoon [30] and of rapid changes in monsoon climate during the last glaciation in East Asia [70]. Gonghai Lake is located on the margin of the Chinese Loess Plateau, and the lake drainage area is covered by variable thicknesses of loess. The magnetic minerals of the core originate from the drainage surface through soil erosion. Changes in the magnetic parameters can therefore record the degree of pedogenesis in the soil, and thereby constrain the evolutionary history of the monsoon. When the degree of pedogenesis is higher, the magnetic grains produced by pedogenesis are finer and the concentration of magnetic minerals in the soil is higher. Greater concentrations of magnetic minerals carried into Gonghai Lake via surface runoff lead to high values of χ and S−300.

To verify this model, samples from the core were selected for pollen analysis (Figure 7(g)). The modern vegetation in the Gonghai Lake catchment area includes grasslands and shrubs, and a mixed coniferous and broad-leaved forest is located below the planation surface on which Gonghai Lake lies. The content of tree pollen in this area can reflect the amount of precipitation in the region. In the semi-arid/semi-humid areas of China, when the content of tree pollen is lower than 50%, the Artemisia to Chenopodiaceae (A/C) ratio in the pollen spectrum is also a good indicator of effective moisture [71-74]. In the pollen spectrum of core GH09B1, the highest tree pollen content, up to 47.2%, occurs at a depth of 1.97 m, with the concentration of broad-leaved tree pollen almost as high as that of coniferous tree pollen, indicating that a mixed broad-leaved and coniferous forest developed during that period. The lowest concentration of tree pollen occurs at a depth of 1.05 m, where its content is lower than 7%, even lower than that of the lake-surface sample. The high tree pollen content of the samples correlates very well with high values of χ and S−300. The changes in the A/C ratio in the pollen spectrum are consistent with those of the tree pollen content (Figure 7(g)), which suggests that during intervals of high χ and S−300 values the climate was dominated by high precipitation and a strong Asian monsoon. As a result, in core GH09B1, χ and S−300 can be used as sensitive indicators of the Asian summer monsoon, with higher values indicating a stronger Asian summer monsoon.

[Figure 7 caption: Variations in χ (f) and S−300 (e) with time over the last millennium, and their comparison with modeled Asian monsoon strength in North China (d) [43], the speleothem oxygen isotope time series from Wanxiang Cave (c) [35], the dry-wet index record inferred from historical documentation in North China (b) [40], and temperature anomalies in the Northern Hemisphere (a) [14]. The solid and hollow histograms respectively represent the content of tree pollen and the A/C ratio (g). The shaded areas represent the MWP and the Current Warm Period.]

Strong summer monsoons during the MWP and regional comparisons

The base of core GH09B1 is dated at AD 840. Above this, the χ and S−300 records can be divided into two high-value periods and two low-value periods (Figure 7(f),(e)).
The two high-value periods lasted from AD 910 to 1220 and from AD 1850 to the present, corresponding to the MWP and the Current Warm Period (i.e., the 20th-century warm period), respectively [14] (Figure 7(a)). The two low-value periods lasted from AD 840 to 910 and from AD 1220 to 1850, the latter corresponding to the LIA [14] (Figure 7(a)). Evidently, the most distinct climate fluctuations of the last millennium in the Northern Hemisphere are well documented in the sediments from Gonghai Lake. Concerning the changes in χ and S−300, the generally high values from AD 910 to 1220 suggest that the degree of pedogenesis was remarkable and precipitation was high at that time, indicating a strong Asian summer monsoon during the MWP. According to the low-resolution pollen analysis (Figure 7(g)), the content of tree pollen was also high during the MWP, suggesting forest vegetation around the lake basin. The pollen A/C ratio likewise indicates that regional moisture was highest during the MWP, supporting a more optimal climate than that of the Current Warm Period. Compared with conditions in the MWP, the content of tree pollen was lower than 10% during the LIA, and the pollen A/C ratio was also low, which together suggest that the Asian monsoon was weak and the climate was dry during that period.

The high (3-6-year) resolution of the sedimentary magnetic record from Gonghai Lake during the MWP provides an opportunity to discuss climate variability on decadal-to-centennial scales. Although the summer monsoon was generally strong in the MWP, secondary oscillations were superimposed on it, the most pronounced of which lasted from AD 980 to 1050. During that period, the values of χ and S−300 were relatively low, indicating that the degree of pedogenesis and the amount of precipitation around the lake basin were also relatively low. This indicates that the MWP was not a period of constantly strong summer monsoon; rather, it contained a centennial-scale interval featuring a relatively weak summer monsoon.

Figure 7 shows comparisons of the magnetic proxy records with the dry-wet proxy series of North China reconstructed from historical documentation [40] (Figure 7(b)) and the oxygen isotope record of speleothems from Wanxiang Cave [35] (Figure 7(c)), which is also located on the northwestern margin of the modern Asian summer monsoon. On a multi-centennial time scale, the proxy data from these three locations show a consistent climate pattern over the last millennium. In particular, the strong Asian monsoon during the MWP is indicated by high values of χ and S−300 in the Gonghai Lake sediments, more flood events in North China, and the light oxygen isotopes recorded in the speleothems of Wanxiang Cave, all of which resulted from high monsoon precipitation. The situation was roughly opposite during the LIA. On decadal to centennial time scales, the secondary weakening of the summer monsoon during the 11th century, set against the overall strong summer monsoon of the MWP and revealed by the magnetic record of the Gonghai Lake sediments, is also evident in the high-resolution oxygen isotope series for the Wanxiang Cave speleothems. In addition, this short-term drought is reflected in the dry-wet proxy reconstruction for North China as well. Therefore, changes in the magnetic parameters of the Gonghai Lake sediments sensitively document the variability of the Asian summer monsoon on sub-millennial and even shorter time scales.
Furthermore, the evolution of the Asian summer monsoon during the last millennium, as indicated by the environmental magnetic proxies from Gonghai Lake and the oxygen isotopes from the Wanxiang Cave speleothems, is supported by modeling of monsoon rainfall [43] (Figure 7(d)). The variations in summer monsoon intensity (precipitation) reconstructed from Gonghai Lake differ markedly from moisture variability in Central Asia, which was dominated by the westerlies [29] during the last millennium. This supports the proposal that a "westerly-dominated climate model" for arid Central Asia is distinct from the precipitation (moisture) evolution in monsoonal Asia over various time scales during the present interglacial period [46,75]. Note that the generally strong Asian monsoon during the MWP, reconstructed here at the observed margin of the monsoon, shows major differences from the monsoon strength reconstructed using cave deposits [38,76,77] and lacustrine sediments [78] from South China, and also differs from the dry-wet index series based on historical documentation in South China [40,41]. In tracing the reason for this, chronological uncertainties are unlikely to have affected the recorded patterns of climate change on a multi-centennial time scale. Therefore, the above-mentioned differences between precipitation (moisture) records may result from different regional responses to the same climate event in monsoon-dominated eastern China. For example, on the decadal time scale, instrumental data demonstrate that precipitation in North China decreased over the last 50 years, while it increased in southern China [79]. In fact, based on integrated research, we found that on the multi-centennial time scale during the last millennium, climatic changes in mid-latitude monsoon-dominated China differed from those in westerly-dominated China, and moisture variation over monsoonal China has shown clear spatial variability. Such spatial patterns and their possible mechanisms will be investigated in the future.

Gonghai Lake, located at the margin of the Asian summer monsoon, occupies an ideal position to record variations in the strength of the monsoon sensitively. Changes in the magnetic parameters of its sediments can reflect the evolution of Asian monsoon intensity, which is generally consistent with proxy records derived from surrounding regions and is also supported by models. These advantages, together with the relatively high resolution, show the value of paleolimnological records from Gonghai Lake for exploring past variability of the Asian monsoon. Note that, owing to the relatively few age control points (four in the past 1000 years), comparisons of decadal-scale climatic events between sites are restricted. Besides chronological enhancement, future research efforts should include detailed investigations of modern processes and multi-proxy studies. These will help improve interpretations of the paleoclimatic significance of magnetic parameters and achieve a complete and profound understanding of paleoenvironmental and paleoclimatic variations during the last millennium.

Conclusions

Our investigation of the environmental magnetism of the Gonghai Lake sediments demonstrates that the main magnetic mineral in the sediments is magnetite, which originates mainly from pedogenic processes in the lake drainage basin. Post-depositional effects on the magnetic properties of the detrital minerals are rather weak. The grain sizes of the main magnetic minerals mostly fall into the SSD category.
In general, sediments with relatively high magnetite concentrations are characterized by relatively fine magnetic grain sizes, whereas sediments with relatively low magnetite concentrations are characterized by relatively coarse magnetic grain sizes. This relationship is mainly attributed to environmental conditions in the lake drainage basin, and mainly reflects the degree of pedogenesis and the amount of precipitation around the drainage basin, corresponding to variations in the strength of the Asian summer monsoon.

Based on this understanding of the paleoclimatic significance of the magnetic parameters, the evolution of the Asian summer monsoon over the last 1200 years was reconstructed using χ and S−300 as proxies from Gonghai Lake, Shanxi, North China. The results show that a markedly wet period (AD 910-1220) occurred in this region, suggesting that the summer monsoon was strong and forests were well developed around the lake drainage basin during the MWP. This forms a striking contrast to the generally dry climate that prevailed during the MWP in the mid-latitude, westerly-dominated part of Asia. In addition, against the context of a generally strong summer monsoon, a relatively weak monsoon interval (AD 980-1050) occurred within the MWP in the study area. The major characteristics of climate change over the last millennium revealed by the magnetic proxies in the Gonghai Lake sediments are supported by other proxy records derived from the monsoon region and by models. These paleo-monsoon reconstructions are comparable even on decadal-to-centennial time scales, demonstrating the ability of magnetic parameters to identify sensitive regional environmental changes. The high resolution of these data, combined with additional timing constraints, means that records of magnetic proxies from Gonghai Lake are expected to be valuable for thoroughly characterizing the evolution of the Asian summer monsoon.
2019-04-24T13:10:36.964Z
2011-07-20T00:00:00.000
{ "year": 2011, "sha1": "baaf39ad29234fed104d83c4549939b032e89873", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11434-011-4592-y.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "7d07e1e4b0739b9cd2e3c1eaac26d8f63dcb236f", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [ "Geology" ] }
244835062
pes2o/s2orc
v3-fos-license
Enabling Biological Nitrogen Fixation for Cereal Crops in Fertilized Fields

Agricultural productivity relies on synthetic nitrogen fertilizers, yet half of that reactive nitrogen is lost to the environment. There is an urgent need for alternative nitrogen solutions to reduce the water pollution, ozone depletion, atmospheric particulate formation, and global greenhouse gas emissions associated with synthetic nitrogen fertilizer use. One such solution is biological nitrogen fixation (BNF), a component of the complex natural nitrogen cycle. BNF application to commercial agriculture is currently limited by fertilizer use and plant type. This paper describes the identification, development, and deployment of the first microbial product optimized using synthetic biology tools to enable BNF for corn (Zea mays) in fertilized fields, demonstrating the successful, safe commercialization of root-associated diazotrophs and realizing the potential of BNF to replace and reduce synthetic nitrogen fertilizer use in production agriculture. Derived from a wild nitrogen-fixing microbe isolated from agricultural soils, Klebsiella variicola 137-1036 ("Kv137-1036") retains the capacity of the parent strain to colonize corn roots while increasing nitrogen fixation activity 122-fold in nitrogen-rich environments. This technical milestone was then commercialized in less than half of the time of a traditional biological product, with robust biosafety evaluations and product formulations contributing to consumer confidence and ease of use. Tested in multi-year, multi-site field trial experiments throughout the U.S. Corn Belt, fields grown with Kv137-1036 exhibited both higher yields (0.35 ± 0.092 t/ha ± SE, or 5.2 ± 1.4 bushels/acre ± SE) and reduced within-field yield variance, by 25% in 2018 and 8% in 2019, compared to fields fertilized with synthetic nitrogen fertilizers alone. These results demonstrate the capacity of a broad-acre BNF product to fix nitrogen for corn in field conditions with reliable agronomic benefits.

Keywords: BNF, MSME... [none listed]

Nitrogen is often the most rate-limiting nutrient in agricultural systems. 1,2 Synthetic nitrogen fertilizers address this limitation by providing a reliable source of nitrogen to crop plants. Fifty percent of the global agricultural productivity gains in the 20th century can be attributed to synthetic nitrogen fertilizers, 3 enabling high-yielding crop varieties and irrigation to increase productivity. 4 The benefits of synthetic nitrogen fertilizers are counterbalanced by the consistent loss of half or more of the applied nitrogen fertilizers from the field. 2 The resulting consequences have been well documented: water and air pollution, stratospheric ozone depletion, hypoxic dead zones that stretch for thousands of square miles, and the generation of nitrous oxide, a greenhouse gas 300 times more potent than carbon dioxide. 5−9 Synthetic nitrogen fertilizers, from the energy-intensive process of synthesis to the inevitable loss of fertilizers from the field, cause an estimated damage of $200B each year. As such, there is a pressing need to mitigate the negative impacts of applied nitrogen fertilizers while increasing nutrient supply to intensify row crop production on existing land to meet the growing demand for food, fuel, and fodder.

One such solution lies with soil microbes known as diazotrophs. These microbes can reduce atmospheric nitrogen to ammonia, a bio-available form of nitrogen, via biological nitrogen fixation (BNF). 10
A protein complex known as nitrogenase enables BNF in microbes. Crop engineering and synthetic plant−microbe symbioses that transfer nitrogenase and the capacity to fix nitrogen to cereals have been widely explored but have proved challenging to implement. 11,12 Developing and commercializing nitrogen-fixing bacteria has the potential to combine robust plant−microbe relationships with the capacity to quickly and effectively gene-edit microbes for purpose. BNF for cereal crops is of particular interest, as cereal grains such as rice, wheat, and corn provide 50% of global calories and receive 45% of global fertilizer applications. 13 However, two major hurdles to the commercialization of BNF microbes for cereal crops exist: the technical challenge of enabling BNF microbes to operate under field conditions, 10,14 and the commercial challenge of successfully bringing such a microbe to market and widespread adoption. 15

BNF is an energetically expensive process requiring high reducing power and a large amount of ATP. As a result, diazotrophs evolved mechanisms that allow them to tightly control the formation and expression of the nitrogenase complex. 16,17 In some farmed soils, the application of synthetic nitrogen fertilizers occurs at concentrations many orders of magnitude greater than the exogenous nitrogen required to suppress nitrogen fixation by the microbes, thereby "switching off" BNF. 18 Despite this tight control of nitrogen fixation, diazotrophs are estimated to provide 10% of the cereal crop nitrogen budget. 19 Free-living, crop-associated diazotrophs capable of providing nitrogen at agriculturally relevant levels, as observed by Van Deynze et al. and Ladha et al., 19,20 indicate that this nitrogen source could be developed and optimized for modern agriculture. However, any microbe identified as providing BNF for cereal crops will need to be gene-edited to function in the nitrogen-rich soil conditions that would normally suppress nitrogen fixation. 21

Synthetic biology provides the tools and engineering frameworks to effectively develop microbes for purpose. 22 Advances in genomics enable us to better characterize the myriad members of these often complex families of bacteria. The decreasing cost of DNA sequencing 23 has made it possible to screen the billions of microbes present in soil for specific characteristics and metabolic activities. 24 Improvements in computational power enable fast and reliable distinction between microbes of interest based on genetic variation, and advances in biotechnology, such as precision gene-editing, have made it possible to enhance naturally occurring microbial activities, including nitrogen fixation. 25

Sustainable, scalable, drop-in microbial products are not yet widespread across global agriculture. While microbes have long been touted as a clear solution to diverse challenges in sustainable agriculture, only 1% of potentially beneficial bacteria characterized in the laboratory have emerged onto the marketplace. 15 One challenging aspect is that many potentially beneficial microbes found in the nutrient-rich, highly competitive rhizosphere are related to microbes that have the potential to be opportunistic human pathogens, 26 highlighting the need for rigorous assessment of biosafety as part of any product development effort. A second challenging aspect is that the adoption of new agronomic practices and products is frequently slowed by the cost of implementation and uncertain returns on investment. 27
Despite the challenges, the impact of a commercial microbe for BNF in cereal crops could be as profound a shift for agriculture as the discovery of the Haber−Bosch process was over a century ago. Such a product would allow for an immediate and effective change in intensive agriculture practices, meeting our time-sensitive need to scale up alternatives to synthetic nitrogen. 28

RESULTS AND DISCUSSION

In this report, we address both the engineering and the product development challenges of developing the first commercially available nitrogen-fixing microbe for corn. We first identified and characterized an agricultural soil-derived wild-type diazotroph (Klebsiella variicola strain 137; Kv137) through the use of computational and synthetic biology tools. We then edited the genome to produce K. variicola strain 137-1036 (Kv137-1036), a non-transgenic strain that fixes nitrogen regardless of exogenous nitrogen levels. We evaluated the nitrogen fixing and excretion capacity of the genetically remodeled strain Kv137-1036 both in vitro and in planta. Upon verification of function, and in partnership with a multitude of collaborators, we addressed the commercial challenges of safety, stability, and efficacy through standard biosafety assay panels, long-term viability assays, and in-field performance testing across the U.S. Corn Belt over the 2018 and 2019 crop years. The result is a microbial product that supplies nitrogen to corn crops in nitrogen-rich soil conditions, with performance suitable for broad-acre use in conjunction with standard agricultural practice. Our work introduces an environmentally and economically sustainable alternative nitrogen source for farmers and contributes to a framework for the development and subsequent commercialization of nitrogen-fixing biofertilizer products for agriculture.

Wild-Type Strain K. variicola 137 and Remodeled Strains. Bacterial strain 137 was isolated from the surface of corn roots grown in soil collected near a farm in St. Charles County, Missouri, USA. There are reports of endophytic K. variicola species 29,30 as well as free-living diazotrophic species. 31 The Kv137 isolate was derived from plant roots gently rinsed free of excess bulk soil but not surface-sterilized. Subsequent microscopy work (as shown in Figure 2F) revealed the presence of the microbe only on the exterior of corn roots after inoculation, suggesting that Kv137 is a root-associated diazotroph that colonizes the rhizoplane rather than internal plant tissues (data not shown). The microbe was initially identified as K. variicola at 100% identity through 16S rRNA sequence alignment after genomic sequencing. However, identifying taxa within the genus Klebsiella can be challenging. 32,33 In order to better compare the putative K. variicola 137 strain against a broader cross section of related species, we calculated the percent average nucleotide identity (ANI) between strain 137 and 16 Klebsiella strains and constructed a phylogenetic tree 34 (Figure 1A), which shows a clear demarcation between the various species of Klebsiella, with strain Kv137 located among other K. variicola strains to comprise a monophyletic clade. Comparisons of Kv137 within the K. variicola clade yielded >98% ANI, well above the standard 95% ANI threshold for species determination (Figure 1A). Together, the percent ANI and phylogenetic tree analyses support the initial placement of Kv137 within the taxon K. variicola. Further genomic analysis revealed the presence of a nif cluster homologous to that described in other plant-associated K. variicola strains. 35,36 The presence of a nif gene cluster in a soil-derived microbe, in combination with >98% ANI between Kv137 and K. variicola, a family of microbes known to contain plant-associated diazotrophs, suggested that Kv137 was an intriguing candidate for the development of an agriculturally indigenous diazotroph for BNF in cereal crops.
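The species-assignment step described above reduces to comparing pairwise ANI values against the standard 95% threshold, as the following minimal Python sketch illustrates. Only the >98% Kv137-to-K. variicola comparison is taken from the text; the other ANI values and strain names are hypothetical, and computing ANI itself would require genome fragmentation and alignment (e.g., with dedicated tools), which is outside the scope of this sketch.

ANI_SPECIES_THRESHOLD = 95.0  # standard cutoff for species determination

pairwise_ani = {
    ("Kv137", "K. variicola"): 98.6,         # >98%, per the comparison above
    ("Kv137", "K. pneumoniae"): 93.0,        # hypothetical value
    ("Kv137", "K. quasipneumoniae"): 92.1,   # hypothetical value
}

for (query, reference), ani in pairwise_ani.items():
    verdict = "same species" if ani >= ANI_SPECIES_THRESHOLD else "different species"
    print(f"{query} vs {reference}: ANI {ani:.1f}% -> {verdict}")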
Further genomic analysis revealed the presence of a nif cluster homologous to that described in other plant-associated K. variicola strains. 35,36 The presence of a nif gene cluster in a soil-based microbe, in combination with >98% ANI between Kv137 and K. variicola, a family of microbes known to contain plant-associated diazotrophs, suggested that Kv137 was an intriguing candidate for the development of an agriculturally indigenous diazotroph for BNF in cereal crops. In order to develop a microbe for use with cereal crops in fertilized fields, we edited the Kv137 genome to decouple regulation of nitrogen fixation from the presence or absence of exogenous nitrogen. 21 Similar to other nitrogen-fixing Gammaproteobacteria, strain Kv137 contains the nifL and nifA genes responsible for control of the nitrogen fixation pathway in an operon driven by a single repressible promoter upstream of nifL 17 (Figure 1B). We thus created strain Kv137-1036 by replacing nifL of strain Kv137 with an endogenous constitutive promoter consisting of the 500 bp immediately upstream of infC. 37−40 This substitution removes negative regulation of the NifA protein by eliminating the inhibitor NifL and allowing for the constitutive production of NifA, which we hypothesized would drive nitrogen fixation in the nitrogen-replete conditions of a fertilized field. This edit does not impact the main components of the nif cluster, the genes nifHDK, which encode for functional formation of the nitrogenase complex. To confirm the nitrogenase function in vitro, a nifH knockout mutant strain of Kv137-1036 was also constructed ("Kv137-3738"). This edit used the same mutagenesis approach, introducing the same nifLA edits and also deleting the entirety of the nifH gene. These modifications were carried out by using the guided microbial remodeling methods described in Bloch et al. 28
In Vitro and In Planta Confirmation of Enhanced Nitrogen Fixation Capabilities in Strain Kv137-1036. Nitrogenase Activity In Vitro and Ammonium Excretion Assay. The standard acetylene reduction assay (ARA) was conducted for wild-type and edited strains as previously described. 10,41 Acetylene was injected into the headspace of bacterial cultures, and after incubation, the resulting headspace with ethylene was sampled and quantified as a proxy for nitrogen fixation in nitrogen-free and nitrogen-rich media conditions. In the absence of nitrogen, both wild-type Kv137 and edited Kv137-1036 produced similar quantities of ethylene. In the presence of 5 mM ammonium, Kv137 showed little measurable acetylene reduction, while Kv137-1036 showed activity similar to the levels of acetylene reduction achieved under nitrogen-free conditions (Figure 2A). Ethylene production for the parent strain Kv137 in nitrogen-free media was 2.2 × 10⁻¹⁵ mMol ethylene/CFU h compared to 2.7 × 10⁻¹³ mMol ethylene/CFU h for Kv137-1036, a 122-fold increase in nitrogen fixation. Thus, in the presence of field-relevant concentrations of exogenous nitrogen (e.g., fertilized farms), ethylene levels produced by strain Kv137-1036 were significantly higher than those of parent strain Kv137 (Figure 2A). As expected, no ethylene production was seen under any condition for the nifH knockout strain Kv137-3738. BNF for use by cereal crops requires that the fixed nitrogen be excreted from the microbe to become available for uptake by the plant root. We measured the ammonium excretion of strains cultured in anaerobic conditions in nitrogen-free media, quantifying the total concentration of ammonium after 3 days (Figure 2B). Per cell, Kv137-1036 produced an average 23-fold more ammonium than the wild-type strain (Figure 2B).
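As a trivial sanity check on the per-CFU ethylene rates just quoted, the reported fold change follows directly from the two numbers (the variable names are ours):

```python
# Per-CFU ethylene production rates quoted in the text (mMol ethylene/CFU h).
wt_rate = 2.2e-15      # wild-type Kv137, nitrogen-free media
edited_rate = 2.7e-13  # remodeled Kv137-1036

print(f"{edited_rate / wt_rate:.0f}-fold")  # ~123-fold, i.e. the ~122x reported
```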
While no gene edits targeting excretion mechanisms were made, Kv137-1036 releases ammonium into its environment at a significantly higher rate than the wild type or nifH knockout mutant, suggesting that gene edits which optimize for continuous nitrogen fixation generate enough ammonium for passive excretion. These results suggest that Kv137-1036 is able to both fix nitrogen in nitrogen-rich environments such as fertilized fields and transfer a portion of that fixed nitrogen to crops.
Nitrogenase Activity In Planta. To confirm that an increase in nitrogenase expression and activity is sufficient to generate an excess of ammonium ions within Kv137-1036, leading to passive excretion into the rhizosphere, 28 corn seedlings were grown in sterile, transparent plant growth pouches and inoculated with microbial cultures (Figure 2D). Air was pressed out of the pouches prior to sealing; however, the experiment was not conducted under anaerobic conditions. After several days of growth in a growth chamber, the bags were injected with acetylene and exposed overnight, after which the pouches were sampled and analyzed for ethylene. While Kv137 and Kv137-3738 showed little to no detectable ethylene production on corn roots, Kv137-1036 showed clear acetylene reduction compared to controls (p < 0.1) (Figure 2C). These results show that the increased nitrogen fixation in Kv137-1036 over the wild-type strain translates to the root environment in association with plants. Because no additional carbon source was added at the time of inoculation, these results suggest that Kv137-1036 can use root exudate as a carbon source to fuel nitrogen fixation in planta.
Root Colonization. To effectively deliver fixed nitrogen to plant roots, BNF microbes must be competitive with the existing flora. One aspect of this competitiveness is the ability of the microbe to effectively colonize the rhizosphere, securing access to the root exudates necessary for sustained microbial growth. 42 However, colonization is a complex and poorly characterized process, making rational design difficult. Rather than target colonization mechanisms with gene-editing, we evaluated the microbes for wild colonization competence. Root colonization was quantified by inoculating corn seeds with approximately 10⁷ CFU of either Kv137, Kv137-1036, or E. coli, and harvesting seedling roots after 3 weeks of plant growth for genomic DNA extraction and qPCR analysis targeting the Kv137 genome (Kv137 Probe) or the E. coli genome (E. coli Probe) (Figure 2E). Both Kv137 and Kv137-1036 colonize corn roots at levels two orders of magnitude higher than the background signal detected in untreated control (UTC) samples and bacterial inoculants without crop-specific associations (e.g., E. coli) (Figure 2E). The signal detected by the Kv137 Probe in UTC samples may represent background Klebsiella species present in the plant growth media, as Klebsiella is a common rhizosphere microbe. To visualize colonization, a remodeled strain (Kv137-1595) containing similar nitrogen fixation edits as Kv137-1036, in addition to red fluorescent protein (RFP) fused to nifA, was inoculated onto corn seedlings. Fluorescent microscopy showed individual bacterial cells of Kv137-1595 on the exterior surface of the roots (Figure 2F). Microcolonies along the roots can also be seen. These results indicate that Kv137-1036 can be considered a root-associated diazotroph capable of colonizing corn roots from germination onward and maintaining colonization in the presence of a functional rhizosphere.
Commercial Efficacy of Strain Kv137-1036. Having remodeled a microbe capable of BNF for cereal crops in nitrogen-replete conditions, we validated the commercial potential of formulations containing this microbe, ensuring that any product emerging from this work was both safe and effective. 15
Biosafety Studies. As certain K. variicola strains have been isolated in healthcare environments as opportunistic pathogens, we assessed the potential toxicity, pathogenicity, and irritancy of Kv137-1036 through the third-party studies described in the Materials and Methods.
Product Stability. A dry powder formulation of lyophilized Kv137-1036 was developed and pre-commercially marketed as Pivot Bio PROVEN in 2018 (Figure 3D). The dry formulation was suspended and activated in a sterile liquid medium according to packaging instructions. After the cap containing the dry microbial powder was punched in, activating the product, technical and biological replicates were sampled to verify that the freeze-dried formulation enabled the temperature and time stability essential for planting. Results indicated that Kv137-1036 is stable in freeze-dried form for several weeks at refrigerated and room temperatures. After 32 weeks, the freeze-dried powder contained viable cells at around 1 × 10⁹ CFU/g at 20°C (Figure 3B). Moreover, the formulated product remained viable for at least 14 days after punch-cap activation (Figure 3C).
Field Trials. In 2019, a field-ready dry powder formulation containing Kv137-1036 was released commercially as Pivot Bio PROVEN to corn farmers across the U.S. Corn Belt. After activation in accordance with the instructions on the product label, the microbe was applied at planting via in-furrow planting systems, which deposit small quantities of liquid in close proximity to the seed. In 2019, over 2.54 million yield data points were generated from 38 farms growing corn with Pivot Bio PROVEN alongside untreated checks in structured field trials that we designed and commissioned through a third party. In total, 48 large plot trials (without replication) were conducted over 2 years. Trials consisted of two treatments: a control using grower standard practice and that same practice with the addition of Kv137-1036. Table 1 provides a summary of these trials. [Figure 3 caption fragments: the microbial solution is applied alongside the farmer's standard inputs; (C3) in-furrow planting equipment delivers the activated microbial solution onto seed at planting, so that simultaneous deposition of seeds and microbes inoculates each corn plant in a field with nitrogen-producing bacteria; (C4) colonization of corn roots by microbes (red) after germination, as described in Figure 2F.] Further information about the dataset is summarized in the Materials and Methods section (Figure 4). As all trials used the same basic design across geographies and years, a combined site analysis was conducted. 46 Yields of control and inoculated treatments were measured within each site. The percentage of trials showing increased yield due to inoculation, the average yield increase (t/ha), and the 95% confidence interval of this increase, as well as the change in CV between treated and untreated, were determined. Mean yields and CVs of treated and untreated plots were compared using pairwise Student's t-test statistics at the 95% confidence level. Inoculation with Kv137-1036 increased maize yield in 35 of 48 trials. The yield increase was significant at the 95% confidence level across trials in both years, with 12 of 17 trials (71%) in 2018 and 23 of 31 trials (74%) in 2019 showing yield increases.
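A minimal sketch of the combined-site summary just described (share of winning trials, mean uplift, and a 95% confidence interval from a paired t-interval), using made-up site means in t/ha; the actual pipeline is described in the Materials and Methods.

```python
import numpy as np
from scipy import stats

# Hypothetical paired site-mean yields (t/ha); one pair per trial.
treated = np.array([11.9, 12.4, 12.1, 13.0, 12.6, 11.8])
untreated = np.array([11.6, 12.1, 12.2, 12.5, 12.3, 11.7])

diff = treated - untreated
share_up = np.mean(diff > 0)                  # fraction of trials with increased yield
mean_uplift = diff.mean()
ci95 = stats.t.interval(0.95, len(diff) - 1,  # t-interval around the mean difference
                        loc=mean_uplift, scale=stats.sem(diff))
print(f"{share_up:.0%} of trials up; uplift {mean_uplift:.2f} t/ha, 95% CI {ci95}")
```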
The CVs of individual farm yield data showed a significant decrease between treated and untreated plots in both years: 25% (p = 0.001) in 2018 and 8% (p = 0.005) in 2019 (Table 2). The reduction in variability is more completely described in a recently submitted patent application (PIVO-019/00US (316309-2053)).
■ DISCUSSION
Here, we characterize the successful isolation, identification, development, and commercial deployment of a free-living agriculturally relevant diazotroph capable of BNF for corn. Commercialization of Kv137-1036 was made possible by addressing technical challenges, 10 key customer needs (e.g., efficacy; application technologies), and health and safety (e.g., regulatory concerns). 15 Integrating technical improvements with thoughtful investigation of commercial concerns resulted in the first commercial BNF microbe for corn, Kv137-1036. Despite a robust body of evidence suggesting that free-living microbes have the potential to be a reliable biological source of fixed nitrogen for cereal crops, 12 it is only recently that this hypothesis could be tested effectively. We leveraged decades of research on the regulation of the nif operon and genome editing strategies to improve nitrogen fixation by Kv137 and excretion of bioavailable nitrogen into the rhizosphere for use by the crop. 21 Constitutive expression of the nifA gene in Kv137-1036 resulted in nitrogen fixation in the presence of exogenous nitrogen, making this microbe compatible with modern row crop agriculture. Further work may explore other aspects of the nitrogen fixation pathway, either individually or in combination. For example, we recently showed that modifications in genes involved in nitrogen assimilation (glnE) and nitrogen signaling (glnD) significantly increase the amount of ammonium excreted by Kosakonia sacchari PBC6.1, another strain of root-associated diazotroph within the same family as Kv137. 28 The uncertainty surrounding species and taxa in the largely undocumented soil microbiome is a considerable challenge when commercializing soil microbes, and K. variicola variants are no exception. 47−49 In an instance of taxonomic confusion, the well-studied root-associated diazotroph K. variicola strain 342 was initially characterized as K. pneumoniae (Kp342) despite its phylogenetic relationship to K. variicola strain At-22. 50 Kp342 was eventually reclassified as K. variicola strain 342. 51,52 Taxonomic confusion or otherwise, individual species within the K. variicola family have been recognized as opportunistic pathogens. An in silico screening of 31 K. variicola genomes by Martinez and colleagues 32 uncovered a "mosaic distribution" of proteins related to both plant host affinity and potential for virulence. The variety of genes that can be present across K. variicola strains, yet not universally encoded in their genomes, highlights the need to evaluate toxicity empirically for each strain phenotype as part of the commercialization process. To the authors' knowledge, this is the first paper describing the commercialization of a microbial strain that explicitly investigates biosafety implications. The results of the toxicity and pathogenicity studies documented here indicate that Kv137-1036 is a K. variicola strain without toxicity or infectivity concerns to humans. Together with the genomic analysis of isolate Kv137, these results seem to support the premise that selective pressures result in strain adaptations within the K.
variicola family that restrict a given species to either clinical or plant-associated settings (e.g., ref 53). In the years since the initial characterization of K. variicola as a distinct Klebsiella species, the molecular framework for distinguishing K. variicola continues to grow. 47 Developing a greater capacity for species identification will enable the commercialization of additional species for agricultural use. This work also joins the relatively small body of literature that quantifies the impact of microbial inoculants on yield and yield variance in multi-year, multi-geography large plot studies. 46 The consistency of both the yield advantage and the reduction of yield variance across 2 years of trials is a promising indication that these free-living diazotrophs show similar BNF performance on corn roots across disparate geographies, possibly due to the consistency of the rhizosphere microclimate. Given that yields in agriculture integrate variables which are largely uncontrolled by the grower, including weather, temperature, soil type, and topography, 54 products which perform reliably have additional market potential. The potential impact of BNF on cereal crops cannot be fully characterized without an investigation into the nitrogen flux between plant and bacteria. This approach, which may include experiments detailing colonization dynamics and tracing heavy isotopes of nitrogen throughout the system, is challenging but necessary to establish the contribution of microbes to total plant nitrogen. 14,20 More research is needed to determine the precise contribution of nitrogen-fixing microbes to the corn plant over the course of the growing season to identify the agronomic equivalent quantity of synthetic nitrogen fertilizer which the microbes can displace. A novel microbial product such as Kv137-1036 that improves nitrogen use efficiency (NUE) is one of the few ways that farmers, the fertilizer industry, and the environment all benefit from innovation. 27 This product benefits the grower. 46 BNF for cereal crops not only supports productive yields, but it also has the potential to minimize nitrogen loss to the environment. By fixing nitrogen in close proximity to corn roots, Kv137-1036 can be used to complement existing nutrient management practices to support the "4Rs" of nutrient stewardship: right time, right dose, right place, and right source. 55 Given that even the most highly efficient agricultural systems have an NUE of only 40%, 27 microbial production of nitrogen at the root could signal a sea change in nutrient management. The economic and environmental benefits of BNF for cereal crops make it a tool uniquely suited for voluntary adoption in efficient agricultural systems. Continued research into both formulations that improve grower access to the product and the performance of microbial biofertilizers across soil types and regions will give growers confidence that BNF for cereal crops will benefit their operation. As a result, farmers will, for the first time, be able to replace and reduce their dependence on synthetic nitrogen fertilizer while maintaining yields. The commercialization of Kv137-1036 marks a turning point for nitrogen, giving growers a much-needed tool to ensure adequate crop nutrition while minimizing nutrient loss to the environment.
■ MATERIALS AND METHODS
Hoagland's V2.11-0.5 medium (used for plant assays) contains 9.6 g/L of NH₄NO₃, 7.5 g/L of KCl, 3.3 g/L of CaCl₂, 1.4 g/L of KH₂PO₄, 4.9 g/L of MgSO₄·7H₂O, and 0.15 g/L of FeSO₄·7H₂O.
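As a worked example of reading the medium recipe above, the g/L values convert to millimolar concentrations by dividing by the molar mass; the molar masses below are standard reference values, not taken from the paper.

```python
# concentration (mM) = 1000 * (g/L) / (g/mol)
molar_mass = {  # g/mol, standard values
    "NH4NO3": 80.04, "KCl": 74.55, "CaCl2": 110.98,
    "KH2PO4": 136.09, "MgSO4·7H2O": 246.47, "FeSO4·7H2O": 278.01,
}
grams_per_litre = {
    "NH4NO3": 9.6, "KCl": 7.5, "CaCl2": 3.3,
    "KH2PO4": 1.4, "MgSO4·7H2O": 4.9, "FeSO4·7H2O": 0.15,
}
for compound, g in grams_per_litre.items():
    print(f"{compound}: {1000 * g / molar_mass[compound]:.1f} mM")
# NH4NO3 works out to ~120 mM, i.e. a strongly nitrogen-replete medium.
```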
Isolation and Identification of Isolate Strain 137. The microbe of interest was originally obtained from agricultural soil in St. Charles County, Missouri, USA. Soil samples were diluted with 1 mL of PBS and then centrifuged for 1 min at 13,000 rpm, and serial 10⁻¹ dilutions of the samples were made. Each dilution was plated onto an NFb agar medium supplemented with 0.2% casamino acids. 56 The plates were incubated at 30°C for 4−6 days. The colonies that appeared were tested for the presence of nifH using primers Ueda19f and Ueda407r. 56 To confirm the colonization activity of the microbe, corn seedlings were grown from seed (DKC 66-40, DeKalb, IL, USA) for 2 weeks in a greenhouse environment controlled from 22°C (night) to 26°C (day) and exposed to 16 h light cycles in the same agricultural soil collected from St. Charles County, MO, USA. Roots were harvested and washed with sterile deionized water to remove bulk soil. Root tissues were homogenized, and the samples were centrifuged for 1 min at 13,000 rpm to separate tissue from root-associated bacteria. 21 Positive hits were subsequently purified on the same medium. The presence of nifH was reconfirmed by PCR, and a preliminary identification of the microbe was performed by amplification with 16S rRNA primers 27F and 1492R, 57 followed by Sanger sequencing 58 and NCBI BLAST analysis. 59 An initial NCBI BLAST of the 16S rRNA gene amplicon sequence matched several K. variicola isolates at 100.00% identity. Genomic DNA was prepared with the MagAttract HMW DNA Kit (Qiagen cat. no. 67563). PacBio genome sequencing and assembly were performed (SNPsaurus, Eugene, OR), and the resulting genome assembly was annotated with Prokka v. 1.12. 60 For the phylogenetic analysis (Figure 1), a total of 19 genome assemblies were reannotated with the most recent version of Prokka at the time (v. 1.13.3) for annotation consistency. These 19 genome assemblies include Kv137, the Kv137-1036 derivative strain (described below), representative genomes for the 16 Klebsiella-type strains with genomes available in the NCBI RefSeq database, and E. coli K-12 substr. MG1655 (for outgroup comparison). Phylogenetic analysis was performed on a set of 104 conserved protein sequences as described in Parks et al. (2017). Analysis was restricted to proteins that were present and annotated in >80% of our assemblies, bringing our protein count from 104 to 100. All 19 genome assemblies contained 80% or more of the 100 remaining protein annotations. Protein sequences were concatenated and aligned with MUSCLE. 61 Any columns of residues present in fewer than 50% of assemblies were considered insufficiently informative and removed before subsequent analysis. Phylogenetic trees were built with FastTree 62 and graphed with FigTree 63 using the E. coli strain as the outgroup. To more quantitatively determine the identity of Kv137, the ANI of Kv137's genome was compared to the other 18 genomes using Mash at the recommended species cutoff of 95%. 64
Remodeled Strain Description and Genomic Modifications. The genotype of Kv137-1036 is ΔnifL::PinfC, with the nifL locus including a gene deletion and promoter insertion. The Kv137-1036 and ΔnifH (Kv137-3738) edits were carried out using the same mutagenesis approach as described in a patent application on our guided microbial remodeling platform. 28 Strains with genomic edits were cured of all plasmids through repetitive sub-culturing and screened for the desired edits.
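A minimal sketch of the alignment column filter described in the phylogenetic analysis above: drop any alignment column in which fewer than half of the assemblies contribute a residue (gaps treated as absent). The dict-of-strings input format and the toy data are ours.

```python
def filter_columns(alignment: dict[str, str], min_frac: float = 0.5) -> dict[str, str]:
    """Keep only columns where >= min_frac of sequences have a non-gap residue."""
    names = list(alignment)
    n_cols = len(alignment[names[0]])
    keep = [
        i for i in range(n_cols)
        if sum(alignment[n][i] != "-" for n in names) / len(names) >= min_frac
    ]
    return {n: "".join(alignment[n][i] for i in keep) for n in names}

aln = {"Kv137": "MKT-LV", "Kv342": "MKTALV", "Ecoli": "M---LV"}  # toy alignment
print(filter_columns(aln))  # the column with a residue in only 1 of 3 rows is dropped
```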
Acetylene Reduction Assay. A modified version of this standard assay described by Temme et al. 65 was used to measure nitrogenase activity in pure culture conditions. Strains were cultured from single colonies into 4 mL of SOB for 24 h (30°C, aerobic). The growth culture (1 mL) was then added to 4 mL of minimal media or 4 mL of minimal media supplemented with 5 mM ammonium phosphate in airtight culture tubes prepared in an anaerobic chamber and grown for 4 h (30°C, anaerobic). A headspace of 10% was replaced by an equal volume of acetylene, and incubation continued for an additional hour. A gas-tight syringe was used to remove 2 mL of headspace in preparation for ethylene production quantification using either an Agilent 6850 or 7890B gas chromatograph equipped with a flame ionization detector (FID). The initial culture biomass was compared to the end biomass by measuring OD590. To establish a negative control, we knocked out the central nitrogenase subunit NifH, deleting the entirety of the nifH gene. The nitrogen fixation-negative phenotype was confirmed via ARA prior to use as a negative control. Sterility was maintained throughout this experiment.
Ammonium Excretion Assay. Excretion of fixed nitrogen in the form of ammonium was measured using batch cultures in DeepWell plates. Strains were propagated from single colonies in 1 mL/well of SOB in a 96-well DeepWell plate. The plate was incubated for 24 h (30°C, 200 rpm) and then diluted 1:25 into a fresh plate containing 1 mL/well of growth medium. Cells were incubated for 24 h (30°C, 200 rpm) and then diluted 1:10 into a fresh plate containing minimal medium. The plate was transferred to an anaerobic (Coy) chamber with a gas mixture of >98.5% nitrogen, 1.2−1.5% hydrogen, and <30 ppm oxygen and incubated at 1350 rpm at room temperature for 72 h. The initial culture biomass was compared to the end biomass by measuring OD590. Cells were then separated by centrifugation, and supernatant from the reactor broth was assayed for free ammonium using the Megazyme Ammonia Assay Kit (P/N K-AMIAR), normalized to biomass at each time point. Sterility was maintained throughout this experiment.
Fluorescence Microscopy of Root Surface Colonization. Sprouted corn seeds with roots were half-immersed in a 48-well plate where each well contained 4 mL of sterile Hoagland's V2.11-0.5 solution. The plate was covered with a breathable seal (wells were slit to support the seeds) and incubated in a humidified and temperature-controlled growth room for 72 h. The wells were topped off with fresh Hoagland's solution after 48 h. The remodeled strain Kv137-1595 (genotype: Kv137_glnE_KO2-unintended deletion, ΔnifL-Prm1.2, Prm1.2-RFP-linker-nifA) was inoculated into 5 mL of SOB from a single colony and grown aerobically for 48 h at 30°C. Prior to inoculation, the bacterial culture was diluted to 10⁹ cells/mL. The diluted bacterial culture (10 μL) was added to the roots of each seedling for a final concentration of 10⁷ cells/seedling. The inoculated seedlings were then incubated in the growth room for an additional 24 h. Root sample preparation and staining: Plants were removed from the growth plate, and the roots were rinsed in sterile PBS. Root sections were obtained using a sterile razor blade. The SYTO9 green fluorescent stain (6 μM, Life Technologies P/N S34854) was prepared from a 5 mM stock solution by diluting in sterile DI water.
Immediately before imaging, root samples were added to the 6 μM SYTO9 solution and incubated at room temperature for 15 min in the dark. The samples were imaged on a Zeiss LSM710 confocal microscope or a Leica DM IL fluorescence microscope, and image processing was done using the Zeiss Zen Black imaging software and/or Leica LAS X imaging software and ImageJ.
Root Colonization Assay. As described in Bloch et al., a planting medium with minimal background nitrogen was prepared with pure sand autoclaved for 1 h at 122°C, and ∼600 g was measured out into a D40 Deepot (Stuewe and Sons) before planting corn seeds at a depth of ∼1 cm. Seedlings were inoculated with a suspension of cells drenched directly over the emerging coleoptile at 5 d after planting. The inoculum was prepared from 5 mL of overnight culture in SOB, which was spun down and resuspended twice in 5 mL of PBS to remove the residual SOB before final dilution to an OD of 1.0. The plants were maintained under standard growth room conditions using fluorescent lamps and a 16 h day length with a 26°C day temperature and a 22°C night temperature. Plants were fertilized twice per week with a modified Hoagland's fertilizer solution containing 2 mM KNO₃. All pots were watered with sterile deionized H₂O as needed to maintain consistent soil moisture. At 3 weeks of growth, three replicate plants were collected, and plant roots were washed and harvested for genomic DNA extraction to quantify root colonization using quantitative PCR (qPCR) with primer−probe pairs that targeted genome-specific regions of either the Kv137 or the E. coli genome. The primers were designed with Primer-BLAST. 66 The Kapa Probe Force kit (Kapa Biosystems P/N KK4301) was used as per the manufacturer's instructions, and qPCR reaction efficiency was measured using a standard curve generated from a known quantity of genomic DNA from the target genomes. The data shown in Figure 2E are normalized to genome copies per gram fresh weight using the tissue weight and extraction volume.
ARA In Planta. Corn seedlings were cultured on water in sterile conditions, and three 7-day-old seedlings were placed in each sterile plant growth pouch under aerobic conditions and inoculated with a microbial culture of Kv137, Kv137-1036, or Kv137-3738 that had been centrifuged, the spent medium removed, and the cells resuspended in sterile water. All pouches were placed in transparent boxes for growth in a growth chamber with a 16 h day length and controlled humidity for 5−10 days. The pouches with seedlings were then transferred to gas-tight bags, and acetylene was injected to allow for incubation at 30°C overnight, after which the headspace of the bags was sampled and analyzed by gas chromatography.
Shelf Life and Viability of Freeze-Dried Microbial Powder Formulation. To evaluate the freeze-dried microbes in the punch-cap system and to test the stability of the dry powder formulation (product shelf life), mean viability (CFU/g) was sampled over time. The data are an average of triplicate samples from two production batches. Each error bar is constructed using 1 SD error from the mean. To test the product viability after activation, CFU/mL was sampled at various time points after activation (measured in days). Each error bar is constructed using 1 SD error from the mean.
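A hedged sketch of the qPCR normalization used in the root colonization assay above: invert the standard curve (Cq versus log10 copies) to get copies per reaction, then scale by the elution and template volumes and by the tissue fresh weight. The curve parameters and sample values are hypothetical, not the paper's.

```python
def copies_per_gram(cq: float, slope: float, intercept: float,
                    elution_ul: float, template_ul: float,
                    fresh_weight_g: float) -> float:
    """Genome copies per gram fresh weight from a Cq and a standard curve."""
    log10_copies = (cq - intercept) / slope   # invert Cq = slope*log10(N) + intercept
    copies_in_reaction = 10 ** log10_copies
    copies_in_extract = copies_in_reaction * (elution_ul / template_ul)
    return copies_in_extract / fresh_weight_g

# Example: near-ideal efficiency curve (slope ~ -3.32), 100 uL elution,
# 2 uL template per reaction, 0.5 g of root tissue.
print(f"{copies_per_gram(22.0, -3.32, 38.0, 100.0, 2.0, 0.5):.2e} copies/g")
```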
Characterization of the Potential Toxicity, Pathogenicity, and Irritancy of Kv137-1036. A set of six toxicity and pathogenicity studies were conducted by a third-party contract research organization (Product Safety Labs, Dayton, NJ, USA) to characterize the potential toxicity, pathogenicity, and irritancy of a solution containing K. variicola 137-1036 following acute exposure. The test solution contained 1 × 10⁹ CFU/mL K. variicola 137-1036, a concentration higher than a proposed product that would be diluted for use. All studies were conducted under Good Laboratory Practice (GLP). Brief summaries of methods are presented below:
• Acute Oral Toxicity (OPPTS 870.1100): An initial limit dose of 2000 mg/kg was administered to one rat. In the absence of mortality, four additional rats were sequentially dosed at the same level. Since all five rats survived, no additional animals were tested. Individual doses were calculated based on the initial body weights, taking into account the density of the test substance. The test substance was administered directly to the stomach via oral gavage. The animals were returned to their cage, and feed was replaced 3−4 h after dosing. All animals were observed daily for 14 days after dosing. Body weights were recorded prior to administration and again on days 7 and 14 (termination of study). Necropsies were performed on all animals at day 14. 67 The animals were observed daily until termination of study. Body weights were recorded prior to exposure and at the end of the study.
• Acute Pulmonary Toxicity/Pathogenicity (OPPTS 885.3150): Thirty-eight rats were divided into two study groups. Group A (17 rats of each sex) received a pulmonary dose of 0.15 mL containing 1.0 × 10⁸ viable cells of the test microorganism Kv137-1036. Group B (2 rats of each sex) was an untreated control. Viable CFUs were enumerated to confirm delivery of greater than or equal to 1 × 10⁸ viable cells in 0.1 mL. The test substance was mixed thoroughly and serially diluted with PBS to reach a concentration of 10⁸ CFU/mL for dosing. Groups of three rats from Group A were sacrificed at various intervals after dosing to assess the distribution of the microbe in tissues and organs, the potential pathogenicity, and the pattern of elimination of the microbe from the body. Samples of lung, blood, brain, kidney, liver, lymph nodes, spleen, and cecum contents were collected from all animals at their scheduled sacrifice at days 1, 3, 7, 15, and 22/23. Body weights were recorded from surviving animals on days 1, 3, 8, 15, and prior to sacrifice (day 22/23). At each sacrifice, blood and organ tissues were plated to determine the amount of Kv137-1036 present.
Broad-Acre Field Trial Design and Data Collection. For structured field trials, corn yields produced with the grower's standard nitrogen fertilization practice were compared to corn yields produced with the addition of a commercial formulation of Kv137-1036 (Pivot Bio PROVEN) containing 10⁸ CFU/mL to the grower's standard nitrogen fertilization practice. The microbe was applied at a rate of 12.8 fl oz per acre (934 mL/ha) as an in-furrow application at planting. Fields were split in half with a Pivot Bio PROVEN treated area on one side of the field and the grower standard practice (GSP) plot on the other. The two treatment zones were identified by digital as-planted (planter monitor) and harvest (combine yield monitor) maps.
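As a quick unit check on the application rate quoted above (the constants are the standard US fluid ounce and acre definitions, not values from the paper):

```python
ML_PER_US_FL_OZ = 29.5735
HA_PER_ACRE = 0.404686

ml_per_ha = 12.8 * ML_PER_US_FL_OZ / HA_PER_ACRE
print(f"{ml_per_ha:.0f} mL/ha")  # ~935 mL/ha, matching the ~934 mL/ha on the label
```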
To obtain customer data from field trials, customers that purchased Pivot Bio PROVEN in 2019 were invited to share their planting, treatment, and yield data with us through an incentive program. The trial layouts were determined by customers for their own fields. Customer fields that were not laid out in a similar fashion to the structured field trials were discarded.
Data Processing of Field Trials. Monitor data for both structured trials and customer data trials were received, cleaned, and processed by a third-party company, IN10T, who performed yield analysis with ArcGIS. Trials were subjected to a further agronomic QC check to screen for serious defects in field conditions, on-farm management, or data collection issues that would skew the results. In order to ensure representative comparisons from both Pivot Bio PROVEN and untreated field regions, IN10T used algorithms to remove unsuitable parts of the field. Header rows, which are typically lower in yield, more prone to damage, and have a varying incident solar radiation profile, were removed from field datasets. The most reliable data from combine harvest monitors occur in areas where the combine is moving at a steady velocity. Thus, data points were removed with automated filters where the combine was accelerating or where the combine had to slow down to pass obstacles such as field drains or terraces. To generate Tables 1 and 2, data were separated by year, and the following analyses were performed with Python. The mean yield was calculated for the two treatments within each trial (untreated and treated with Pivot Bio PROVEN), which created 17 pairs of data for 2018 and 31 for 2019. The distributions of the treated and untreated means were examined for normality and constant variance. The Box−Cox transformation was applied to the 2019 data alone. The Shapiro−Wilk test for normality was applied to the resulting data, and the null hypothesis of normality for each dataset was retained. The means were compared using a one-sided, paired t-test analysis at the 95% confidence level. The same method was applied for comparing the coefficients of variation between treatments on each trial within years. In this case, the Box−Cox transformation was applied to both years in order to meet the assumptions of normality and constant variance for t-test analysis.
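Since the analyses above are stated to have been performed with Python, a minimal sketch of the per-year comparison is straightforward with SciPy. The paired site means below are made up, and the exact transformation choices (e.g. which year gets the Box−Cox) are simplified relative to the paper.

```python
import numpy as np
from scipy import stats

treated = np.array([12.1, 12.9, 11.7, 13.2, 12.5, 12.8, 11.9, 12.6])    # t/ha
untreated = np.array([11.8, 12.4, 11.8, 12.7, 12.1, 12.5, 11.6, 12.2])  # t/ha

# Box-Cox requires strictly positive input; shift the paired differences.
diff = treated - untreated
transformed, lam = stats.boxcox(diff - diff.min() + 1e-6)

w, p_norm = stats.shapiro(transformed)  # retain normality when p > 0.05
t_stat, p_one = stats.ttest_rel(treated, untreated, alternative="greater")
print(f"lambda={lam:.2f}, Shapiro p={p_norm:.3f}, one-sided paired-t p={p_one:.4f}")
```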
2021-12-02T06:23:07.969Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "85995753849d3a65529c5757541ce389e99bb162", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acssynbio.1c00049", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "f0083ae8d21923a1e2b7e38d382d3379f73bb575", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
258501381
pes2o/s2orc
v3-fos-license
The Impact of Competitive Experience on Enterprise Performance in a Green Innovation Environment: This is a constantly changing era. With the development of economic globalization, domestic and foreign enterprises constantly face fierce competition. The saying "pull one hair and the whole body moves" describes the chain reaction between the competitive behavior of an enterprise and its competitors: their moves are layered and mutually influencing. In recent years, national policies have strongly supported green environmental protection, and green innovation, as an important component of an enterprise's core competitiveness, is not only affected by the intensity of competition between the competing parties but also acts as a catalyst for improving enterprise environmental performance. In the context of green transformation, enterprises exhibit spontaneous learning behavior, constantly learning, selecting, and accumulating competitive experience. This article discusses the key factors that affect the viability of organizations in the process of green transformation, as well as how enterprises can obtain competitive advantages in a constantly changing environment, providing useful insights for improving organizational viability.
Introduction In July 2020, nine departments including the National Development and Reform Commission jointly issued the "Notice on Solid Promotion of Plastic Pollution Control", strictly restricting the use of non-degradable plastic bags, disposable plastic straws, and disposable plastic tableware. For enterprises, shifting the focus from rapid economic growth to green economic development, and maintaining competitiveness as the logic of competition changes, has become an important issue. Enterprises were once the main producers of pollution; with the increasing demand for green products in the market, they are now becoming pioneers in advocating environmental protection. According to Red Queen theory, when the business environment changes, the logic of competition changes accordingly, and enterprises are prone to fall into the lag effect of competition. It follows that more recent competitive experience is more conducive to the development of the enterprise, while more distant competitive experience is not conducive to its survival (Ji Fang, 2011) [1]. So how much impact does the change in the logic of competition brought about by the green transformation process have on companies? How does competitive experience affect enterprise viability? How can companies use competitive experience to survive better in a green environment? From the perspective of Red Queen theory, this article explores the impact of competitive experience on corporate viability by analyzing changes in the logic of competition, and provides suggestions for companies to improve their organizational viability. Li Haiping et al. (2005) [2] proposed that the purpose of implementing green innovation measures in enterprises is to promote energy conservation, reduce emissions, and improve environmental quality. Under the premise of sustainable development, the benefits of green innovation are not limited to the reduction of environmental pollution; the greening of products and technologies through innovation also brings high returns to enterprises. The development model of enterprises pursuing profit maximization brought about heavy pollution, wastewater discharge, and high energy consumption.
Under the background of green innovation, enterprises should strive to create more social value.
Theoretical Summary The Red Queen theory includes two aspects. Competitive experience: Barnett and Hansen (1996) [3] introduced Red Queen theory to organizational survival: the target organization learns in order to make itself stronger and gains a slight advantage in the competition, which in turn spurs the learning behavior of competitors; the competitors' enhanced competitiveness leaves the target organization at a disadvantage and short of resources, prompting another round of problem search. The two sides compete with and influence each other, accumulating rich competitive experience. Deng Xinming and Guo Yanan (2020) [4] proposed that the more recent a competitive behavior, the more the competitive experience it produces benefits the enterprise and improves its viability and competitiveness; the more distant the competitive experience, the more adverse its impact, reducing the enterprise's viability and competitiveness. Competitive logic: Competitive logic evolves through repeated competition between enterprises. It comprises the rules of competition, the methods of competition, and the keys to the success or failure of competition. Enterprises discover the logic of competition after accumulating a wealth of competitive experience. This article is organized around these two aspects.
3. Changes in the logic of competition in the environment of green innovation Since the 19th National Congress of the Communist Party of China, the practical implementation of the development concepts of innovation, coordination, greenness, openness, and sharing, taking into account economic and ecological benefits, and promoting sustainable development have become important issues for national development. Green innovation requires enterprises to carry out technological transformation to reduce pollution. According to Red Queen theory, companies will learn independently to discover and adapt to the logic of competition; the logic of competition covers the rules of competition, the methods of competition, and the keys to the success or failure of competition. Due to the requirements of green transformation, the market's competition rules, competition methods, and standards for business success or failure have changed, and the changes in these elements fit the definition of competitive logic. Therefore, Red Queen theory is suitable for studying the relationship between the environment, the ecology, and individual enterprises. The transformation toward green innovation is embodied, at the macro level, in a series of changes in policies and standards, and at the micro level, in the emergence of new technologies, the rise of new competitors, changes in competitive concepts, and so on. The competitive environment is thus very different from before. According to Red Queen theory, when the environment undergoes major changes, the logic of competition changes with it, and the above-mentioned series of changes all reflect this. At this point, the competitive experience derived from fierce fighting under the original competitive logic no longer applies.
Companies that incorporate green into their corporate strategy, guided by government policies, can effectively alleviate or even circumvent many of the restrictions that companies face in the process of green innovation transformation, and can adapt better and faster to the new competitive logic; a company that dwells on past achievements, however, will be greatly harmed by the lag effect of competition (Deng Xinming, Guo Yanan, 2020) [4].
The impact of competitive experience on enterprise viability The Red Queen theory takes its name from the book "Through the Looking-Glass". Alice asked the Red Queen: "Why am I still in the same place after running for so long?" The Red Queen replied: "In this country, if you want to stay in the same place, you must keep running!" If an enterprise wants to gain a competitive advantage, it must keep learning, and the improvement of its capabilities triggers the learning behavior of competitors. This is a process of co-evolution, in which the company accumulates a wealth of competitive experience as the result of its "continuous running". [4] Today, the ISO 22316 standard adds acceptance of and adaptation to environmental change to the definition of organizational viability. This extension not only requires the company to achieve its goals but also requires it to have the ability to survive and grow; it focuses on the organization's adaptability and on whether the organization can withstand the test of time, so that the sound operation and continued success of the company can be comprehensively secured. As shown in Figure 1, ISO 22316 sets out six principles for enhancing the viability of the organization. [5] When a company gains competitive experience from time to time, it will naturally be better able to adapt to the external environment. However, there are no absolutes, and the impact of historical experience on business operations is extremely complex. The closer in time a competition occurs, the greater its impact on the company, while the impact of long-standing historical experience gradually declines because of fading memory and environmental change. When the logic of competition changes, the company's original competitive experience may harm the company because it no longer fits the new competitive environment. Every competition has a subtle impact on the company. Therefore, the more recent the competitive behavior, the closer the resulting competitive experience is to the changing competitive logic, the greater its impact on the enterprise, and the more it strengthens the enterprise's survival and competitiveness; the more distant the competitive experience, the more it conforms to the original competitive logic, and the more unfavorable its impact, reducing the enterprise's viability and competitiveness. A company that clings to it blindly will fall into a competitive trap and decline. In addition, companies with relatively rich competitive experience are more adaptable to the environment than those without competitive experience, which is more conducive to their survival. According to the resource-based theory, an enterprise's internal organizational capabilities, resources, and accumulated knowledge are the key to explaining how it obtains excess returns and maintains a competitive advantage.
Learning in competition can bring more knowledge to the enterprise. Like the external environment, a company's internal resources also play a major role and are decisive in creating a competitive advantage in the market. This shows the importance of accumulated corporate knowledge and historical competitive experience for companies facing competition.
How companies can improve their survivability From the internal point of view of the enterprise, using competitive experience to adapt to the changing logic of competition is the key, and attention should be paid to keeping the essence and discarding the dross. First of all, companies with rich competitive experience have a certain sensitivity to changes in the environment and can better adapt to changes in the logic of competition. However, in order to avoid blunders, business managers should remain rational and sensitive to changes in the market environment, carefully compare distant and recent competition, and avoid the overconfidence that comes from past success. According to the Red Queen's competition theory, companies with rich competitive experience, especially those that have gained a competitive advantage because of it, are more likely to ignore the basics and to observe the environment less carefully than when they were fledgling firms. This easily produces short-sighted learning, which leads to corporate cognitive biases. Historical experience can help companies to a large extent, but companies must make reasonable trade-offs and make good use of it: for example, some traditional and routine processes owned by companies may become obsolete over time and fail to fit the new environment; recent innovative strategic actions, although they require more effort to understand, bring huge advantages to the enterprise. Special attention is needed to understand the recent actions of competitors.
Conclusion The pace of competition cannot be stopped. Market competition is like the kingdom in "Through the Looking-Glass": there is no eternal victory, and no one will stop and wait, because the only consequence of waiting is ruthless elimination. Nowadays, a new trend of advocating environmental protection is rising at home and abroad. As participants, companies stand at the forefront: they must change their development concepts, incorporate green concepts into corporate strategy, and use government policies as guidance. Only through continuous learning, improving their own capabilities, and absorbing experience from every competition can they alleviate the many restrictions they face in the green innovation transformation, gain a relatively equal position in market competition, and adapt better and faster to the new competitive logic, which can help companies gain a competitive advantage for a certain period of time.
2023-05-05T15:05:03.634Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "fe047fc2d28113a5091433ba01a4fdbb7ee4883f", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2023/14/shsconf_cike2023_01013.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0e0e6e64987fd4418c5cc47a633a828587115067", "s2fieldsofstudy": [ "Environmental Science", "Business" ], "extfieldsofstudy": [] }
272819619
pes2o/s2orc
v3-fos-license
Call for emergency action to limit global temperature increases, restore biodiversity, and protect health: Wealthy nations must do much more, much faster Lukoye Atwoli, Abdullah H. Baqui, Thomas Benfield, Raffaella Bosurgi, Fiona Godlee, Stephen Hancocks, Richard Horton, Laurie Laybourn-Langton*, Carlos Augusto Monteiro, Ian Norman, Kirsten Patrick, Nigel Praities, Marcel G.M. Olde Rikkert, Eric J. Rubin, Peush Sahni, Richard Smith, Nick Talley, Sue Turale, and Damián Vázquez Editor in Chief, East African Medical Journal; Editor in Chief, Journal of Health, Population and Nutrition; Editor in Chief, Danish Medical Journal; Editor in Chief, PLOS Medicine; Editor in Chief, The BMJ; Editor in Chief, British Dental Journal; Editor in Chief, The Lancet; Senior Adviser, UK Health Alliance on Climate Change; Editor in Chief, Revista de Saúde Pública; Editor in Chief, International Journal of Nursing Studies; Interim Editor in Chief, CMAJ; Executive Editor, Pharmaceutical Journal; Editor in Chief, Dutch Journal of Medicine; Editor in Chief, NEJM; Editor in Chief, National Medical Journal of India; Chair, UK Health Alliance on Climate Change; Editor in Chief, Medical Journal of Australia; Editor in Chief, International Nursing Review; Editor in Chief, Pan American Journal of Public Health
The UN General Assembly in September 2021 will bring countries together at a critical time for marshalling collective action to tackle the global environmental crisis. They will meet again at the biodiversity summit in Kunming, China, and the climate conference (COP26) in Glasgow, UK. Ahead of these pivotal meetings, we, the editors of health journals worldwide, call for urgent action to keep average global temperature increases below 1.5°C, halt the destruction of nature, and protect health. Health is already being harmed by global temperature increases and the destruction of the natural world, a state of affairs health professionals have been bringing attention to for decades (https://healthyrecovery.net). The science is unequivocal; a global increase of 1.5°C above the pre-industrial average and the continued loss of biodiversity risk catastrophic harm to health that will be impossible to reverse (Intergovernmental Panel on Climate Change, 2018; Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, 2019). Despite the world's necessary preoccupation with COVID-19, we cannot wait for the pandemic to pass to rapidly reduce emissions. Reflecting the severity of the moment, this editorial appears in health journals across the world. We are united in recognizing that only fundamental and equitable changes to societies will reverse our current trajectory. The risks to health of increases above 1.5°C are now well established (Intergovernmental Panel on Climate Change, 2018). Indeed, no temperature rise is 'safe'. In the past 20 years, heat-related mortality among people aged over 65 has increased by more than 50% (Watts et al., 2021). Higher temperatures have brought increased dehydration and renal function loss, dermatological malignancies, tropical infections, adverse mental health outcomes, pregnancy complications, allergies, and cardiovascular and pulmonary morbidity and mortality (Haines and Ebi, 2019; Rocque et al., 2021). Harms disproportionately affect the most vulnerable, including children, older populations, ethnic minorities, poorer communities, and those with underlying health problems (Intergovernmental Panel on Climate Change, 2018; Watts et al., 2021).
Global heating is also contributing to the decline in global yield potential for major crops, falling by 1.8-5.6% since 1981; this, together with the effects of extreme weather and soil depletion, is hampering efforts to reduce undernutrition (Watts et al., 2021). Thriving ecosystems are essential to human health, and the widespread destruction of nature, including habitats and species, is eroding water and food security and increasing the chance of pandemics. († This editorial is being published simultaneously in many international journals. Please see the full list here: https://www.bmj.com/content/full-list-authors-and-signatories-climate-emergency-editorial-september-2021.) The consequences of the environmental crisis fall disproportionately on those countries and communities that have contributed least to the problem and are least able to mitigate the harms. Yet no country, no matter how wealthy, can shield itself from these impacts. Allowing the consequences to fall disproportionately on the most vulnerable will breed more conflict, food insecurity, forced displacement, and zoonotic disease, with severe implications for all countries and communities. As with the COVID-19 pandemic, we are globally as strong as our weakest member. Rises above 1.5°C increase the chance of reaching tipping points in natural systems that could lock the world into an acutely unstable state. This would critically impair our ability to mitigate harms and to prevent catastrophic, runaway environmental change (Lenton et al., 2019; Wunderling et al., 2020).
Global targets are not enough
Encouragingly, many governments, financial institutions, and businesses are setting targets to reach net-zero emissions, including targets for 2030. The cost of renewable energy is dropping rapidly. Many countries are aiming to protect at least 30% of the world's land and oceans by 2030 (High Ambition Coalition). These promises are not enough. Targets are easy to set and hard to achieve. They are yet to be matched with credible short- and longer-term plans to accelerate cleaner technologies and transform societies. Emissions reduction plans do not adequately incorporate health considerations (Global Climate and Health Alliance). Concern is growing that temperature rises above 1.5°C are beginning to be seen as inevitable, or even acceptable, to powerful members of the global community (CarbonBrief, 2020). Relatedly, current strategies for reducing emissions to net zero by the middle of the century implausibly assume that the world will acquire great capabilities to remove greenhouse gases from the atmosphere (Anderson and Peters, 2016; Fajardy et al., 2019). This insufficient action means that temperature increases are likely to be well in excess of 2°C (Climate Action Tracker), a catastrophic outcome for health and environmental stability. Critically, the destruction of nature does not have parity of esteem with the climate element of the crisis, and every single global target to restore biodiversity loss by 2020 was missed (Secretariat of the Convention on Biological Diversity, 2020). This is an overall environmental crisis (Steffen et al., 2015). Health professionals are united with environmental scientists, businesses, and many others in rejecting that this outcome is inevitable. More can and must be done now, in Glasgow and Kunming, and in the immediate years that follow. We join health professionals worldwide who have already supported calls for rapid action (https://healthyrecovery.net; UK Health Alliance). Equity must be at the centre of the global response.
Contributing a fair share to the global effort means that reduction commitments must account for the cumulative, historical contribution each country has made to emissions, as well as its current emissions and capacity to respond. Wealthier countries will have to cut emissions more quickly, making reductions by 2030 beyond those currently proposed (Climate Action Tracker, 2021; United Nations Environment Programme, 2020) and reaching net-zero emissions before 2050. Similar targets and emergency action are needed for biodiversity loss and the wider destruction of the natural world. To achieve these targets, governments must make fundamental changes to how our societies and economies are organized and how we live. The current strategy of encouraging markets to swap dirty for cleaner technologies is not enough. Governments must intervene to support the redesign of transport systems, cities, production and distribution of food, markets for financial investments, health systems, and much more. Global coordination is needed to ensure that the rush for cleaner technologies does not come at the cost of more environmental destruction and human exploitation. Many governments met the threat of the COVID-19 pandemic with unprecedented funding. The environmental crisis demands a similar emergency response. Huge investment will be needed, beyond what is being considered or delivered anywhere in the world. But such investments will produce huge positive health and economic outcomes. These include high quality jobs, reduced air pollution, increased physical activity, and improved housing and diet. Better air quality alone would realize health benefits that easily offset the global costs of emissions reductions (Markandya et al., 2018). These measures will also improve the social and economic determinants of health, the poor state of which may have made populations more vulnerable to the COVID-19 pandemic (Paremoer et al., 2021). But the changes cannot be achieved through a return to damaging austerity policies or the continuation of the large inequalities of wealth and power within and between countries.
Cooperation hinges on wealthy nations doing more
In particular, countries that have disproportionately created the environmental crisis must do more to support low and middle income countries to build cleaner, healthier, and more resilient societies. High income countries must meet and go beyond their outstanding commitment to provide $100 bn a year, making up for any shortfall in 2020 and increasing contributions to and beyond 2025. Funding must be equally split between mitigation and adaptation, including improving the resilience of health systems. Financing should be through grants rather than loans, building local capabilities and truly empowering communities, and should come alongside forgiving large debts, which constrain the agency of so many low income countries. Additional funding must be marshalled to compensate for inevitable loss and damage caused by the consequences of the environmental crisis. As health professionals, we must do all we can to aid the transition to a sustainable, fairer, resilient, and healthier world. Alongside acting to reduce the harm from the environmental crisis, we should proactively contribute to global prevention of further damage and action on the root causes of the crisis. We must hold global leaders to account and continue to educate others about the health risks of the crisis.
We must join in the work to achieve environmentally sustainable health systems before 2040, recognizing that this will mean changing clinical practice. Health institutions have already divested more than $42 bn of assets from fossil fuels; others should join them (Watts et al., 2021). The greatest threat to global public health is the continued failure of world leaders to keep the global temperature rise below 1.5 °C and to restore nature. Urgent, society-wide changes must be made and will lead to a fairer and healthier world. We, as editors of health journals, call for governments and other leaders to act, marking 2021 as the year that the world finally changes course.

Competing interests: We have read and understood BMJ policy on declaration of interests and F.G. serves on the executive committee for the UK Health Alliance on Climate Change and is a Trustee of the Eden Project. R.S. is the chair of Patients Know Best, has stock in UnitedHealth Group, has done consultancy work for Oxford Pharmagenesis, and is chair of the Lancet Commission of the Value of Death. None further declared.

Provenance and peer review: Commissioned; not externally peer reviewed.
Skeletal muscle enhancer interactions identify genes controlling whole-body metabolism

Obesity and type 2 diabetes (T2D) are metabolic disorders influenced by lifestyle and genetic factors that are characterized by insulin resistance in skeletal muscle, a prominent site of glucose disposal. Numerous genetic variants have been associated with obesity and T2D, of which the majority are located in non-coding DNA regions. This suggests that most variants mediate their effect by altering the activity of gene-regulatory elements, including enhancers. Here, we map skeletal muscle genomic enhancer elements that are dynamically regulated after exposure to the free fatty acid palmitate or the inflammatory cytokine TNFα. By overlapping enhancer positions with the location of disease-associated genetic variants, and resolving long-range chromatin interactions between enhancers and gene promoters, we identify target genes involved in metabolic dysfunction in skeletal muscle. The majority of these genes also associate with altered whole-body metabolic phenotypes in the murine BXD genetic reference population. Thus, our combined genomic investigations identified genes that are involved in skeletal muscle metabolism.

The prevalence of obesity and T2D comorbidity is reaching epidemic proportions worldwide, with currently 1.9 billion adults estimated as being overweight or obese 1 and 380 million suffering from T2D 2. Skeletal muscle constitutes the largest metabolic organ and accounts for 30% of the basal metabolic rate 3, and as the most prominent site of insulin-mediated glucose uptake in humans, insulin resistance (IR) in muscle is considered a contributing defect during development of T2D 4. While the molecular basis for the pathology of obesity and T2D is incompletely understood, it is clear that both genetic and environmental factors contribute, probably in a synergistic manner 5. Genome-wide association studies (GWAS) have identified a plethora of genetic variants associated with T2D and obesity traits 6-8. However, only a minority (<5%) of GWAS identified variants are located in coding sequences 9, which makes functional characterization complex. Several studies have identified that a substantial proportion of the disease-associated variants lie within regulatory regions, including enhancer elements 9-11. Enhancers serve as binding sites for transcription factors and co-regulators that assist in DNA looping and recruitment of the transcriptional machinery to targeted promoters. With an estimated 50,000 to 100,000 active enhancers in any given mammalian cell type 12, enhancers are thought to account for the complexity of gene regulation. Enhancers are characterized by the presence of histone modifications including monomethylation of histone 3 lysine 4 (H3K4me1) and acetylation of histone 3 lysine 27 (H3K27ac) 13-15. Thus, by determining the genome-wide distribution of these histone marks, it is possible to generate genome-wide maps of active enhancers (the enhancerome) in a specific tissue.
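As a concrete illustration of these enhancer definitions, the following R/Bioconductor sketch classifies regions as active or primed enhancers by overlapping histone-mark peak sets. This is a minimal sketch, not the study's pipeline: the GRanges coordinates are invented toy values standing in for real peak calls.

## Minimal sketch (R/Bioconductor): primed vs. active enhancers from
## histone-mark peaks. All coordinates below are toy placeholders.
library(GenomicRanges)

k4me1 <- GRanges("chr7", IRanges(c(1000, 5000, 9000), width = 1500))  # H3K4me1 peaks
k27ac <- GRanges("chr7", IRanges(c(1100, 9200), width = 1000))        # H3K27ac peaks
k4me3_promoters <- GRanges("chr7", IRanges(9000, width = 2000))       # active promoter regions

## Active enhancers: H3K27ac overlapping H3K4me1, outside active promoters
active <- subsetByOverlaps(k27ac, k4me1)
active <- subsetByOverlaps(active, k4me3_promoters, invert = TRUE)

## Primed enhancers: H3K4me1 without H3K27ac, outside active promoters
primed <- subsetByOverlaps(k4me1, k27ac, invert = TRUE)
primed <- subsetByOverlaps(primed, k4me3_promoters, invert = TRUE)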
Mapping the enhancerome in various cell types and during embryonic stem cell differentiation has demonstrated that enhancer activation is highly cell-type specific and dynamic 16,17, and several studies have proposed that impaired enhancer activation could be at the origin of disease 18-21. Besides interacting with nearby promoters, enhancers also engage in long-range interactions. Indeed, it is estimated that approximately 35-40% of all promoter-enhancer interactions are intervened by at least one gene 22, which makes exact enhancer-target prediction challenging. Long-range enhancer interactions can be identified by chromosome conformation capture methods 23,24. In the present study, we aimed to identify target genes of GWAS SNPs in human skeletal muscle by using cultured myotubes subjected to metabolic stress by either palmitate or TNFα exposure. Elevation of plasma levels of free fatty acids and proinflammatory cytokines associates with increasing adiposity 25 and represents an important link between obesity, skeletal muscle IR, and T2D. By RNA profiling and genome-wide mapping of enhancer elements in myotubes, we found that palmitate or TNFα treatment led to massive changes in gene transcription, as well as alterations in the activity of enhancers. Moreover, we showed that enhancers regulated by palmitate or TNFα exposure overlapped SNPs from GWAS of BMI, waist-to-hip ratio (WHR), IR or T2D. Furthermore, by mapping global promoter-enhancer interactions by chromatin conformation analysis, we directly coupled these enhancers to promoters, where we found a concurrent change in gene transcription by the respective treatments. Thus, we established physical links between numerous GWAS SNPs and muscle-expressed genes and provided insight into the association between the identified genes and metabolic function in vivo.

Results

Transcriptomic profiling of human skeletal muscle cells. To study concurrent changes in gene transcription, enhancer activities and chromatin conformation, we used primary human skeletal muscle cells differentiated into myotubes that were subjected to metabolic stress by treatment with either palmitate or TNFα (Supplementary Fig. 1A). As previously reported 26-29, both treatments lowered insulin sensitivity, as confirmed by decreased AKT Ser-473 phosphorylation in response to insulin stimulation (Supplementary Fig. 1B-E). First, we performed transcriptomic analysis by RNA-sequencing (RNA-seq). Multidimensional Scaling (MDS) plots showed a clear sample separation based on palmitate or TNFα treatment (Fig. 1a). In total, we detected expression of 14,402 genes in skeletal muscle cells, of which 1542 were regulated by palmitate treatment (621 downregulated and 921 upregulated). Gene ontology (GO) analysis revealed terms specifically upregulated (Fig. 1d) by palmitate exposure, whereas terms related to nucleosome assembly were specifically downregulated (Fig. 1e). GO analysis of TNFα upregulated genes returned several terms related to immune signaling (Fig. 1f), whereas downregulated genes were related to protein targeting to the endoplasmic reticulum (ER), insulin-like growth factor signaling and muscle filament sliding (Fig. 1g). Interestingly, both treatments seemed to significantly upregulate genes involved in inflammation (Supplementary Data 2 and Fig. 1h), and to downregulate genes related to muscle contraction (Supplementary Data 2 and Fig. 1i), both of which are processes related to skeletal muscle dysfunction and insulin resistance.
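The differential testing behind these gene counts is specified in the Methods: edgeR with a ~0 + group + block design, quasi-likelihood F-tests, and limma's camera for GO enrichment. Below is a minimal sketch of that workflow; the counts, sample layout and gene sets are simulated placeholders, not the study's data.

## Minimal edgeR sketch: quasi-likelihood tests on a ~0 + group + block design,
## plus a camera() gene-set test as used for the GO analysis. All simulated.
library(edgeR)  # attaches limma (makeContrasts, camera)

set.seed(1)
counts <- matrix(rnbinom(1000 * 4, mu = 50, size = 10), nrow = 1000,
                 dimnames = list(paste0("gene", 1:1000), paste0("s", 1:4)))
group  <- factor(c("Control", "Control", "Palmitate", "Palmitate"))
block  <- factor(c(1, 2, 1, 2))        # replicate blocking factor
design <- model.matrix(~ 0 + group + block)

y   <- estimateDisp(calcNormFactors(DGEList(counts)), design)
fit <- glmQLFit(y, design)
contr <- makeContrasts(groupPalmitate - groupControl, levels = design)
qlf <- glmQLFTest(fit, contrast = contr)
summary(decideTests(qlf))              # up/down/not-significant counts

## Competitive gene-set test (toy index sets standing in for GO terms)
go_sets <- list(inflammation = 1:30, muscle_contraction = 31:80)
camera(y, index = go_sets, design = design, contrast = contr)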
Thus, our transcriptomic analyses of human muscle myotubes reveal thousands of target genes, of which many are related to metabolic dysfunction.

The dynamic enhancerome of skeletal muscle cells. Through chromatin immunoprecipitation followed by sequencing (ChIP-seq), we mapped the distribution of the enhancer-associated histone H3 modifications, H3K4me1 and H3K27ac, in the muscle myotubes treated with TNFα or palmitate. Genome-wide, we identified 107,405 and 80,388 significant peaks of H3K4me1 or H3K27ac, respectively (Fig. 2a). These were mostly located in non-coding DNA, such as introns and intergenic regions, as well as in promoters (Supplementary Fig. 3). In order to identify enhancers, we subtracted active promoter regions (defined by the promoter-associated H3K4me3 mark). We found that most (95.5%) of the non-promoter associated H3K27ac peaks overlapped a H3K4me1 peak, whereas only 36.9% of the H3K4me1 peaks overlapped H3K27ac (Fig. 2a). These findings support the notion that enhancers can be primed (marked by only H3K4me1) or active (marked by both H3K4me1 and H3K27ac) 16,17. MDS plots of non-promoter associated H3K27ac and H3K4me1 ChIP-seq data demonstrated a clear treatment-based separation of samples for H3K27ac (Fig. 2b), whereas this was less obvious for H3K4me1 (Fig. 2c), underlining the assumption that especially H3K27ac undergoes dynamic regulation in response to external stimuli and determines enhancer activity 16,30,31. Therefore, to identify enhancers that were differentially activated after palmitate or TNFα treatment, we searched for peaks within the 62,866 identified active enhancers (covered by both H3K4me1 and H3K27ac) that showed significant changes in H3K27ac levels. This analysis returned 2243 enhancers with altered activity after palmitate treatment (FDR < 0.01; 1190 with decreased activity and 1053 with increased activity). Examples of enhancers with a strong increase in H3K27ac after palmitate treatment included elements located 10 kb upstream of the PDK4 promoter and 9 kb upstream of ANGPTL4 (Fig. 2f, g), two genes known to play a role in fatty acid metabolism. Moreover, some enhancers strongly regulated by TNFα were located close to cytokine genes, exemplified by enhancers located 21 kb downstream of CCL11 and 17 kb upstream of CXCL8 (Fig. 2h, i). The changes in H3K27ac were validated independently by ChIP-qPCR (Supplementary Fig. 4A), and ChIP-qPCR confirmed the presence of H3K4me1 at these sites (Supplementary Fig. 4B, C). None of the validated enhancer regions showed enrichment of the promoter-associated H3K4me3 mark (Supplementary Fig. 4D), ruling out that these genomic regions act as alternative promoters. Consistent with increased enhancer activity, expression of PDK4, ANGPTL4, CCL11, and CXCL8 was markedly upregulated after palmitate or TNFα treatment (Fig. 2j, k), supporting a regulatory role of these enhancers on expression of their nearby promoters. To further validate the cis-regulatory activity of the identified regions, we cloned the PDK4-10 kb and the CXCL8-17 kb enhancers into a luciferase reporter vector. When transfected into muscle cells, luciferase activity was markedly increased in response to palmitate or TNFα treatment (Supplementary Fig. 5), confirming a regulation of enhancer activity by these treatments. Collectively, our results identify thousands of dynamic enhancer activities in human skeletal muscle cells after treatment with palmitate or TNFα.

Capture Hi-C identifies enhancer-promoter interactions.
Besides interacting with nearby promoters, enhancers can also engage in long-range interactions, which makes enhancer-target prediction challenging. To overcome this, we performed genome-wide mapping of enhancer-promoter interactions in skeletal muscle cells by the use of high-resolution Promoter Capture Hi-C 22,24. First, we tested if treatment of myotubes with palmitate or TNFα was associated with a dynamic reorganization of promoter-enhancer interactions. Hi-C libraries were generated from skeletal muscle myotubes followed by hybridization-based capture of 21,841 human promoters, using a collection of 37,608 biotinylated RNA baits (approximately two baits per promoter) previously designed and tested by others 22. By sequencing the captured ligation fragments and testing for a difference in mapped Hi-C interactions by palmitate or TNFα treatment, we did not detect any significant changes (Supplementary Fig. 6A-B), suggesting that acute exposure to these treatments does not cause major changes to chromatin structure. This agrees with another study showing that TNFα-responsive enhancers are already in contact with their target promoters before transient activation or repression of enhancer activity by TNFα treatment in human fibroblasts 32. Next, we pooled all Promoter Capture Hi-C conditions in order to obtain a general chromatin conformation capture of myotubes. This identified 36,809 significant promoter-enhancer interactions (Fig. 3a and Supplementary Data 4). Interactions covered 47% of tested promoters and 51% of identified enhancer regions (Fig. 3a) and largely spanned the entire genome (Supplementary Fig. 7). Genomic distances of identified promoter-enhancer interactions ranged up to 6.2 Mb, with a median distance of 93.8 kb (Fig. 3b), and each of the captured promoters was on average connected to four enhancer regions (Fig. 3c). To validate if our Promoter Capture Hi-C data identified functional enhancer-promoter interactions, i.e., where a dynamic change in enhancer activity also associates with a concurrent change in promoter transcription, we divided the promoters captured in our chromatin interaction data into three groups (Fig. 3d): promoters connected to enhancers that did not change H3K27ac in response to palmitate or TNFα treatment ("None"), and promoters connected to enhancers that either gained H3K27ac ("Up") or lost H3K27ac ("Down"). Empirical cumulative distribution function (ECDF) plots of gene expression changes (RNA-seq logFC values) in the different groups, compared with a Kolmogorov-Smirnov test, revealed that promoters connected to enhancers with gained activity have higher logFC values than the "None" group (Fig. 3e, f), whereas promoters connected to enhancers with decreased activity have significantly lower logFC values for both palmitate and TNFα treatments (Fig. 3g, h), supporting a regulatory role of the connected enhancers.
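The group comparison above reduces to comparing empirical distribution functions of logFC values. A minimal base-R sketch follows; the logFC vectors are simulated stand-ins, with the "Up" group shifted to mimic gained enhancer activity.

## Minimal sketch: ECDF plots and a two-sided Kolmogorov-Smirnov test on
## simulated logFC values for two promoter groups.
set.seed(3)
logfc_none <- rnorm(2000, mean = 0, sd = 0.5)   # promoters linked to unchanged enhancers
logfc_up   <- rnorm(300, mean = 0.4, sd = 0.5)  # promoters linked to gained-H3K27ac enhancers

plot(ecdf(logfc_none), main = "ECDF of RNA-seq logFC", xlab = "logFC")
plot(ecdf(logfc_up), add = TRUE, col = "red")

ks.test(logfc_up, logfc_none)  # tests whether the two distributions differ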
Taken together, we have generated an enhancer-promoter connectivity map of skeletal muscle myotubes and demonstrated a general capture of promoter-enhancer pairs with concurrent changes in activity by palmitate or TNFα treatment.

Chromatin interaction data predict enhancer target genes. Given that the vast majority of disease-associated variants are predicted to be located in regulatory regions 9-11, our data represent an opportunity to identify target genes of GWAS SNPs in skeletal muscle cells by combining our enhancer mapping with information on chromatin conformation and gene transcription. For this, we used four sets of GWAS SNPs associated with T2D 6, IR 33-38, BMI 8 or WHR 7, as well as tagged SNPs in high linkage disequilibrium (LD, r² > 0.8) (Fig. 4a). After overlapping the variants with enhancer regions regulated by either palmitate or TNFα treatment, we identified 58 palmitate-regulated enhancers and 522 TNFα-regulated enhancers each harboring one or more GWAS SNPs (Fig. 4b). Next, we selected enhancers that were both captured by our Promoter Capture Hi-C analysis and linked to genes differentially expressed after palmitate or TNFα treatment. When only considering enhancer-gene pairs where enhancer activity and gene expression were regulated in the same direction (i.e., either upregulated or downregulated), our analysis retrieved 11 palmitate-regulated and 124 TNFα-regulated enhancers interacting with 11 and 99 predicted target gene promoters, respectively (Fig. 4b and Supplementary Data 5). The predicted target genes included several known players in metabolism such as IRS1, IGFBP3, PPARG, SOCS2, and LEPR, providing a link between disease-associated SNPs and the ability of skeletal muscle to adapt to metabolic and inflammatory stress. To further narrow down the list of potential gene targets, we investigated the association between genotype of the enhancer-overlapping GWAS SNPs and the basal expression of each of their target genes in skeletal muscle biopsies of 139 individuals (by expression quantitative trait locus (eQTL) analysis). This approach identified 13 significant skeletal muscle eGenes (CEP68, GAB2, LAMB1, MACF1, EIF6, PABPC4, BTBD1, FILIP1L, TCEA3, NRP1, ZHX3, TBX15, and TNFAIP8) for 61 GWAS SNPs, located within 20 distinct enhancer regions (Fig. 4c, d and Supplementary Data 6). Thus, by overlapping our genomic datasets, we have identified numerous putative target genes of metabolic GWAS SNPs, which may play a functional role under lipid toxicity or in response to proinflammatory stimuli. Moreover, for 13 genes, we demonstrate a significant association between GWAS SNP genotype and basal gene expression levels in human skeletal muscle.

Identified target genes are linked to energy metabolism. In order to understand the role of the identified putative GWAS-SNP target genes in whole-body metabolism in vivo, we analyzed the association between 48 metabolic traits in the BXD murine genetic reference population fed a control diet (CD) or high fat diet (HFD) 39-41 (Supplementary Data 7), and expression levels of the 13 identified eGenes in skeletal muscle (Supplementary Data 8), adipose tissue (Supplementary Data 9) and liver (Supplementary Data 10).
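At its core, this trait association analysis is a set of Spearman correlations with FDR control (the figure legends below report rho values and FDR thresholds). A minimal sketch on simulated data follows; strain numbers and trait values are invented for illustration.

## Minimal sketch: Spearman correlation of one gene's expression against 48
## simulated metabolic traits across strains, with FDR adjustment.
set.seed(4)
n_strains <- 40
expr   <- rnorm(n_strains)                        # e.g. muscle Tbx15 expression per strain
traits <- matrix(rnorm(n_strains * 48), ncol = 48,
                 dimnames = list(NULL, paste0("trait", 1:48)))

res <- apply(traits, 2, function(tr) {
  ct <- cor.test(expr, tr, method = "spearman")
  c(rho = unname(ct$estimate), p = ct$p.value)
})
fdr <- p.adjust(res["p", ], method = "fdr")       # one FDR per trait
head(sort(fdr))                                   # strongest associations first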
Strikingly, expression of 12 out of the 13 genes (Cep68, Gab2, Lamb1, Macf1, Eif6, Btbd1, Filip1l, Tcea3, Nrp1, Zhx3, Tbx15, and Tnfaip8) showed associations with metabolic measures, such as blood glucose levels during glucose tolerance tests (GTTs), plasma lipid levels, body composition, and exercise performance, in at least one of the tested tissues (Table 1). For some target genes, metabolic measurements were specifically associated with expression in skeletal muscle. For example, expression of Tbx15 (Fig. 5a), which we found linked to SNPs associated with WHR in humans, was positively associated with lean body mass (Fig. 5b) and VO2 max (Fig. 5c), as well as negatively associated with total body fat mass (Fig. 5d) and blood glucose levels during an oral GTT (Fig. 5e) in the BXD mice. Interestingly, the expression of Cep68, which we find linked to SNPs associated with T2D, was correlated with blood glucose levels during GTTs in HFD-fed mice in both muscle and liver (Fig. 5f). More specifically, Cep68 expression was negatively correlated with blood glucose levels during an intraperitoneal GTT in skeletal muscle of both male (Fig. 5g) and female (Fig. 5h) mice, and an oral GTT in liver tissue (Fig. 5i). Moreover, the association of Cep68 with body fat mass and lean mass percentages in adipose tissue (Fig. 5j) suggests that CEP68 has a role in T2D through dysregulated expression in multiple organs. Collectively, these data demonstrate that the expression of identified putative GWAS SNP targets correlates with metabolic measures in mice, and suggest a role for these genes in the regulation of energy metabolism in vivo.

Long-range interactions connect WHR SNPs to EIF6 expression. For some candidate genes identified as regulated by non-coding GWAS SNPs, including EIF6, the gene was not located in close vicinity to the differentially activated enhancer region, but connected through long-range chromatin interactions. The SNPs that we found linked to EIF6 are located within the UQCC1 locus and associate with WHR (Fig. 6a). We identified four enhancer regions, UQCC1 + 100 kb, UQCC1 + 26 kb, UQCC1 + 16 kb, and UQCC1 + 13 kb, that were all regulated by TNFα (Fig. 6b) and captured by our Promoter Capture Hi-C data. The enhancer regions overlapped several highly linked WHR-associated SNPs (Fig. 6a). From our chromatin interaction data, we found all enhancers to interact with the promoter of EIF6 (Fig. 6a). Moreover, the UQCC1 + 100 kb enhancer also interacted with MMP24 and EDEM2, whereas the UQCC1 + 26 kb, UQCC1 + 16 kb, and UQCC1 + 13 kb enhancer regions looped to the GDF5/CEP250 shared promoter (Fig. 6a). Out of these genes, MMP24, EIF6 and GDF5 remained candidates to be under the regulation of the enhancers, since the expression of these genes was concurrently decreased by TNFα treatment (Fig. 6c). Importantly, the UQCC1 promoter was not found linked to the enhancers, nor did UQCC1 change expression by TNFα. While GDF5 expression was below the detection limit in skeletal muscle and could not be analyzed for eQTLs, we found associations of several LD-linked WHR-associated SNPs, including rs878639, with the expression of EIF6 (Supplementary Data 6 and Fig. 6d), but not with MMP24 (Fig. 6e). In the case of rs878639, the major allele associates with an increased WHR, which establishes a link between lower EIF6 expression and an unhealthy body fat distribution.
Consistently, we found that Eif6 expression in muscle from BXD mice positively associates with running distance, suggesting better aerobic capacity in animals with higher skeletal muscle Eif6 expression. To further validate our findings, we used siRNAs (siEif6#1 and siEif6#2) to knock down Eif6 expression in skeletal muscle cells (Fig. 6i and Supplementary Fig. 8A). We assessed mitochondrial respiration by measuring oxygen consumption rate (OCR) at basal state or during FCCP-induced uncoupling (Fig. 6j and Supplementary Fig. 8B) and found that decreased Eif6 expression resulted in lower OCR (Fig. 6k), especially during maximal FCCP-induced respiration (Fig. 6l and Supplementary Fig. 8C). Moreover, after differentiating C2C12 cells into myotubes, we found that Eif6 knockdown (Supplementary Fig. 9A) led to reduced protein levels of the mitochondrial oxidative phosphorylation complex II (Fig. 6m and Supplementary Fig. 9B), whereas we did not detect any changes in insulin-stimulated glucose uptake (Supplementary Fig. 9C), glycogen synthesis (Supplementary Fig. 9D), or AKT phosphorylation (Supplementary Fig. 9E, F). Thus, long-distance interaction networks suggest that EIF6 is regulated by genetic variants associated with body fat distribution. Accordingly, we identified correlations between lower skeletal muscle Eif6 expression and reduced exercise performance, and further provide evidence for a role of EIF6 in the regulation of mitochondrial function in skeletal muscle.

Discussion

Here, we mapped the transcriptome and enhancerome of human skeletal muscle cells subjected to lipid-induced toxicity or a proinflammatory cytokine. We demonstrate a profound transcriptional reprogramming with thousands of promoter and enhancer regions showing altered activity. Integrating these data with GWAS of T2D, IR, BMI and WHR measures, as well as genome-wide chromatin interaction studies, allowed us to detect concurrent changes in the activity of enhancers encompassing GWAS SNPs and transcription from a connected promoter, thereby establishing links between numerous non-coding disease-associated SNPs and gene targets. Using the murine BXD genetic reference population, we provide further insight into the role of the identified target genes in the regulation of metabolic phenotypes like body composition, glucose response and exercise performance in vivo. In particular, we provide evidence that one of our identified targets, Eif6, controls mitochondrial respiration in skeletal muscle cells. Our cell system using chronic exposure to palmitate or TNFα in human primary muscle cells allowed investigation into the distinct mechanisms by which the metabolic function of the skeletal muscle cell is impaired. Palmitate induces insulin resistance at the level of AKT phosphorylation 42, impairs mitochondrial function 43, lowers expression of the master regulator of mitochondrial function peroxisome proliferator-activated receptor-gamma coactivator (PGC)-1α 44, and induces ER stress 45. Interestingly, incubation of skeletal muscle cells with palmitate induces TNFα secretion by the muscle cell, suggesting that while saturated fatty acids and TNFα appear to activate distinct intracellular pathways, these pathways may share common nodes 46. Saturated free fatty acid and TNFα treatment both alter upstream insulin signaling, but TNFα treatment does not alter insulin-stimulated glucose uptake in muscle cells whereas palmitate does 42,47.
In vivo, however, TNFα infusion is associated with both lower activation of the upstream insulin-signaling pathway and impaired glucose transport 48. Even though TNFα exposure is not associated with lower fatty acid oxidation in muscle ex vivo 49, we identified EIF6 as a gene regulated by TNFα exposure and show that EIF6 plays a role in fatty-acid oxidation. The discrepancy between the effects of palmitate and TNFα on primary skeletal muscle cell cultures compared to in vivo may be due to specific tissue-culture conditions, different extracellular milieus or the influence of systemic factors. While the activity of enhancers and promoters was markedly changed after palmitate or TNFα exposure, promoter-enhancer interactions did not appear to be affected. These findings are consistent with a previous study showing that enhancer-promoter interactions are unchanged in fibroblasts treated with TNFα 32. We cannot rule out, however, that palmitate or TNFα exposure could remodel chromatin in myotubes, as low sequencing depth or low power may have limited our capacity to detect subtle changes. From previous studies it seems clear that dynamic remodeling of promoter-enhancer interactions occurs during cellular differentiation, particularly at cell type-specific enhancers 23,50-53. Interestingly, the discrepancy between activation of cell type-specific enhancers and enhancers induced by treatments such as TNFα seems to correlate with H3K4me1 levels. Indeed, treatment-induced enhancers appear to exhibit largely unchanged levels of H3K4me1, despite a quick induction of H3K27ac, whereas cell type-specific enhancers display highly variable H3K4me1 levels 32. This is consistent with our data, where palmitate and TNFα induce large changes in H3K27ac levels at enhancers but only minor changes in H3K4me1. Still, certain chromatin interactions were recently described to be variable in a circadian fashion 54, suggesting that promoter-enhancer interactions can indeed be dynamic even within a defined cell type. Our mapping of the chromatin interactome of human myotubes identified 36,809 specific enhancer-promoter interactions. Integrating these data with RNA transcription and enhancer activity analyses allowed us to specifically capture enhancer-promoter interactions where (1) the enhancer overlaps one or more SNPs associated with T2D, IR, BMI or WHR and (2) the enhancer activity and gene expression were regulated in the same direction by either palmitate or TNFα exposure. Our analysis retrieved more than 100 predicted GWAS target genes, which included several known players in metabolism such as IRS1, IGFBP3, PPARG, SOCS2, and LEPR. However, our eQTL analysis did not detect an association between genotype and gene expression for most of these genes. We therefore speculate that GWAS SNPs may be functionally linked with gene expression in situations of cellular stress encountered in metabolic disease, such as increased plasma levels of fatty acids or proinflammatory cytokines. For the genes identified as significant eGenes in our eQTL analysis, we analyzed the association between their expression levels in skeletal muscle, adipose, or liver tissue and measures of 48 metabolic traits in the BXD murine genetic reference population. We found that 12 out of 13 genes (Cep68, Gab2, Lamb1, Macf1, Eif6, Btbd1, Filip1l, Tcea3, Nrp1, Zhx3, Tbx15, and Tnfaip8) exhibited marked associations with metabolic phenotypes in one or more of the tested tissues.
For some targets, including Tbx15, the associations appeared specific for skeletal muscle expression and were not detected in either adipose or liver tissue, suggesting a muscle-specific role of Tbx15. This is consistent with the earlier finding that Tbx15 regulates muscle metabolism in mice and that Tbx15 knockout animals are resistant to diet-induced obesity and impaired glucose tolerance 55. For other targets, such as Cep68, we identified associations in all of the tested tissues, revealing the metabolic role of these genes in multiple organs.

Fig. 5 Correlating GWAS SNP-target genes with metabolic phenotypes in BXD mice strains. a Heatmap representation of rho-values from correlations between 48 metabolic measurements in CD or HFD fed mice and Tbx15 expression in skeletal muscle, adipose or liver tissue. The p-values from the 48 correlations from each diet and tissue were adjusted using false discovery rate correction (FDR) (*FDR < 0.2, **FDR < 0.1, ***FDR < 0.05). b-e Skeletal muscle expression of Tbx15 is positively correlated with lean mass (% of body weight) (b), negatively correlated with fat mass (% of body weight) (c), positively correlated with VO2 max (d) and negatively correlated with glycemia during an oral GTT (OGTT) (e). Statistics was performed using Spearman's rank correlation analysis. f Heatmap representation of rho-values from correlations between 48 metabolic measures in CD or HFD fed mice and Cep68 expression in skeletal muscle, adipose or liver tissue (*FDR < 0.2, **FDR < 0.1, ***FDR < 0.05). g-j Cep68 is negatively correlated with glycemia during an intraperitoneal GTT (IGTT) in both male (g) and female (h) mice in skeletal muscle, as well as an oral GTT (OGTT) in liver (i) and fat mass (% of body weight) in adipose tissue (j). Statistics was performed using Spearman's rank correlation analysis.

Linking gene expression with metabolic phenotypes represents a valuable tool to gain insight into gene function, although it does not imply causality. Circulating leptin levels, for instance, are positively associated with fat mass 56, but loss-of-function mutations of LEP are associated with obesity 57. In our study, we observed a similar phenomenon where the CEP68 T2D risk variants are associated with increased CEP68 expression, but Cep68 expression is negatively associated with blood glucose levels during GTTs in mice. While further investigations are warranted to establish causal relationships and the mechanism by which CEP68 may regulate whole-body metabolism, we speculate that dysregulated expression of CEP68 is involved in the pathogenesis of T2D. For some genes that we identified as potential targets of metabolic GWAS SNPs, the SNP-enhancer locus was not located in close proximity to the predicted target gene, but engaged in long-range DNA looping formations. For example, we identified interactions between the promoter of the translation initiation factor EIF6 and several enhancers located within the UQCC1 gene, each spanning SNPs associated with WHR in humans. We found both enhancers and EIF6 expression were downregulated by TNFα and we detected significant eQTLs for EIF6 expression with SNPs of all loci. In the BXD mice, Eif6 muscle expression was associated with increased running distance, as well as with basal and maximal VO2 uptake after training. These findings are consistent with a study linking EIF6 to the regulation of energy metabolism during endurance training in humans and showing reduced exercise performance in Eif6 haploinsufficient mice 58.
Moreover, hypermethylation of the EIF6 promoter is linked to childhood obesity 59. In support of this, we demonstrate that Eif6 knockdown in murine muscle cells causes lower mitochondrial respiration and reduced levels of the mitochondrial oxidative complex II. The identified link between EIF6 and modulation of WHR is consistent with data demonstrating that genetic variants within mitochondrial genes are associated with metabolic measures including WHR 60. Notably, we did not detect a physical link between the UQCC1 intronic enhancers and the UQCC1 promoter, nor did UQCC1 change expression by TNFα. A recent study has shown that human UQCC1 coding variants are associated with WHR 61. Interestingly, eQTL analysis indicates that these variants associate not only with the expression levels of UQCC1, but also EIF6 61, suggesting that several genes within this locus could contribute to the modulation of WHR in humans. Thus, our data demonstrate that EIF6 expression is regulated by TNFα and suggest a role for muscle-specific expression of Eif6/EIF6 in the regulation of mitochondrial function and exercise performance in mice, as well as in WHR in humans.

In conclusion, our study identified skeletal muscle enhancer elements that are dysregulated in the context of lipid toxicity or under exposure to the proinflammatory cytokine TNFα. We identify hundreds of dysregulated enhancers which overlap with genetic loci previously implicated in metabolic disease and, using a chromatin conformation assay, we predict the corresponding gene targets. We identify genes with known roles in metabolism, as well as targets that have not previously been linked to human metabolic disease, and demonstrate their association with metabolic phenotypes in mice. Given the influence of lifestyle and genetic factors in the development of obesity and T2D, and the prominent contribution of skeletal muscle to energy metabolism in humans, our investigations constitute a resource for identifying genes participating in the progression of metabolic disorders.

Methods

Insulin stimulation experiments for human skeletal muscle cells and C2C12 cells were performed by serum depriving differentiated myotubes for 4 h before stimulating with either 10 or 100 nmol/L insulin for 5 min.

Measurement of oxygen consumption rate (OCR). Real-time measurements of OCR were performed using a Seahorse XFe96 Extracellular Flux Analyzer (Agilent Technologies). C2C12 myoblasts were reverse transfected by seeding 5000 cells per well in Seahorse XFe96 Cell Culture Microplates (Agilent Technologies) together with transfection mix containing siScr or siRNAs against Eif6 (siEif6#1, n = 6 biological replicates, or siEif6#2, n = 8 biological replicates). Cells were assayed 48 h after transfection using the Seahorse XF Cell Mito Stress Test kit (Agilent Technologies). OCR was measured under basal conditions and after injection of final concentrations of 1 µM oligomycin, 2.3 µM FCCP, or 2.55 µM antimycin A combined with 1 µM rotenone. The measured OCR values were normalized to protein levels by lysing the cells and performing a BCA protein assay (Pierce BCA Protein Assay Kit, Thermo Scientific).

Glycogen synthesis. Differentiated myotubes in 12-well plates were serum-starved for 4 h, followed by a 1 h incubation in KRP buffer (pH 7.3) containing 5 mM glucose and 2 µCi/ml D-[U-14C] glucose (Perkin Elmer), in the absence or presence of 100 nM insulin. Cells were washed 3 times in cold PBS, harvested in 200 µl of 1 M NaOH and heated to 70°C for 15 min.
Ten microliters were taken for the determination of protein (BCA), and to the remainder, 25 µl saturated Na2SO4 and 900 µl ice-cold ethanol were added, vortexed and frozen for 30 min at −80°C, followed by a centrifugation step (10 min, 16,000×g, 4°C). Pellets were resuspended in 100 µl H2O, followed by addition of 1 ml ice-cold ethanol and re-centrifugation. The final pellet was resuspended in 100 µl H2O and radioactivity was determined by liquid scintillation counting after the addition of Ultima Gold LSC. Values were normalized to protein levels by performing a BCA protein assay (Pierce BCA Protein Assay Kit, Thermo Scientific).

RNA purification. Total RNA was purified from human skeletal myotubes (control, palmitate or TNFα treated) using the AllPrep DNA/RNA/miRNA Universal Kit (Qiagen). For quantification, total RNA was reverse-transcribed using the iScript™ cDNA Synthesis Kit (Bio-Rad), according to the manufacturer's instructions, and analyzed by real-time PCR using Brilliant III Ultra-fast SYBR Green QPCR Master Mix (AH Diagnostic) and a C1000 Thermal cycler (Bio-Rad). mRNA primer sequences are listed in Supplementary Data 12.

RNA-sequencing. One microgram of total RNA was depleted of rRNA and subsequently used to generate libraries using the TruSeq standard total RNA with Ribo-Zero Gold kit (Illumina). The PCR cycle number for each library amplification was optimized by running 10% of the library DNA in a real-time PCR reaction using Brilliant III Ultra-fast SYBR Green QPCR Master Mix (AH Diagnostic) on a C1000 Thermal cycler (Bio-Rad) (Supplementary Data 11). Libraries were sequenced on a NextSeq500 system (Illumina) using the NextSeq 500/550 High Output v2 kit (75 cycles). An overview of all RNA-seq experiments is given in Supplementary Data 11. For bioinformatic analysis of RNA-seq data, reads were aligned to the hg38 GENCODE Comprehensive gene annotations 62 version 27 using STAR v2.5.3a 63. Read summation onto genes was performed by featureCounts v1.5.3 64. Differential expression testing was performed with edgeR v3.14.0 65 using a model of the form ~0 + group + block, where group was a factor containing information on both passage and treatment, and block encoded the two replicates. Differential expression was found by testing, e.g., (P5_Palmitate + P6_Palmitate)/2 - (P5_Control + P6_Control)/2 using the quasi-likelihood tests in edgeR. GO enrichments were found using the camera function 66, which takes both inter-gene correlations and the distribution of log fold changes in the dataset into consideration and is part of the edgeR package. Only gene ontologies containing between 10 and 500 genes were investigated. Initial visualization of samples was performed by multi-dimensional scaling (MDS) plots, which are similar to PCA plots but use average log fold changes of the 500 most divergent interactions.

ChIP-sequencing. Skeletal muscle myotubes were treated with palmitate or TNFα (n = 4 biological replicates using cells from two different passages), and cross-linked in 1% formaldehyde in PBS for 10 min at room temperature followed by quenching with glycine (final concentration of 0.125 M) to stop the cross-linking reaction. Cells were washed with PBS and harvested in 1 ml SDS Buffer (50 mM Tris-HCl (pH 8), 100 mM NaCl, 5 mM EDTA (pH 8.0), 0.2% NaN3, 0.5% SDS, 0.5 mM phenylmethylsulfonyl fluoride) and centrifuged for 6 min at 250 × g.
The pelleted nuclei were lysed in 1.5 ml ice-cold IP Buffer (67 mM Tris-HCl (pH 8), 100 mM NaCl, 5 mM EDTA (pH 8.0), 0.2% NaN3, 0.33% SDS, 1.67% Triton X-100, 0.5 mM phenylmethylsulfonyl fluoride) and sonicated (Diagenode Bioruptor) to an average length of 200-500 bp (between 15 and 20 cycles, high intensity). Before starting the ChIP experiment, chromatin was cleared by centrifugation for 30 min at 20,000 × g. For each ChIP, 2-10 μg DNA was combined with 2.5 μg antibody and incubated with rotation at 4°C for 16 h. The following antibodies were used for ChIP: H3K27ac (Ab4729), H3K4me1 (Ab8895), H3K4me3 (CST-9751S), H3 (Ab1791). Immunoprecipitation was performed by incubation with Protein G Sepharose beads (GE Healthcare) for 4 h, followed by three washes with low-salt buffer (20 mM Tris-HCl (pH 8.0), 2 mM EDTA (pH 8.0), 1% Triton X-100, 0.1% SDS, 150 mM NaCl) and two washes with high-salt buffer (20 mM Tris-HCl (pH 8.0), 2 mM EDTA (pH 8.0), 1% Triton X-100, 0.1% SDS, 500 mM NaCl). Chromatin was de-cross-linked in 120 μl 1% SDS and 0.1 M NaHCO3 for 6 h at 65°C, and DNA was subsequently purified using the Qiagen MinElute PCR purification kit. For library preparation and sequencing, 3-10 ng of immunoprecipitated DNA was used to generate adapter-ligated DNA libraries using the NEBNext® Ultra DNA library kit for Illumina (New England Biolabs, E7370L) and indexed multiplex primers for Illumina sequencing (New England Biolabs, E7335). The PCR cycle number for each library amplification was optimized by running 10% of the library DNA in a real-time PCR reaction using Brilliant III Ultra-fast SYBR Green QPCR Master Mix (AH Diagnostic) and a C1000 Thermal cycler (Bio-Rad) (Supplementary Data 11). DNA libraries were sequenced on a HiSeq2000 by 50-bp single-end sequencing at the National High-Throughput Sequencing Centre (University of Copenhagen, Denmark). An overview of all ChIP-seq experiments is given in Supplementary Data 11. ChIP-qPCR validations were performed by ChIP followed by real-time PCR using Brilliant III Ultra-fast SYBR Green QPCR Master Mix (AH Diagnostic) and a C1000 Thermal cycler (Bio-Rad). All reactions were analyzed in quadruplicate. ChIP-qPCR primer sequences are listed in Supplementary Data 12. For bioinformatic analysis of ChIP-seq data, sequenced reads were aligned using the Subread aligner v1.5.0 67 against a full index of the main chromosomes of the hg38 reference genome, treating reads as genomic DNA and keeping only uniquely mapped reads. Duplicate reads were removed using Picard tools (http://broadinstitute.github.io/picard). Peaks were called using MACS2 v2.1.0.20150731 68 with input control. H3K4me1 peaks were called as broad peaks, while H3K27ac peaks were called as narrow peaks. The quality of individual samples was assessed by testing whether fragment lengths could be estimated and whether more than 200,000 peaks could be called with a P-value cutoff of 0.05. These individual peak lists were only used to identify samples where the IP step had failed and were not used in the downstream analysis. All samples passed these two tests. The consensus peak list used in the analysis was generated following the ENCODE 2012 IDR pipeline. For each histone modification, a consensus peak set was generated as follows. All samples were pooled and the pooled reads were shuffled and split in two (pseudo replicates). Initial peak lists were called as above on each of these three samples (pool and two pseudo replicates), with a P-value cutoff of 0.05, and sorted by P-value.
Finally, a consensus peak list was generated using the irreproducible discovery rate (IDR) software v2.0.2 69 with the pseudo-replicate peak lists as input and the pooled peak list as oracle peak list. The IDR is analogous to an FDR, and has been shown to be a better measure of reproducibility in peak-calling experiments 70. A lenient IDR threshold of 0.05 was used. For each sample, reads were summarized into consensus peaks using featureCounts v1.20.6 64. Differentially bound peaks were detected in edgeR v3.22.0 as described 65, using reads along the entire peak and the same model and testing procedure as in the RNA-seq analysis. Peaks were considered overlapping if they overlapped by any amount.

Enhancer mapping. H3K4me3 peaks from human skeletal muscle myotubes derived from skeletal muscle myoblasts were downloaded from Roadmap Epigenomics 71 (sample E121), lifted to hg38 using the UCSC liftOver tool 72 and filtered to keep peaks with an FDR < 0.05. Active promoters were defined as RefSeq gene 73 promoters with a H3K4me3 peak within 3000 bp upstream or 1000 bp downstream of the TSS. Enhancers were defined as regions that contained a consensus peak of both H3K27ac and H3K4me1, as defined in the ChIP-seq analysis, and were more than 3000 bp upstream or 1000 bp downstream of the TSS of an active promoter.

Promoter Capture Hi-C. Two 15 cm plates of in vitro differentiated myotubes (n = 3 biological replicates using three different passages of cells) were treated with either palmitate or TNFα or left untreated as control. Promoter Capture Hi-C was performed using protocols similar to those described in refs. 22,24. Cells were cross-linked in 2% formaldehyde for 10 min followed by quenching with glycine (final concentration of 0.125 M). After washing with PBS, the cells were centrifuged for 10 min at 400 × g and frozen at −80°C until further analysis. Cells were lysed in 50 ml ice-cold lysis buffer (10 mM Tris-HCl pH 8, 10 mM NaCl, 0.2% Igepal CA-630 and protease inhibitor cocktail (Roche cOmplete, EDTA-free)). After 30 min incubation on ice, nuclei were pelleted by centrifugation at 650 × g for 5 min. The pellet was resuspended in 1.25× NEBuffer 2 and SDS was added (final concentration of 0.3%), followed by rotation at 37°C for 1 h. Triton X-100 was added (final concentration of 1.7%) and the samples were incubated shaking at 37°C for 1 h. After digestion with HindIII (NEB R0104T, 1500 units per 5 million cells starting material) at 37°C overnight, restriction fragment overhangs were filled in by Klenow (NEB) using biotin-14-dATP (Life Technologies), dCTP, dGTP and dTTP (all at a final concentration of 30 µM) and incubating for 60 min at 37°C. Enzymes were deactivated by adding SDS (final concentration of 1.47%) and incubating with shaking for 30 min at 65°C. Ligation was performed using 50 units T4 DNA ligase (Invitrogen) per 5 million cells starting material in a total volume of 8.2 ml 1× ligation buffer (NEB B0202S) containing 100 µg/ml BSA (NEB) and 0.9% Triton X-100, and by incubating for four hours at 16°C followed by 30 min at room temperature. Cross-links were reversed by incubation with Proteinase K at 65°C overnight. After 16 h, additional Proteinase K was added and the samples were further incubated for 2 h at 65°C. RNase A treatment was performed for 60 min at 37°C, and DNA was purified by two sequential phenol-chloroform extractions. DNA concentration was measured using a Qubit Fluorometer and Qubit dsDNA HS Assay Kit (Life Technologies).
In order to remove biotin from non-ligated DNA ends, 40 μg DNA was incubated with T4 DNA polymerase in a buffer containing 1× NEBuffer 2, 0.1 mg/ml BSA, and 0.1 mM dATP for 4 h at 20°C, followed by phenol-chloroform extraction. DNA was sheared by sonication (Diagenode Bioruptor) to an average length of 400 bp (20 cycles, low intensity), followed by DNA end-repair by incubation with T4 DNA polymerase (NEB M0203L), T4 DNA Polynucleotide kinase (NEB M0201L), Klenow (NEB M0210L), and dNTP mix (0.25 mM) in 1× ligation buffer (NEB B0202S). After 30 min incubation at room temperature, DNA was purified using the Qiagen MinElute PCR purification kit. For addition of dATP to the Hi-C libraries, DNA was incubated with Klenow exo− and 0.23 mM dATP in 1× NEBuffer 2 for 30 min at 37°C. Enzymes were inactivated by incubation at 65°C for 20 min. DNA fragments were size-selected by a double-sided SPRI bead purification (SPRI bead solution to sample volume ratios of 0.6:1 followed by 0.9:1). Biotin-marked ligation products were isolated using MyOne Streptavidin C1 Dynabeads (Life Technologies). After washing the beads in tween buffer (5 mM Tris, 0.5 mM EDTA, 1 M NaCl, 0.05% Tween), binding of DNA was performed in binding buffer (5 mM Tris, 0.5 mM EDTA, 1 M NaCl) for 30 min at room temperature, followed by two washes in binding buffer and one wash in ligation buffer (NEB B0202S). The beads were resuspended in ligation buffer and adapters (from the SureSelect XT library prep kit ILM, Agilent Technologies) were ligated to the bead-bound DNA by the addition of T4 DNA ligase (NEB) and incubation for 2 h at room temperature. The beads were subsequently washed twice in tween buffer, once in binding buffer and twice in 1× NEBuffer 2 before resuspending the beads in 40 µl 1× NEBuffer 2. The bead-bound library DNA was amplified with 12-14 PCR amplification cycles according to the SureSelect XT library prep kit ILM (Agilent Technologies) protocol before promoter capture. Promoter capture was performed by using 37,608 biotin-labeled RNA baits (each 120 nucleotides) covering 21,841 human promoters (approximately two baits per promoter, targeting each end of a HindIII fragment 22). The RNA baits were synthesized by Agilent Technologies and hybridization was performed using the SureSelect Target Enrichment kit ILM (Agilent Technologies) and SureSelect XT library prep kit ILM (Agilent Technologies) according to the manufacturer's instructions. DNA libraries were paired-end sequenced on a NextSeq500 system (Illumina) using the NextSeq 500/550 High-Output v2.5 Kit (150 cycles). For bioinformatic analysis of Promoter Capture Hi-C data, di-tag reads were filtered and mapped against the main chromosomes of the hg38 reference genome by the HiCUP pipeline v0.6.1 74 using bowtie2 v2.2.6 75 without limits on maximum and minimum di-tag length. The HiCUP pipeline also removes PCR duplicate reads and filters out re-ligations and other experimental artefacts. Downstream analysis was performed with diffHic 76 as follows: di-tags were filtered keeping only DNA fragments shorter than 600 bp, and with a minimum inwards and outwards facing gap distance of 1000 and 25,000, respectively, as recommended in the manual.
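The di-tag filters just quoted map directly onto diffHic's prunePairs arguments. The sketch below illustrates this step under stated assumptions: the BAM and HDF5 file names are hypothetical placeholders, and only the parameters given in the text are set explicitly.

## Sketch of the diffHic read processing described above (file names hypothetical).
library(diffHic)
library(BSgenome.Hsapiens.UCSC.hg38)

## HindIII restriction map (AAGCTT, 4-nt overhang) for hg38
fragments <- cutGenome(BSgenome.Hsapiens.UCSC.hg38, pattern = "AAGCTT", overhang = 4L)
param <- pairParam(fragments)

## Convert aligned reads to di-tags, then apply the stated filters:
## fragments < 600 bp, minimum inward gap 1000, minimum outward gap 25,000
preparePairs("myotubes_rep1.bam", param, file = "rep1.h5")
prunePairs("rep1.h5", param, file.out = "rep1.pruned.h5",
           max.frag = 600, min.inward = 1000, min.outward = 25000)

## Reads supporting promoter-enhancer pairs would then be counted with
## connectCounts(), supplying promoter baits and widened enhancer windows
## as the two region sets.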
All conditions and passages of cells were then pooled to obtain a general chromatin conformation capture of myotubes; all enhancers were widened to 10 kb and interactions between a promoter and a histone mark were extracted and filtered to remove weak interactions, so that only interactions with an average abundance of 5 counts per million, calculated by the aveLogCPM function, and with a signal at least two-fold above the expected were kept (as calculated by the filterTrended function; see the diffHic manual for code examples). The connectCounts function was used to count reads supporting interactions for each library, interactions with enough reads to test for differential binding were selected using the filterByExpr function of edgeR, and differential binding was tested using the edgeR quasi-likelihood function as in the RNA-seq and ChIP-seq experiments but without the replicate blocking factor, resulting in a model of the form ~0 + group. These criteria and cut-offs were as described in the diffHic package manual. The set of interactions interrogated for differential interactions is the one used in downstream analysis and reported in the Supplementary tables. To visualize Promoter Capture Hi-C data as heatmaps, rotated plaid plots were generated by the rotPlaid function supplied by the diffHic package on the merged dataset. Each chromosome was split into 1000 bins and colored by the number of reads in the interaction. Any interaction with more than 20 reads was colored a solid red.

Overlapping enhancer regions with GWAS SNPs. GWAS studies for T2D 6, BMI 8, and WHR 7 have identified 402, 941 and 463 distinct association signals, respectively. For IR, we collected distinct GWAS signals covering studies of fasting insulin (FI) with and without adjustment for BMI 34,36,37, HOMA-IR 33, the modified Stumvoll Insulin Sensitivity Index (ISI) 38, and 53 genomic variants associated with both higher FI levels adjusted for BMI, lower HDL cholesterol levels and higher triglyceride levels 35, leading to a total of 82 distinct association signals with IR. Each of these four sets was expanded with SNPs in high LD (r² > 0.8) with the original distinct association signals. Specifically, PLINK 1.9 (http://www.cog-genomics.org/plink/1.9/) 77 was used to extract high-LD SNPs within a 1 MB range of each SNP based on a subset (6148 Danish individuals) of the HRC imputed dataset used in the T2D GWAS 6. The variant positions were converted into genome build 38 before overlapping them with palmitate- and TNFα-responsive enhancer regions. Regional plots were generated using standalone LocusZoom v1.4 78, as well as summary statistics available for T2D 6 and WHR 7.

eQTL analysis. The ADIGEN study participants 79,80 were selected from the Danish draft board records. The study was approved by the Ethics Committee of the Capital Region of Denmark and informed consent was obtained from all participants in accordance with the Declaration of Helsinki II. Juvenile obesity was defined as weight 45% above the Metropolitan desirable weight (BMI ≥ 31 kg/m²) at the draft board visit. 1930 obese individuals and 3601 randomly selected individuals for the population-representative control group were invited to participate in the study. In total, 557 individuals volunteered to participate.
From a subset of these Danish white men, 71 juvenile obese and 74 age-matched control individuals, skeletal muscle biopsies were taken under lidocaine local anesthesia from the right thigh using a thin Bergström needle and snap frozen in liquid nitrogen. The participants were healthy by self-report and under 65 years of age at the time of the ADIGEN examination. Gene expression analysis was performed by extracting total RNA using the miRNeasy kit (Qiagen). The yield was optically measured and a randomly selected subset of the RNA samples was examined using an Experion electrophoresis station (Bio-Rad) for integrity (RIN value), which was good in all cases. Gene expression of ~47,000 transcripts was measured by the HumanExpression HT-12 Chip (Illumina, USA). cRNA was synthesized from total RNA using the Nano Labeling Kit from Illumina (Epicentre), and the cRNA concentration was measured by Qubit fluorescent dye (Invitrogen, Germany) before loading the arrays. Hybridization was performed as recommended by Illumina and the Illumina HiScan was used to obtain the raw probe intensity level data. For failed expression arrays, cRNA was resynthesized and rerun. The raw probe intensity values were exported from GenomeStudio without background correction and imported into R, where the lumi package 81 was used for pre-processing. The array pre-processing included quantile normalization, log2 transformation and probe filtering to remove probes with a detection P-value above 0.01. The participants were genotyped using the Illumina CoreExome Chip v1.0 containing 538,448 genetic variants, of which more than 240,000 are common. Genotypes were called using the Genotyping module (version 1.9.4) of the GenomeStudio software (version 2011.1, Illumina) and the Illumina HumanCoreExome-12v1-0_B.egt cluster file. The genotype data were subjected to standard quality control and then phased with EAGLE2 82 and imputed with the 1000 Genomes Project Phase III panel using Minimac3 83. We selected 29 or 420 SNPs located within 11 or 124 enhancer regions, which changed activity by palmitate or TNFα treatment, respectively (see text for further description of how SNPs were selected). Only SNPs that were missing in less than 10% of the individuals, with an imputation quality (R2) higher than 0.4 and no significant deviation from Hardy-Weinberg equilibrium, were extracted. Matrix eQTL 84 was used to assess the association between 461 (TNFα) and 39 (palmitate) gene-SNP pairs (selected based on our Promoter Capture Hi-C data) in a total of 140 individuals with both expression and SNP data available (R version 3.5.0). To account for complex non-genetic factors, we used probabilistic estimation of expression residuals (PEER) 85. Specifically, eQTL analysis was performed on inverse normal-transformed expression residuals adjusted for age, BMI group (obese or control) and 15 PEER factors, which is the number of factors recommended by the GTEx consortium 86 for studies with less than 150 individuals. The models were also run without the adjustment for BMI. Significant eGenes were identified after hierarchical multiple testing correction of the p-values from the TNFα and palmitate eQTL tests using the Bonferroni-BH procedure recommended by Huang et al. 87.
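The Matrix eQTL step above can be sketched as follows. Genotype dosages, expression values and covariates are simulated placeholders in MatrixEQTL's expected variables-by-samples orientation; settings beyond the linear model are assumptions for illustration, not the study's exact configuration.

## Minimal Matrix eQTL sketch on simulated data.
library(MatrixEQTL)

set.seed(5)
n <- 140  # individuals with both genotype and expression data
geno <- matrix(rbinom(5 * n, 2, 0.3), nrow = 5,
               dimnames = list(paste0("rs", 1:5), NULL))   # SNP dosages (0/1/2)
expr <- matrix(rnorm(3 * n), nrow = 3,
               dimnames = list(c("EIF6", "CEP68", "TBX15"), NULL))
covs <- matrix(rnorm(2 * n), nrow = 2,
               dimnames = list(c("age", "PEER1"), NULL))   # e.g. age, PEER factors

snps <- SlicedData$new(); snps$CreateFromMatrix(geno)
gene <- SlicedData$new(); gene$CreateFromMatrix(expr)
cvrt <- SlicedData$new(); cvrt$CreateFromMatrix(covs)

me <- Matrix_eQTL_engine(snps, gene, cvrt,
                         output_file_name = NULL,  # keep results in memory only
                         pvOutputThreshold = 1,    # report all gene-SNP pairs
                         useModel = modelLINEAR,
                         verbose = FALSE, pvalue.hist = FALSE)
head(me$all$eqtls)                                 # SNP, gene, statistic, p-value, FDR, beta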
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Thermohydraulic studies of alkali liquid metal coolants for justification of nuclear power facilities

The paper presents and discusses the results of experimental and computational studies obtained by the authors on hydrodynamics and heat exchange in fuel assemblies of alkali liquid metal cooled fast reactor cores, together with experimental data on the hydrodynamics of flow paths in heat exchanger and reactor header systems. Investigation results are presented on in-tank coolant circulation, obtained using a well-developed theory of approximation simulation of the nonisothermal coolant velocity and temperature fields in the fast neutron reactor primary circuit, demonstrating stable stratification and thermal fluctuations in the coolant. Results are presented from experimental and computational simulation of the alkali liquid metal boiling process based on fuel assembly models during an emergency caused by an operational occurrence involving simultaneous loss of power to all reactor coolant pumps and failure of the reactor scram rods. Objectives are formulated for further studies that are essential for the evolution of liquid metal technology, as dictated by the need for improved safety, environmental friendliness, reliability, and longer service life of nuclear power facilities currently in operation and under development.

Introduction. In the autumn of 1950, as part of discussing the proposals by A.I. Leypunsky, the Section of the Chief Directorate's Scientific and Technical Council recommended that Laboratory "V" (currently, the Institute of Physics and Power Engineering, or the IPPE) focus its activities on the development of liquid metal cooled reactors. Requirements were formulated with respect to coolants, taking into account their influence on the physical, process, corrosion and thermohydraulic characteristics of reactors, as well as toxicity and cost. The list of liquid metals and alloys used or considered as candidates for application in nuclear power includes lithium, sodium, the eutectic sodium-potassium alloy, potassium, cesium, lead, the eutectic lead-bismuth alloy, and gallium. On 24 June 1954, a heat engineering department was established at the IPPE, later transformed into the thermophysical sector, which was led by V.I. Subbotin and subsequently by P.L. Kirillov and A.D. Yefanov. The key research fields were thermal hydraulics, mechanisms of turbulent heat exchange, processes of liquid metal boiling and condensation, systematization, analysis and generalization of thermophysical data, establishment of an experimental thermophysical database, heat pipes, the thermal physics of thermionic generators, high-temperature nuclear power systems for outer space applications, and thermonuclear plants (Efanov et al. 2015). The need to provide a scientific thermophysical rationale for nuclear power plants and nuclear power facilities of a new type currently under development required new methodologies, dedicated equipment and an experimental framework to match the challenges posed by the BR-10, BOR-60, BN-350, BN-600, BN-800 and BN-1200 fast reactor projects. An integrated system of hydrodynamic, liquid-metal thermohydraulic and process test benches built at the IPPE has made it possible to implement these projects and prepare for the experimental justification of innovative solutions for nuclear power facility (fast reactor) designs of a new generation (Thermophysical Bench Framework 2016).
A sixty-year experience of adopting alkali liquid metals (sodium, eutectic sodium-potassium alloys, lithium, cesium), jointly with the industry's institutes, academies of sciences and experimental design bureaus engaged in the development of nuclear power and propulsion systems, has produced the scientific basis for their application in nuclear power and has justified the relevant thermohydraulic parameters and processes, providing for the successful operation of fundamentally new nuclear power facilities. The combined operating experience of sodium cooled fast neutron nuclear power facilities (BR-10, BOR-60, BN-350, BN-600, BN-800) exceeds 200 years, and that of facilities with sodium-potassium coolant for spacecraft applications (BUK, TOPOL, TOPAZ) at rated parameters exceeds six and a half years. Long operating times (several decades) with the use of a sodium-potassium alloy have been recorded for the BR-5, DFR and RAPSODIE reactors (Rachkov et al. 2014). Many years of international cooperation on the use of sodium in nuclear power facilities with foreign countries (Great Britain, Germany, the Republic of Korea, the USA, France, the Czech Republic, Japan, and others) have contributed to this. The progress achieved as a result of adopting alkali liquid metal technologies has made it possible to propose the liquid-metal technology for various engineering applications: NPPs with fast neutron reactors (sodium), metallurgy and the chemical industry (sodium and sodium-potassium), spacecraft propulsion systems (sodium-potassium, cesium, lithium), fusion (thermonuclear) reactors (lithium), etc. Implementing the strategy of a two-component nuclear power sector with a closed fuel cycle using sodium cooled fast neutron reactors (Ponomarev-Stepnoi 2016), achieving a competitive edge, and maintaining Russia's priority in the field of NPPs with sodium cooled fast neutron reactors, including designs of a fast neutron reactor with a gas turbine unit (GTU) and a high-temperature reactor for nuclear hydrogen power (BN-VT), require further problem-oriented thermohydraulic studies.

Methodology of investigations. All stages of the investigations have given a great deal of attention to measurement methodologies and techniques, including the development of unique velocity, flow rate, pressure, level, temperature and other sensors. Microthermocouples in safety cans with an outer diameter of 0.3 to 0.8 mm were developed to measure temperature over an interval of 300 to 1800 °C. Flow meters of different designs were developed to measure flow rates of liquid metals. Methodologies and techniques were later developed for the electromagnetic measurement of local liquid metal flow rate (velocity) vectors in channels and fuel rod bundles, and for measuring coolant mixing characteristics in air experiments with a small fraction of gaseous tracers (Freon or propane) added (Rachkov et al. 2018, Sorokin et al. 2021b). Much attention is given to methods of physical simulation in experimental studies of hydrodynamics and heat exchange in liquid metal cooled nuclear power facilities. It has been shown experimentally that it is possible to simulate the hydrodynamics of incompressible fluids, including liquid metals, in experiments with air, and to simulate heat exchange in liquid metals such as Na, Na-K, Li, Hg, Pb and Pb-Bi using simulant fluids (Sorokin and Kuzina 2019). These methodologies have made it possible to undertake a broad range of fundamental and applied experiments.
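As a concrete illustration of the simulation argument, liquid-metal heat transfer correlations are typically written in terms of the Peclet number, so matching Pe between a simulant fluid and the prototype coolant is one common basis for such experiments. The sketch below is a minimal illustration; the sodium property value is a rough assumed figure for a working temperature, not one taken from the paper.

```python
# Illustrative sketch (not from the paper): liquid-metal heat transfer
# correlations are typically functions of the Peclet number
# Pe = Re * Pr = w * d / a, so matching Pe between the model fluid and
# the prototype coolant is one basis for simulant experiments.

def peclet(velocity_m_s: float, diameter_m: float, thermal_diffusivity_m2_s: float) -> float:
    """Pe = w * d / a, with thermal diffusivity a = k / (rho * cp)."""
    return velocity_m_s * diameter_m / thermal_diffusivity_m2_s

# Sodium at roughly 500 C: a ~ 6.6e-5 m^2/s (rough assumed value).
pe_na = peclet(5.0, 0.01, 6.6e-5)
print(f"Pe(sodium) ~ {pe_na:.0f}")  # ~760 for a 10 mm channel at 5 m/s
```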
Channels and fuel rod bundles. Extensive studies have been undertaken into the hydrodynamics of irregularly shaped channels, including rod bundles, and the flow paths of reactor facilities; maximum attention was given to the measurement of velocity fields, the distribution of tangential stresses, and turbulence characteristics. Experimental studies of turbulent hydrodynamic characteristics in fuel assemblies in air using a Pitot tube have shown (Fig. 1a, b) that there is a local maximum in the distribution of tangential stresses along the wetted perimeter of the regular fuel lattice cell at the point of greatest channel expansion, which can be explained by the impact of secondary vortices (Sorokin et al. 2021b). In a deformed lattice (Fig. 1c, d), the distribution of tangential stresses along the wetted surfaces is practically symmetrical with respect to the geometrical symmetry axes of the flow section, though anomalies are observed in some portions of the FA, which can be explained by the impact of certain secondary vortices not only inside the channels but also at the boundary. The velocity distribution along the normal to the wetted perimeter is described by a universal law if the local tangential stress value is used to calculate the dynamic velocity. A substantial intensification of turbulent velocity fluctuations is observed in the bundle's peripheral region as compared with an infinite lattice (Sorokin et al. 2021b).

Most attention has been given to thermohydraulic studies of the most heated and essential component of a reactor plant, the reactor core, which is affected over its life by a number of design, mode, process, radiation and operating factors (Rachkov et al. 2018). It was shown that there is no thermal (contact) resistance at the coolant/heat-exchange-surface interface when the concentration of impurities in the coolant does not exceed their solubility at the circulating metal temperatures. In these conditions, heat transfer to liquid metals such as Na, Na-K, Li, Hg and Pb-Bi in tubes is described by a single criterial relationship close to the Lyon formula. When the coolant is saturated with impurities, the heat transfer coefficients decrease by a factor of one and a half to two, which corresponds to experiments by the Energy Institute (ENIN), the Central Boiler and Turbine Institute, and the IPPE (Rachkov et al. 2018). Experimental and computational studies have shown the need to solve the 'conjugate' problem of heat removal from fuel rods with regard for their thermophysical properties. P.A. Ushakov developed a theory of approximate thermal similarity of fuel rods in regular lattices (Sorokin and Kuzina 2019) that has made it possible to simulate fuel rods by multilayer tubes electrically heated from the inside; it is used in all investigations of fast reactor cores. Detailed experimental data have been obtained for the thermal hydraulics of full-scale core models in the presence of fuel rod deflections, asymmetrical displacements and deformations of components, closure of different core parts, and counter flows. As a result of experimental studies and a computational and theoretical analysis of the mass, momentum and energy exchange among the channels in bundles of smooth and wire-wrap-finned fuel rods, physically justified methods and programs (TEMP, MIF) have been developed for thermohydraulic calculations of reshaped fast neutron reactor core fuel assemblies (Zhukov et al. 1991).
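For reference, the Lyon correlation mentioned above is commonly quoted for fully developed liquid-metal flow in round tubes with uniform heat flux as Nu = 7 + 0.025·Pe^0.8; the sketch below applies it to estimate a tube-side heat transfer coefficient. The constants and the sodium conductivity used are textbook-level values, not figures from this paper.

```python
# Sketch of the Lyon correlation for fully developed liquid-metal flow in
# a round tube with uniform heat flux (textbook form; constants vary
# slightly between sources and are not taken from this paper):
#     Nu = 7 + 0.025 * Pe**0.8

def nusselt_lyon(pe: float) -> float:
    """Nusselt number for a clean (impurity-free) liquid-metal coolant."""
    return 7.0 + 0.025 * pe**0.8

def heat_transfer_coeff(pe: float, k_w_mk: float, d_m: float) -> float:
    """h = Nu * k / d for coolant conductivity k and tube diameter d."""
    return nusselt_lyon(pe) * k_w_mk / d_m

# Sodium example with assumed k ~ 70 W/(m K) in a 10 mm tube:
h = heat_transfer_coeff(pe=750.0, k_w_mk=70.0, d_m=0.01)
print(f"h ~ {h:.0f} W/(m^2 K)")
# Per the experiments cited above, impurity saturation can cut h by 1.5-2x.
```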
The influence of fuel rod geometry and materials, and of radiation-induced swelling and creep effects, on the FA temperature mode has been investigated, and peculiarities of the core temperature mode formation over the operating life have been identified for fast neutron reactors. The efficiency of using oppositely directed wire wraps, which produce oppositely directed transverse coolant flows, has been shown.

Hydrodynamics of the heat exchanger and reactor header system flow paths. Extensive and prolonged experimental studies of the hydrodynamics of flow paths in different types of axisymmetrical flat-plate and cylindrical distributing header systems (DHS) with different conditions of liquid supply and removal were conducted in a wind tunnel bench and on a water table (Gabrianovich and Delnov 2016).

DHSs with central supply and side removal of liquid. A water flow pattern has been obtained for the flow path of a flat-plate DHS with central supply and side removal of water. It has been found that the liquid flow pattern in the header is defined by the DHS dimensional ratio and design.

Liquid flow models for cylindrical-type DHSs. The liquid flow in the cylindrical-type DHS flow path is of a complex nature and is defined predominantly by the DHS dimensional ratio and design, the liquid flow pattern, and the hydraulic resistance coefficient of the outlet component flow path. Representative models of liquid flows in the flow path of this DHS type (Fig. 2) have been obtained with regard for the results of various experimental studies (Gabrianovich and Delnov 2016).

DHSs with side supply and central removal of liquid. A water flow pattern was obtained on a water table for the flow path of a flat-plate DHS with side supply and central removal of water. It has been found that the liquid flow pattern in the header is defined by the DHS dimensional ratio and design. The liquid flow in the flow path of a cylindrical-type DHS is complex and is defined predominantly by the DHS dimensional ratio and design, the liquid flow pattern, and the hydraulic resistance of the lattice. Representative models of the liquid flow in the flow path of this DHS type have been obtained with regard for the results of experimental studies (Gabrianovich and Delnov 2016). The most distinctive peculiarities of the liquid flow manifest themselves in the inlet, main and outlet portions of the flow path of the DHS under consideration (Fig. 3).

Scientific discoveries. As a result of the studies, a previously unknown regularity and a previously unknown phenomenon, relevant to the nuclear, space, metallurgical and chemical fields of science and technology, have been identified and registered as scientific discoveries.

Regularity. A previously unknown regularity of the liquid distribution at the outlets of the flow paths in distributing header systems has been found: axisymmetrical regions form as the liquid exits the header, the characteristics of which are defined by the design and process peculiarities of the header system (liquid supply point, motion path, jet parameters, hydraulic resistance, etc.) (Delnov et al. 2019).
Phenomenon. A previously unknown phenomenon of hydrodynamic identity in distributing header systems has been found: the hydrodynamic characteristics of the flow paths in axisymmetrical distributing header systems, e.g., of nuclear power facilities and heat exchangers, are similar under different conditions of supplying and removing the liquid flowing in the system (Delnov 2021).

[Recovered figure caption fragment (DHS types, likely Fig. 2): a, b. constricted DHSs with the displaced tube sheet in a shell and with free and constricted inlet sections, respectively; c, d. constricted DHSs with a constricted inlet section, the displaced tube sheet in a shell, and inserts of relatively large and small diameters, respectively; e, f. superconstricted DHSs with a header constricted by the inlet section, without lattice displacement in a shell, without and with inserts, respectively; 1 - annulus; 2 - header; 3 - shell; 4 - lattice; 5 - bottom; 6 - insert; 7 - housing.]

Differences in designs and hydrodynamics of flow paths in DHSs of different types. DHS design differences:
- a tube sheet and a system of plates are used as the outlet component in cylindrical and flat-plate DHSs, respectively;
- axisymmetrical round, cylindrical, conical and annular jets occur in cylindrical-type DHSs, while jets with a rectangular (square) cross-section are typical of flat-plate DHSs;
- in cylindrical DHSs, one type of jet transforms into another along the flow path, whereas in flat-plate DHSs a common jet divides into individual parts or individual parts converge into one jet;
- DHSs with different liquid supply and removal points differ in the sequence in which certain types of jets transform into others.

General characteristics of the scientific discoveries. The scientific discoveries:
- have changed the existing scientific concepts in the field of hydrodynamics of axisymmetrical DHSs;
- have explained scientific facts and experimental data not previously rationalized in scientific terms;
- have shown an extremely strong effect of minor changes in the DHS design and dimensional ratio on the DHS liquid flow;
- have made it possible to obtain empirical relationships for determining the hydrodynamic irregularities at the outlet of different DHS designs and types;
- have allowed predicting, prior to experiments and calculations, the DHS designs with the required liquid flow profile at the header outlet;
- contain data on the models of liquid flows in different DHS types;
- have identified the existence of constricted and free DHSs in which the liquid flow differs greatly from that in the thoroughly studied classic superconstricted and free DHSs.

The scientific discoveries have been used to justify the flow paths of DHSs in reactors and heat exchangers of nuclear power facilities and to develop and verify DHS flow fluid dynamics codes.

In-vessel circulation. Experimental studies based on an integrated water model of reactors, using a developed theory of approximation simulation of the temperature fields and the structure of nonisothermal coolant movement in the primary circuit components of a fast neutron reactor for forced circulation modes and for changeover to the cooldown mode and emergency cooldown by natural coolant circulation (Opanasenko et al.
2017), demonstrate a substantial and stable temperature stratification of the coolant in the peripheral region of the reactor's upper (hot) chamber above the side shields, in the cold and discharge chambers, in the elevator baffle, in the reactor vessel cooling system, and at the intermediate and auxiliary heat exchanger outlets in different modes of heat exchanger operation (Figs 4, 5). High gradients and fluctuations of temperature have been recorded at the interfaces of stratified and recirculating formations. The obtained results can be used for the verification of codes and for coarse estimation of reactor plant parameters during recalculation based on similarity criteria.

Heat exchangers and steam generators. The Protva and Ugra codes, set up on the basis of a well-developed theory of an anisotropic porous body for the calculation of complex flows in reactors, heat exchangers and steam generators, were used to prove the possibility of using in the BN-800 design heat exchangers with the same heat-transfer surface as in the BN-600 design. Characteristics of heat exchange, critical heat fluxes and circulation stability have been studied for steam generators of reactor plants with the BN-350, BN-600 and BN-800 reactors, and for a fundamentally new large-module steam generator for an advanced fast reactor (Grabezhnaya and Mikheev 2015).

Boiling of alkali liquid metals in fuel rod bundles. Investigations into liquid metal boiling based on fuel assembly (FA) models have shown three patterns of a two-phase liquid metal flow in fuel rod bundles (bubble, slug and annular dispersed flow patterns), the last of which is limiting with respect to assembly cooling. It has been shown that long-term cooling of the core is conceptually possible in emergency modes involving boiling of liquid metals. Heat transfer during boiling of liquid metals in fuel rod bundles has been studied, the effect of fuel rod surface roughness on the development of the boiling process has been investigated, and a diagram of the two-phase liquid metal flow patterns in fuel rod bundles has been plotted (Sorokin et al. 2019). The results of computational studies for a system of parallel FAs (Fig. 6), undertaken with an upgraded version of the SABENA code (Sorokin et al. 2021a) implementing a two-fluid model of a two-phase liquid metal flow in an approximation of equal pressures in the vapor and liquid phases, reproduce the variation of temperature, the development of two-phase (bubble, slug) flow patterns, and the liquid metal flow rate fluctuations obtained in experimental studies. They also demonstrate antiphase coolant flow rate fluctuations in parallel FAs, interchannel instability characterized by a major growth in the coolant flow rate fluctuation amplitude in parallel FAs as compared with single FAs, a periodic drop in the FA coolant flow rate practically to zero, and potential FA dryout (departure from nucleate boiling). To exclude the development of an emergency caused by an operational occurrence with simultaneous loss of power to the reactor coolant pumps and a failure of the reactor scram rods (a ULOF accident), a design solution has been proposed with a 'sodium plenum' arranged above the reactor core. Comparison of the calculation and experiment results has shown the possibility of heat removal by the boiling coolant in a model FA with a 'sodium plenum' at thermal loads of 10% to 15% and a sodium flow rate of about 5% of the nominal values.
The activities to systematize thermophysical data in the field of alkali metal cooled fast reactors were undertaken under the auspices of the IPPE's Thermophysical Data Center in many fields of fast reactor thermal physics: velocity and temperature fields in the core, in the hot chamber, and in heat exchangers and steam generators; hydraulic resistance and heat transfer in channels and fuel rod bundles for single-phase flow; heat transfer and the diagram of two-phase flow patterns; departure from nucleate boiling in assemblies; and thermophysical properties of coolants and reactor materials. Technical guidelines and reference books have been developed (Kirillov et al. 2010).

Objectives of further thermohydraulic studies in nuclear power facilities with alkali liquid metal coolants:
- refining methods to calculate local turbulent characteristics of momentum and energy transport for single- and two-phase liquid metal flows in channels and in large volumes, taking into account large-scale eddy currents and coolant flow stratification effects;
- development and justification of a system of verification tests;
- development of a verified system of codes taking into account the interconnection of the nuclear physical, thermohydraulic, physicochemical, thermomechanical, mass-exchange and engineering processes taking place in a nuclear power facility, for justifying its service life with regard for the entire combination of its operating processes and modes;
- evaluation of temperature fluctuations directly in the coolant flow and on the channel walls, and investigation of the effects these fluctuations have on structural strength;
- justification of the core's temperature mode taking into account random deviations of parameters (geometrical, operating and process parameters, uncertainties of thermophysical properties, calculated constants, etc.);
- analysis of the consequences of potential off-normal modes (interlocks, emergency heat removal, boiling) and development of measures to prevent these from progressing into a severe accident;
- investigation of the dynamics of sodium boiling region propagation in a real FA with a 'sodium plenum' above the core;
- justification of nominal modes that exclude the formation of eddies in the core header and on the sodium surface (gas capture), and of passive circulation zones (stratification phenomena, temperature fluctuations);
- development of a system for detecting anomalies in an individual FA or in a number of FAs prior to fuel cladding failure;
- justification, for large-unit SGs, of thermohydraulic modes and hydrodynamic stability, and integrated tests of the automatic steam generator protection systems (all scram systems shall be supplied and assembled to support the tests, and the scram system bench shall be put into operation);
- development of the materials and design for improving the safety of the large-unit steam generator, to slow down leakage self-development and propagation processes and to enable online repair of SGs after water escape into sodium.
Dietary LPC-Bound n-3 LCPUFA Protects against Neonatal Brain Injury in Mice but Does Not Enhance Stem Cell Therapy

Neonatal hypoxic-ischemic (HI) brain injury is a prominent cause of neurological morbidity, urging the development of novel therapies. Interventions with n-3 long-chain polyunsaturated fatty acids (n-3 LCPUFAs) and mesenchymal stem cells (MSCs) provide neuroprotection and neuroregeneration in neonatal HI animal models. While lysophosphatidylcholine (LPC)-bound n-3 LCPUFAs enhance brain incorporation, their effect on HI brain injury remains unstudied. This study investigates the efficacy of oral LPC-n-3 LCPUFAs from Lysoveta following neonatal HI in mice and explores potential additive effects in combination with MSC therapy. HI was induced in 9-day-old C57BL/6 mice and Lysoveta was orally supplemented for 7 subsequent days, with or without intranasal MSCs at 3 days post-HI. At 21–28 days post-HI, functional outcome was determined using cylinder rearing, novel object recognition, and open field tasks, followed by the assessment of gray (MAP2) and white (MBP) matter injury. Oral Lysoveta diminished gray and white matter injury but did not ameliorate functional deficits following HI. Lysoveta did not further enhance the therapeutic potential of MSC therapy. In vitro, Lysoveta protected SH-SY5Y neurons against oxidative stress. In conclusion, short-term oral administration of Lysoveta LPC-n-3 LCPUFAs provides neuroprotection against neonatal HI by mitigating oxidative stress injury but does not augment the efficacy of MSC therapy.

Introduction. Perinatal asphyxia is a major cause of hypoxic-ischemic (HI) brain injury in the newborn. In affluent countries, the occurrence of HI brain injury is estimated to range between 1.3 and 1.7 cases per 1000 live births [1]. Neonatal HI brain injury is a significant contributor to neonatal mortality and can result in various lifelong health challenges such as intellectual disabilities, epileptic seizures, and cerebral palsy [2]. These detrimental consequences result from a series of harmful processes in the brain, including oxidative stress and excitotoxicity, ultimately culminating in the death of neurons over hours to days following the initial insult [2]. As current hypothermia treatment offers only partial protection, there is an urgent need for novel therapeutic strategies to combat HI injury [3,4]. Nutritional supplementation emerges as such a novel treatment option. Notably, nutritional intervention is considered safe and has the potential to be easily integrated into clinical practice [5]. Moreover, its limited side effects make it an attractive candidate for combination with additional therapies [5,6].
The brain is particularly rich in lipids, with around 30% consisting of n-3 LCPUFAs such as docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) [7]. Specifically, DHA plays a vital role in the cell membranes of gray matter [8]. By interacting with membrane proteins and influencing downstream signal transduction, DHA can influence synaptic activity and promote neurogenesis and neuronal survival [9]. Interestingly, DHA is not confined to neurons but has also been found in membranes of astrocytes, microglia, and oligodendrocytes, allowing it to influence neuroinflammation and myelin formation [7]. Furthermore, DHA can be converted into bioactive metabolites such as oxylipins (e.g., NPD1) and synaptamides. Similar to DHA itself, these bioactive metabolites can reduce neuroinflammation and apoptosis while stimulating neurite growth and synaptogenesis [9,10].

DHA uptake in the brain is dependent on a constant supply from the blood, which is influenced by the dietary intake of DHA [11]. Intriguingly, DHA transport into the brain is mainly dependent on its binding to lysophosphatidylcholine (LPC) [12]. The transporter Major Facilitator Superfamily Domain containing 2a (Mfsd2a) specifically transports LPC-esterified n-3 LCPUFAs across the blood-brain barrier into the brain parenchyma [12,13]. Indeed, some studies have shown that supplementation of LPC-DHA, but not of free DHA or DHA bound to triglycerides (TAG-DHA), leads to the accumulation of DHA in the brain [13,14]. In addition, only LPC-DHA leads to subsequent improvements in memory function in adult mice [13,14]. Similarly, only LPC-EPA enriches both EPA and DHA in the brain [15]. Altogether, these data suggest that nutritional supplementation containing LPC-DHA and LPC-EPA shows the most promise to yield the beneficial effects of DHA and EPA enrichment in the brain. Surprisingly, no studies have examined the impact of nutritional supplementation with LPC-DHA or LPC-EPA on HI injury in mice.

Neonatal HI brain injury has been shown to reduce the amount of DHA in the brain [16,17]. Nutritional supplementation with DHA could compensate for this deficiency. Furthermore, studies showed that intraperitoneal free or tri-DHA pretreatment, or tri-DHA treatment shortly after HI injury, in neonatal rats and mice reduced brain volume loss, glial reactivity, and microglia activation and normalized anxiety-like behavior, long-term working memory, and sensorimotor functioning [18][19][20][21]. Similarly, a long-term DHA-rich diet in HI-injured mice reduced gray and white matter lesion size, microglia activation, and glial reactivity in male mice and improved novel object recognition memory in both sexes [16]. Moreover, multiple studies showed that intraperitoneally or intravenously administered free or tri-DHA shortly after HI injury in rats or piglets reduced lipid peroxidation, preserved mitochondrial integrity, reduced mitochondrial ROS production, and improved mitochondrial Ca2+ buffering capacity [20,22,23].
Currently, one of the most promising therapies for neonatal brain injury is mesenchymal stem cell (MSC) therapy [24][25][26][27]. MSCs are non-immunogenic cells that can be extracted from different sources such as bone marrow [28]. Intranasal administration of MSCs has been shown to effectively reduce lesion size and improve long-term sensorimotor and cognitive outcome in experimental models of neonatal brain injury [24][25][26][29][30][31]. Upon intranasal administration, MSCs migrate specifically to the lesion site, are short-lived, and do not integrate [26]. During this short presence at the lesion site, MSCs react to the HI environment by secreting various factors that benefit the injured brain by boosting neurogenesis and dampening neuroinflammation [24,26,32-34]. MSCs have a large therapeutic window, being effective when administered up until 10 days post-HI, and stimulate the late repair of the neonatal HI-injured brain, which may complement the early neuroprotective window of n-3 LCPUFAs [26,35]. Additionally, n-3 LCPUFAs may provide building blocks for the neuronal repair process [29], further boosting neurite outgrowth and synapse formation [9]. Indeed, Ghazale et al. showed that DHA enhances the therapeutic potential of neural stem cell treatment after traumatic brain injury by reducing apoptosis and promoting neurogenesis [6].

The current study aims to assess the therapeutic efficacy of nutritional supplementation with LPC-DHA and LPC-EPA from Lysoveta on lesion size and functional outcome in a mouse model of neonatal HI brain injury. Potential underlying mechanisms of Lysoveta were assessed using in vitro models of neuronal injury. Lastly, the additive benefit of Lysoveta supplementation on the neuroregenerative effects of intranasal MSC therapy following neonatal HI brain injury was investigated. Together, this study provides valuable insights into the prospects of LPC-bound n-3 LCPUFA supplementation for infants with HI injury.
Animals and HI Injury Model. All procedures were carried out according to Dutch and European international guidelines (Directive 86/609, ETS 123, Annex II) and the Central Authority for Scientific Procedures on Animals (The Hague, The Netherlands) and were approved by the Experimental Animal Committee Utrecht (Utrecht University, Utrecht, The Netherlands). All efforts were made to minimize suffering. This paper is written in accordance with the ARRIVE guidelines [36]. C57Bl/6 mice (OlaHsa, ENVIGO, Horst, The Netherlands) were kept in individually ventilated cages with woodchip bedding, cardboard shelters, a wooden nibble stick and tissues provided, on a 12 h day/night cycle (lights on at 7:00 a.m.), in a temperature-controlled room at 20–24 °C and 45–65% humidity with ad libitum food and water access. Mice were bred in-house by placing males and females together in a 1:2 ratio for 10 days. Afterwards, dams were housed solitarily to give birth. The day of birth was considered postnatal day (P)0. Litter size was kept between 6 and 8 pups to ensure adequate feeding of each pup. HI injury (Vannucci-Rice model [37]) was induced in 9-day-old pups by unilateral (right) carotid artery ligation under isoflurane anesthesia (5–10 min; 5% induction, 3–4% maintenance with flow O2:air 1:1), followed by recovery with their mother for at least 75 min and subsequently systemic hypoxia at 10% O2 for 45 min in a temperature-controlled humidified hypoxic incubator. Control animals (sham procedure) were subjected to anesthesia and surgical incision only, without exposure to hypoxia. Xylocaine (#N01BB02, AstraZeneca, Cambridge, UK) and Bupivacaine (#N01BB01, Actavis, Allergan Inc., Dublin, Ireland) were applied to the wound for pre- and post-operative analgesia, respectively. Litters received a turning wheel on a colored plastic shelter as cage enrichment starting from two days post-injury. Mice were treated as described below. Group sizes were determined by a power analysis based on the effect size in previous experiments: an effect size of 1 for lesion size, an alpha of 0.05 with Bonferroni correction for multiple comparisons, and a power of 0.8, resulting in a minimum of 24 animals per group (G*Power 3.1.9, Universität Kiel, Kiel, Germany). The number of animals used in this study is depicted in Table 1. Animals within each litter were randomly assigned to the experimental groups, aiming for an equal male-to-female ratio to the most feasible extent. Mice were weaned after finishing the behavioral tests and housed in same-sex groups per litter. Mice were euthanized at P37 by an overdose of 20% pentobarbital, followed by transcardial perfusion with phosphate buffered saline (PBS, #524650-1, VWR, Radnor, PA, USA) and then 4% paraformaldehyde (PFA, #4078.9020, VWR), and brains were collected for further analysis.
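The sketch below reproduces the stated sample-size calculation under the assumption of an independent two-sample t-test framing; the Bonferroni divisor of four comparisons is an illustrative assumption, since the text does not state the number of comparisons.

```python
# Sketch of the stated power analysis, assuming an independent two-sample
# t-test; the Bonferroni divisor of 4 comparisons is an illustrative
# assumption (the text does not state the number of tests).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.0,   # Cohen's d for lesion size, from prior experiments
    alpha=0.05 / 4,    # Bonferroni-corrected alpha (assumed 4 comparisons)
    power=0.8,
)
print(f"n per group ~ {n_per_group:.1f}")  # ~23-24, consistent with 24/group
```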
Treatment: Nutritional Lysoveta Supplementation and/or MSC Administration. Immediately following systemic hypoxia, mice were orally treated with Lysoveta, a product rich in LPC-n-3 LCPUFA (Table 2, Aker Biomarine Human Health Ingredients AS, Lysaker, Norway), or with the vehicle solution coconut oil (#C1758, Merck KGaA, Saint Louis, MO, USA), rich in saturated short-chain fatty acids (Table 3). Lysoveta is a supplement derived from the oil of Antarctic krill (Euphausia superba) through the enzymatic hydrolysis of phosphatidylcholine, resulting in high levels of LPC-DHA and LPC-EPA. To reduce viscosity and allow for oral gavage, the Lysoveta product was diluted 4× in coconut oil and administered with a sterile plastic feeding tube (22 ga, 38 mm, #FTP22-38, Instech Laboratories, Leipzig-Markkleeberg, Germany) attached to a 25 µL Hamilton syringe (Hamilton Company, Reno, NV, USA) at a dose of 5 µL/g body weight (equal to 1.25 µL pure Lysoveta/g body weight) directly after hypoxia and for 7 consecutive days. This resulted in a daily dosage of 117.5 mg/kg body weight DHA and 210.63 mg/kg body weight EPA, which has been shown to be sufficient to increase DHA or EPA brain incorporation [13,15]. On P12, mice were intranasally treated with 0.5 × 10⁶ MSCs (Thermo Fisher Scientific) or vehicle (D-PBS, D8537, Merck KGaA) per animal by the administration of 3 rounds of 2 µL per nostril. Additionally, 30 min prior to administration, hyaluronidase (100 U, #H4272, Merck KGaA) was administered intranasally to increase the permeability of the connective tissue in the nasal cavity (3 rounds of 2 µL per nostril).

Immunohistochemistry. Collected brains were post-fixed in 4% PFA for 24 h and dehydrated in increasing ethanol concentrations, followed by embedding in paraffin. Coronal sections (8 µm) were cut at the hippocampal level at bregma level −1.70 (in adult mice). Sections were stained with anti-microtubule-associated protein 2 (MAP2) or anti-myelin basic protein (MBP) antibodies to analyze gray or white matter damage, respectively, as a primary outcome measure. All sections were deparaffinized and blocked with 3% H2O2 (#1.072.101.000, VWR) in methanol for 20 min and afterwards hydrated in decreasing ethanol concentrations. Antigen retrieval was performed for 3 min at 95 °C in 10 mM citrate buffer (pH 6.0) for MAP2 staining, or for 15 min in 0.05 M Tris-HCl/0.01 M EDTA (pH 9.0) for MBP staining, followed by blocking with 5% normal horse serum (NHS, #26050088, Invitrogen, Waltham, MA, USA) in PBS or 20% normal goat serum (NGS, X090710-8, Agilent Technologies, Amstelveen, The Netherlands) in 0.025% Triton (X100, Merck KGaA) in PBS for 30 min, respectively. Afterwards, slides were incubated with 1:1000 mouse-anti-MAP2 antibody (#M4403-2 mL, Merck KGaA) in 2% NHS in PBS or 1:2000 rabbit-anti-MBP antibody (#ab218011, Abcam, Cambridge, MA, USA) in 0.025% Triton and 10% NGS in PBS overnight at 4 °C. The following day, slides were incubated with 1:100 horse-anti-mouse biotin antibody (#BA-2000, Vector Laboratories, Newark, CA, USA) in PBS or 1:100 goat-anti-rabbit biotin antibody (#BA1000, Vector Laboratories) in PBS for 45 min at RT. Next, the slides were washed in PBS and incubated for 30 min with AB complex (#PK-4000, Vector Laboratories) in PBS at RT.
Slides were washed briefly in Tris-HCl (pH 7.6) and stained with DAB solution (#D5637-10G, Merck KGaA) until the desired intensity was reached (±5 min). Lastly, sections were dehydrated and embedded with DEPEX (#18243.01, Serva Electrophoresis GmbH, Heidelberg, Germany). Full-section images were captured with a Nikon D1 digital camera (Nikon, Tokyo, Japan). Area measurements (in pixels) were manually performed by a blinded observer using Adobe Photoshop CS6 or Fiji 1.53 (NIH) for MAP2- or MBP-stained brains, respectively. The positively stained area was measured in both the ipsilateral (lesioned) and contralateral (non-lesioned) hemispheres of each brain section. Ipsilateral MAP2+ or MBP+ area loss was calculated as (1 − ipsilateral positive area/contralateral positive area) × 100% to correct for differences in brain size between animals.

Behavioral Testing. For behavioral experiments, the day/night cycle was reversed at P24, and behavioral tasks were performed during the dark phase under red light conditions. Behavior was scored blinded by experienced researchers. Arenas were cleaned with soapy water in between trials. Mice that showed severe repetitive turning behavior after HI were omitted from all behavioral analyses (HI VEH: n = 1, HI MSCs: n = 3, HI Lysoveta + MSCs: n = 4).

Open Field Task. Rodents naturally tend to explore the environment cautiously while avoiding open areas. Mice that express anxiety-like behavior tend to spend more time along the sides of the arena than crossing into the inner zone. The open field task was conducted on P30. Mice were recorded freely roaming in a rectangular plexiglass arena (560 × 330 × 200 mm) for 10 min. Movies were automatically analyzed with EthoVision XT software version 15 (Noldus, Wageningen, The Netherlands) by digitally defining an inner zone starting at 10 cm from the walls. Time spent in the inner zone and total distance moved were measured.
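The lesion quantification defined in the immunohistochemistry section above reduces to a simple ratio; the sketch below implements it, with invented pixel counts for illustration.

```python
# Sketch of the lesion quantification above: ipsilateral loss is expressed
# relative to the contralateral hemisphere to normalize for brain size.
def ipsilateral_area_loss(ipsi_area_px: float, contra_area_px: float) -> float:
    """(1 - ipsilateral/contralateral positive area) * 100%."""
    return (1.0 - ipsi_area_px / contra_area_px) * 100.0

# Example with invented pixel counts:
print(f"{ipsilateral_area_loss(30_000, 50_000):.1f}% MAP2+ area loss")  # 40.0%
```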
Spatial Memory. The novel object recognition task (NORT) is used to test spatial memory and reflects hippocampal and perirhinal cortex function. The NORT was conducted on P34 as previously described [16]. Mice were habituated to a rectangular plexiglass arena (560 × 330 × 200 mm) for 10 min on four consecutive days prior to the test day. On the fifth day, mice were placed in the same arena, which now contained two identical objects: a cylinder made of four stacked blue 50 mL Falcon tube caps (Corning). Mice were left to explore both identical objects for 10 min. Mice were then returned to the home cage for one hour. During the novel object trial, one object was replaced by a novel object, a yellow 8-hole Duplo brick (The Lego Group, Billund, Denmark), and mice freely explored both objects for 10 min while being recorded by a BlackflyS USB3 camera (BFS-U3-04S2C0C, Flir, Wilsonville, OR, USA). The location of the novel object was randomized between animals. Trials in which the total exploration time of both objects was <5 s were considered insufficient and removed from the analysis (HI VEH: n = 1) [38]. Animals that did not participate (e.g., jumped out of the arena) in the first 5 min of the novel object trial were omitted from the analysis (SHAM: n = 8, HI VEH: n = 4, HI Lysoveta: n = 2, HI MSCs: n = 2, HI Lysoveta + MSCs: n = 2). Time spent exploring the objects during the NORT, i.e., orienting the nose toward the object within a 1–2 cm distance, was scored manually by an experienced observer. Novel object preference was calculated as (time spent with novel object/total time spent with both objects) × 100%.

Cylinder Rearing Task. Unilateral sensorimotor impairments were measured in the cylinder rearing task on P37. Mice were placed in a transparent plexiglass cylinder (80 mm diameter, 300 mm height) and video-recorded for 5 min. Mice were omitted from the study when they failed to perform at least 10 rearings in 5 min (SHAM: n = 1, HI Lysoveta: n = 1, HI MSCs: n = 2, HI Lysoveta + MSCs: n = 3). The first weight-bearing forepaw contacting the cylinder wall during a full rear was scored by an experienced observer as left (impaired), right (unimpaired), or both, using Noldus Observer XT 16.0 software (Noldus). Non-impaired forepaw preference was calculated as ((right rearings − left rearings)/(right + left + both rearings)) × 100%.
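The two behavioral indices defined above also reduce to simple ratios; the sketch below implements both, with illustrative counts and times.

```python
# Sketches of the two behavioral indices defined above (inputs invented).
def novel_object_preference(t_novel_s: float, t_familiar_s: float) -> float:
    """Time with the novel object as % of total object exploration."""
    return t_novel_s / (t_novel_s + t_familiar_s) * 100.0

def nonimpaired_forepaw_preference(right: int, left: int, both: int) -> float:
    """((right - left) / all rearings) * 100%; right = unimpaired paw."""
    return (right - left) / (right + left + both) * 100.0

print(f"{novel_object_preference(18.0, 7.0):.0f}% novel preference")      # 72%
print(f"{nonimpaired_forepaw_preference(14, 4, 6):.0f}% paw preference")  # 42%
```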
2.6.1. Oxygen Glucose Deprivation Model. SH-SY5Y cells were plated in a 96-well plate (Thermo Fisher Scientific) at 60,000 cells per well and were allowed to attach overnight. The next day, the culture medium was replaced with DMEM without glucose (#11966025, Merck KGaA) supplemented with 1% P/S and 0 µM, 5 µM, or 10 µM Lysoveta or algae-derived DHA (#D2534, Merck KGaA). Cells were incubated in a humidified hypoxic incubator with 1% O2 and 5% CO2 at 37 °C.

2.6.2. Oxidative Stress Model: H2O2. SH-SY5Y cells were plated in a 96-well plate at 60,000 cells per well and were allowed to attach overnight. The next day, cells were exposed to 60 µM H2O2 or control medium (0 µM H2O2) in combination with 0 µM, 5 µM, or 10 µM Lysoveta or DHA in a humidified incubator with 5% CO2 at 37 °C for 24 h. Control conditions contained DMSO or BSA as a vehicle. Neuronal cell death was assessed with an MTT assay (see Section 2.6.4).

2.6.3. DNA Damage Model: Etoposide. SH-SY5Y cells were plated in a 96-well plate at 60,000 cells per well and were allowed to attach overnight. The next day, cells were exposed to 2.8 µM etoposide (#E1383, Merck KGaA) or control medium in combination with 0 µM, 5 µM, or 10 µM Lysoveta or DHA in a humidified incubator with 5% CO2 at 37 °C for 24 h. Etoposide is a topoisomerase II inhibitor that induces p53-dependent cell death [39]. Control conditions contained DMSO or BSA as a vehicle. Neuronal cell death was assessed with an MTT assay (see Section 2.6.4).

2.6.4. Methylthiazolyldiphenyl-Tetrazolium Bromide (MTT) Assay. After exposure to the respective hit, the medium was carefully replaced with a culture medium containing 0.5 mg/mL MTT (#M2128, Merck KGaA). The MTT solution was incubated for 3 h in a humidified environment with 5% CO2 at 37 °C. After incubation, the medium was carefully removed from the wells and the MTT crystals were dissolved in 100 µL DMSO (#23500260, VWR). Optical density was measured at 570 nm using a spectrophotometer (Thermo Fisher, Multiskan GO). Background absorbance was subtracted from each well. All data points were depicted as values relative to the average of the negative control condition, which was not exposed to the respective hit.

Statistical Analysis. All data were acquired in a blinded manner. Statistical analysis was performed using GraphPad Prism 10 (GraphPad Software, Boston, MA, USA). Outliers were identified by ROUT analysis (Q = 1%). Data were checked for normal (Gaussian) distribution using the Shapiro-Wilk normality test. If data were normally distributed, comparisons of more than two groups were made by one-way ANOVA with Holm-Šidák post hoc tests. For the statistical analysis of in vitro data, one-way ANOVA with Holm-Šidák post hoc tests and a one-way ANOVA-based test for trend were performed. Statistical analysis of body weight was performed with a two-way ANOVA considering treatment and postnatal day as independent variables. If data were not normally distributed, statistical analysis was performed by a non-parametric test with Dunn's post hoc tests. The experimental unit was an animal or a cell culture well. Data are presented as mean ± SEM. Differences of p ≤ 0.05 were considered statistically significant.
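A minimal sketch of the stated decision rule follows (normality check, then a parametric or non-parametric omnibus test); Kruskal-Wallis is assumed as the non-parametric test preceding Dunn's post hoc comparisons, and the group data are simulated for illustration.

```python
# Sketch of the stated decision rule: test normality per group, then run a
# parametric or non-parametric omnibus test. Kruskal-Wallis is assumed as
# the omnibus preceding Dunn's post hoc tests; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, scale=1.0, size=24) for m in (0.0, 0.5, 1.0)]

if all(stats.shapiro(g)[1] > 0.05 for g in groups):  # Shapiro-Wilk p-values
    stat, p = stats.f_oneway(*groups)                # one-way ANOVA
    test = "one-way ANOVA"
else:
    stat, p = stats.kruskal(*groups)                 # non-parametric fallback
    test = "Kruskal-Wallis"
print(f"{test}: p = {p:.4f}")
# Post hoc comparisons (Holm-Sidak or Dunn) would follow, as in the paper.
```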
Results

3.1. Oral Lysoveta Supplementation Reduces HI Brain Injury without Affecting Body Weight

3.1.1. Lysoveta Supplementation Does Not Affect Body Weight of HI-Injured Animals. To assess whether Lysoveta supplementation exerts neuroprotective effects following neonatal HI brain injury, Lysoveta was orally supplemented daily from P9 until P15 and the body weight of mice was monitored throughout the study (Figure 1A). Mouse pups exposed to HI showed a significantly reduced body weight compared to SHAM mice (main effect: p = 0.0096, Figure 1B). Post hoc comparisons revealed differences in body weight between HI animals and SHAM animals from P12 until P23 (P12: p < 0.0001, P15: p < 0.0001, and P23: p = 0.0054). No differences in body weight were found between vehicle-treated and Lysoveta-treated HI mice, indicating no metabolic effects of Lysoveta supplementation compared to vehicle treatment.

3.1.2. Lysoveta Supplementation Reduces Gray and White Matter Loss in HI-Injured Animals. HI animals orally supplemented with vehicle displayed significant MAP2+ area loss in the ipsilateral hemisphere compared to SHAM animals (p < 0.0001, Figure 1C,D), indicating gray matter injury following HI. Lysoveta supplementation significantly decreased ipsilateral MAP2+ area loss in HI animals compared to vehicle treatment (p = 0.0444). Similarly, significant MBP+ area loss in the ipsilateral hemisphere was observed in HI-injured animals compared to SHAM animals (p < 0.0001, Figure 1C,E), indicating white matter injury following HI. Lysoveta treatment significantly reduced ipsilateral MBP+ area loss in HI-injured animals compared to vehicle treatment (p = 0.0299). Therefore, Lysoveta supplementation for seven days following HI demonstrated neuroprotective effects, as evidenced by a reduction in gray and white matter loss, without being confounded by general caloric effects.

Oral Lysoveta Supplementation Does Not Improve Functional Outcomes after HI Injury. To assess the effects of Lysoveta supplementation on functional outcomes, at 3–4 weeks after HI induction mice were subjected to different behavioral tests assessing anxiety-like behavior, spatial memory, and sensorimotor behavior (Figure 1A).

Lysoveta Supplementation Does Not Reduce Anxiety-Like Behavior after HI Injury. In the open field task, anxiety-like behavior and locomotor activity were assessed. HI-injured animals had similar movement velocities as SHAM animals, indicating that general locomotor activity was not affected by HI (SHAM vs. vehicle treatment: p = 0.9840, Figure 2A). HI animals orally supplemented with the vehicle displayed anxiety-like behavior, as they spent significantly less time in the inner zone of the arena than SHAM mice (p = 0.0032, Figure 2B). Oral Lysoveta supplementation did not improve the anxiety-like behavior of HI-injured animals compared to vehicle treatment (p > 0.9999).

Lysoveta Supplementation Does Not Ameliorate Spatial Memory Impairment after HI Injury. The novel object recognition task was performed to assess spatial memory. As expected, SHAM animals spent more time (72%) with the novel object than with the familiar object (Figure 2C). HI-injured animals showed a significantly reduced preference for the novel object, indicating impaired spatial memory (SHAM vs. HI vehicle treatment: p = 0.0073). No benefit of Lysoveta supplementation on spatial memory functioning was found in HI-injured animals compared to vehicle treatment (p = 0.3798).
Lysoveta Supplementation Does Not Reduce Sensorimotor Impairment after HI Injury. Unilateral sensorimotor impairment was assessed in the cylinder rearing task. HI-injured animals showed a significantly higher preference for using their non-impaired forepaw compared to SHAM animals (p = 0.0120, Figure 2D), indicating sensorimotor impairment following HI injury. Similar impairment was found in HI animals supplemented with Lysoveta (p = 0.8593 versus HI vehicle).

Lysoveta Supplementation Protects Neurons against Oxygen Glucose Deprivation by Reducing Injury from Oxidative Stress. To explain how Lysoveta exerts neuroprotective effects in vivo, we assessed the effects of Lysoveta on different types of neuronal injury in vitro. Firstly, oxygen glucose deprivation (OGD) significantly reduced cell viability by approximately 32% compared to control (non-OGD) cells (p < 0.0001, Figure 3A). The addition of Lysoveta at a concentration of 5 or 10 µM (LPC-bound) DHA dose-dependently (trend analysis: p = 0.0142) partially protected neurons against OGD-induced cell death compared to vehicle treatment (5 µM: p = 0.0182, 10 µM: p = 0.0057). At similar dosages, DHA alone also improved cell viability dose-dependently (trend analysis: p = 0.0072) after OGD compared to vehicle treatment (5 µM: p = 0.0195, 10 µM: p = 0.0082, Figure 3B). To further examine whether Lysoveta protects neurons against oxidative stress or DNA damage, neuronal cultures were exposed to H2O2 or etoposide, respectively. An oxidative stress hit with H2O2 significantly reduced cell viability compared to 0 µM H2O2 (p = 0.0017, Figure 3C). The addition of Lysoveta dose-dependently protected against H2O2-induced cell death (trend analysis: p = 0.0040), and at a concentration containing 10 µM (LPC-bound) DHA it significantly protected neurons against H2O2-induced cell death compared to vehicle treatment (p = 0.0131). Similarly, DHA dose-dependently protected against H2O2-induced cell death (trend analysis: p = 0.0029), and 10 µM DHA significantly protected neurons against oxidative stress (p = 0.0017, Figure 3D). Etoposide significantly reduced cell viability compared to non-exposed cells (p < 0.0001, Figure 3E). Lysoveta at a dose of 5 µM (LPC-bound) DHA did not protect neurons against DNA damage induced by etoposide (p = 0.4635), and 10 µM Lysoveta significantly worsened cell viability compared to the vehicle-treated condition (p = 0.0032). Additionally, DHA alone, like Lysoveta, did not protect neurons against an etoposide hit (Figure 3F). Conclusively, Lysoveta protected neurons specifically in an oxygen glucose deprivation model and against oxidative stress in a H2O2 hit model but did not protect neurons against DNA damage-induced cell death. As DHA alone evoked similar patterns of neuroprotection, DHA is likely partially responsible for the neuroprotective effects of Lysoveta against oxidative stress injury.
The Combination of Oral Lysoveta Supplementation and Intranasal MSC Therapy Does Not Improve Gray or White Matter Loss Compared to MSC Therapy Alone. To assess the potency of oral Lysoveta to enhance the regenerative efficacy of intranasal MSC therapy, HI-injured mice were treated with single or combination therapies and the anatomical outcome was assessed by measuring ipsilateral gray and white matter loss (Figure 4A). HI-injured animals showed significant gray and white matter loss in the ipsilateral hemisphere compared to SHAM animals (p < 0.0001, Figure 4B-D). Intranasal MSC therapy alone and the combination therapy of MSCs and Lysoveta supplementation significantly reduced MAP2+ area loss in HI-injured animals compared to vehicle treatment (p = 0.0372) (Figure 4C). However, Lysoveta supplementation did not further reduce gray matter lesion size in MSC-treated animals compared to MSC therapy alone (p = 0.9306). Neither intranasal MSC therapy alone nor combination therapy with Lysoveta supplementation significantly reduced MBP+ area loss in HI-injured animals compared to vehicle treatment (VEH vs. MSCs: p = 0.4218, VEH vs. Lysoveta + MSCs: p = 0.0739, Figure 4D). In sum, although Lysoveta supplementation alone is effective in reducing lesion size (Figure 1), the addition of Lysoveta supplementation to MSC therapy did not increase the regenerative potential of intranasal MSC therapy to reduce lesion size following HI brain injury.

Discussion. This study was conducted to assess the efficacy of Lysoveta, a product high in LPC-DHA and LPC-EPA, in a mouse model of neonatal HI brain injury and in in vitro neuronal injury paradigms. In accordance with our hypothesis, oral Lysoveta supplementation provided protection against neuronal damage and myelin loss, thereby reducing lesion size. Importantly, no significant weight changes were observed upon Lysoveta supplementation in HI-injured animals, indicating that the therapeutic effects were likely due to specific mechanisms of Lysoveta rather than general metabolic changes related to the supplementation of energy-rich lipids.
Although Lysoveta supplementation significantly reduced lesion size following HI, no functional improvements were observed in the tested behavioral paradigms upon short-term (7 days) oral Lysoveta supplementation. When we selectively examined the brain areas related to the behavioral tasks, such as the hippocampus for the novel object recognition task and the sensorimotor cortex for the cylinder rearing task [16,40], we did not find significant anatomical improvements in the lesion in these areas separately. This indicates that the discrepancy between anatomical and functional recovery in this study may be related to Lysoveta targeting other brain areas unrelated to these behavioral tasks. Moreover, to assess sensorimotor behavior we performed the cylinder rearing task, which is widely used in the field of HI brain injury research [41,42] but might not reveal subtle motor improvement in the animals treated with Lysoveta supplementation. Therefore, we suggest that future research should include other commonly used motor tasks, such as the rotarod task, to further assess potential subtle improvements from Lysoveta supplementation [41].

Previous studies using animal models of neonatal HI brain injury showed functional improvements from intraperitoneal administration of 1 to 375 mg/kg DHA [19,20,23]. In the current study, we administered 117.5 mg/kg body weight LPC-bound DHA and 210.63 mg/kg body weight LPC-bound EPA in Lysoveta daily. Although these levels are higher than those previously found sufficient to enhance brain incorporation [13,15], and were sufficient to reduce lesion size in the current study, they might still have been insufficient to support functional recovery. Alternatively, the timing of intragastric LPC-n-3 LCPUFA administration immediately after HI may have been suboptimal for functional recovery. Blood and brain n-3 LCPUFA levels were shown to increase within 4–12 h after oral administration of krill oil to rats [43], indicating that the dietary LPC-n-3 LCPUFAs in our study should have reached the brain within the established neuroprotective treatment window of 6 h post-insult, but perhaps not yet in sufficient amounts [42,43]. Furthermore, a study by Huang et al. indicated that long-term, but not short-term, dietary supplementation with DHA reduced functional impairment in a spinal cord injury mouse model [44]. Combining dietary DHA supplementation with acute intravenous DHA further improved functional outcome compared to acute intravenous treatment alone [44], indicating that dietary intervention may be more effective in later repair mechanisms, extending beyond the 7 days of treatment applied in this study. Additionally, a more complex supplementation of multiple nutritional components might be more effective, as not only DHA and EPA are needed for phospholipid production; zinc and vitamin B12 may also be essential contributors to functional repair [16].

Lysoveta, rich in LPC-DHA and LPC-EPA, may have additional benefits compared to non-complexed DHA or EPA supplements. LPC-DHA and LPC-EPA have been shown to significantly increase brain DHA and EPA levels and improve memory function in adult mice compared to free DHA or TAG-DHA [13][14][15]. The enhanced uptake of LPC-DHA or LPC-EPA is mediated by the Mfsd2a transporter, which selectively transports LPC-esterified n-3 LCPUFAs across the blood-brain barrier and strongly regulates lipogenesis and proper brain development [12,45]. A study by Li et al.
showed that dietary supplementation with LPC-DHA upregulated Mfsd2a expression and had a protective effect on the blood-brain barrier after HI brain injury [46]. A direct back-to-back comparison of LPC-n-3 LCPUFAs and free n-3 LCPUFA treatment in animal models of neonatal HI brain injury is needed in order to determine whether the administration of LPC-n-3 LCPUFAs indeed shows increased therapeutic efficacy.

Lysoveta contains both LPC-bound DHA and EPA. Recent research has stressed the importance of the EPA/DHA ratio during supplementation by suggesting that EPA and DHA compete for occupancy in the phospholipid membrane pool and exert different effects on plasma membrane biophysical structure, thereby impacting cell signaling [47]. Studies of HI injury that examined both DHA and EPA separately indicated that only DHA supplementation reduced lesion size, decreased oxidative damage, and improved neurological outcomes [21,23]. These studies indicate that DHA may be the primary contributor to the beneficial effects observed in our study. Indeed, in this study, we showed in in vitro models of neuronal damage that DHA alone exerts similar effects as Lysoveta, suggesting that DHA is at least one of its neuroprotective components. However, it is important to note that LPC-EPA supplementation was shown to increase both EPA and DHA levels in the brain [15], indicating that both forms of LPC-bound n-3 LCPUFAs can enhance DHA incorporation in the brain.

Our study provides evidence suggesting that Lysoveta protects against neuronal damage by reducing oxidative stress-induced injury. During the first 24 h after the HI insult, oxidative stress is an important mediator that leads to mitochondrial degradation and eventually apoptosis [2]. As blood and brain LCPUFA levels increase within 4-12 h after oral administration of krill oil [43], Lysoveta may indeed have had direct effects on oxidative stress in vivo in the current study. Indeed, intravenous administration of DHA 3.5 h after HI has been shown to reduce lipid peroxidation biomarkers in the cortex and hippocampus of piglets [48]. Additionally, tri-DHA administration within the first hour after HI resulted in the preservation of mitochondrial Ca2+ buffering capacity, leading to reduced oxidative injury following HI in mice [23]. Similarly, our in vitro results indicate that Lysoveta rescues neuronal cells specifically against oxidative stress-induced hits (oxygen radicals and oxygen glucose deprivation). Whether Lysoveta also reduces oxidative stress in vivo should be assessed in future short-term follow-up experiments. Additionally, other neuroprotective mechanisms of Lysoveta should be considered, including the restoration of depleted DHA levels following HI [16] and support of neurogenesis [49,50]. Importantly, the inhibition of neuroinflammation by Lysoveta should be assessed in vivo [16] and in vitro by examining the effect of Lysoveta on challenged astrocytes or microglia [51].
This study also assessed the potential additive benefit of Lysoveta supplementation on the efficacy of intranasal MSC therapy for the treatment of neonatal HI brain injury. Consistent with previous work, MSCs alone reduced neuronal tissue loss in HI-injured mice [35]. However, inconsistent with previous work, myelin tissue loss was not reduced by MSCs alone [35], which could be due to oral gavage with the vehicle solution coconut oil, which contains oleic acid (highly present in the myelin sheath [52]), or a gavage-induced stress response that may interfere with the regenerative effect of MSCs [53]. The results of the current study did not support our hypothesis that Lysoveta supplementation would enhance the therapeutic efficacy of MSC therapy. Upon intranasal administration, MSCs respond to chemotactic signals released in the HI-lesioned tissue [24,54,55]. Possibly, Lysoveta supplementation alters the HI brain milieu to such an extent that MSCs might not be able to optimally migrate into the brain, potentially hampering additive benefits of early Lysoveta supplementation on MSC therapy [30]. Moreover, the treatment duration of Lysoveta may need to be prolonged to truly target repair and neurogenesis and complement the regenerative effects of MSC therapy [56]. Another potential opportunity to enhance MSC efficacy is by preconditioning MSCs with n-3 LCPUFAs prior to administration. Preconditioning MSCs with DHA has been shown to alter the content of the MSC secretome (e.g., containing oxylipins) and thereby impact the immunomodulatory properties of the MSCs [57,58]. The preconditioning of MSCs with EPA enhances the pro-resolution and anti-inflammatory effects in a mouse model of allergic asthma [59]. Conclusively, to maximize the effects of Lysoveta supplementation on MSC therapy in future studies, emphasis should be put on optimizing the timing of Lysoveta supplementation and exploring new methods to combine MSCs and the Lysoveta supplement.

Conclusions

In conclusion, our current investigation highlights the potential of early-life nutritional supplementation with LPC-bound n-3 LCPUFA from Lysoveta as a promising strategy to reduce neonatal HI brain injury. Short-term Lysoveta supplementation protects against gray and white matter injury but does not ameliorate the investigated functional deficits later in life in this neonatal HI animal model. The protective effects may be attributed to the antioxidant effects of the Lysoveta supplement. Lysoveta supplementation at the dose and timing used in this study did not enhance the treatment efficacy of intranasal MSC therapy. Future preclinical studies should optimize the treatment dose, timing, and duration of LPC-DHA and LPC-EPA with the aim of obtaining the most beneficial treatment regime for the improvement of both anatomical and functional outcomes following neonatal HI brain injury.

Figure 1. Lysoveta supplementation reduces gray and white matter loss in HI-injured animals. (A) Experimental setup; P: postnatal day. (B) Body weight during the experiment in grams of SHAM,

Figure 2.
Oral supplementation with Lysoveta does not improve anxiety-like behavior, spatial memory, and sensorimotor impairments in HI-injured mice. (A) Quantification of the average velocity in cm/second in the open field of SHAM-operated animals and HI-injured animals treated with vehicle (VEH) or Lysoveta; SHAM: n = 27, HI VEH: n = 25, HI Lysoveta: n = 26. (B) Quantification of the total time spent in the inner zone of the open field arena; SHAM: n = 28, HI VEH: n = 25, HI Lysoveta: n = 27. (C) Quantification of the novel object preference of SHAM-operated animals and HI-injured animals treated with vehicle or Lysoveta. Dotted line at 50% chance level; SHAM: n = 19, HI VEH: n = 20, HI Lysoveta: n = 25. (D) Quantification of the non-impaired forepaw preference of SHAM-operated animals and HI-injured animals treated with vehicle or Lysoveta in the cylinder rearing task; SHAM: n = 27, HI VEH: n = 25, HI Lysoveta: n = 26. * p < 0.05, ** p < 0.01, ns = no significant difference. Data are presented as mean + SEM.

Figure 3.
Lysoveta and DHA treatment partially protect SH-SY-5Y cells against oxygen glucose deprivation (OGD) and oxidative stress. (A) Quantification of the relative cell viability of SH-SY-5Y cells exposed to oxygen glucose deprivation (OGD) with 0, 5, or 10 µM (LPC-bound) DHA in Lysoveta compared to the non-OGD control condition; this experiment was performed 8-fold and repeated twice. (B) Quantification of the relative cell viability of SH-SY-5Y cells exposed to OGD with 0, 5, or 10 µM DHA compared to the non-OGD control condition; this experiment was performed 8-fold and repeated three times. (C) Quantification of the relative cell viability of SH-SY-5Y cells exposed to 60 µM H2O2 with 0, 5, or 10 µM (LPC-bound) DHA in Lysoveta compared to the no-H2O2 control condition; this experiment was performed 6-fold and repeated three times. (D) Quantification of the relative cell viability of SH-SY-5Y cells exposed to 60 µM H2O2 with 0, 5, or 10 µM DHA compared to the no-H2O2 control condition; this experiment was performed 6-fold and repeated three times. (E) Quantification of the relative cell viability of SH-SY-5Y cells after exposure to 2.8 µM etoposide with 0, 5, or 10 µM (LPC-bound) DHA in Lysoveta compared to the no-etoposide control condition; this experiment was performed 6-fold and repeated three times. (F) Quantification of the relative cell viability of SH-SY-5Y cells after exposure to 2.8 µM etoposide with 0, 5, or 10 µM DHA compared to the no-etoposide control condition; this experiment was performed 6-fold and repeated three times. * p < 0.05, ** p < 0.01, **** p < 0.0001, ns: no significant difference. Data are presented as mean + SEM.

Table 2. Fatty acid profile of the nutritional supplementation product, Lysoveta, used in this study.

Table 3. Fatty acid profile of coconut oil used as a vehicle in this study.
The Connotation, Objective, and System Construction of the Classified Cultivation of Undergraduates

Classified cultivation is a necessary task and choice under the popularization of higher education. This paper adopts normative analysis, empirical analysis, expert interviews and field investigation to analyze the connotation of the classified training mode of undergraduate talents, and proposes talent classification standards and bases as well as the core task of classified cultivation. It targets four talent types: practical application, innovative research, interdisciplinary and comprehensive, and innovation and entrepreneurship. It attempts to build a classified training system including the subject of education, curriculum system, teaching team, management system and other elements, in order to enrich the research results in this field, explore the reform and innovation of talent training methods, and promote the innovative development of the talent training mode in colleges and universities.

Keywords—classified education; employment orientation; differentiation; curriculum system; practical teaching

I. INTRODUCTION

Under the popularization of higher education, the scale of undergraduate enrollment keeps expanding, resulting in a widening gap in the cultural foundation, academic performance, learning ability, emotional attitudes, values and career expectations of students in the same university, school, grade and even the same class. At the same time, the singularity of existing undergraduate training objectives and the assimilation tendency of training methods are very prominent, which have seriously restricted the independent and characteristic development of the subject of education. Specifically, universities respond to diversified and differentiated needs with a unified teaching plan and method and a uniform evaluation system and standard, which greatly inhibits students' consciousness and initiative in learning and takes a toll on their adaptability and employability in society. This contradiction will inevitably lead to the reform of the undergraduate training mode in the aspects of training objectives, course teaching, quality evaluation, etc., which provides an opportunity and a realistic basis for the exploration of a classified talent training mode. The classified cultivation of undergraduates is an innovative mode, which has become an objective requirement of professional development in colleges and universities. It is currently being explored and trialled in engineering specialties in a few colleges and universities. For example, Fuzhou University implements a diversified personnel training scheme and explores a pluralistic education mode inside and outside the school. It has successfully explored a new kind of joint engineering talent training mode, the "Zijin model", in which enterprises support school construction, participate in the educational process, and inspect the results. Chengdu University proposes constantly refining the connotation of talents and constructing a classified training system based on social needs and professional characteristics, and tries to construct diversified talent training modes. However, many research results remain at the stage of theoretical exploration, existing only as ideas. At present, domestic theoretical achievements mainly concern the connotation, types and target orientation of classified cultivation, while research on the selection of cultivation paths and the construction of cultivation systems still needs to be further explored.
The current theories of classified cultivation also need to be modified and improved in the process of theoretical research and practical exploration.

A. Taking career development and social needs as guidance

The connotation of the classified training of professional talents lies in establishing a market demand orientation, subdividing and integrating all kinds of social needs, respecting the diversified needs of professional development, industry development and emerging posts for talents, and valuing students' autonomy and differentiated choices. It can be said that no institution or specialty can bypass market value orientation and consumer identity. Market-oriented education represents the inherent attribute and value orientation of higher education. Whether the educated can adapt to society and obtain a higher social evaluation is the ultimate measure of the success or failure of talent training. Classified training is directly oriented to social needs, and the promotion of the social adaptability and competitiveness of the subject of education is the prerequisite for the popularization of higher education and the rational choice for mass education as it shifts from scale expansion to quality improvement.

B. Taking the cultivation of employment ability as the core

The concept of employability originated in the 1950s and can be summarized as the ability of educated people to obtain and compete for jobs. It is essentially a kind of employment competitiveness. It covers four levels of content, namely job-hunting ability, basic vocational ability, vocational adaptability and sustainable development ability. It represents the social awareness of universities and their majors. It is not only an important yardstick to measure the level of professional development, but also a decisive factor in the survival and development of colleges and universities. Employability training has become the core content of university education and professional development. The classified training mode is an effective way to improve employment ability: from the establishment of training objectives to the construction of the training system to the operation of the training plan, every link of classified training is centered on the promotion of employment ability.

C. Taking the cultivation of diverse and multi-specified talents as a task

Higher education can be seen as a process of producing products. Universities are the suppliers of these products, while demand comes from three main bodies: learners, society and disciplines. Meeting these three needs is the realistic task of undergraduate talent training. For learners, demand will inevitably be diversified, given differences in interests, career preferences, social cognition and career expectations; for society, with the adjustment of the industrial structure and changes in job demand, various fields of society will inevitably put forward multi-class and multi-level requirements for undergraduate talents; for disciplines, demand mainly comprises theoretical research needs, applied research needs and educational talent needs. It is the realistic task of college undergraduate education to carry out classified training and to coordinate and integrate the diversified needs of the three parties.
A. Trade practical type

The goal of personnel training for this type can be expressed as the systematic mastery of basic professional knowledge and related theories, the skilled use of professional skills, applied technology and modern information and communication technology, and competence in work requiring professional expertise and skills. The training of applied talents should focus on two aspects. First, we should adhere to the combination of academic education and vocational ability improvement and attach importance to the training and examination of professional qualifications, so as to enhance the employment ability and market competitiveness of graduates of the major. Second, we should strengthen the practical teaching system: standardize the practical teaching links; expand practice training bases; innovate modes of cooperation between production, teaching and research; enrich practical teaching methods; improve the teaching effect; and enhance students' practical ability.

B. Innovative research type

The training objective is to produce innovative research talents with a solid theoretical foundation and systematic professional knowledge, a strong scientific research consciousness and a preliminary ability to engage in research work, who are well prepared to receive graduate education, competent for theoretical and applied research posts in research institutions and enterprise research departments, and capable of professional teaching work in colleges and universities. A solid theoretical basis and basic scientific literacy are the keys to the training of such talents. The training should focus on two aspects. The first is the academic and cutting-edge nature of the curriculum. The curriculum content should reflect the major achievements, frontier theories, development directions and trends of the discipline in its field and in production, laying a solid theoretical foundation for further study and theoretical innovation. The second is to highlight the research-oriented and enlightening character of teaching methods, focusing on the training of students' problem consciousness, innovative thinking and scientific research methods, and cultivating students' habits of independent thinking and self-learning.

C. Interdisciplinary and comprehensive type

Interdisciplinary and comprehensive talents are required to have a good professional background; familiarity with and mastery of basic theoretical knowledge of economics, politics, management, foreign languages and computers; good communication and coordination skills; and competence in management, research, publicity and planning work in government, enterprises, institutions and other units. The cultivation of this kind of talent requires, on the one hand, constructing a teaching platform of quality development courses, setting up one or more groups of related professional courses according to specialty characteristics and training objectives for students to choose independently, and incorporating them into students' grades and credit management; on the other hand, it requires guiding students to choose courses across majors and disciplines, and realizing the intersection of majors and disciplines in the form of "two majors, two degrees, and major and minor subjects", hence improving their comprehensive quality and enhancing their employment advantages.
D. Innovation and entrepreneurship type

Entrepreneurial talents must have innovative thinking, innovative consciousness, a well-rounded knowledge structure and strong pioneering ability, and be able to use what they have learned to achieve independent entrepreneurship individually or in teams. The quality of entrepreneurship has been called "the third education passport", and entrepreneurship education is the sublimation of the university's educational philosophy and goals. We should integrate innovation and entrepreneurship education into the talent training plan and curriculum system; build a multi-level, three-dimensional innovation and entrepreneurship training platform; cultivate students' innovative thinking and habits; stimulate students' enthusiasm for entrepreneurship and enhance entrepreneurial ability through carriers such as business plan competitions, business project contests, ERP sand table simulation confrontations and entrepreneurship salon activities; and provide a realistic platform for innovation and entrepreneurship activities, accelerating the cultivation and incubation of students' entrepreneurship projects, driven by school entrepreneurship incubation bases and national, provincial and municipal three-level entrepreneurship incubation funds.

IV. CONSTRUCTION OF CLASSIFIED CULTIVATION SYSTEM

The classified training system is a comprehensive system consisting of educational subjects, training objectives, the teaching system and institutional factors, which promote and depend on each other. It is characterized by integrity, orderliness and dynamic development. The external environment, including social needs, school-running orientation and regional characteristics, is what the talent training system depends on for survival. Divorced from the reality of colleges and universities, the goal of classified training can only "look beautiful", and it is difficult to embody distinctive features of talent training without regional characteristics. The classified training system must reflect changes in the external environment in time, and actively contribute to the improvement of the external environment, so as to finally achieve a good interaction between talent training and the external environment.

A. Core level - education subject

The subject of education is at the core level of the talent training system. The establishment of the training mode, curriculum system and teaching management system must revolve around this central goal. The classification of talents cannot be decided by the educators, but by the learners themselves. In other words, the decisive factor in the talent training mode is not the orientation and subject level of universities, but the subjective needs of learners. The school-running mode is not the same as the training mode, discipline development cannot replace professional construction, and the viewpoint that the academic type equals the social elite also needs to be discarded. Otherwise, there will be a deviation between discipline development and professional development: the discipline level may be very high while the level of professional construction does not match it, and the employment quality of graduates is worrying. The differences and diversified needs of the educated subjects are the direct basis for the classified training of talents. Without this point, our education will not be able to cater to the students, and the students we train will not be able to adapt to society.
B. Intermediate level - safeguard measures

1) Construction of the teaching staff team

Promoting the coordinated development of teachers alongside the classified training of students is an important support for realizing diversified talent training. According to the requirements of the various training objectives, team building can be divided into three directions: a double-qualified teaching team, a research-based teaching team and an innovation and entrepreneurship mentor team. (1) Double-qualified teaching team. Teachers should have a solid theoretical basis and rich teaching experience, and should have received training in relevant industries or have working experience in industry. Double-qualified teachers mainly solve the problem of "how to do". They play the roles of preacher, instructor and puzzle-interpreter. They aim at cultivating students' professional consciousness and practical ability and design curriculum content according to actual production processes. They do not pursue the profundity of professional theory, but seek to realize knowledge dissemination, technology impartation and experience transfer. (2) Research-based teaching team. A research-based teaching team not only requires teachers to have solid and profound theoretical literacy, but also makes special demands on teachers' innovative thinking and scientific research ability. The teaching emphasizes the integrity of the knowledge system, and teaching content should be organized and implemented according to the inherent logic of disciplinary knowledge. Teachers should not be satisfied with the transfer of knowledge, but should pay more attention to the cultivation of students' innovative thinking and scientific research consciousness, act as the designers, organizers and guides of teaching activities, encourage students to participate in academic discussions and research projects, and enable them to grasp basic research methods, hence cultivating their independent thinking habits and their questioning and critical spirit. (3) Innovation and entrepreneurship mentor team. Members are mainly composed of full-time teachers, professional teachers, psychological consultants and elites in industry. The team highlights the combination of full-time and part-time work and the combination of managers and professional teachers, and completes its guidance work in a face-to-face and personal manner. There is no doubt about the significance of building a team of entrepreneurship mentors, but such teams face four major obstacles: lack of resources, lack of motivation, lack of experience and lack of institutional mechanisms. It is urgent to explore and properly solve these problems by strengthening the training, selection and introduction of entrepreneurship mentors, securing entrepreneurship funds and venues, and incorporating guidance performance into job evaluation and employment.

2) Construction of the course system

The curriculum system under the classified training mode is mainly based on the needs of society, students and discipline development. It must be open, targeted and regionally based in order to achieve specific training objectives by arranging and combining the elements of curriculum structure, proportion, and in-class and out-of-class hours.
Openness means that the curriculum system must be characterized by dynamic development: it must accurately grasp changes in the employment market and social needs through a stable feedback mechanism of social evaluation and a dynamic, orderly self-regulation mechanism, and adjust, enrich, update and improve itself in a timely manner. Pertinence means that the curriculum system can meet the needs of specific training objectives: a group of courses should correspond to and support the training requirements of a specific type of talent, and every detail, such as course attributes, class hours, teaching contents and teaching methods, should match the characteristics of the specific target group. Regional characteristics refer to the regional features of resources, industries, nationalities and geography, whose role in professional development and curriculum system construction cannot be ignored. Only by building the curriculum system on regional characteristics can distinct professional characteristics be formed.

3) Construction of the practical teaching system

The practical teaching system is the sum of a series of teaching activities based on certain training objectives and organized around the cultivation of practical ability. Specifically, it includes the target system, content system, and guarantee and management system of practical teaching. To strengthen the classified training of talents, we must embody the diversity and multiple levels of teaching objectives and build a three-level practical teaching platform around the cultivation of basic practical ability, comprehensive practical ability and innovative practical ability; standardize teaching content, improve training programs and teaching plans, and build a comprehensive, three-dimensional practical teaching network featuring "school-enterprise combination plus in-class and out-of-class integration plus examination and qualification training combination plus simulation and full-fledged integration plus a double-qualified team"; and strengthen the construction of the assessment and evaluation system for practical teaching quality, providing strong support and an institutional guarantee for the standardization and orderliness of practical teaching. In this way we can realize "doing in learning, learning in doing, doing in innovation", build students' professional ability and employment advantages, and offer a solid guarantee for the implementation of the talent classification training model.

4) Construction of the employment guidance service system

Strengthening employment guidance and providing students with all-round, systematic services from enrollment to graduation are the key links in implementing classified training and improving students' employment ability. A complete employment guidance system is a comprehensive system composed of scientific career guidance courses, specialized guidance institutions and professional teachers. To cultivate students' healthy employment concepts through career guidance courses, expert lectures, psychological counseling and discussion, we should guide students to strengthen self-awareness and avoid the herding effect in career choices, assist students with career design, and impart career selection skills through theoretical lectures and simulated interviews, including writing job-hunting materials, psychological adjustment for job-hunting, and the correct signing of contracts.
We should build a three-dimensional information network; mine, collect and screen employment information; eliminate backward and outdated information; block the spread of false information; and do a good job in the analysis and prediction of the employment situation, enabling students to understand employment policy, industry dynamics and job requirements, avoid all kinds of job-hunting traps (such as financial fraud, romance fraud, and contract and intellectual property traps), better capture employment opportunities and improve the quality of their employment.

C. Outer layer - institutional factors

Institutional factors are the baton that directs teaching and learning activities in colleges and universities, as well as an important guarantee for the realization of teaching objectives. To implement the classified training mode well, a flexible teaching management system, a diversified teaching quality evaluation system and a teacher classified management system should be matched to it.

1) Flexible teaching management system

Flexible management emphasizes students' self-learning, self-development and self-improvement, which coincides with the design concept of talent classification and training. (1) Class system and learning team system in parallel. The class system is a traditional education management model, which has its place and advantages, but it cannot meet the specific requirements of classified training. The learning team can be positioned as a learning organization with teachers' guidance and students' free participation. Under the background of classified training, we should advocate team learning and knowledge sharing, strengthen team building and management in classroom teaching, practical teaching and graduation design (thesis) links, and enhance the comprehensive evaluation of team members' cooperative ability. (2) Deepen the reform of the credit system. Implementing a flexible schooling system, work-study alternation and the mutual recognition of credits among colleges and universities are the key objectives of deepening the reform of the credit system. Perfecting a teaching management mode suitable for the credit system and realizing the linkage of the credit system, tutorial system and learning team system are feasible ways to realize this reform task. (3) Undergraduate tutorial system. The tutorial system for undergraduates is indispensable for the development of students' autonomy and individuality. The function of tutors mainly lies in ideological guidance, curriculum guidance, life guidance and talent guidance. It is implemented in many ways, such as "one-to-one", "one-to-many", "many-to-one" and "team-to-team", and runs through the whole process from enrollment to employment. In the specific implementation process, we should scientifically and reasonably position the guidance, define tutors' responsibilities and powers, innovate the mechanisms for selecting and encouraging tutors and evaluating the effect of guidance, break through bottleneck constraints such as an excessive student-teacher ratio, insufficient tutor motivation, and unclear objectives and responsibilities, and promote the standardization and institutionalization of the undergraduate tutorial system. (4) The two-track system of professional course teaching. "Dual-track teaching" can be understood as follows: in the course group of "professional core and major professional courses", classes are held in parallel according to an "application orientation" and a "research orientation".
Two groups of teachers teach separately, students choose courses freely, and the learning team is taken as the teaching object, realizing "different learning in the same class" and "different classes in the same learning". The credit system allows students to choose courses freely, but not the teaching methods, means and evaluation methods of those courses. The purpose and pertinence of dual-track teaching are therefore stronger, which can better meet the requirements of students' differentiated development and implement the demands of classified training.

2) Monitoring and evaluation system of diversified quality

Diversified quality evaluation is embodied in the diversification of evaluation subjects, evaluation criteria and evaluation methods. Diversification of evaluation subjects means that professional teachers, students, parents and employers jointly complete the evaluation of course learning effects and professional training quality. We should set up diversified quality evaluation criteria compatible with the application-oriented, research-oriented, compound-oriented and innovation and entrepreneurship-oriented types, reflecting the requirements of multi-type training objectives. The diversification of evaluation methods can be realized by combining process evaluation with intensive evaluation, individual evaluation with team evaluation, and written with oral examination, so as to encourage independent thinking and make examination a continuous learning process.

3) Teachers' classified management and assessment system

The classified management of teachers is unavoidable in the process of classified talent training, and has been gradually carried out in colleges and universities all over the country. It is feasible to classify teachers into three categories (teaching-oriented, teaching research-oriented, scientific research-oriented), four categories (teaching-oriented, teaching research-oriented, research teaching-oriented, scientific research-oriented) or by other classification methods, but the classification must be in line with the objective reality of schools, majors and students, taking the classified training of talents as the central task.

V. CONCLUSION

Classified cultivation is an all-round, integrative teaching reform carried out under the popularization of higher education. It aims to respond to students' autonomy and to differentiated, diversified social needs with multi-objective, multi-level, diversified and multi-type teaching methods. The classified training system subdivides talents into four categories: trade practical, innovative research, interdisciplinary and comprehensive, and innovation and entrepreneurship. It adheres to vocational and social demand, takes the cultivation of students' employability as the standard, and takes the training of diversified talents as its realistic task. In carrying out the training reform, we must focus on curriculum system reform, practical teaching reform, and reform of the teaching mode and quality evaluation; strengthen the construction of the teaching team, scientific research team, and innovation and entrepreneurship mentor team; and realize the combination of school and enterprise, of in-class and out-of-class learning, and of academic education and training. The classified training mode centers on "who is to be cultivated" and "how to cultivate".
It is the inheritance and innovation of the original training mode, aiming to enhance students' learning motivation, career choice ability and sustainable development ability.
Non-metal-templated approaches to bis(borane) derivatives of macrocyclic dibridgehead diphosphines via alkene metathesis

Two routes to the title compounds are evaluated. First, a ca. 0.01 M CH2Cl2 solution of H3B·P((CH2)6CH=CH2)3 (1·BH3) is treated with 5 mol % of Grubbs' first generation catalyst (0 °C to reflux), followed by H2 (5 bar) and Wilkinson's catalyst (55 °C). Column chromatography affords H3B·P(n-C8H17)3 (1%), H3B·P((CH2)13CH2)(n-C8H17) (8%; see text for tie bars that indicate additional phosphorus–carbon linkages, which are coded in the abstract with italics), H3B·P((CH2)13CH2)((CH2)14)P((CH2)13CH2)·BH3 (6·2BH3, 10%), in,out-H3B·P((CH2)14)3P·BH3 (in,out-2·2BH3, 4%) and the stereoisomer (in,in/out,out)-2·2BH3 (2%). Four of these structures are verified by independent syntheses. Second, 1,14-tetradecanedioic acid is converted (reduction, bromination, Arbuzov reaction, LiAlH4) to H2P((CH2)14)PH2 (10; 76% overall yield). The reaction with H3B·SMe2 gives 10·2BH3, which is treated with n-BuLi (4.4 equiv) and Br(CH2)6CH=CH2 (4.0 equiv) to afford the tetraalkenyl precursor (H2C=CH(CH2)6)2(H3B)P((CH2)14)P(BH3)((CH2)6CH=CH2)2 (11·2BH3; 18%). Alternative approaches to 11·2BH3 (e.g., via 11) were unsuccessful. An analogous metathesis/hydrogenation/chromatography sequence with 11·2BH3 (0.0010 M in CH2Cl2) gives 6·2BH3 (5%), in,out-2·2BH3 (6%), and (in,in/out,out)-2·2BH3 (7%). Despite the doubled yield of 2·2BH3, the longer synthesis of 11·2BH3 vs 1·BH3 renders the two routes a toss-up; neither compares favorably with precious metal templated syntheses.

We subsequently developed an interest in the free dibridgehead diphosphine ligands P((CH2)n)3P (n = 14, 2; 18, 3), prompted in part by the unexpected discovery of the facile demetalations shown in Scheme 1 [5,6,10,22]. Such compounds were previously known only for much smaller ring sizes (n < 4) [23]. These reactions require excesses of certain nucleophiles, and the mechanisms remain under study. The yields are quite good, but the routes are stoichiometric in precious metals. Although the metals can be recovered as species such as K2Pt(CN)4 or RhCl(PMe3)3, we have nonetheless sought to develop more economical protocols. The analogous Fe(CO)3 adducts are easily prepared [1-4], but in efforts to date it has not been possible to efficiently remove the dibridgehead diphosphine ligands from the low-cost iron fragment. Oxidations that lead to the corresponding dibridgehead diphosphine dioxides (O=)P((CH2)n)3P(=O) have exhibited promise, but purification has been problematic [24]. Indeed, phosphine oxides are everyday precursors to phosphines, so we have considered various non-metal-templated routes to 2·2(=O), 3·2(=O), and related species. However, as described in the discussion section, the yields have not been competitive [25]. Another preliminary point concerns the ability of macrocyclic dibridgehead diphosphorus compounds to exhibit in/out isomerism [26]. As shown in Scheme 1, there are three limiting configurations for 2 and 3: in,in, out,out, and in,out (identical to out,in). The first two, as well as the degenerate in,out pair, can rapidly interconvert by a process termed homeomorphic isomerization [26,27], which is akin to turning the molecules inside out. Readers are referred to earlier publications in this series for additional details [22,25,28-30]. Interconversions between the in,in/out,out and in,out/out,in manifolds require phosphorus inversion and temperatures considerably in excess of 100 °C.
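A quick way to see why there are exactly three limiting configurations (our arithmetic gloss, not the authors' wording): each bridgehead phosphorus lone pair independently points into or out of the macrobicyclic cage, giving 2 × 2 = 4 labeled states (in,in; in,out; out,in; out,out); since the two bridgeheads of P((CH2)n)3P are interchanged by the symmetry of the framework, in,out and out,in describe the same compound, leaving 4 − 1 = 3 distinct configurations.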
In this paper, we describe two non-metal-templated approaches to 2 that are based upon metatheses of phosphine boranes of alkene-containing phosphines. The first involves the monophosphorus precursor H3B·P((CH2)6CH=CH2)3 (1·BH3) [31], and the second a diphosphorus precursor in which one of the methylene chains linking the two phosphorus atoms has already been installed. The advantages and limitations of each are analyzed in detail. Some of the results (Scheme 2) have appeared in the supporting information of a preliminary communication [28], and others in a dissertation [32].

Monophosphorus precursors

As reported earlier [31], the alkene-containing phosphine P((CH2)6CH=CH2)3 (1) can be prepared in 87% yield from the reaction of PCl3 and MgBr(CH2)6CH=CH2. Following the addition of H3B·SMe2, the phosphine borane 1·BH3 can be isolated in 65-85% yields [31], as shown in Scheme 2. It is critical to avoid an excess of H3B·SMe2, as this brings the C=C units into play. In fact, when substoichiometric amounts of H3B·SMe2 are added to THF solutions of purified 1·BH3, gels immediately form.

A ca. 0.01 M CH2Cl2 solution of 1·BH3 and a ca. 0.002 M CH2Cl2 solution of Grubbs' first generation catalyst (3 mol %) were combined at 0 °C. The mixture was warmed to room temperature, and a second charge of Grubbs' catalyst added (2 mol %). The sample was refluxed, and then filtered through silica gel. The filtrate was concentrated and treated with H2 (5 bar) and Wilkinson's catalyst (55 °C). The mixture was taken to dryness and the residue tediously chromatographed on a silica gel column. Numerous fractions were collected and analyzed by TLC. The mass recovery from the column was 33% of theory (for complete metathesis). More than ten mobile products could be discerned, but only five could be isolated in pure form and ultimately identified. These are described in order of elution. Each was analyzed by NMR (1H, 31P{1H}, 13C{1H}; always CDCl3) and IR spectroscopy, mass spectrometry, and microanalysis, as summarized in the experimental section. The 13C{1H} NMR spectra proved to be most diagnostic of structure, and were analyzed in detail. The 31P{1H} NMR spectra were all very similar (broad apparent doublets due to phosphorus-boron coupling).

First, traces of a colorless oil were obtained. The 1H NMR spectrum showed a characteristic triplet at 0.83 ppm consistent with a terminal methyl group. The 13C{1H} NMR spectrum exhibited eight signals, two of which were phosphorus-coupled doublets. One of the singlets (14.0 ppm) was typical of a terminal methyl group. Based upon these data, and the integration of the 1H NMR spectrum, the oil was assigned as the hydrogenated phosphine borane H3B·P(n-C8H17)3 (4·BH3), a known compound [33]. The yield was only 1%.

Next, another colorless oil eluted. The 1H NMR and 13C{1H} NMR spectra again exhibited signals characteristic of a methyl group (0.86 ppm, t; 14.0 ppm, s). Integration of the 1H NMR spectrum established a 14:1 area ratio for the methylene (1.62-1.19 ppm) and methyl signals. The 13C{1H} NMR spectrum featured one set of seven signals and another set of eight with an intensity ratio of approximately 2:1. The less intense set resembled the signals arising from the n-octyl groups in 4·BH3. The more intense set was very similar to the signals arising from the cyclic substructures of 6·2BH3 (described below) and a phosphine borane reported earlier [34].
The mass spectrum exhibited an intense ion at m/z 340 (5+, 93%), and no ions of higher mass. Hence, the oil was assigned as the monocyclic intramolecular metathesis product (5·BH3; see Scheme 2). The yield was 8%.

The third product was also a colorless oil. The 13C{1H} NMR spectrum exhibited seven signals, three of which were phosphorus-coupled doublets (second spectrum from top, Figure 1). Analogous coupling patterns are found with the free dibridgehead diphosphines 2 and 3 in Scheme 1. No NMR signals diagnostic of methyl groups were present, and further analysis is presented along with that for an isomer below.

A white powder was obtained next. The 13C{1H} NMR spectrum exhibited fourteen signals, half of which were approximately twice as intense as the others. Two signals of each set exhibited phosphorus coupling. The overall pattern was quite similar to those shown by metal complexes with cis or trans coordinating diphosphine ligands of the formula (6) [6,7,12,13,35]. This suggested the diphosphine diborane structure 6·2BH3 (see Scheme 2), which is derived from one metathesis involving alkenyl moieties on different phosphorus atoms, and two metatheses of alkenyl moieties on identical phosphorus atoms. The yield was 10%. The structure has been confirmed by an independent synthesis (detachment of the diphosphine from a platinum complex followed by borane addition) and a crystal structure [6].

Finally, another white powder was obtained. As with the previous oil isolated above, the 13C{1H} NMR spectrum exhibited seven signals, three of which were phosphorus-coupled doublets (third spectrum from top, Figure 1). Both spectra were consistent with dibridgehead diphosphine diboranes H3B·P((CH2)14)3P·BH3 (2·2BH3) derived from threefold intermolecular metatheses of 1·BH3. Based upon independent syntheses from the dibridgehead diphosphines 2 obtained in Scheme 1 [6], they were assigned as in,out-2·2BH3 (4%) and the stereoisomer (in,in/out,out)-2·2BH3 (2%), as shown in Scheme 2. The depiction of the latter as an out,out (vs in,in) isomer in Scheme 2 is arbitrary, but represents the form found in a confirming crystal structure [6]. Parallel reactions were conducted with Grubbs' second generation catalyst and the nitro-Grela catalyst [36]. However, the combined yields of 2 diminished.

Diphosphorus precursors

Since the yields of the cage-like diphosphine diboranes 2·2BH3 in Scheme 2 were - as expected - very low, alternative strategies were considered. The poor mass balance was attributed, at least in part, to the formation of oligomeric products that were retained on the column. Improvements might be expected from precursors in which one of the methylene chains tethering the two phosphorus atoms was pre-formed. Thus, we set out to prepare a tetraalkenyl metathesis precursor as shown in Scheme 3.

Thus, despite the low yield of the final step in Scheme 3, reasonable quantities of the diphosphine diborane 11·2BH3 could be stockpiled. As shown in Scheme 5, 11·2BH3 was subjected to a metathesis/hydrogenation/column chromatography sequence similar to that for 1·BH3 in Scheme 2. However, a tenfold higher dilution was used in the metathesis step (0.0010 M as compared to 0.010 M).
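The statistical appeal of a tetraalkenyl diphosphorus precursor can be illustrated with a toy model that is ours rather than the authors': if the two ring-closing metatheses of 11·2BH3 paired its four alkenyl arms at random, two of the three possible pairings would bridge the two phosphorus atoms (giving, after hydrogenation, the cage 2·2BH3), and only one would close a ring on each phosphorus (giving 6·2BH3). The short Python sketch below (hypothetical labels P1a/P1b/P2a/P2b for the arms) simply enumerates the pairings; real selectivities additionally reflect kinetic control and conformation, as discussed below.

arms = ["P1a", "P1b", "P2a", "P2b"]  # two alkenyl arms on each phosphorus

def perfect_matchings(items):
    # yield all ways to split four items into two unordered pairs (3 in total)
    first = items[0]
    for partner in items[1:]:
        rest = [x for x in items[1:] if x != partner]
        yield [(first, partner), (rest[0], rest[1])]

cage = ringed = 0
for matching in perfect_matchings(arms):
    # a pair drawn from the same phosphorus closes a ring on that P
    # (non-productive for the cage); two cross pairs create the two
    # new P-to-P bridges of the cage
    if any(a[:2] == b[:2] for a, b in matching):
        ringed += 1  # -> 6·2BH3-type isomer (one ring on each P)
    else:
        cage += 1    # -> 2·2BH3 cage

print(cage, ringed)  # prints: 2 1, i.e., a 2:1 statistical bias toward the cage

On this naive count, the roughly 13% combined yield of 2·2BH3 versus 5% of 6·2BH3 observed in Scheme 5 is not far from the 2:1 statistical ratio, although no such argument is made in the paper itself.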
Figure 1 shows a 13C{1H} NMR spectrum of the crude product after hydrogenation, stacked above spectra of the three products that could be isolated after the rather tedious column chromatography: the dibridgehead diphosphine diborane in,out-2·2BH3, its constitutional isomer 6·2BH3, and its stereoisomer (in,in/out,out)-2·2BH3. It can be inferred from the top spectrum that the three products were the major components and moreover present in approximately equal amounts. However, the isolated yields were affected by the challenging separation. In particular, in,out-2·2BH3 and 6·2BH3 eluted very closely, rendering some mixed fractions unavoidable and lowering the amounts of pure products. Compared to the metathesis/hydrogenation sequence for 1·BH3 (Scheme 2), the yields of in,out-2·2BH3 and (in,in/out,out)-2·2BH3 (Scheme 5) are higher but still poor. Taking into account the overall yields (three steps from PCl3 and BrMg(CH2)6CH=CH2 in the first synthesis vs seven steps from 1,14-tetradecanedioic acid in the second), the latter route does not offer any advantage, even if one were to improve the conversion of 10·2BH3 to 11·2BH3.

Scheme 6: Schematic comparison of the key alkene metathesis steps in Scheme 2 and Scheme 5.

Discussion

As contrasted in Scheme 6, Scheme 2 and Scheme 5 present two conceptually related routes to the isomeric title compound 2·2BH3. In the first, two trialkenylphosphine boranes (1·BH3 = I) must undergo metathesis. The first productive step is intermolecular, giving a diphosphorus compound with a P(CH2)6CH=CH(CH2)6P tether II that is positioned for subsequent intramolecular ring-closing steps. Those involving alkenyl groups from different phosphorus atoms are productive (leading to 2·2BH3 via hydrogenation of IIIa), and those involving groups from the same phosphorus atoms are non-productive (leading to 6·2BH3 via hydrogenation of IVa). In the second, the starting material has a preformed P(CH2)14P tether (11·2BH3 = V), and the four alkenyl groups have reactivity options (→ IIIb or IVb) analogous to those of intermediate II with the P(CH2)6CH=CH(CH2)6P tether. Importantly, all of these steps are presumed to be largely under kinetic control, consistent with experience with the types of metatheses in Scheme 1 [1-13,34]. Although the second route intuitively seems more favorable, after the initial intermolecular metathesis of 1·BH3 (I), both require an equivalent series of steps to reach (after hydrogenation) 2·2BH3. One reason 1·BH3 is an inferior substrate is that following the initial generation of a P(CH2)6CH=Ru species, two P(CH2)6CH=CH2 moieties remain available for non-productive intramolecular ring-closing metathesis (giving VI). In contrast, with the analogous intermediate derived from 11·2BH3 (V), there is only one P(CH2)6CH=CH2 moiety that can give non-productive chemistry. It is also worth noting that high dilution provides less of an advantage in Scheme 2, as one wants to favor intermolecular over intramolecular metatheses in the first step. In Scheme 5, one wants to avoid intermolecular metatheses at all stages.

At present, we have no rationale for the in,out vs (in,in/out,out) isomer ratios for 2·2BH3. However, it is easy to map the sequence leading to each, as shown in Scheme 7. When there is only one tether between the two phosphorus atoms, the phosphorus-boron bonds can be arrayed in an anti fashion, as depicted in VII.
When subsequent metatheses join alkenyl groups in the syn positions on each phosphorus atom (front to front and rear to rear), (in,in/out,out)-2·2BH3 must result (as drawn in Scheme 7, the out,out isomer would be the kinetic product). When the first metathesis does not join the syn positions, as in VIII (front to rear), one phosphorus-boron bond must subsequently be rotated by 180° to create a syn orientation for the second metathesis. Of course, if the first metathesis step does not require a syn relationship (per VIII), the same possibility can be entertained for the second (see IX). This would lead to an isomeric bicyclic compound with "crossed chains". We have sought to access such species by conducting metatheses of substrates of the types in Scheme 1 that give thirty-three membered macrocycles (n = 30) [7]. However, none have so far been detected. Other types of crossed chain in/out isomer systems have in fact been realized [25,30].

Scheme 8: Another non-metal-templated approach to dibridgehead diphosphorus compounds.

However, as shown in Scheme 8, it has proved possible to synthesize the diphosphine dioxides 14, in which the two phosphorus atoms are tethered by a methylene chain, in two steps in 66-68% overall yields from diethyl phosphonate ((O=)PH(OEt)2), Grignard reagents BrMg(CH2)mCH=CH2, base (NaH), and appropriate α,ω-dibromides Br(CH2)nBr [25]. Following metathesis and hydrogenation, these afford dibridgehead diphosphine oxides 15 and 16 in 14-19% yields. This is slightly better than the combined yield of in,out- and (in,in/out,out)-2·2BH3 in Scheme 5, although the data are not strictly comparable as the ring sizes differ. It has not yet proved possible to efficiently separate the in/out isomers of 15 and 16. However, byproducts derived from metatheses of alkenyl groups on the same phosphorus atom - such as 17 (comparable to 6·2BH3) - appear to form in much smaller amounts.

To our knowledge, only one macrocyclic dibridgehead diphosphine diborane has been previously reported, (in,in/out,out)-18·2BH3 in Scheme 9 [50,51]. This features triarylphosphorus bridgeheads and p-phenylene-containing tethers that are long enough to allow rapid homeomorphic isomerization. The precursor 18·2(=O) was prepared by a threefold Williamson ether synthesis in surprisingly high yields (61% in,in/out,out and in,out combined) [50,51], likely aided by the geminal dialkyl effect associated with the quaternary centers [52].

Finally, it should be noted that a number of alkene-containing phosphine boranes have been employed in metathesis reactions [53,54]. In particular, the tetraalkenyl diphosphine diborane 19·2BH3 in Scheme 10 represents a downsized version of 11·2BH3. A species analogous to 6·2BH3, 20·2BH3, is obtained in much higher yield than any of the products in Scheme 5 [53]. Hence, selectivities can strongly depend upon the lengths of the methylene segments in the precursor.

Conclusion

In conclusion, this work constitutes a further installment in the evolution of synthetic strategies for dibridgehead diphosphorus compounds that employ alkene metathesis. The new approaches (Scheme 2; Scheme 3 and Scheme 5) lack metal templates, which differentiates them from the routes presented in Scheme 1. However, neither is competitive with Scheme 1, despite eliminating the requirement for stoichiometric amounts of precious metals.
Furthermore, preassembling a diphosphine diborane substrate per Scheme 3 and Scheme 5 is not competitive with the "shotgun" approach in Scheme 2, and both routes require comparably demanding preparative column chromatography. Hence, the most promising direction for future research would seem to be templated syntheses via non-precious metals [55]. This remains an area of ongoing investigation in our laboratory and further results will be reported in due course.
Supporting Information
Supporting Information File 1: Additional experimental data.
2018-09-12T21:25:31.873Z
2018-09-07T00:00:00.000
{ "year": 2018, "sha1": "f31d8d0bb23434d24d86d8b47e8f3a6b7f6ab9bb", "oa_license": "CCBY", "oa_url": "https://www.beilstein-journals.org/bjoc/content/pdf/1860-5397-14-211.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f31d8d0bb23434d24d86d8b47e8f3a6b7f6ab9bb", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
16866812
pes2o/s2orc
v3-fos-license
The prioritisation of paediatrics and palliative care in cancer control plans in Africa Background: Given the burden of childhood cancer and palliative care need in Africa, this paper investigated the paediatric and palliative care elements in cancer control plans. Methods: We conducted a comparative content analysis of accessible national cancer control plans in Africa, using a health systems perspective attentive to context, development, scope, and monitoring/evaluation. Burden estimates were derived from World Bank, World Health Organisation, and Worldwide Palliative Care Alliance. Results: Eighteen national plans and one Africa-wide plan (10 English, 9 French) were accessible, representing 9 low-, 4 lower-middle-, and 5 upper-middle-income settings. Ten plans discussed cancer control in the context of noncommunicable diseases. Paediatric cancer was mentioned in 7 national plans, representing 5127 children, or 13% of the estimated continental burden for children aged 0–14 years. Palliative care needs were recognised in 11 national plans, representing 157 490 children, or 24% of the estimated Africa-wide burden for children aged 0–14 years; four plans specified paediatric palliative needs. Palliative care was itemised in four budgets. Sample indicators and equity measures were identified, including those highlighting contextual needs for treatment access and completion. Conclusions: Recognising explicit strategies and funding for paediatric and palliative services may guide prioritised cancer control efforts in resource-limited settings.
Of the 163 284 children diagnosed with cancer per year in the world, an estimated 133 301 live in 'less-developed' or low- and middle-income settings (Ferlay et al, 2013). Health ministries, nongovernmental organisations, hospitals, and academic and research institutes have championed the right to quality of life for cancer patients. Avoidance of programmatic redundancy and competition for limited resources requires strategic, implementable frameworks of cancer control inclusive of patients of all ages. Cancer control plans (CCPs) bring the potential to longitudinally coalesce policy-makers, health providers, patients, and advocates into a functional health delivery system for measurable cancer outcome improvement (Farmer et al, 2010). The World Health Organisation (WHO) describes national CCPs as 'public health programmes designed to reduce cancer incidence and mortality while improving the quality of life of cancer patients, through the systematic and equitable implementation of evidence-based strategies for prevention, early detection, diagnosis, treatment, and palliation, making the best use of available resources' (WHO, 2002). The Union for International Cancer Control (UICC) describes national CCPs as foundational for comprehensive cancer control, critical to the World Cancer Declaration (UICC, 2008). The most recent GLOBOCAN data reveal that approximately 847 000 new cancer cases were diagnosed in Africa in 2012, with an associated mortality of 591 000 (Ferlay et al, 2013). Cancer registries are estimated to cover <15% of the continent (Parkin, 2006), thus the exact incidence of cancer in Africa is grossly underestimated, particularly among children (Kruger et al, 2014). Unlike many adult cancers known to be associated with environmental exposures warranting screening and behavioural interventions, most paediatric cancers are not associated with modifiable risk factors.
Paediatric cancers thus require a health system focus on early and accurate detection and treatment completion. Investment in these facets of cancer control has the potential to enhance national wellness and productivity, as the care of children represents the continent's future, with youth aged between 0 and 14 years comprising 43% of sub-Saharan Africa's population in 2013 (World Bank, 2014), an age demographic expected to grow (UNICEF, 2014). Alarmingly, cure rates for childhood cancer in some sub-Saharan African countries have been reported as <10%, and in some others, even treatment completion rates may be <10% in the setting of high mortality and treatment abandonment rates (Slone et al, 2014). However, multicentre studies in Africa have shown that childhood cancer is curable in the context of provider training, adapted treatment regimens, and aggressive supportive care. For example, the Wilms tumor treatment protocol resulted in 93% treatment completion and 46% overall survival in Malawi (Israels et al, 2012), an approach now initiated as a multisite prospective clinical trial in eight African centres. The French-African Paediatric Oncology Group (GFAOP), with personnel training and joint treatment protocols, reported a 3-year overall survival of 61% for Burkitt lymphoma protocols adapted for Africa (Harif et al, 2008). The priorities and strategies delineated in national CCPs represent the opportunity for establishing a national commitment to patient outcomes by enhancing cure through frameworks of sustained treatment approaches, for even the most vulnerable of community members (Sullivan et al, 2013). This research sought to first identify cancer control plans in Africa and to analyse the national control plan content for inclusiveness of paediatric populations. Given the current reality of late diagnoses and limited access to advanced curative therapies for many children in Africa, palliative care constitutes a critical component (Amery et al, 2009) that we sought to analyse. According to the Worldwide Palliative Care Alliance (WPCA), 49% of children in need of palliative care globally are found within the African Region (WPCA, 2014). A report commissioned by the United Nations Children Fund recently reported that the number of youth reaching palliative care services in South Africa and Zimbabwe in 2012 represented only 5% of the children with palliative care needs, while the number in Kenya was 1% (UNICEF and ICPCN, 2013), thus drawing attention to the urgent need for palliative care scale-up for children on the continent. Furthermore, recognising the variability in palliative care provision in Africa, with many countries described as having limited capacity building or isolated provision only (Knapp et al, 2011; Lynch et al, 2013; WPCA, 2014), depiction of palliative care needs in CCPs could be a critical step in promoting integrated and implemented services. Aware of the role that socioeconomic contexts such as rural vs urban contexts, private vs public access, health literacy, and poverty have in health outcomes (Sullivan, 2012), the study team was attentive to social determinants, stakeholder inclusion, and equity prioritisation, as components of cancer control contextual and developmental considerations (WHO, 2009). Finally, monitoring and evaluation components of CCPs were examined, including recognition of data registries, equity measures, and treatment completion.
This study describes the current state of cancer control models in Africa from a paediatric, palliative, and quality inclusive health systems perspective. The study team intended to provide an overview of publicly available plans as a platform to capture existing context-specific elements, to celebrate examples of inclusion, and to identify opportunities for synergy.
MATERIALS AND METHODS
Search strategy and inclusion criteria. We identified African countries reporting to WHO as having a national cancer plan (WHO, 2010). We included in our study all publicly available CCPs. We identified and sourced these CCPs through database searches including the UICC International Cancer Control Partnership (ICCP) portal (UICC, 2014), communication with Ministry of Health representatives, and through Google searches with combinations of the terms 'oncology,' 'cancer,' 'malignancy,' 'plan,' and 'strategy' with and without individual country names. The last search was completed on 12 October 2014. We analysed the contents of publicly available CCPs, but did not include plans that were either publicly unavailable or in process. We excluded from our analysis national health policies that only focussed on subcomponent analysis, such as cervical cancer screening, or evaluation-only documents.
Data extraction and analysis. Structured analysis was guided by existing frameworks for CCP analyses, including the WHO's national cancer control programme guidelines and assessment tool (WHO, 2002), the Centers for Disease Control and Prevention's Guidance for Cancer Control Planning (Given et al, 2010), the American Cancer Society's National Action Plan for Childhood Cancer (American Cancer Society, 2000), the UICC Toolkit for Cancer Control Planning (UICC, 2012), the European Cancer Control Plan Evaluation Analysis (Atun et al, 2009), and global insights from the Third International Cancer Control Congress (Harford et al, 2009), adapted to the local context. Resources providing additional insight into locally relevant extraction and interpretation priorities included the Lancet Oncology Cancer Control in Africa series (Harding et al, 2013; Harif et al, 2013; Stefan et al, 2013; Vento, 2013) and health systems assessments, such as the African Child Policy Forum's Report on Child Wellbeing and Child Friendliness Index (Bequele, 2010). Data extraction was based on elements of cancer control as outlined by WHO (2006) with emphasis on context, development, scope, and monitoring and evaluation (Figure 1). The data extraction template was piloted on two plans and revised to reach team consensus (Supplementary Appendix 1). Data extraction was performed by two independent reviewers, with content independently verified by study team members. Discrepancies were resolved by discussion and consensus. In recognition that cancer care may be provided for children within a continuum of services spanning age groups, cancer control aspects nonspecific for age were also extracted. Country income classifications were based on the World Bank income classification by gross national income per capita as of July 2013 listings (World Bank, 2014). Burden estimates for analyses were collected from current World Bank (World Bank, 2014), GLOBOCAN (Ferlay et al, 2013), and WPCA/WHO data (WPCA, 2014).
Role of the funding source. No sources of funding affected this study. The first and corresponding authors had full access to all data and final responsibility for submission.
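As a minimal sketch of the burden-coverage arithmetic reported in the Results, the Python snippet below uses the aggregate figures from this paper; the 'implied continental burden' values are simply back-calculated from those percentages and are not independent estimates.

```python
# Aggregates reported in this paper's abstract and results.
paediatric_children_represented = 5_127    # children in countries whose plans mention paediatric cancer
paediatric_share_of_burden = 0.13          # 13% of estimated continental burden (ages 0-14)

palliative_children_represented = 157_490  # children in countries whose plans recognise palliative care
palliative_share_of_burden = 0.24          # 24% of estimated Africa-wide burden (ages 0-14)

def implied_total(represented, share):
    """Back-calculate the continental burden implied by a coverage percentage."""
    return represented / share

def coverage(represented, total):
    """Share of the continental burden represented by the analysed plans."""
    return represented / total

for label, n, share in [
    ("paediatric cancer mentions", paediatric_children_represented, paediatric_share_of_burden),
    ("palliative care recognition", palliative_children_represented, palliative_share_of_burden),
]:
    total = implied_total(n, share)
    print(f"{label}: {n:,} children -> implied continental burden ~{total:,.0f}; "
          f"coverage check: {coverage(n, total):.0%}")
```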
RESULTS
Seventeen countries reported to the WHO as having a plan in 2010 (WHO, 2010); 22 plans were identified through the ICCP portal (UICC, 2014) and one identified through contact with a Health Minister, resulting in 19 non-duplicate evaluable plans (Figure 2; of the 22 plans reviewed after removal of duplicates, 9 were national plans in English, 9 were national plans in French, and 1 was a continent-wide plan in English; excluded were a palliative-care-only plan (Rwanda), a cervical-cancer-only plan (Uganda), and a partnership summary and an evaluation report (Tunisia)). The community was recognised not simply as a beneficiary but as an actor (République Togolaise Ministère de la Santé, 2012), noting that 'it takes a village' for improved cancer outcomes (African Organisation for Research and Training in Cancer (AORTIC), 2013-2017). Cancer control plan frameworks emphasised interdisciplinary team partnerships with nursing, nutrition, social work, psychology, hospital chaplaincy, and local religious groups along with core subspecialties such as surgery, radiation oncology, and infectious diseases. Policy-makers and governing officials, NGOs, civil societies, and regional and international partnerships were represented as essential collaborators for plan implementation. Media groups were recognised as essential publicity partners. Chemotherapeutics or supportive care drugs specifically for treatment of childhood cancers were noted in five plans (Table 1).
Equity measures. In addition to the holistic promotion of equity in health services, funding and implementation as outlined above, several plans outlined specific measures to promote equity in practice. The goal for trained personnel inclusive of both genders was highlighted in the Morocco plan. The Zimbabwe plan suggested research priorities to include cancer screening in marginalised female populations. The Malawi plan articulated a performance indicator of population satisfaction with health services by gender. The NCD plan for Botswana described improved health literacy as a strategic objective with an actionable plan to establish and strengthen community resource centres and health libraries through the development of age-, gender-, and culturally-sensitive health learning materials (Ministry of Health Government of Botswana, 2010). Guided by the Health Literacy/Promotion Taskforce, the Zimbabwean program included a measurable health literacy outcome: to increase the proportion of Zimbabweans who are cancer literate to 80% by 2017 (Zimbabwe Ministry of Health and Child Welfare, 2013-2017). Universal health insurance was listed as an intention in 12 plans.
DISCUSSION
This review of contemporary CCPs in Africa highlighted notable examples of paediatric cancer control and palliative care needs as well as sample commitments made on a national and continental level. Prominently, the majority of publicly available plans did not yet specify inclusion of paediatric elements (in 11 of 19 plans), nor paediatric-specific palliative care (in 15 of 19 plans), although a number exemplified recognition of the unique needs of children with cancer. While recognising distinct variability in population size, geography, language, culture, ethnic groups, sociopolitical stability, strength of economies, and health systems among a multitude of features particular to (and heterogeneous even within) each country, our analysis also allowed recognition of shared features.
Furthermore, our examples showed that commitments to paediatric cancer and palliative care are not necessarily limited by national wealth or progress to date, as exemplary commitments spanned across all income-level economies (World Bank, 2014), previously reported levels of child-friendly governance (Bequele, 2010), as well as levels of paediatric palliative care provision (Knapp et al, 2011). Review of existing cancer control plans has the potential to inform development of future iterations of CCP framing and development. Although it was beyond the scope of this project to undertake analyses of plan implementation within broader health policies and on-the-ground practices, review of scripted CCPs supports the need to further advocate for paediatric- and palliative-sensitive elements within national cancer control strategies. Specific areas for targeted intervention include mandating childhood cancer reporting and registration, accurate and early diagnosis, and coverage for childhood cancer treatment. In the current setting, including late diagnosis and scarce diagnostic and therapeutic resources, the plans revealed notable opportunities for palliative care partnerships and collaborations across diagnoses. Explicit strategies and funding for paediatric and palliative services in national cancer plans may help guide prioritised health delivery implementation. Based on review of these national cancer control plans, priority areas for policy, resources, and action include: diligent monitoring of childhood cancer cases and outcomes in the context of existing health service delivery; tangible commitment and multisector stakeholder engagement to minimise threats to outcomes such as treatment abandonment and inaccessibility of essential staff and supplies; and longitudinal evaluation of control plan interventions to optimise equity, integration, and responsiveness of health system elements. Advocacy and funding accountability remain essential to shift plans from paper to improved population outcomes. We focussed this review on paediatric populations, noting during this process that the context of adolescence remained inconsistently described. The Mauritius plan noted that adolescents with cancer were treated with adults. While several plans recognised the unique phase of adolescence, including adolescents' developmental vulnerability (Ministry of Health Government of Botswana, 2010; Malawi Ministry of Health, 2011-2016), stressors (République du Sénégal Ministère de la Santé et de la Prévention, 2009), and potential benefit from adolescent-specific and youth- and adult-partnered services (Ministry of Health Government of Botswana, 2010), needs for adolescent-specific cancer care and transition between paediatric and adult services appear to be areas meriting further focus (Institute of Medicine National Cancer Policy Forum, 2013). Including current plans, whether in an NCD or a cancer-specific plan format, enabled the study team to think outside of subspecialty silos. Plans that incorporated prompt diagnosis and treatment of infectious diseases modelled cross-over between subspecialties: Burkitt lymphoma is commonly associated with holoendemic malaria and Epstein-Barr virus, whereas Kaposi's sarcoma is associated with coinfection with HIV and high titres of human herpes virus-8 (African Organisation for Research and Training in Cancer (AORTIC), 2013-2017).
Similarly, safe blood supply is a concern across HIV and oncological teams, as depicted in Botswana's NCD plan (Ministry of Health Government of Botswana, 2010). Interventions that improve treatment adherence and treatment completion have the potential to span NCDs. Cancer treatment abandonment rates in Africa are as high as 26% in Mali (Togo et al, 2014), 54% in western Kenya (Njuguna et al, 2014), and 46% in Zambia (Slone et al, 2014); thus plans that prioritise treatment completion arguably have the greatest potential for impact on survival rates in childhood cancer. This evaluation model provides a start to assessing the progress made toward paediatric cancer and palliative care, and the future data collection and resource needs for sustainable care measures. Disseminating best practices from these written plans has the potential to support and complement work across NCDs. Long-term evaluation strategies benefit from measurable and timely objectives, both for advocacy and for accountability. Attentiveness to paediatric cancer and palliative care outcome evaluation is now essential to track progress toward achieving objectives and goals.
CONCLUSION
Commitment to cancer control must encompass the needs of children and palliative care. This analysis of paediatric- and palliative care-specific elements can inform the development of future plans within Africa and low- and middle-income countries elsewhere, particularly the next step of implementation and evaluation of impact.
2017-11-08T17:33:14.346Z
2015-06-04T00:00:00.000
{ "year": 2015, "sha1": "385269bb80e1d0bb9b4f16eaeadca385aa1cc130", "oa_license": "CCBYNCSA", "oa_url": "https://www.nature.com/articles/bjc2015158.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "385269bb80e1d0bb9b4f16eaeadca385aa1cc130", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1792200
pes2o/s2orc
v3-fos-license
Automated Nuclear Analysis of Leishmania major Telomeric Clusters Reveals Changes in Their Organization during the Parasite's Life Cycle Parasite virulence genes are usually associated with telomeres. The clustering of the telomeres, together with their particular spatial distribution in the nucleus of human parasites such as Plasmodium falciparum and Trypanosoma brucei, has been suggested to play a role in facilitating ectopic recombination and in the emergence of new antigenic variants. Leishmania parasites, as well as other trypanosomes, have unusual gene expression characteristics, such as polycistronic and constitutive transcription of protein-coding genes. Leishmania subtelomeric regions are even more unique because unlike these regions in other trypanosomes they are devoid of virulence genes. Given these peculiarities of Leishmania, we sought to investigate how telomeres are organized in the nucleus of Leishmania major parasites at both the human and insect stages of their life cycle. We developed a new automated and precise method for identifying telomere position in the three-dimensional space of the nucleus, and we found that the telomeres are organized in clusters present in similar numbers in both the human and insect stages. While the number of clusters remained the same, their distribution differed between the two stages. The telomeric clusters were found more concentrated near the center of the nucleus in the human stage than in the insect stage, suggesting reorganization during the parasite's differentiation process between the two hosts. These data provide the first 3D analysis of Leishmania telomere organization. The possible biological implications of these findings are discussed.
Introduction
The study of nuclear organization is essential to understanding the way genomes function. Spatial localization of a gene within the nucleus can modulate its expression, leading either to its activation or repression [1]. Chromosomes were first shown to be organized and later shown to occupy particular territories in the nucleus; chromosome properties such as size and gene density were found to be important in the nuclear positioning of the chromosome. In fact, a correlation between transcriptional silencing and localization to the nuclear periphery has been suggested. Gene-rich chromosomes have been observed to occupy the interior of the nucleus, while gene-poor chromosomes have been seen to localize at the nuclear periphery (for review see [2] and [3]). In yeast, the interaction between the chromosome and nuclear periphery can be mediated by telomeres [4]. Telomeres are DNA-protein complexes at the physical ends of the chromosomes that function to protect chromosomal extremities against end-to-end fusions and degradation by nucleases. They are also important for the replication of chromosomal ends. Telomeres show a conserved structure of G-rich tandemly repeated DNA sequences extending toward the chromosome extremities and ending in a 3′ overhang. The telomere repeat sequence 5′-TTAGGG-3′ is shared between phylogenetically unrelated organisms, such as vertebrates, and early diverging eukaryotes, such as the trypanosomatids [5]. Trypanosomatids are flagellated protozoa of medical importance as the causes of parasitic diseases such as leishmaniasis, Chagas' disease, and African trypanosomiasis. Of these three diseases, leishmaniasis is the most geographically widespread: it is present in over 80 countries and puts around 350 million people worldwide at risk of infection (WHO/TDR).
There are over 20 Leishmania species pathogenic to humans, and no vaccines exist against any of them. The available treatments frequently show low efficacy and considerable toxicity. Trypanosomes have peculiar biological features such as polycistronic transcription and trans-splicing. In the trans-splicing reaction, long polycistronic messages are processed by addition of a 30-40 nucleotide RNA derived from the spliced leader gene at the 5′ end of each cistron, followed by addition of a poly(A) tail at the 3′ end. Transcription is constitutive for almost all genes characterized to date, and overall transcription rates vary according to the parasite developmental stages [6]. Thus, most regulation of gene expression in trypanosomes seems to occur posttranscriptionally, either by modulation of the stability of the processed mRNAs or by translational control (reviewed in [7]). The life cycle of the Leishmania parasite comprises two stages: amastigote, the intracellular stage found in mammalian cells (human stage); and promastigote, the extracellular stage found in the insect vector (insect stage). The most studied species, Leishmania major, has a 32-megabase genome and 8200 protein-coding genes distributed on 36 chromosomes [8]. Telomeres in Leishmania are known to be heterogeneous in structure [9], and unlike what is found in other pathogenic protozoa, Leishmania major subtelomeric regions do not contain genes coding for the surface molecules frequently associated with parasite virulence [10]. Instead, L. major contains clusters of housekeeping genes extending up to 5 kb away from the telomeres [8]. The telomeric localization of virulence genes could provide increased opportunities to generate variability, as it is suspected of enhancing recombination and creating new antigenic variants in Trypanosoma brucei and Plasmodium [11-14]. In this process, the nuclear architecture may play a role in increasing the emergence of new antigenic and adhesive variants, in the same way that has been suggested for P. falciparum. The telomeres of P. falciparum lie in clusters of 4-7 chromosome ends at the nuclear periphery, and this clustering is thought to enhance recombination of subtelomeric genes like those of the var gene family [14]. Little is known about nuclear organization in Leishmania parasites. Given that these parasites are devoid of antigenic variation and their subtelomeric regions do not harbor virulence genes as seen for other protozoan parasites, we wanted to know whether Leishmania telomeres are organized in clusters. In addition, given that transcription in these parasites is polycistronic and constitutive, we wanted to know the distribution of the Leishmania telomeres in the nucleus, since in other models telomeres are often seen at the nuclear periphery associated with transcriptional silencing. To answer these questions we investigated the spatial organization of Leishmania major telomeres in the insect stage, and we extended this analysis to the intracellular human stage. The small-sized nucleus and complex telomere hybridization patterns in this organism made it impossible to study telomere dynamics using available methods. In order to have more accurate measures and obtain robust statistics on telomere localization within the nucleus, we developed a fully automated 3D image processing system to extract nuclei and detect telomeres.
In this paper we describe the telomere organization found in Leishmania parasites, we compare the organization/distribution found in nuclei in the human stage and the insect stage, and we discuss the possible implications of these findings for understanding the biology of the parasite.
Amastigote preparation (human stage)
Mouse macrophage cell line J774A.1 was maintained in RPMI 1640 with L-glutamine (300 mg/L), 25 mM HEPES (pH 7.5) (GIBCO), 100 units/ml penicillin, 100 μg/ml streptomycin, and 10% heat-inactivated fetal bovine serum at 37 °C and 5% CO₂. Macrophages were infected with exponentially growing promastigotes diluted to 2×10⁵ cells/ml and cultured 5 days prior to infection to allow them to reach stationary phase. The cells were then harvested and washed with RPMI 1640 medium before infection. In a 24-well plate, macrophages were added at 1×10⁵ cells/ml and incubated for one day prior to infection. On the day of infection, cells were washed once with medium and infected at a ratio of 1:10 (host cell:parasite) for 48 h at 37 °C and 5% CO₂. After this period, cells were washed 3 times with PBS and processed for telomere detection as follows.
Fluorescence in situ hybridization (FISH)
Leishmania major telomeres were detected by FISH using a telomere PNA probe (Telomere PNA FISH kit/FITC, DakoCytomation) according to the manufacturer's protocol except for the fixation step, which was performed with 3.7% formaldehyde (Sigma) for 15 minutes. In addition, the manufacturer's pretreatment step was omitted. To dissociate telomeric clusters, cells were treated with 0.05% proteinase K for 10 s prior to fixation [15], then submitted to FISH with the telomere PNA probe. Z-series images covering the whole nucleus were taken at distance intervals of 0.1 μm, extending beyond the DAPI signal, in a Nikon Eclipse 90i microscope using a 100×/1.4 Plan Apo VC lens and a Nikon DS-QiMc camera, or a Zeiss LSM5 Line Scanning Confocal Microscope.
Automated image analysis
A. Nuclei detection and segmentation. First, all the parasites were automatically cropped from the 3D images by isolating each nucleus-kinetoplast pair. Then, a novel 3D analysis framework based on deformable models called "active meshes" was employed [16]. For each parasite, a mesh was used to detect the boundary of the kinetoplast, while another mesh was simultaneously used to detect the boundary of the nucleus. The mesh representation allowed the measurement of distances relative to the nuclear membrane and a 3D visualization that was fast and accurate.
B. Telomere cluster localization. Telomere clusters are small compared to the resolution limit of current optical microscopes. Therefore they appear on the images as the representation of the microscope's point spread function (PSF) [17]. This phenomenon is taken into account during the detection process, which consists of two steps. First, the image voxels with high curvature values are extracted. Then, a Gaussian approximation of the PSF is fitted to each pre-localized cluster to refine its localization in real space coordinates [18]. The intensity of each cluster is given by the estimated intensity value at cluster localization. For a proper comparison of the data from populations in the human or insect stages, the measured cluster intensities of each population were standardized by subtracting their respective population mean value, and dividing by their respective population standard deviation.
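A minimal sketch of the two-step detection just described is given below (Python with NumPy/SciPy). This is an illustrative re-implementation, not the authors' code: the isotropic-in-xy Gaussian, the fixed patch size, and the use of the fitted amplitude as an intensity proxy are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian3d(coords, amp, z0, y0, x0, s_xy, s_z, bg):
    """Gaussian approximation of the microscope PSF plus constant background."""
    z, y, x = coords
    r2_xy = (x - x0) ** 2 + (y - y0) ** 2
    return amp * np.exp(-r2_xy / (2 * s_xy ** 2)
                        - (z - z0) ** 2 / (2 * s_z ** 2)) + bg

def refine_spot(image, seed, half=3):
    """Refine a pre-localized cluster position to sub-voxel accuracy.

    image: 3D array (z, y, x); seed: integer (z, y, x) voxel from the
    curvature-based pre-localization step; half: patch half-width in voxels.
    """
    zs, ys, xs = [slice(max(c - half, 0), c + half + 1) for c in seed]
    patch = image[zs, ys, xs].astype(float)
    grid = np.mgrid[zs, ys, xs]                     # absolute voxel coordinates
    coords = tuple(g.ravel() for g in grid)
    p0 = (patch.max() - patch.min(), *seed, 1.5, 2.0, patch.min())
    popt, _ = curve_fit(gaussian3d, coords, patch.ravel(), p0=p0)
    amp, z0, y0, x0, s_xy, s_z, bg = popt
    return z0, y0, x0, amp                          # amp as an intensity proxy

# Tiny synthetic check: one blurred spot near voxel (10, 12, 14).
img = np.zeros((21, 25, 29))
zz, yy, xx = np.indices(img.shape)
img += gaussian3d((zz, yy, xx), 100.0, 10.2, 12.4, 13.7, 1.4, 2.1, 5.0)
print(refine_spot(img, (10, 12, 14)))
```

The population-level standardization then reduces to the usual z-score: subtract the population mean of the fitted intensities and divide by the population standard deviation.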
To compare the relationship between intensity and cluster position, a relative intensity value was calculated for each cluster. This relative intensity was calculated by dividing the cluster intensity value by the sum of all cluster intensity values from the same nucleus, thereby converting the intensity to a fraction of the whole cell intensity. This step made it possible to compare cluster intensities between different cells. To study the spatial organization of the clusters in each population and across all nuclei, we computed a relative location of each cluster along the nuclear radius as shown in Figure 1.
Telomeres are clustered in Leishmania parasites
In FISH, hybridization of probe to target DNA is obtained through somewhat harsh fixation steps, high denaturation temperatures, and stringent washing conditions, sometimes leading to disruption of nuclear ultrastructure [19,20]. We hypothesized that by using lower hybridization temperatures, we could better conserve this ultrastructure. Compared to DNA probes, peptide nucleic acid (PNA) probes show improved hybridization characteristics: they hybridize efficiently at low ionic strength, and their hybridization is more specific and faster (30-45 min), allowing milder hybridization protocols and resulting in lower background. Furthermore, PNAs are resistant to both protease and nuclease degradation (reviewed by [21] and [22]). Therefore, we decided to take advantage of PNA probes in order to study the 3D distribution of telomeres in Leishmania parasites. We performed FISH experiments in both L. major insect and human intracellular stages using a PNA probe complementary to the telomeric repeat of Leishmania parasites. As shown in Figure 2A, L. major telomeres were found in a speckled pattern, dispersed throughout the nucleus in both human and insect stages of the parasite. Telomeres cluster with one another to form discrete foci tethered to the nuclear periphery, and this peripheral localization can modulate gene expression (reviewed by [23]). In order to verify whether L. major telomeres are organized in clusters, we performed proteinase K digestion of nuclei prior to cell fixation [15]. As expected, proteinase K treatment disrupted the overall nuclear structure. After this treatment, the number of telomere spots increased, indicating that telomeres were indeed associated in clusters, and that the clusters might have been disrupted by digestion. The localization of telomeres was then assessed by automated image analysis of 3D series of L. major nuclei from insect and human stages. Determination of nuclear volume was carried out using the active mesh framework method (as described previously). One advantage of using this framework is that the Z-series are processed as a full volume, in contrast to methods in which each Z-slice is processed independently. Another advantage of our approach is that the meshes are permanently rendered during the detection process; thus there is no difference between what is seen on the screen and the model processing the data. Interestingly, the observed average number of clusters in both stages is 16, suggesting that the number of clusters may be important for the parasite (Fig. 2B). Given that L. major has 36 chromosomes, an average of 4-5 chromosome ends are associated in each cluster. When we treated the cells with proteinase K before fixation, we obtained an average of 30 telomere spots per nucleus (Fig. 2B).
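Returning to the normalisations defined at the start of this section, both reduce to a few lines of code. The sketch below (Python; illustrative, with the centre-to-membrane distance assumed to be supplied by the mesh segmentation) computes the relative intensity and the relative radial location.

```python
import numpy as np

def relative_intensities(cluster_intensities):
    """Express each cluster's intensity as a fraction of its nucleus' total."""
    I = np.asarray(cluster_intensities, dtype=float)
    return I / I.sum()

def relative_radius(cluster_xyz, centre_xyz, boundary_distance):
    """Relative location along the nuclear radius: 0 = centre, 1 = membrane.

    boundary_distance is the centre-to-membrane distance along the ray
    through the cluster, taken here from the mesh segmentation.
    """
    d = np.linalg.norm(np.asarray(cluster_xyz) - np.asarray(centre_xyz))
    return d / boundary_distance

# Example with made-up values: one nucleus, three clusters.
print(relative_intensities([120.0, 80.0, 40.0]))         # -> [0.5, 0.333, 0.167]
print(relative_radius((2.0, 1.0, 0.5), (0, 0, 0), 3.0))  # -> ~0.76
```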
These results provide evidence that L. major telomeres are in clusters brought together through protein interactions. Moreover, since L. major has 72 chromosome ends, our findings suggest that even after physical disruption of the clusters, telomeres may remain in close association. It is important to note that Leishmania chromosomes have never been observed in condensed states and thus the PNA-FISH system cannot be tested on L. major metaphase plates. Therefore, it is not possible to know precisely the resolution limit of this technique for detecting L. major telomeres. Nevertheless, it has been shown to detect more than 90% of telomeres in metaphase plate preparations from mammals [24,25].
Differences in telomere cluster location unveiled through precise automated assignment of cluster nuclear position
After measuring the positions of hundreds of clusters in the nuclei of both L. major stages, we decided to analyze the distribution of the clusters. Nuclei from human and insect stages were identified based on DAPI signal and segmented as described in the Methods section. Each telomeric cluster was assigned a relative location along the nuclear radius. Even though telomere clusters are widespread throughout the nucleus during both stages, they concentrate in central areas more frequently than would be expected for random distribution, as shown in Figure 3. Moreover, the position of the telomere clusters differs between the two stages: clusters are more concentrated near the center of the nucleus in the human stage than in the insect stage. Another way of looking at these data is to divide the nucleus into two parts of identical volume, one being internal and the other external (Figures 4B and 4C, dotted vertical lines). This procedure reveals that ~85% of clusters in the human stage, but ~50% of clusters in the insect stage, are distributed in the internal half of the nuclear volume. Thus, transition to the human stage reduces the fraction of clusters in the external half of the nuclear volume to ~15%, which means that the telomeres are repositioned to the center of the nucleus. This suggests a spatial reorganization of L. major telomere clusters upon transition between stages of the parasite life cycle.
Analysis of telomeric cluster intensity suggests reorganization between human and insect stages
The observed spatial reorganization of telomeric clusters between insect and human stages prompted us to investigate whether the composition of clusters also changed between the two stages. The intensity of a telomeric cluster depends on the number of telomeric repeats present on each cluster. The number of repeats, however, can be attributed to the number of repeats within a single chromosome extremity as well as to the number of extremities present in each cluster. To our knowledge, there is no evidence so far that the repeat number within the chromosomes changes between insect and human stages. We therefore assume that the intensity of telomeric clusters depends solely on the number of chromosomes associated in each telomeric cluster. For proper comparison of the data from human and insect stages, the measured cluster intensities of each population were standardized by subtracting their respective population mean value, then dividing by their respective population standard deviation. Figure 4A shows a comparison of the standardized cluster intensity distribution between human and insect stages.
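The "two halves of identical volume" comparison used above has a simple geometric core: for an approximately spherical nucleus of radius R, the internal half of the volume lies within r = (1/2)^(1/3)·R, or about 0.79 R. A short sketch (Python; the spherical approximation is an illustrative simplification of the mesh-based segmentation):

```python
import numpy as np

# Radius (as a fraction of R) that encloses half the volume of a sphere:
# (4/3)*pi*r**3 = 0.5 * (4/3)*pi*R**3  =>  r/R = (1/2)**(1/3) ~ 0.794
r_half = 0.5 ** (1.0 / 3.0)

def internal_fraction(relative_radii):
    """Fraction of clusters in the internal half of the nuclear volume."""
    return float(np.mean(np.asarray(relative_radii) <= r_half))

# Sanity check: points spread uniformly over the nuclear volume put ~50%
# of clusters in each half -- the random benchmark against which the
# observed ~85% (human stage) and ~50% (insect stage) are read.
rng = np.random.default_rng(0)
volume_uniform_radii = rng.random(100_000) ** (1.0 / 3.0)
print(r_half, internal_fraction(volume_uniform_radii))  # ~0.794, ~0.50
```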
Surprisingly, the overall pattern of cluster intensity changes from one stage to the other, suggesting that chromosomal distribution in telomeric clusters changes upon L. major cellular differentiation. To further gain insights into telomere distribution among the clusters, we correlated the intensity of clusters to their nuclear position. In both insect and human stages, the most intense clusters tend to be centrally located (Figures 4B and 4C). At present it is not known whether this differential distribution is biologically relevant, and whether it is due to differences in the distribution of chromosomes according to telomere size, or to a different number of chromosomes per telomeric cluster. In order to facilitate the visualization of the spatial distribution of telomeres in Leishmania parasites, 3D models were produced for both the human and insect stages (Figures 5A and 5B, respectively, and supplemental movies S1 and S2).
Discussion
Here we have shown that Leishmania major telomeres are organized in clusters in both stages of the parasite life cycle. We have also observed that the number of clusters per cell does not change between the different life stages, suggesting that it may be important for parasite nuclear biology. Interestingly, the positioning of these clusters in the nucleus changes from one stage of the life cycle to the other. In the human stage, the clusters are more concentrated in the nucleus center, although in both stages clusters are found throughout the whole nuclear space. A comparison of cluster intensities shows that there is a reorganization of the nucleus when the parasites differentiate from one stage to the other, with cluster intensities being more homogeneous in the human stage. In both stages, however, we show that the clusters with the highest intensities are kept at more internal positions in the nucleus, while clusters of lower intensities localize towards the periphery. The organization of Leishmania telomeres in clusters is comparable to the situation observed in other eukaryotic cells. In yeast, telomeres are clustered and tethered to the nuclear periphery. Association with the nuclear periphery correlates with gene silencing at the telomeres, where genes are closer to the pools of silencing proteins such as the Sir proteins. This transcriptional inhibition due to the telomeric localization of a gene is called the telomere position effect (TPE), and association of the telomeres with the periphery is thought to be necessary for TPE to occur (reviewed in [26]). For example, telomeric repression in trypanosomes has been demonstrated for genes encoding variant surface glycoproteins (VSG) in T. brucei [27]. However, the importance of the nuclear localization in this process remains unclear. It has been suggested that perinuclear localization facilitates transcriptional repression in the stage of the parasite that does not express the VSG genes and that in order to be expressed, the VSG gene moves away from the periphery towards the center of the nucleus ([28]; for review see [29]). We have observed that Leishmania major telomere clusters are not concentrated at the nuclear periphery but instead are distributed throughout the nucleus. Unlike what is seen for Trypanosoma brucei, L. major subtelomeric regions do not contain genes coding for surface molecules [8,10]. The presence of housekeeping genes at the subtelomeric regions of Leishmania may explain the distribution of the telomeres throughout the nucleus.
Besides the lack of perinuclear localization of the telomeres, we have also observed that Leishmania major telomeres are reorganized in the nucleus during the life cycle. In T. cruzi, an extensive redistribution of the heterochromatic regions occurs during the life cycle and is associated with changes in the transcriptional status of the cell [6]. We have shown that in both stages of the parasite life cycle the more intense clusters are found in central positions and clusters of lower intensity localize towards the periphery. The role of chromatin in this organization was not examined and therefore cannot be ruled out. It is possible that the decrease in cluster intensity reflects a difference in probe accessibility due to more compact heterochromatin in the periphery compared to more relaxed and accessible euchromatin in central regions. Whether the reorganization of Leishmania telomeres reflects a more extensive and general reorganization of the chromatin remains to be elucidated, as does the functional importance of the telomere reorganization itself.
Supporting Information
Movie S1. Three-dimensional reconstruction of an L. major nucleus showing the spatial distribution of telomeric clusters in the L. major human stage. Nucleus and kinetoplast are shown in blue and telomeric clusters in red. Differences in cluster intensity reflect differences in cluster size. This reconstruction was based on 3D images from a single parasite. Found at: doi:10.1371/journal.pone.0002313.s001 (2.15 MB MOV)
Movie S2. Three-dimensional reconstruction of an L. major nucleus showing the spatial distribution of telomeric clusters in the L. major insect stage. Nucleus and kinetoplast are shown in blue and telomeric clusters in red. Differences in cluster intensity reflect differences in cluster size. This reconstruction was based on 3D images from a single parasite. Found at: doi:10.1371/journal.pone.0002313.s002 (2.72 MB MOV)
2014-10-01T00:00:00.000Z
2008-06-11T00:00:00.000
{ "year": 2008, "sha1": "10995ff7f451405d78e393279bad3f4b2cb4d775", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0002313&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "10995ff7f451405d78e393279bad3f4b2cb4d775", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
265609670
pes2o/s2orc
v3-fos-license
SAINT (Small Aperture Imaging Network Telescope) -- a wide-field telescope complex for detecting and studying optical transients at times from milliseconds to years (Abridged) In this paper, we present a project of a multi-channel wide-field optical sky monitoring system with high temporal resolution -- Small Aperture Imaging Network Telescope (SAINT) -- mostly built from off-the-shelf components and aimed towards searching and studying optical transient phenomena on the shortest time scales. The instrument consists of 12 channels, each containing a 30 cm (F/1.5) objective mounted on a separate mount with pointing speeds up to 50 deg/s. Each channel is equipped with a 4128x4104-pixel detector and a set of photometric $griz$ filters and linear polarizers. At the heart of every channel is a custom-built reducer-collimator module allowing rapid switching of the effective focal length of the telescope -- thanks to it, the system is capable of operating in either wide-field survey or narrow-field follow-up modes. In the first case, the field of view of the instrument is 470 square degrees and the detection limits (5$\sigma$ level at 5500$\AA$) are 12.5-21 mag for exposure times of 20 ms - 20 min, correspondingly. In the second, follow-up regime, all telescopes are oriented towards a single target, and SAINT becomes equivalent to a 1 m telescope, with the field of view reduced to 11$'$ x 11$'$, and the exposure times decreased down to 0.6 ms. Different channels may then have different filters installed, thus allowing a detailed study -- acquiring both color and polarization information -- of a target object with the highest possible temporal resolution. The operation of SAINT will allow acquiring an unprecedented amount of data on various classes of astrophysical phenomena, from near-Earth to extragalactic ones, while its multi-channel design and the use of commercially available components allow easy expansion of its scale, and thus of its performance and detection capabilities.
Introduction
A special place among the studied astronomical phenomena belongs to non-stationary ones. In a sense, all astronomy is the science of change (evolution) of the Universe with time, both as a whole and of its parts of various scales, from meteors to clusters of galaxies. It is no coincidence that in recent years a new direction has been formed and is rapidly developing -- astronomy in the time domain ("Time domain astronomy"). It combines methods, tools and ideas focused on the study of non-stationary phenomena in the Universe at different time and space scales. The website of the IAU working group provides information on 62 instruments that investigate (or had investigated) variable objects. The range of specific tasks for which these instruments are intended is as long as the list of telescopes itself. This includes observations of meteors, comets and asteroids, artificial satellites of the Earth, space debris, searches for the effects of microlensing and transits of exoplanets, the study of variable stars, studies of optical afterglows of gamma-ray bursts and searches for optical flares synchronous with these bursts, searches for supernovae, the study of brightness variations of galactic nuclei, and the search for and characterisation of optical counterparts of fast radio bursts and gravitational wave signals.
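The headline numbers of the two SAINT regimes quoted in the abstract follow from simple aperture-and-field arithmetic. The sketch below (Python) reproduces them; the square-root-of-N equivalent-aperture scaling is the standard collecting-area argument for co-pointed, incoherently combined telescopes.

```python
import math

n_channels = 12
d_single = 0.30      # m, aperture of one channel
fov_total = 470.0    # sq. deg, survey mode (channels tiling separate fields)

# Survey mode: each channel covers its own patch of sky.
fov_per_channel = fov_total / n_channels
print(f"per-channel field: {fov_per_channel:.1f} sq. deg "
      f"(~{math.sqrt(fov_per_channel):.1f} deg on a side)")

# Follow-up mode: all channels co-pointed; collecting areas add, so the
# equivalent single aperture scales as sqrt(N) * D.
d_equiv = math.sqrt(n_channels) * d_single
print(f"equivalent aperture: {d_equiv:.2f} m")   # ~1.04 m, i.e. '1 m class'
```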
The latter direction is all the more important and exceptionally interesting in connection with the discovery of the gravitational wave event GW170817 [1], caused by the merger of two neutron stars, and the associated short gamma-ray burst GRB 170817A [2,3,4], the kilonova emission [5,6] and subsequently a long multi-wavelength afterglow [7,8,9,10,11,12]. At the same time, there are considerations that precursors of short gamma-ray bursts can precede gravitational-wave pulses [13], thereby marking the region of their localization, which gives hope for searching for electromagnetic radiation synchronously with the pulse itself. When searching for variability, as a rule, two modes of observation are implemented (we note a certain conventionality of such a division): survey and monitoring. In the first case, the radiation of relatively large areas of the sky is periodically recorded, while the ratio between the time of their exposure (determining the depth of the survey) and the size of the fraction of the celestial sphere observed during the night is determined by the nature of the required variability. Some of these projects and tools are SDSS [14]; SkyMapper [15]; the QUEST-La Silla survey [16]; ASAS-SN [17]; the Catalina Surveys [18]; ZTF [19]; and the LSST, under construction [20]. In the second observation mode, we are talking about the longest possible registration of the radiation of a certain object in anticipation of changes in its characteristics (intensity, spectrum, polarization, etc.). An outstanding example of this kind is the TAOS II program: the search for trans-Neptunian objects via their occultations of 10,000 stars, which are simultaneously monitored long-term at high temporal resolution (50 ms exposures) in individual subapertures using three telescopes with a diameter of 1.3 m [21]. Another classic example is the monitoring of UV Cet-type stars to study their flare activity using the largest telescopes. In particular, subsecond polarized spikes of synchrotron origin were first detected during a powerful UV Cet flare with the 6-meter telescope of SAO RAS [22]. In essence, both modes consist of monitoring, spatio-temporal in the first case, and temporal in the second.
A special place among the non-stationary phenomena studied within the framework of Time Domain Astronomy is occupied by unexpectedly appearing (and also suddenly disappearing) objects, whose localization in space and/or time is not known in advance, and whose duration is rather short: the so-called transients (transient sources). They manifest themselves as stochastic fluctuations of electromagnetic radiation of different frequencies (from radio to gamma-rays), low (tens of MeV) and high (TeV-PeV) energy neutrino events, cosmic rays, and gravitational waves. It should be emphasized that despite the variety of forms of energy release in these events, the possibilities of understanding their nature and constructing their models are determined to a certain extent by the detection and study of their manifestations in the optical range -- optical transients. These include non-stationary phenomena in near-Earth space and the Earth's (exoplanet's?) atmosphere, non-moving phenomena such as auroras and transient luminous events (TLE) -- elves, sprites, jets [23] -- and moving phenomena: comets, asteroids, meteors, satellites, space debris. Finally, undoubtedly, such transients can be optical flashes (including periodic ones) of artificial origin: signals (transmissions) of extraterrestrial civilizations [24,25]. The characteristic duration of these phenomena ranges from minutes-hours to weeks-months (transits of exoplanets, outbursts of novae, supernovae, variable stars, microlensing effects) and from milliseconds to tens of seconds (gamma and radio bursts, gravitational waves, TLE, flybys of meteors and satellites). In essence, we are talking about two different types of transients, long and short, the detection and study of which require the use of different methods and tools. At the same time, within the framework of the prevailing ideas and conceptual apparatus, events of the first type are often referred to as fast transients. For example, one of the most extensive modern programs, "Deeper Wider Faster" [26], which unites about 40 ground and space instruments and is dedicated to the study of the fastest transients, uses the follow-up mode in the optical range (repointing of the instruments to the area of the already detected fast transients). With the use of the 4-meter Blanco telescope and DECam, at an exposure of 20 seconds, information on subsecond fast transients is completely lost, but their environment and slow transients are successfully studied: gamma-ray burst afterglows, localization regions of fast radio bursts and gravitational wave events, bursts of novae, supernovae, and red dwarfs [27]. There is a real contradiction between the need to use instruments with an extremely high temporal resolution in the search and study of fast optical transients and the real characteristics of telescopes, both existing and under construction. Figure 1 and Table 7 from the Appendix demonstrate this clearly. We present the characteristics of 30 survey instruments focused on the search and study of nonstationary objects of various types, whose fields of view exceed 4 square degrees.
The minimum exposure duration exceeds 10 seconds for the vast majority of these telescopes, which, given the possibility of summing up a sequence of individual frames, fully provides a depth sufficient for detecting and studying variability up to magnitudes 19-25. At the same time, these limits for systems with close fields of view may differ significantly due to the difference in the diameters of the mirrors. Note that most of the telescopes in our list are multi-element systems. Indeed, to implement wide-angle spatial-temporal monitoring with a single aperture, a combination of initially mutually contradictory conditions is necessary: a sufficiently high detection limit (a large aperture) and a wide field of view (a short focal length). This circumstance significantly limits the set of possible optical schemes of instruments that determine the optimum of such a combination. So, with an aperture diameter of D > 1-2 meters, the size of the field of view is 2-3 degrees, and with D < 1 meter it can reach 5-10 degrees [28,29]. This implies the need to move to multi-telescope systems with fields of view of hundreds and thousands of square degrees, which also have a number of other advantages: relative cheapness, the ability to change the configuration, and (most importantly), since the dimensions of the detectors can be quite small, the use of high-temporal-resolution detectors [29,30]. However, as can be seen from Figure 1 and Table 7 of the Appendix, only one multi-aperture system is currently being developed, Argus [31], which has a time resolution of 0.05 seconds. It consists of 38 (as a starting point, with at least 900 planned in the future) 20 cm telescopes with a total field of view of 344 (8000 in the final version) sq. degrees. Unfortunately, Argus is not able to reconfigure its field of view by orienting all telescopes towards a single region of the sky, and so cannot be used as a single telescope with an effective diameter of about 5 meters. The remaining three instruments have the limiting dimensions of the fields of view (3-5 degrees) for a monolithic configuration in the Schmidt scheme and are equipped with sufficiently fast receivers. Nevertheless, it is clear that due to the small size of the fields of view when searching for fast transients, they can function only in the alert (follow-up) mode, like many large telescopes with standard fields of view of < 1 degree (see, for example, [32] and references therein). At the same time, this mode does not allow one to detect optical radiation synchronously with the fastest transients -- gamma-ray bursts, fast radio bursts, and gravitational waves -- for this, it is necessary to continuously monitor the celestial sphere using instruments that have the widest possible fields of view of hundreds and thousands of square degrees. In other words, it is necessary to independently detect and study fast optical transients and only afterwards look for their connection with events in other spectral ranges and of a different nature, comparing the time of occurrence and the position of both. We emphasize that the understanding of their origin, the choice of models that describe them, will largely be determined by the solution of this most complex problem of modern practical astrophysics.
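The trade-off described here can be compared through the etendue (grasp), i.e., collecting area times solid angle. The snippet below (Python) evaluates this proxy for a representative monolithic design and for the Argus-type array using the figures quoted above; the monolithic example values are rough assumptions, not the parameters of a specific instrument.

```python
import math

def grasp(d_m, fov_per_unit_sqdeg, n=1):
    """Etendue proxy: total collecting area (m^2) times per-unit field (sq. deg)."""
    area_single = math.pi * (d_m / 2.0) ** 2
    return n * area_single * fov_per_unit_sqdeg

# Assumed monolithic survey telescope: D > 1-2 m reaches only a few degrees,
# taken here as a 3 x 3 degree field.
print(f"monolithic 1.5 m, 9 sq. deg:   {grasp(1.5, 9.0):8.1f} m^2 sq.deg")

# Argus-type array: 38 x 20 cm with a 344 sq. deg total field (quoted above).
print(f"Argus 38 x 0.2 m, 344 sq. deg: {grasp(0.2, 344.0 / 38, n=38):8.1f} m^2 sq.deg")
```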
The sky areas monitored by gravitational-wave, gamma-ray, and radio telescopes when searching for transients cover almost half of the celestial sphere (the exception is the BAT detector of the Swift instrument, with a field of view of about 5000 square degrees). This means that even with a relatively high accuracy (arc minutes to degrees) of determining the coordinates of an event for gamma detectors [32], positioning optical telescopes according to an alert will take tens of seconds to minutes. At the same time, the duration of 90% of gamma-ray bursts lies in the 20 ms - 100 s interval, and about 30% of the events last less than 2 seconds [33]. The latter are the result of the merger of two neutron stars or a black hole and a neutron star and form a class of short gamma-ray bursts (SGRB), in contrast to long-duration GRBs (LGRB), caused by the collapse of some massive stars to form black holes [34,35]. Thus, in the alert mode, it is impossible to detect the optical companion of the short gamma-ray burst itself: the gamma radiation is no longer detected by the time the telescope is pointed at the event localization region. When a long burst is detected, it is sometimes possible to begin optical observations of its location zone before the gamma-ray emission disappears. As a rule, in both cases, in the alert mode, it is possible to detect and study only the so-called afterglow, optical radiation resulting from the interaction of the plasma ejected during the generation of a gamma-ray burst with interstellar gas [36]. It should be emphasized that so far, despite numerous works on observations and modeling of the phenomenon of gamma-ray bursts, no self-consistent theory is yet available. And it is generally accepted that the study of optical radiation joint with the gamma emission (prompt emission), and comparison of their characteristics and temporal structure, can provide the key to solving this problem [34,37,38,39,40]. At the same time, out of 900 optically identified gamma-ray bursts, only 21 have scant optical information (only 1-3 brightness measurements) simultaneous with the gamma-ray emission, and only six events have truly informative optical light curves obtained during the same period, usually in its final phase [38]. Finally, for the single GRB 080319B (Naked Eye Burst) event, we detected a joint optical burst with a temporal resolution of 0.13 seconds during the entire GRB by wide-field monitoring independent of gamma-ray telescope data [41,42]. These unique observations with the wide-angle (600 sq. degrees) high-temporal-resolution camera TORTORA [43] made it possible to establish the essential features of the mechanisms of gamma-ray burst generation [44]. The prototype of this camera is our similar instrument FAVOR (FAst Variability Optical Registrator), which was used for observations in 2003-2009 [45,46].
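The alert-mode argument can be made quantitative with a back-of-the-envelope timing model (Python; the latency, slew distance, and settling values are illustrative placeholders rather than measured figures):

```python
def response_time(alert_latency_s, slew_deg, slew_rate_deg_s, settle_s=1.0):
    """Seconds from burst onset until the telescope is on target."""
    return alert_latency_s + slew_deg / slew_rate_deg_s + settle_s

# Illustrative numbers: ~10 s alert distribution, a 60 deg repointing on a
# fast 50 deg/s mount, 1 s settling.
t = response_time(alert_latency_s=10.0, slew_deg=60.0, slew_rate_deg_s=50.0)
print(f"on target after ~{t:.1f} s")

# ~30% of GRBs last under 2 s and 90% fall within 20 ms - 100 s, so even a
# fast follow-up of this kind arrives after every short burst has ended.
for duration in (2.0, 20.0, 100.0):
    print(f"burst of {duration:5.1f} s: "
          f"{'missed' if t >= duration else 'partially caught'}")
```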
Further, in the course of developing the program for wide-angle high temporal resolution monitoring, SAO RAS (together with KFU and OOO Parallax) created a 9-channel system with a field of view of 900 square degrees and a temporal resolution of 0.1 seconds, Mini-MegaTORTORA (MMT) [47]. It was used to detect optical flares synchronous with the bursts GRB 160625B [48] (however, with a time resolution of 30 seconds) and GRB 210619B. In the latter case, radiation was recorded in 5 channels simultaneously with exposures of 1, 5, 10, and 30 seconds in white light (3 channels) and in the B and V filters. The implementation of such a research mode is a fundamental feature of our approach; it allowed us to compare the structures of the light curves in the gamma and optical ranges, build spectra in these ranges, and finally show that in this case the burst emission is due to a backward shock wave propagating in a relativistic jet [49]. Note that the MMT system is also equipped with polaroids of different orientations for measuring the linear polarization of optical flares, which should provide unique information about the physical properties of the bursts, namely, the structure of the emitting regions, the characteristics of the magnetic field, and the details of the mechanisms for generating radiation of different energies [34,50,51]. Space-telescope measurements of the polarization of hard burst radiation have low accuracy (30-50%) and temporal resolution [52], which does not allow one to choose between models, providing information only about the most general features of the phenomenon [37]. Because of this circumstance, polarization studies of optical flashes accompanying gamma-ray emission are of particular importance. Such observations are practically absent - the small number of works on this topic contain the results of measurements of the polarization of optical afterglows, at the level of 10-20%, but minutes to hours after the burst itself ([53] and references therein). In [54] an upper limit for linear polarization at a level of 12% has been obtained for the second episode of the GRB 140439A burst 2.5 minutes after the alert. Thus, the polarization of optical radiation simultaneous with the bursts, as well as its spectral characteristics, has yet to be investigated in high-resolution wide-angle surveys.
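For reference, with three polaroids whose transmission axes are rotated by 120° with respect to each other (equivalent to position angles of 0°, 60° and 120° modulo 180°), the linear Stokes parameters follow from three simultaneous flux measurements. The sketch below assumes ideal polaroids and identical channel sensitivities; real measurements would of course require cross-calibration of the channels.

```python
import math

def stokes_from_three(i0, i60, i120):
    """Recover I, Q, U from fluxes behind ideal linear polaroids at
    effective position angles 0, 60 and 120 deg; each measured flux
    is I_k = (I + Q*cos(2*theta_k) + U*sin(2*theta_k)) / 2."""
    I = 2.0 * (i0 + i60 + i120) / 3.0
    Q = 2.0 * i0 - I
    U = 2.0 * (i60 - i120) / math.sqrt(3.0)
    p = math.hypot(Q, U) / I                    # polarization degree
    chi = 0.5 * math.degrees(math.atan2(U, Q))  # position angle, deg
    return p, chi

# Sanity check: a 10% polarized source at position angle 30 deg
I0, p0, chi0 = 1.0, 0.10, math.radians(30.0)
Q0, U0 = p0 * math.cos(2 * chi0), p0 * math.sin(2 * chi0)
fluxes = [0.5 * (I0 + Q0 * math.cos(2 * t) + U0 * math.sin(2 * t))
          for t in (0.0, math.radians(60.0), math.radians(120.0))]
print(stokes_from_three(*fluxes))   # -> (0.10, 30.0)
```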
At present, one of the unsolved (and most important) problems of observational astrophysics is the detection of optical companions of fast radio bursts - the choice of adequate models of these phenomena from the large number under consideration requires the study of their possible optical counterparts, which, unlike those of gamma-ray bursts, are so far the subject of only theoretical constructions. Some of the several dozen models predict the generation of sufficiently bright optical flares [55,56,57]. On the other hand, the frequency of occurrence of fast radio bursts over the whole sky can reach several thousand per day [58], and, consequently, the optical flares accompanying them can be quite frequent. At the same time, as far as we know, unlike radio observations, optical surveys to search for these events with a high (1 ms) time resolution are not being carried out at the moment. As a rule, studies are focused on searching for stationary optical hosts of the bursts [59], or on searching for millisecond optical flashes in repeating bursts. About 20 such objects, which erupted about 200 times, were found among the almost 800 bursts proper. Their coordinates were determined, which made it possible to observe the regions of localization of these sources using large telescopes (see, for example, [60] and [61]), in which optical flares were not detected - the same result was obtained by us with a 6-meter telescope [62]. Gravitational-wave events can also be accompanied by optical flares lasting from a few milliseconds to a few seconds, especially when they come from merging neutron stars, like the recently discovered event GW170817 [63,64]. At the same time, since the localization accuracy of gravitational waves is hundreds of square degrees, such flares can only be detected by monitoring the celestial sphere independently of the data of gravitational detectors, using instruments with comparable fields of view and millisecond time resolution. Moreover, in the constantly updated catalog of fast radio burst models [65], probable deep connections between them, gravitational waves, and gamma-ray bursts are clearly visible, based on the universal cause of these phenomena - the interaction and evolution of relativistic objects. It is all the more important to look for signatures of similar phenomena in the optical range using wide-angle monitoring of high temporal resolution. Based on the above considerations, and on the experience of developing and using the FAVOR, TORTORA, and Mini-MegaTORTORA instruments, we proposed the concept of a multi-telescope complex with high temporal resolution for wide-angle monitoring of the celestial sphere - SAINT (Small Aperture Imaging Network Telescope) [66,67,62,68] - using telescopes with a diameter of 30-50 centimeters, with a limiting magnitude at the level of single-component instruments (see Fig. 1 and Table 7), capable of detecting and investigating non-stationary objects on time scales from milliseconds to months and years. This paper is devoted to the development of this project using modern devices, the choice of the design of the complex, its components and mode of operation, and the assessment of its parameters.
Small Aperture Imaging Network Telescope (SAINT) project

The performed analysis demonstrates the obvious need to create a multi-channel optical monitoring system that would be capable of conducting complex searches and studies of non-stationary objects and phenomena within the framework of universal instrumental and methodological approaches and would not have the disadvantages of the existing systems. The features of this kind of instrument should be as follows:

• a large field of view of several hundred square degrees, which requires a multi-channel design;
• high temporal resolution in the range of tenths to hundredths of a second;
• a detection limit of 18-20 mag at time scales of 20-30 seconds, i.e., at the level of the limits of existing survey telescopes (Fig. 1);
• a combination of monitoring (wide-field) and follow-up (narrow-field) modes with a rapid change between them;
• the possibility of obtaining maximum information (spectral, polarimetric, photometric) about non-stationary objects in the follow-up mode;
• processing of the accumulated information in real time to detect, characterize and classify transient phenomena, and to make decisions on the transition to the follow-up mode;
• preservation of all raw and reduced data, as well as maintenance of the databases obtained as a result of its a posteriori analysis;
• complete robotization of the operation of the complex using information from external sources (meteo station, network, other instruments) and a system to control its condition.

As a practical implementation of these principles, we propose the SAINT (Small Aperture Imaging Network Telescope) project - a 12-telescope complex with high temporal resolution, built mostly from off-the-shelf commercially available components, that may be implemented at a cost of about $2M (plus about $500K for labor). Its overall parameters are listed in Table 1, a schematic view of a single channel is shown in Figure 2, and details on the individual components and operation modes are given below.

Telescope

The basic component of the complex is GENON Max - a catadioptric two-mirror Schenker-Terebizh telescope with five corrective lenses [29], with a diameter of 30 cm (F/1.5) and a field of view of about 40 square degrees (Table 2). In its Cassegrain focus a multimode photopolarimeter of a rather complex design is installed, capable of operating both in wide-field monitoring and in the study of a single object (Section 2.4).

Mount

The telescopes are mounted on ASA DDM100 mounts based on direct-drive technology (see Table 3). This design has a number of important advantages listed below, which are especially significant for the proposed complex.

• High acceleration and, as a result, a high pointing speed of up to 50°/s, which allows changing the field of view of the complex and the observation mode in a few seconds.
• Due to the absence of drive belts and gears, the rotation of the axes is very uniform and silent, which ensures pointing accuracy and the absence of vibrations, and thus high stability of the positions of sources in the focal plane of the telescopes.
• Due to the absence of gearboxes, mechanical backlash and hysteresis are eliminated. For the same reason, the moving parts wear out little, which prolongs the stability of the mount's characteristics.
• All movable structural elements are equipped with 28-bit encoders and feedback systems, which ensures the accuracy of the rotation angles and of their determination down to 0.004″.
• The stable functioning of all elements of the mount and of its control is ensured, along with the feedback systems, by a high-speed controller linked to a computer.

Detector

Every channel of SAINT is equipped with a photopolarimeter (Section 2.4), which is the primary instrument of the SAINT complex. The detector used in it is the Andor Balor 17F-12 sCMOS camera, whose characteristics are listed in Table 4. This detector currently has the best combination of a large format of 16.9 Mpix (70 mm diagonal), high temporal resolution when reading a full frame (about 20 ms), and a fairly low readout noise (2.9 e−), which is unattainable in CCD matrices even with several data-transmission channels [69]. The sufficiently large pixel size (12 µm) allows using the maximum temporal resolution in observations even with relatively low image quality (> 1″), with the light nevertheless concentrated in one pixel, which keeps the readout-noise contribution to a minimum. At the same time, a high degree of signal-response linearity (> 99.7%) is combined with a large electron well depth (80,000 e−) and a special way of amplifying and digitizing the signal, which ensures its continuous recording over the maximum dynamic range. These features make it possible to detect sources with intensities from the readout-noise level to the saturation limit without distortion in a single camera frame, which is fundamental in wide-field observations. Moreover, reducing the read-out area (region of interest, ROI) (Table 5) allows improving the temporal resolution down to 2.5 or 0.6 ms, which may be used in the regime of targeted observations of individual objects.

The microlens array on top of the sensor in the Balor camera allows lossless operation for relative apertures of up to F/0.3 at cone angles up to 110°, which makes it possible to avoid light losses even at the outer margins of the telescope field of view at an aperture ratio of F/1.5. An important point is that the camera allows timestamping and synchronizing the acquired frames using the signal from a GPS receiver, which is crucial for combining data from different channels pointed towards a single target. On the other hand, in contrast to traditional CCD sensors, the question of the long-term spatio-temporal stability of individual pixels of CMOS detectors poses a potentially significant problem for the co-addition of images from such detectors, as it introduces additional per-pixel noise that cannot be mitigated by classical calibration methods (bias and dark frame subtraction, non-linearity correction and flat fielding). The limits imposed by these low-frequency effects on frame co-addition are still poorly studied, and have to be thoroughly investigated prior to deciding on the exact strategy of frame co-addition in the operation of the complex.

Photopolarimeter and different modes of operation

The primary instrument of every SAINT channel, and the only custom-built part of it, is a photopolarimeter which provides observations in both monitoring and research modes. It is attached to the Cassegrain focus of the telescope and consists of two parts - an optical-mechanical unit and a camera unit; its general view with installation on the telescope is shown in Figure 2. The camera unit houses the detector, which is mounted on sliding rails that allow it to move along the optical axis, thereby focusing images on the sensor. The hoses of the water-cooling system of the camera, not shown in the figure, are inserted into the rear end of the casing of the unit.
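Several of the numbers above can be cross-checked directly from the stated hardware parameters; the short sketch below reproduces the encoder step, the pixel scale of the Balor detector on the 30 cm F/1.5 optics, and the single-channel and total fields of view (all inputs as quoted in the text).

```python
import math

# Angular size of one step of a 28-bit encoder
step = 360.0 * 3600.0 / 2 ** 28
print(f"encoder step : {step:.4f} arcsec")        # ~0.0048 arcsec

# Pixel scale: 12 um pixels on a 30 cm F/1.5 telescope (f = 450 mm)
pixel_mm, focal_mm = 12e-3, 0.30 * 1000 * 1.5
scale = 206265.0 * pixel_mm / focal_mm            # arcsec per pixel
print(f"pixel scale  : {scale:.2f} arcsec/pix")   # ~5.5"/pix

# Full-frame field of view of one channel (4128 x 4104 pixels)
fx, fy = 4128 * scale / 3600.0, 4104 * scale / 3600.0
print(f"field of view: {fx:.1f} x {fy:.1f} deg")  # ~6.3 x 6.3 deg
print(f"12 channels  : {12 * fx * fy:.0f} deg^2") # ~470 deg^2 total
```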
The structure of the photopolarimeter is shown in the diagrams of Figure 3, its internal view in Figure 4, and its longitudinal view in Figure 5. Control electronics units, power supplies, the motors that drive the moving elements of the device, and connecting cables are not shown here. The front part of the photopolarimeter, located in the gap between the back focus of the telescope and the entrance of the camera, contains the following components: turret 1 with griz filters, an empty window, and a narrow-field block containing mirrors 1 and 4 as well as a field-of-view diaphragm (the optical axis of the telescope passes through the centers of the turret holes); the input (COMPUTAR M2514-MP2, focal length 25 mm, aperture ratio 1:1.4) and output (COMPUTAR V5014-MP, focal length 50 mm) lenses of the collimator; and mirror 3. The side part of the photopolarimeter is located outside the camera casing. It contains: turret 2 with narrow-field griz filters and an empty window; turret 3, coaxial with it, carrying three polaroids of different orientations (with polarization planes rotated by 120° with respect to each other) and an empty window; and mirror 2.

During observations in the wide-field (monitoring, or primary survey) mode, the empty window of turret 1 or one of its filters is installed on the optical axis of all 12 telescopes (depending on the specific tasks of the survey, observations of different areas with different filters, etc., are possible). The cameras acquire a sequence of full-frame images (4128 x 4104 pixels, or 6.3° x 6.3° each) at a rate of 54 fps, and the entire complex monitors an overall field of view of 470 sq. degrees.

In the narrow-field (research) mode, the block with mirrors 1 and 4 and a field diaphragm measuring 11′ x 11′ (1.45 x 1.45 mm) of turret 1 (see Fig.
3) is installed on the optical axis, while mirror 1 focuses the redirected beam on the diaphragm placed at the focus of the input lens of the collimator. The parallel beam constructed by it passes through one of the griz filters (or the empty window) of turret 2 and one of the polaroids (or the empty window) of turret 3. After being reflected by mirrors 2 and 3, the parallel beam is focused by the output lens of the collimator and redirected by mirror 4 to the sensor of the camera. As a result, the light from the 11′ x 11′ sky region is imaged onto an area with a linear size of 3 x 3 mm (240 x 240 pixels) at a scale of 2.75″/pix, twice as fine as the original 5.5″/pix - the scale is determined by the focal length of the collimator output lens. This image can be recorded with a time resolution of 20 to 2.5 ms, and its central 6′ x 6′ part (128 x 128 pixels) with a resolution of 0.6 ms (see Table 5).

The transition from the monitoring to the research mode takes a few seconds and is determined both by the rotation speed of the photopolarimeter turrets and by the repointing speed of the mount. In particular, when an optical transient is detected during real-time monitoring, the following operations are performed:

• its characteristics are determined (initial brightness, characteristic duration, structure of the light curve);
• a rough initial classification is performed (as noise, meteor, satellite, or a new or already known astrophysical object);
• the follow-up mode for every channel is selected (photometry and/or polarimetry, filters to use, exposure time);
• in parallel, all telescopes are repointed to the source, placing it in the small central field (with the exception of the instrument that detected the transient, which retains the original mode used for detection, in order to keep an uninterrupted sequence of data);
• observations start.

Figure 6 illustrates this process. Table 6 shows the estimated detection limits of the complex in different modes and with different time resolutions. The estimates were made for single exposures in white light at a characteristic wavelength of 5500 Å, a sky background of 21.5 mag/sq. arcsec, an optical plus atmospheric transmission of 0.25, the characteristics of the telescope and detector from Tables 2 and 4, and assuming that the object flux is fully contained inside a single pixel (i.e., a significantly undersampled PSF). It should be noted that the real limits when using filters and polaroids will be 0.5-1.0 magnitude worse.
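Under the same assumptions (5500 Å, sky of 21.5 mag/sq. arcsec, total transmission 0.25, flux in a single pixel, readout noise 2.9 e−), a simple photon-budget estimate of the detection limit can be written down as below. The 3000 Å effective bandwidth and the mag-0 zero point of ~1000 photons s⁻¹ cm⁻² Å⁻¹ are our assumptions for unfiltered ("white") light, so the output is only indicative of Table 6, not a reproduction of it.

```python
import math

PHOT0 = 1000.0   # photons s^-1 cm^-2 A^-1 from a mag-0 star (approx.)

def limiting_mag(d_m=0.30, t_exp=0.02, bw_A=3000.0, eff=0.25,
                 ron=2.9, sky_mag=21.5, pix_scale=5.5, snr=5.0):
    area = math.pi * (d_m * 100.0 / 2.0) ** 2          # cm^2
    rate0 = PHOT0 * bw_A * area * eff                  # e-/s at mag 0
    sky_e = rate0 * 10 ** (-0.4 * sky_mag) * pix_scale ** 2 * t_exp
    # solve SNR = S / sqrt(S + sky + RON^2) for source electrons S
    b = sky_e + ron ** 2
    S = 0.5 * (snr ** 2 + math.sqrt(snr ** 4 + 4.0 * snr ** 2 * b))
    return -2.5 * math.log10(S / (rate0 * t_exp))

for t in (0.02, 1.0, 30.0):
    print(f"t = {t:5.2f} s -> m_lim ~ {limiting_mag(t_exp=t):.1f}")
# ~13.8 at 20 ms, ~17.6 at 1 s, ~19.8 at 30 s (5-sigma, per pixel),
# consistent with the 18-20 mag design goal at 20-30 s exposures
```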
Shelter and infrastructure of the complex

The twelve telescopes of the SAINT complex are installed in two cylindrical shelters housing six channels each (see Fig. 7). Each shelter consists of a reinforced-concrete supporting slab and a steel truss frame sheathed with sandwich panels mounted on it. The free part of the slab serves as a flyover accommodating the cylindrical roof of the shelter in the working state of the complex; moving along rail guides, the roof covers the slab with the telescopes in the non-working state. The frame is fixed on a reinforced-concrete base covered with metal-plastic cladding, installed on a concrete pad dug into the ground.

The internal volumes of the frame form the technical compartments of the SAINT complex. They house the following equipment and infrastructure components:

• the power-supply input unit of the complex network, feeding the power drives of all its mechanical elements, the electronic components of the telescopes and instruments, the computer cluster, and the uninterruptible power supplies;
• the system providing water cooling of the detectors;
• a climate-control system based on COTES CR300 dehumidifiers.

Both parts of the shelter are equipped with video cameras for monitoring the internal and external space. The complex is also equipped with a meteorological system, a fish-eye all-sky camera and an IR cloud sensor, which makes it possible to automatically monitor weather conditions and make decisions on the start and end of observations. The movable roofs of the shelters are covered with a sun-protection coating, and flat, uniformly illuminated screens are placed on their inner surfaces to calibrate the sensitivity of the detectors.

Data acquisition, processing and storage

During the primary monitoring mode of operation, the complex will acquire data for every sky field (470 square degrees) for 20 minutes in a row, thus covering up to 12,700 square degrees in a typical 9-hour observational night. The data will be processed in real time, and a number of data products (transients on various time scales, co-added images with different temporal resolutions, etc.) will be derived and stored.

As both computational hardware and data analysis algorithms evolve very quickly, it is not possible (and not necessary) to specify the IT infrastructure of the SAINT complex here in full detail, as we did for the telescope itself. Therefore, below we outline its generic structure, requirements and possible approaches for handling the enormous amount of data the complex will produce.

In general, the operation of the complex will be carried out using software based on that which we used in observations with the FAVOR, TORTORA and Mini-MegaTORTORA instruments (see [47] and references therein). On the shortest time scales, where the timing constraints are the most demanding, transient detection will be performed in real time using fast image subtraction methods that rely on the sufficient local stability of the image PSF, as described e.g.
in [70]. On longer time scales, where PSF stability cannot be ensured, the methods for image subtraction with PSF matching developed over the last 20 years will be used, such as the [71] family of methods, the ZOGY algorithm [72], or SFFT [73]. These methods generally allow parallelization using modern GPU hardware, and thus may be expected to run fast enough to analyze the data in real time on minutes-to-hours time scales. An alternative to image subtraction may also be implemented using convolutional neural networks (CNNs, see e.g. [74] or [75]). CNNs will also be employed to reliably separate image subtraction artefacts, cosmic rays, and e.g. the effects of stellar scintillation on short time scales from bona fide transients, using the methods developed in e.g. [76,77,78,79].

Image co-addition will also allow simultaneously achieving the highest possible temporal resolution (in individual frames) and going for deeper objects (in running co-adds of various lengths, corresponding to e.g. every 100 or 1000 original images). Unfortunately, without a thorough laboratory study of the stability of the detector it is impossible to define the limits for such co-addition, as at some point the effects of pixel parameter drifts specific to CMOS detectors will become more significant than the further sensitivity gain. However, we may foresee that implementing some tailored observational strategy in the data sequence - e.g. dithering using a pre-defined pattern - will help mitigate such effects to some degree.

As a minimal architecture for the data processing cluster of SAINT we propose a hierarchical scheme with two identical computers per telescope channel - one for data acquisition from the CMOS and real-time image processing, the second for the channel hardware and high-level controls, intermediate data storage and image processing - plus a central computer that handles the overall operation of the complex, including survey scheduling and reaction to detected transients, communication with external networks, and various databases. Such an architecture also allows using all cluster machines for more computationally intensive data processing tasks during the day, when no observations are performed and no real-time data processing is necessary.

Let us outline the basic requirements for the data processing and storage infrastructure. They are defined by the extremely large amount of data acquired from several large-format detectors (16.9 million pixels each for the Balor camera) routinely operating at frame rates of up to 54 frames per second (or even up to 1684 fps in the follow-up regime), amounting to 1.8 gigabytes of data per second for every channel, or up to 800 terabytes per night for the whole complex. As a minimal realistic configuration using present-day commercially available hardware, we may use a camera connected to the PC through a four-channel CoaXPress interface, and a RAID-0 array consisting of five SATA-III hard drives of 20 terabytes each. Due to parallel writing, a speed of up to 4 gigabytes per second may be achieved for such a RAID configuration, which is sufficient for handling the real-time data flow. Longer-term data storage will require datacenter-class hardware, which is also commercially available.
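The quoted data rates follow directly from the frame geometry; the back-of-the-envelope check below assumes 2 bytes per stored pixel (an assumption about the recording format).

```python
# Monitoring-mode data rate, assuming 2 bytes per stored pixel
pixels   = 4128 * 4104        # Balor full frame, ~16.9 Mpix
fps      = 54
channels = 12
night_s  = 9 * 3600           # a typical 9-hour night

per_channel = pixels * 2 * fps / 1e9                  # GB/s
per_night   = per_channel * channels * night_s / 1e3  # TB/night
print(f"{per_channel:.2f} GB/s per channel, "
      f"~{per_night:.0f} TB per night for 12 channels")
# -> 1.83 GB/s per channel, ~711 TB per night
```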
Expected results

The proposed complex is able to operate in the still poorly studied region of the parameter space shown in Figure 1, and thus may provide a number of results related to previously unknown classes of rapid optical transients, as well as a vast amount of data on known ones. Below we briefly outline some of the fields where such an instrument may provide important developments.

Flaring stars are now being routinely discovered in modern time-domain sky surveys like Kepler [80], TESS [81,82], EvryScope [83], NGTS [84], ZTF [85] or Tomo-e Gozen [86], which perform high-cadence continuous observations of fixed sky regions. However, most of these experiments do not provide a temporal resolution better than half a minute, or in rare cases better than a second, and are performed in a single photometric filter, which limits the scientific content of the collected statistics. Thus, using a specifically tailored monitoring mode with SAINT, which is possible due to its multi-channel architecture - i.e., simultaneous multicolor observations - would allow augmenting the statistics of white-light flares with at least some estimates of their temperature. Moreover, employing the polarimetric mode for targeted (or follow-up, for the flares detected by the real-time processing pipeline) observations would also allow placing statistically reliable upper limits on (or even detecting) the polarized components of stellar flares of various durations, including the shorter ones not typically resolved by existing surveys (see also Figure 8 for details). The authors of [86] reported 22 flares detected in 40 hours of 1-second-cadence observations with the Tomo-e Gozen camera (21 square degrees of simultaneous field of view). While the depth of the SAINT survey is not as good, due to the superior field of view we may expect up to 12 flares per hour in the wide-field mode, thus collecting a significantly larger number of flashes, covered with better temporal resolution.

Observations of faint meteors with the Mini-MegaTORTORA system [47] typically resulted in the detection of several hundred events per observational night, with maximum brightnesses down to 8-10 magnitudes, which is 2-4 magnitudes deeper than the limits of most modern meteor detection experiments (see, for example, [87,88]) and comparable to the capabilities of the Tomo-e Gozen camera [89]. Such faint, and even fainter, meteor events are caused by micrometeoroids - particles of interplanetary (interstellar?)
dust with masses of 0.1 µg - 0.1 g [90] - that, according to direct transatmospheric studies, make up the majority of the cosmic matter falling to the Earth, 100-200 tons/year [91]. On the other hand, observations of meteors give an order of magnitude smaller influx [92] - this discrepancy is most probably due to an underestimation of the role of the ablation and fragmentation processes of micrometeoroids, as well as to inaccuracies in determining their velocities [92]. These problems are largely associated with the difficulty of interpreting radar observation data, which is the main means of studying such faint meteors with optical magnitudes of 10-14 [93]. Thus, carrying out a long-term survey of the population of faint meteors, covering the entire range of velocities from 12.3 to 71.9 km/s, will allow us to study the features of their optical emission, taking into account possible fragmentation, and to compare it with dynamical models of micrometeoroids. Other directions of meteor study might be the analysis of their mass distribution both in showers and in the sporadic component, as well as massive colorimetry of events during the peaks of different meteor showers in order to compare their spectral properties.

UV Ceti type flare stars belong to the main-sequence red dwarfs, which make up 70-75% of the stellar population of the Galaxy [94]. The duration of their stay on the main sequence exceeds the age of the Universe, and during this time their main physical characteristics remain unchanged [95]. These features explain the great interest in red dwarfs as possible regions where life may arise [96]. The probabilities of the occurrence of Earth-like planets orbiting these stars in habitable zones with a characteristic size of about 0.2 AU [97], estimated from various observations, are in the range of 30-50% [98,99]. This gives about 100 habitable Earth-like planets within 10 pc, and several billion in the entire Galaxy [96]. The small masses and sizes, as well as the low luminosity of red dwarfs, as a result of which the habitable zones are located close to the star, determine the relatively high probability of detecting Earth-like planets during transits. The transit depth for Earth-like planets ranges from 0.002 to 0.0084 magnitudes for host-star radii from 0.6 to 0.1 solar, and the probability of a favorable "edge-on" orientation of the orbit of the star-planet system for an observer is 0.5-1% [100]. The photometric accuracy required for transit detection is close to the limit for ground-based instruments. Nevertheless, it is achieved in various programs thanks to an optimal strategy of observations and selection of objects, the use of effective methods of data analysis, and methods of modeling the processes of obtaining them. Thus, when studying several dozen red dwarfs of 12-15 magnitudes with systems of distributed telescopes with diameters of 0.2 to 2 meters at exposures of 30-120 seconds, measurement accuracies in the range 0.001-0.004 magnitudes were achieved, leading to the detection of effects with a depth of 0.005-0.007 magnitudes for planets of 1-3 Earth masses located in habitable zones, with a probability of 2.5-8% [101,102,103]. These results were obtained from large data sets using the BLS [104,105] and TLS [106] folding algorithms in the space of periods and filling factors.
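As an illustration of this kind of period search, the sketch below injects a shallow box-shaped transit into a synthetic noisy light curve and recovers its period with the BLS implementation from astropy. All numbers (period, depth, cadence) are made up for the demonstration and are not tied to any particular target.

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(1)

# Synthetic light curve: 5000 points over 100 days, 0.4% scatter
t = np.sort(rng.uniform(0.0, 100.0, 5000))       # days
y = 1.0 + rng.normal(0.0, 0.004, t.size)

# Inject a hypothetical transit: 5 d period, 0.6% depth, 2 h duration
period, depth, dur = 5.0, 0.006, 2.0 / 24.0
y[(t % period) < dur] -= depth

bls = BoxLeastSquares(t, y)
res = bls.autopower(dur, minimum_period=1.0, maximum_period=20.0)
print(f"recovered period: {res.period[np.argmax(res.power)]:.3f} d "
      f"(injected {period} d)")
```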
In this context, such studies using the SAINT complex should be very effective. Instead of simultaneously monitoring up to 10 red dwarfs (according to the number of instruments used) within the framework of the programs mentioned, SAINT will be able to observe more than a thousand stars. With a minimum brightness of objects with potentially habitable Earth-like planets at the level of 16 magnitude, they will be located at distances of up to 100 pc, their number in the northern sky will be about 100,000 [96], and transits of such planets may be detected for several hundred red dwarfs. Note that when observing each star about 50 times over a visibility period of 100 days, with a transit duty cycle of 0.001-0.005 [100], the accuracy of intensity determination in the phased light curves increases by tens of times relative to its initial estimates in individual exposures. Thus, as a result of monitoring the sky with the SAINT complex, it is possible to increase the sample of Earth-like planets in habitable zones by 4-6 times relative to its current volume.

Similar results, within the framework of the strategy for detecting periodic signals in wide-angle surveys, can be obtained when searching for transits of Earth-like (and not only Earth-like) planets around white dwarfs. These objects may also host planetary systems and Earth-like planets located in habitable zones. Indeed, with luminosities of 10⁻³-10⁻⁴ and masses of 0.5-0.6 solar, their radii are close to that of the Earth, and their temperatures lie in the range of 4000-9000 K, which leads to the existence of a habitable zone at distances of 0.005-0.02 AU from the star over several billion years [97,107,108]. On the other hand, there is numerous evidence of the existence of various proto-planetary and post-planetary structures around white dwarfs - disks, planetesimals, asteroids [109,110]. Finally, 4 giant planets have been discovered in systems with white dwarfs [111], one of them by the transit method [112]. Thus, one can also hope for the presence of Earth-like (rocky) planets with masses of 1-2 Earth masses in the habitable zones of white dwarfs, as established for 63 G-M dwarfs (according to the Habitable Exoplanets Catalog). The depth of their transits, repeating with periods in the interval of 4-30 hours (corresponding to the size of the habitable zone), is 10-100%, the duration is 1-2 minutes, and the probability of the favorable "edge-on" orientation of the orbital plane relative to the observer is close to 1-2% for planet radii of 1-2 Earth radii [107,108,113]. In ground-based observations, even with small-diameter telescopes and standard photometric accuracy, it is possible to search for such effects. The primary difficulty in such studies is the required relatively high temporal resolution, at the level of several seconds. In particular, [113] searched for transits in long-term monitoring of 194 white dwarfs of 9-15 mag with a time resolution of 30 seconds (the 8-channel SuperWASP telescope was used - see Fig. 1 and Table 7) and obtained only upper limits, at 10%, for the frequency of occurrence of planets and brown dwarfs. The authors come to the conclusion that it is necessary to increase the studied sample and the duration of observations, and to improve the temporal resolution. Similar programs have been planned, and are partially being implemented using space-borne instruments [114,115], ground-based telescopes [116,117] and their combinations [118].
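A toy yield estimate along these lines is sketched below; the white-dwarf count anticipates the figure quoted in the next paragraph, and the phase-coverage factor is purely our assumption.

```python
# Toy estimate of the number of white dwarfs around which a transiting
# habitable-zone planet could in principle be caught (illustrative only)
n_wd       = 12_000   # WDs brighter than 18 mag in the northern sky
p_geom     = 0.015    # "edge-on" geometric probability (1-2%)
f_coverage = 0.7      # assumed fraction with adequate phase coverage

print(f"~{n_wd * p_geom * f_coverage:.0f} systems with detectable "
      f"transits, if a planet is present in each")
```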
In the northern sky, according to various estimates, there are 10-15 thousand white dwarfs brighter than 18 magnitude (see, for example, [119] or [120]), and for approximately 100-150 of them transits with periods between 3 hours and 50 days, duty cycles of 0.03-0.0006 and depths of 0.1-1 may be detected by SAINT. Note, however, that this estimate counts only the objects around which it would be possible to detect planets if they are there - and the probability that they are is unknown to us. On the other hand, in the absence of an effect, it will be possible to obtain an upper limit on this probability.

Finally, there are several other (high-risk) directions placed in the "Terra incognita" area of Fig. 8 where SAINT observations may be potentially crucial:

1. detection of optical flashes accompanying gamma-ray bursts (about 5 events per year?);
2. registration of optical companions of fast radio bursts, in particular the ones with repeating activity, where the same approach as for exoplanets may be used to improve the performance;
3. registration of optical companions of gravitational-wave events, or setting upper limits on the energy of their electromagnetic counterparts;
4. search for signals of extraterrestrial civilizations.

Conclusions

In this paper, we present the project of SAINT (Small Aperture Imaging Network Telescope) - a robotic 12-telescope wide-field complex with high temporal resolution, aimed at the detection and study of optical transient phenomena on the shortest possible time scales. Its multi-channel nature allows different modes of operation depending on the task - e.g., a wide-field survey or a narrow-field follow-up. The use of modern large-format CMOS detectors allows operation with exposures as short as 20 milliseconds and observation of the whole sky once every two nights. A novel reducer-collimator that may be rapidly installed into the light beam allows increasing the spatial resolution, effectively converting the complex into a photopolarimeter able to simultaneously acquire both color and polarimetric information. The flexible nature of the complex, and its operation in the poorly studied region of the parameter space of optical transients faster than a second, will provide an overwhelming amount of data on various classes of both astrophysical and near-Earth objects. On the other hand, as the complex is built mostly from off-the-shelf components, it may be easily scaled horizontally, either to increase the performance of an individual installation or to expand it to several sites over the globe able to provide a continuous view of the entire sky, thus realizing the "all sky all time" concept, which is extremely important when searching for any transient events, especially short ones.

Figure 2: Schematic view of a single assembled channel of the SAINT complex.
Figure 3: Schematic view of the photopolarimeter installed in the Cassegrain focus of every channel of the SAINT complex.
Figure 4: Internal view of the photopolarimeter and the path of rays in the follow-up mode.
Figure 5: Longitudinal view of the SAINT photopolarimeter.
Figure 6: Change of observation modes of the complex after detecting an optical transient.
Figure 7: The scheme and general view of the shelter of the SAINT complex.
Figure 8: Detection limits of SAINT according to Table 6 for various distances, timescales and peak luminosities.
Table 1: Main characteristics of the SAINT complex.
Table 2: Main characteristics of the GENON Max telescope (design by G. Borisov).
Table 5: Maximum frame rate depending on the size of the read-out region of the sensor (region of interest, ROI).
Table 6: Detection limits in different modes.
Table 7: Summary of wide-field survey telescopes currently in operation; the parameters of SAINT are included at the bottom of the table. Columns: N; Survey; t_exp, s; FOV, deg²; m_lim; D, m; AΩ, m²·deg²; References.
Glucosamine Modified the Surface of pH-Responsive Poly(2-(diethylamino)ethyl Methacrylate) Brushes Grafted on Hollow Mesoporous Silica Nanoparticles as Smart Nanocarrier

This work presents the synthesis of pH-responsive poly(2-(diethylamino)ethyl methacrylate) (PDEAEMA) brushes anchored on hollow mesoporous silica nanoparticles (HMSN-PDEAEMA) via a surface-initiated ARGET ATRP technique. The average size of the HMSNs was ca. 340 nm, with a 90 nm mesoporous silica shell. The dry thickness of the grafted PDEAEMA brushes was estimated by SEM and TEM to be ca. 30 nm. The halogen group on the surface of the PDEAEMA brushes was successfully derivatized with glucosamine, as confirmed by XPS. The effect of pH on the size of the hybrid nanoparticles was investigated by DLS. The size of the fabricated nanoparticles decreased from ca. 950 nm in acidic media to ca. 500 nm in basic media due to the deprotonation of the tertiary amines in the PDEAEMA. The PDEAEMA-modified HMSN nanocarrier was efficiently loaded with doxorubicin (DOX), with a loading capacity of ca. 64%. DOX was released from the hybrid nanoparticles in a relatively controlled, pH-triggered manner. The cytotoxicity studies demonstrated that DOX@HMSN-PDEAEMA-Glucosamine showed a strong ability to kill breast cancer cells (MCF-7 and MCF-7/ADR) at low drug concentrations, in comparison to free DOX.

Introduction

Nanoparticles have been widely used as drug delivery systems (DDSs), specifically for anticancer therapy. Nanoparticles are 100- to 10,000-fold smaller than cancer cells; therefore, they are able to cross cell barriers easily [1]. Among the various nanoparticles that have been utilized as anticancer nanocarriers, including liposomes [2], polymeric nanoparticles [3], nucleic acid [4], carbon [5], and silica nanoparticles [6,7], the latter stand out for the manufacture of DDSs due to their high surface area, high rigidity, thermal stability, biocompatibility, high loading and protection of the drug, controllable rate of release, and efficient targeting [8][9][10][11][12][13][14]. Thus, different types of silica nanoparticles, such as mesoporous silica nanoparticles (MSNs) and hollow mesoporous silica nanoparticles (HMSNs), have been used as nanocarriers [15]. HMSNs have a central cavity that is beneficial for the loading of drugs. Previous studies have shown that HMSNs can store a higher amount of drugs than conventional MSNs [16][17][18]. Zhu et al. used ibuprofen to examine the storage capacity of HMSNs and MSNs. They found that 744.5 and 358.6 mg/g of ibuprofen can be stored in HMSNs and MSNs, respectively [17]. Geng et al. prepared MSNs and HMSNs as drug carriers for carvedilol and fenofibrate, using solvent evaporation and adsorption equilibrium methods to investigate the different drug loading abilities of HMSNs and MSNs [18]. Their results showed that HMSNs had a higher drug loading than MSNs with both methods. However, the control of drug release is more limited for HMSNs than for MSNs. To achieve better control of the drug release, the HMSN surface can be modified with organic molecules such as stimuli-responsive polymer brushes [19,20]. Polymer brushes can be anchored by the ends of the polymer chains to the surface of particles via physisorption or covalent attachment [20,21]. Interestingly, some of these polymer brushes are stimuli-responsive; environmental changes such as light, temperature, and pH can alter the polymer chain conformation [22][23][24].
Various techniques have been used to grow polymer brushes on nanoparticle surfaces, including reversible addition-fragmentation chain transfer polymerization (RAFT) and atom transfer radical polymerization (ATRP) [25][26][27]. To achieve controlled drug release at specific tumor sites, mesoporous silica has been modified with pH-responsive polymer brushes, affording a gatekeeper that opens the pores under either lower pH or higher temperature conditions [28][29][30]. Nam-Kyoung et al. reported the synthesis of a novel pH-triggered drug delivery system by grafting poly-L-lysine onto the pore entrances of MSNs as a drug gatekeeper [31]. In addition, the synthesis of a pH-responsive diblock copolymer, i.e., 2-(tert-butylamino)ethyl methacrylate-b-poly(ethylene glycol) methyl ether methacrylate, was reported by Alswieleh et al. [32]. Doxycycline was loaded in the multifunctional pH-responsive diblock copolymer, and the drug could be released from the nanosystem in a relatively controlled manner. Yang et al. designed a pH and glutathione dually responsive poly(acrylic acid) shell coated on HMSNs for controlled drug delivery [33]. The loading capacity and encapsulation efficiency were found to be 43% and 96%, respectively. Yu et al. reported the growth of poly(N,N-dimethylaminoethyl methacrylate) on the HMSN surface by surface-initiated (SI)-ATRP [34]. The results showed that the hybrid nanomaterials have a large storage capacity with controlled release behavior. Zhang et al. modified HMSNs with poly(2-(diethylamino)ethyl methacrylate) (PDEAEMA) via SI-ATRP [35], finding that the drug was easily encapsulated into and quickly released from the nanosystem, with a high loading capacity. Another HMSN system, coated with a copolymer shell bearing N-(3,4-dihydroxyphenethyl)methacrylamide and N-isopropylacrylamide, reported by Zhang et al., exhibited a high drug loading capacity and embedding efficiency [36]. Despite all of these advances, to the best of our knowledge, very little work has been done on the modification of the polymer brush surface.

Herein, HMSNs were synthesized with an average particle size of 340 nm. The as-prepared HMSNs were coated with pH-responsive poly(2-(diethylamino)ethyl methacrylate) (PDEAEMA) brushes via the SI-activators regenerated by electron transfer (ARGET) ATRP method. The halogen group on the surface of the PDEAEMA brushes was then converted to an amine, followed by reaction with succinic anhydride and derivatization with glucosamine. The effect of pH on the size of the fabricated nanoparticles was investigated by dynamic light scattering (DLS). The nanocarriers were loaded with doxorubicin (DOX) in acidic media. The DOX release behavior was studied at different pH values. The nanosystem was characterized using various techniques, such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray photoelectron spectroscopy (XPS). The cytotoxicity studies were performed using breast cancer cells (MCF-7 and MCF-7/ADR) at low drug concentrations.

Synthesis of Uniform HMSNs

HMSNs were prepared according to the method published by Fang et al. [37]. The SiO2 core was first synthesized via a modified Stöber method. Briefly, ethanol (74 mL), 10 mL of DI water, and ammonium hydroxide aqueous solution (28%, 3.15 mL) were mixed; then, 6 mL of TEOS was added, and the mixture was stirred at room temperature for 1 h. The SiO2 product was collected by centrifugation at 6000 rpm for 15 min and washed with DI water and ethanol.
Secondly, the as-prepared SiO2 particles were coated with a mesoporous shell (SiO2@MSN) as follows: SiO2 (50 mg) was homogeneously dispersed in 10 mL of DI water by ultrasonication for 15 min. The SiO2 suspension was mixed with a solution containing CTAB (75 mg), DI water (15 mL), ethanol (15 mL), and ammonia solution (0.275 mL) under stirring at room temperature for 45 min. Then, 0.125 mL of TEOS was added to the mixture and allowed to react for 3 h. The SiO2@MSN nanoparticles were collected and washed with DI water and ethanol. Thirdly, HMSNs were prepared by etching the SiO2 cores of SiO2@MSN to form a hollow interior. Thus, SiO2@MSN was dispersed in 10 mL of DI water and sonicated for 15 min. Na2CO3 (212 mg) was added to the mixture, which was stirred at 50 °C for 15 h. The final product was centrifuged and washed with DI water and ethanol.

Synthesis of HMSNs Modified with Amine Groups (HMSN-NH2)

To functionalize the HMSN surface with amine groups, 1.5 g of HMSNs was dispersed in methanol (50 mL) containing 1.5 mL of APTES, and the mixture was refluxed for 12 h. The obtained product was centrifuged and washed with ethanol several times to remove the residual APTES.

Formation of the Channels of HMSNs

The surfactant (CTAB) was removed by suspending 1 g of HMSN-NH2 in an ammonium nitrate/ethanol solution (1 g/100 mL) under reflux and stirring overnight. The final product was obtained by centrifugation, washed with ethanol three times, and dried under vacuum overnight.

Immobilization of the BIBB Initiator on HMSNs (HMSN-Br)

HMSN-NH2 (1 g) was dispersed in DCM (25.0 mL) and TEA (0.5 mL, 3.6 mmol). Then, 0.25 mL (2.02 mmol) of 2-bromo-2-methylpropionyl bromide was mixed with 5 mL of DCM, added dropwise into the suspension, and allowed to react for 48 h at room temperature. The HMSN-Br nanoparticles were collected by centrifugation and washed with DCM and ethanol [38].

Growth of PDEAEMA Brushes on HMSNs (HMSN-PDEAEMA)

In a typical reaction, 0.4 g of HMSN-Br, 12 mL of ethanol, and 3 mL of water were mixed and degassed with N2 for 30 min under stirring. 2-(Diethylamino)ethyl methacrylate (2 mL), 0.0009 g of CuBr2, and 0.0067 g of BIPY were added to the mixture. Ascorbic acid (0.0076 g) was introduced into the mixture under N2, and the reaction was stirred at room temperature for 6 h. The products were collected by centrifugation and washed with an acidic aqueous solution to remove any copper residue, followed by washing with ethanol.

Conversion of the Bromine End Groups to Amine Groups (HMSN-PDEAEMA-NH2)

HMSN-PDEAEMA (200 mg) was dispersed in 2 mL of degassed DMF and kept under N2. In a separate flask, NaN3 was degassed, and then DMF was added to give a 0.2 M solution. The NaN3 solution (5 mL) was then added to the HMSN-PDEAEMA under N2, and the resulting mixture was heated at 60 °C for 18 h. Afterwards, the particles were separated by centrifugation and washed three times with DMF. The obtained product was suspended in 2 mL of degassed DMF and kept under N2. In a separate flask, PPh3 was degassed, and then DMF was added to give a 0.2 M solution. The PPh3 solution (5 mL) was added to the suspension under N2, and the mixture was heated at 60 °C for 18 h. Then, the particles were separated by centrifugation and washed with DMF, ethanol, and water. The sample was mixed with water/THF (5/5) under N2 and stirred for 18 h at 40 °C. The final product was obtained by centrifugation and washed with ethanol.
Immobilization of Glucosamine on the Surface of HMSNs (HMSN-PDEAEMA-Glucosamine)

HMSN-PDEAEMA-COOH was obtained by suspending 100 mg of HMSN-PDEAEMA-NH2 in a DCM:pyridine (1:1) solution and sonicating for 10 min. Succinic anhydride (500 mg) was added to the reaction mixture, which was sonicated for 30 min. The resulting mixture was stirred at room temperature for 18 h. The sample was separated by centrifugation and washed with DMF to remove excess succinic anhydride. The product (100 mg) was suspended in 2 mL of DMF and sonicated for 10 min. EDC (200 mg) and NHS (200 mg) were added to the suspension, which was sonicated for 30 min. The resulting mixture was stirred at room temperature for 18 h to afford HMSN-PDEAEMA-NHS. The sample was separated by centrifugation and washed with DMF to remove excess EDC and NHS [39]. HMSN-PDEAEMA-NHS (100 mg) was suspended in 10 mL of DMF and sonicated for 30 min. Glucosamine (0.7 mg) and TEA (100 µL) were added to the suspension, which was kept at room temperature for 24 h. The final product was separated by centrifugation and washed with DMF and DI water to remove unreacted glucosamine (Scheme 1).

Measurement and Characterization

The nanoparticles were imaged by SEM using a JEOL JSM-7610F microscope (Japan) at 15 kV without any pretreatment. For TEM imaging, a JEOL JEM-1400 microscope was used at 100 kV; a drop of nanoparticles diluted in ethanol was placed on a copper grid and dried at 60 °C. Infrared (IR) spectra were acquired using a Perkin-Elmer Spectrum BX instrument (USA) with KBr pellets in the region of 400-4400 cm⁻¹ at a resolution of 4 cm⁻¹. The surface area, pore volume, and pore size were measured from N2 physisorption isotherms on a Micromeritics Gemini 2375 volumetric analyzer (USA). Before analysis, the samples were degassed at 140 °C for 10 h. Thermogravimetric analysis (TGA) was performed on a Perkin-Elmer Pyris 1 TGA instrument (USA) over a temperature range of 25-600 °C at a heating rate of 20 °C/min. XPS measurements were performed on a JPS-9030 spectrometer (JEOL, Japan). All samples were etched for 20 s with Ar gas to remove surface contamination inside an ultra-high-vacuum (UHV) chamber at about 10⁻⁹ Torr. Particle sizes were measured at different pH values using a Malvern Zetasizer Nano ZS DLS instrument (UK) at 25 °C. UV spectra were obtained using a SpectraMax Plus 384 microplate reader (USA).

Drug Loading and Release

The drug loading and release procedure was conducted according to the method published by Bilalis et al. [40]. Accordingly, the fabricated nanoparticles (1 mg) were suspended in 1 mL of PBS; then, 1 mL of DOX solution (2 mg/mL) was added to the suspension. The pH of the suspension was adjusted to 3 by adding an aqueous solution of HCl (0.1 M), and the suspension was stirred for 24 h at room temperature in the dark. The pH of the suspension was then changed to 8 by adding an aqueous solution of NaOH (0.1 M), and the suspension was stirred for 2 h. The supernatants were collected to determine the concentration of unloaded DOX using a UV-vis spectrophotometer at 480 nm. The entrapment efficiency (EE) and loading capacity (LC) for DOX were calculated by the following formulas, respectively:

EE = (weight of drug in nanoparticles/weight of drug added initially) × 100 (1)

LC = (weight of drug in nanoparticles/weight of drug-loaded nanoparticles) × 100 (2)
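A minimal sketch of this calculation is given below. Equation (2) was lost in the source; it is reconstructed here with the drug-loaded particle mass in the denominator, which is one common convention and is consistent with the reported LC of ca. 64% for 2 mg of DOX per 1 mg of carrier. The unloaded-drug value in the example is hypothetical.

```python
# Sketch of Eqs. (1)-(2): entrapment efficiency and loading capacity
def loading_metrics(drug_added_mg, unloaded_mg, carrier_mg):
    loaded = drug_added_mg - unloaded_mg           # drug in nanoparticles
    ee = 100.0 * loaded / drug_added_mg            # Eq. (1)
    lc = 100.0 * loaded / (loaded + carrier_mg)    # Eq. (2), assumed form
    return ee, lc

# Hypothetical example with the paper's proportions (1 mg carrier, 2 mg DOX)
ee, lc = loading_metrics(drug_added_mg=2.0, unloaded_mg=0.25, carrier_mg=1.0)
print(f"EE = {ee:.1f}%, LC = {lc:.1f}%")           # EE = 87.5%, LC = 63.6%
```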
In the release procedure, 0.25 mg of DOX@HMSN-PDEAEMA-glucosamine or DOX@HMSN-PDEAEMA was added to 1 mL of PBS at different pH values (5, 6.5, 7.4, and 8) at 37 °C under constant shaking (220 rpm). The solution was collected from the dispersion at different time points, and the amount of released drug was determined by UV-vis spectroscopy. The volume of buffer was kept constant by adding 1 mL of fresh medium after each collection. The cumulative weight percent of DOX was calculated using the following equation:

Cumulative weight (%) = (weight of drug released at a specific time point/weight of drug in nanoparticles) × 100 (3)

Scheme 1: Illustration of the synthesis route of glucosamine-modified pH-responsive poly(2-(diethylamino)ethyl methacrylate) brushes grafted on hollow mesoporous silica nanoparticles.

Cytotoxicity Assay

The cytotoxicity of the hybrid HMSNs was evaluated using the viability reagent PrestoBlue. The PrestoBlue kit relies on the resazurin substrate (a water-soluble, non-toxic dye), which is effectively reduced in mitochondria to resorufin by NADPH dehydrogenase or NADH dehydrogenase as a measure of mitochondrial metabolic activity.
This conversion occurs intracellularly, and the resorufin molecules formed enter the cytosol, serving as indicators of cell death and viability [41]. Breast cancer cell lines (MCF-7 and MCF-7/ADR) were harvested using 0.25% trypsin-EDTA and seeded into 96-well plates at a density of 3000 cells/well in DMEM medium and cultured in 5% CO2 at 37 °C for 24 h. On the second day, the media were removed and replaced by media containing the different drug forms (free DOX, DOX@HMSN-PDEAEMA, and DOX@HMSN-PDEAEMA-Glucosamine) at concentrations of 2931, 1466, 733, 366.37, 183, 92, 46, 23, 11, 6, and 3 µM, and the cell plates were incubated in 5% CO2 at 37 °C for 24 h. Finally, the plates were subjected to the PrestoBlue kit procedures to measure cell viability, as instructed in the kit manual.

Data Analysis

GraphPad Prism software was used to plot the figures and to conduct the data analysis; ANOVA and Tukey's multiple comparisons test were used to compare groups and determine the p-values. A p-value < 0.05 was considered significant. All experiments were done in triplicate.

Results and Discussion

The silica cores were synthesized via the Stöber method in the presence of a silica source and an aqueous basic-alcohol medium. The obtained silica nanoparticles had a spherical shape, a smooth surface, and a monodisperse diameter of around 140 nm, as shown in Figure 1A. The size distribution was estimated with ImageJ software to be between 120 and 180 nm. The SiO2 nanoparticles were then coated with a silica shell in the presence of CTAB, leading to the formation of a mesoporous shell. The mean shell thickness was ca. 100 nm, with a particle size distribution between 230 and 340 nm (Figure 1B). After etching with Na2CO3 to remove the silica cores, hollow silica nanoparticles were formed, as can be seen in the TEM images depicted in Figure 1C,D. Most HMSNs were formed uniformly, with a narrow size distribution. The average size of the HMSNs was estimated to be ca. 340 nm, which agrees with the SEM image profile. The thickness of the shell was measured to be ca. 90 nm. PDEAEMA brushes were successfully grafted on the surface of the HMSNs via ATRP, as illustrated in the SEM and TEM images shown in Figure 1E,F. The average diameter of the spherical HMSN-PDEAEMA was larger than that of the HMSNs (ca. 400 nm vs. 340 nm, respectively). The dry thickness of the PDEAEMA brushes was estimated to be 30 nm. The TEM image shows that the size of the nanoparticles was ca. 410 nm, which is in good agreement with the result obtained from the SEM image. The combined thickness of the silica shell and the dry polymer layer was ca. 120 nm.

Figure 2A shows the N2 adsorption-desorption isotherms for HMSN-Br and HMSN-PDEAEMA, which exhibit typical type-IV curves, indicating the presence of mesoporous materials. The surface area and pore volume of HMSN-Br were 372 m²·g⁻¹ and 0.45 cm³·g⁻¹, respectively. After grafting PDEAEMA, the resulting HMSN-PDEAEMA showed a decrease in surface area (237 m²·g⁻¹) and pore volume (0.18 cm³·g⁻¹). These results demonstrate that PDEAEMA blocked the entrances of the mesoporous channels. The FTIR spectra of HMSN-CTAB, HMSNs without CTAB, HMSN-Br, and HMSN-PDEAEMA are shown in Figure 2B.
After grafting PDEAEMA, the resulting HMSN-PDEAEMA showed a decrease in surface area (237 m²·g⁻¹) and pore volume (0.18 cm³·g⁻¹). These results demonstrate that PDEAEMA blocked the entrances of the mesoporous channels. The FTIR spectra of HMSN-CTAB, HMSNs without CTAB, HMSN-Br, and HMSN-PDEAEMA are shown in Figure 2B.

The successful modification of the HMSNs was also evaluated by TGA by heating to 1000 °C in an N2 atmosphere (Figure 2C). As shown in the TGA curves, the increased weight loss after each synthetic step further confirmed the modification of the HMSN surface. The weight loss of HMSNs with CTAB exhibited three mass-loss steps. The first step ranged from ambient temperature to about 220 °C and is attributable to water desorption and dehydration of the materials. The second step occurred from ca. 300 to 480 °C and is assigned to the degradation of CTAB. The third mass-loss step, at around 580 °C, can be attributed to dehydroxylation of the materials. The total weight loss of HMSNs with CTAB was about 38%. The TGA curve of the amino-functionalized HMSNs after removing CTAB showed four mass-loss steps. The first step ranged from room temperature to ca. 100 °C and is ascribed to water desorption from the surface of the nanoparticles. The second step occurred from ca. 100 to 300 °C and is due to dehydration of the materials. The third mass-loss step, at around 350 to 480 °C, is attributable to the degradation of APTES. The fourth mass-loss step, at around 580 °C, can be assigned to dehydroxylation. The total weight loss was ca. 20%. Similar behavior was observed when the polymerization was initiated, with a weight loss of 25%. After the polymerization, four mass-loss steps were observed. The first step, ranging from ambient temperature to about 230 °C, is attributed to water desorption and dehydration of the materials. The second step occurred from ca. 300 to 450 °C, corresponding to polymer degradation. The third step is assigned to the degradation of the silane monolayer and CTAB at temperatures ranging from 480 to 550 °C. The fourth mass-loss step, at around 580 °C, is attributable to dehydroxylation of the materials. The weight of PDEAEMA attached to the HMSN surface was estimated to be ca. 20%.
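One plausible way to arrive at the ca. 20% polymer figure is to difference the total weight losses of the initiator-modified and polymer-grafted particles, as the short sketch below illustrates. The 25% pre-polymerization loss is quoted in the text; the 45% post-polymerization total is an assumption made here so that the difference reproduces the reported estimate, and the calculation is not the authors' analysis.

```python
# TGA residual-mass bookkeeping (illustrative only).
loss_before_grafting_pct = 25.0  # HMSN-Br: water + APTES + initiator (from the text)
loss_after_grafting_pct = 45.0   # HMSN-PDEAEMA total loss (assumed for illustration)

grafted_polymer_pct = loss_after_grafting_pct - loss_before_grafting_pct
print(f"Estimated grafted PDEAEMA: ~{grafted_polymer_pct:.0f} wt%")  # ~20 wt%
```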
XPS characterizations were conducted to determine the surface composition of HMSN-PDEAEMA before and after glucosamine modification (Figure 3). In the C1s region (Figure 3A), three peaks at binding energies of 284.9, 286, and 288.5 eV with a ratio of 4.8:3.9:1 correspond to the C-H, C-(N, O), and O=C-O bonds of PDEAEMA, respectively. The peak-area ratio was close to the theoretical ratio of 5:4:1, which corresponds to the polymer composition. As expected, no noticeable difference was observed in the polymer peaks after debromination (Figure 3B). After the immobilization of glucosamine, a peak appeared at ~286.5 eV, which can be attributed to the increase in the surface content of C-O bonds from glucosamine. An increase in the intensity of the peak at 288.5 eV confirmed the successful attachment of glucosamine (Figure 3C).

As shown in Figure 4, the hydrodynamic sizes of HMSN-PDEAEMA dispersions in PBS at different pH values were measured by DLS. As expected, the size of HMSN-PDEAEMA was dependent on the pH value due to protonation or deprotonation of the tertiary amine groups on the side chains of PDEAEMA, which affects the polymer polarity. At acidic pH, the hydrodynamic size of HMSN-PDEAEMA increased because protonation renders the brushes hydrophilic: the particle size increased from ca. 700 nm at pH 7 to ca. 990 nm at pH 4 as a result of the repulsion between polymer chains. In alkaline media, HMSN-PDEAEMA was completely deprotonated to the hydrophobic polymer, reducing the particle size from 900 to 680 nm at pH 8.

The LC of HMSN-PDEAEMA and HMSN-PDEAEMA-glucosamine was determined by UV-vis spectroscopy at 480 nm using DOX as a guest molecule. At pH 3, protonation of PDEAEMA leads to pore opening; thus, DOX can be hosted in the pores owing to the electrostatic interaction between the drug and the internal surface of the HMSNs. In contrast, at alkaline pH, the PDEAEMA shell collapses. The results indicate that the amount of stored drug increases with DOX concentration. No significant difference in the LC was observed between HMSN-PDEAEMA and HMSN-PDEAEMA-glucosamine at the same concentration (see Table S1 and Figure S1).
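The pH-dependent swelling and pore gating described above can be rationalized with a simple Henderson-Hasselbalch estimate of the fraction of protonated tertiary amines. The sketch below uses a pKa of 7.3, a commonly cited literature-typical value for PDEAEMA that is assumed here for illustration and is not a value reported in this work.

```python
def protonated_fraction(pH, pKa=7.3):
    """Fraction of PDEAEMA tertiary amines protonated at a given pH
    (Henderson-Hasselbalch; pKa is an assumed, literature-typical value)."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

# Mostly protonated (swollen, pores open) in acid; mostly deprotonated
# (collapsed, pores gated shut) in base.
for pH in (4.0, 5.0, 6.5, 7.4, 8.0):
    print(f"pH {pH}: {protonated_fraction(pH):.0%} protonated")
```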
The DOX release from HMSN-PDEAEMA and HMSN-PDEAEMA-glucosamine nanoparticles was studied in PBS buffer at pH 5, 6.5, 7.4, and 8. As illustrated in Figure 5, the drug release rate for both nanosystems was faster in mildly acidic solution than in basic media. There was a slight difference in the cumulative drug release for the two materials at pH 5 and 6.5, which was ca. 20% and ca. 18%, respectively, after 48 h. This is because protons can easily reach the polymer chains and protonate the amine group of DOX, which accelerates the drug release. However, the amount of cumulative drug released from HMSN-PDEAEMA and HMSN-PDEAEMA-glucosamine decreased significantly at pH 7.4 and 8, to ca. 4% after 48 h. Under these conditions, most of the drug remained encapsulated in the nanocarrier due to the collapse of the polymer chains.

When the cancer cell lines were incubated with all drug forms for 24 h to measure cell toxicity (Figure S2), free DOX demonstrated higher toxicity on both MCF-7 and MCF-7/ADR only at high drug concentrations (2931, 1465.5, 733, 366, and 183 µM). On the other hand, DOX@HMSN-PDEAEMA-glucosamine showed a strong ability to kill breast cancer cells of both types at low drug concentrations (23, 11, 6, and 3 µM) in comparison to free DOX and the DOX@HMSN-PDEAEMA nanoparticles (p < 0.0001), as shown in Figure 6A,B.
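Dose-response data of this kind are usually summarized by fitting a four-parameter logistic curve to extract an IC50. The sketch below shows such a fit with scipy on synthetic viability values spanning the concentration series used above; the viability numbers are placeholders, and the authors' equivalent analysis was performed in GraphPad Prism.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic (sigmoidal dose-response) curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Placeholder data: drug concentration (uM) vs. % viability after 24 h
conc = np.array([3, 6, 11, 23, 46, 92, 183, 366, 733, 1466, 2931], float)
viability = np.array([95, 90, 80, 62, 45, 30, 20, 12, 8, 5, 4], float)

popt, _ = curve_fit(four_pl, conc, viability,
                    p0=[100.0, 0.0, 50.0, 1.0], maxfev=10000)
top, bottom, ic50, hill = popt
print(f"Fitted IC50 ~= {ic50:.1f} uM (Hill slope {hill:.2f})")
```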
Conclusions

In summary, we reported a facile synthesis of glucosamine-modified PDEAEMA brushes grafted on the outer surface of HMSNs via ATRP. The SiO2 core was coated with 50 nm mesoporous silica, followed by etching in alkaline media. The external surface of the nanoparticles was functionalized with an ATRP initiator, followed by growth of PDEAEMA via the ARGET ATRP method. The end-group functionality was converted to amine groups using sodium azide and triphenylphosphine. The amine groups were then reacted with succinic anhydride, followed by NHS activation and reaction with glucosamine. The fabricated nanomaterials were characterized by SEM, TEM, FTIR, TGA, and XPS. DLS was used to investigate the conformational change of the polymer chains at different pH values. The size of the nanoparticles increased approximately twofold in acidic media due to protonation of the tertiary amine in PDEAEMA, and the size decreased at pH above 7.4. DOX was efficiently loaded into the HMSN-PDEAEMA and HMSN-PDEAEMA-glucosamine nanocarriers, which exhibited an LC of ca. 64%. Furthermore, the release of DOX proceeded in a relatively controlled, pH-triggered manner from both nanocarriers. The cytotoxicity studies demonstrated that DOX@HMSN-PDEAEMA-glucosamine showed a strong ability to kill breast cancer cells (MCF-7 and MCF-7/ADR) at low drug concentrations in comparison to free DOX.

Conflicts of Interest: The authors declare no conflict of interest.
Antibody-dependent cellular cytotoxicity, infected cell binding and neutralization by antibodies to the SIV envelope glycoprotein

Antibodies specific for diverse epitopes of the simian immunodeficiency virus envelope glycoprotein (SIV Env) have been isolated from rhesus macaques to provide physiologically relevant reagents for investigating antibody-mediated protection in this species as a nonhuman primate model for HIV/AIDS. With increasing interest in the contribution of Fc-mediated effector functions to protective immunity, we selected thirty antibodies representing different classes of SIV Env epitopes for a comparison of antibody-dependent cellular cytotoxicity (ADCC), binding to Env on the surface of infected cells and neutralization of viral infectivity. These activities were measured against cells infected with neutralization-sensitive (SIVmac316 and SIVsmE660-FL14) and neutralization-resistant (SIVmac239 and SIVsmE543-3) viruses representing genetically distinct isolates. Antibodies to the CD4-binding site and CD4-inducible epitopes were identified with especially potent ADCC against all four viruses. ADCC correlated well with antibody binding to virus-infected cells. ADCC also correlated with neutralization. However, several instances of ADCC without detectable neutralization or neutralization without detectable ADCC were observed. The incomplete correspondence between ADCC and neutralization shows that some antibody-Env interactions can uncouple these antiviral activities. Nevertheless, the overall correlation between neutralization and ADCC implies that most antibodies that are capable of binding to Env on the surface of virions to block infectivity are also capable of binding to Env on the surface of virus-infected cells to direct their elimination by ADCC.

Introduction

Recent efforts to develop effective vaccines and immunotherapies for HIV-1 have primarily focused on antibody responses to the viral envelope glycoprotein [1,2]. The most effective antibodies can neutralize genetically diverse HIV-1 field isolates at low concentrations and are referred to as potent broadly neutralizing antibodies (bnAbs). These antibodies are capable of binding to functional Env trimers as they exist on virions to block infectivity. However, most antibodies to Env elicited in response to vaccination or natural infection are non-neutralizing [3,4]. Non-neutralizing antibodies (nnAbs) bind to surfaces of Env that are not accessible prior to receptor engagement and by definition cannot block viral infectivity. However, under certain circumstances these antibodies may contribute to the elimination of virus-infected cells through Fc-dependent mechanisms such as antibody-dependent cellular cytotoxicity (ADCC). Several correlative studies suggest that Fc-mediated effector functions may contribute to protective immunity. Although ADCC was not identified as a correlate of protection in the RV144 trial, a non-significant trend towards a lower risk of HIV-1 infection was observed among vaccine recipients with higher ADCC responses [5]. Follow-up studies later revealed the induction of Env-specific antibodies among vaccinated subjects of the RV144 trial with enhanced Fcɣ receptor (FcɣR)-mediated functions [6-8]. Higher ADCC responses have also been associated with better clinical outcomes in HIV-infected patients [9-15] and with partial or complete protection against mucosal SIV or SHIV challenge in nonhuman primate models [16-21].
While these observations do not directly implicate ADCC as a protective mechanism, they have prompted a number of animal studies to investigate the contribution of FcɣR-mediated effector functions to protective immunity. The results of passive antibody transfer experiments to directly assess Fc-mediated protection in nonhuman primate models have been mixed. In support of Fc-mediated protection, Fc domain substitutions in a first-generation HIV-1 bnAb that abrogate FcɣR binding impaired protection against SHIV challenge [22,23]. FcɣR null mutations also reduced the rate of viral load decline following the treatment of SHIV-infected animals with a bispecific bnAb [24]. Passive transfer of purified IgG from vaccinated macaques to naïve animals afforded partial protection against mucosal SIV challenge in the absence of detectable neutralizing antibodies [20]. Furthermore, AAV delivery of an SIV Env-specific antibody capable of mediating ADCC without detectable neutralization protected one animal against mucosal SIV challenge [25]. However, other studies have failed to support these observations. The passive administration of a number of HIV-1 nnAbs, including some reported to mediate ADCC in vitro, failed to protect macaques against SHIV challenge [26-30]. The administration of a non-fucosylated HIV-1 bnAb with increased ADCC activity also did not enhance protection against SHIV challenge [31]. Moreover, contrary to earlier results with a first-generation bnAb [22,23], Fc domain substitutions in a potent second-generation bnAb that abrogate FcɣR binding did not diminish protection against SHIV challenge [32,33]. To facilitate the investigation of antibody-mediated protection in nonhuman primate models, more than 70 antibodies to diverse epitopes of the SIV envelope glycoprotein were isolated from infected or vaccinated rhesus macaques [34,35]. SIV Env-specific antibodies make it possible to assess protection using pathogenic SIV challenge strains that do not require extensive adaptation for efficient replication in animals [36-40]. This is an important advantage over SHIV challenge strains that are often poorly adapted for replication in macaques and have acquired changes in Env that reduce their sensitivity to certain HIV-1 bnAbs [41-46]. Antibodies isolated from rhesus macaques also induce much lower anti-drug antibody (ADA) responses than human or "simianized" antibodies when administered to macaques [35,47], which is essential for studies designed to achieve sustained antibody concentrations in vivo [25,35,47-50]. Rhesus macaque antibodies further ensure physiologically relevant interactions with macaque FcɣRs that may differ for animal studies using human antibodies because of species-specific differences in these receptors [51]. In the present study, we selected rhesus macaque antibodies specific for diverse epitopes of the SIV envelope glycoprotein for a comparison of ADCC, Env binding and neutralization. Our results identify antibodies to the CD4 binding site and to CD4-inducible epitopes with especially broad and potent ADCC. We found that ADCC correlates well with Env binding and neutralization, consistent with the notion that most antibodies that are capable of binding to Env on the surface of virions to block infectivity are also capable of binding to Env on virus-infected cells to mediate ADCC.
Nevertheless, instances of ADCC in the absence of neutralization and neutralization in the absence of ADCC were observed, indicating differences in some antibody-Env interactions that can uncouple these antiviral functions.

Results

Thirty SIV Env-specific antibodies were selected for an analysis of their ADCC responses to virus-infected cells and for comparison of ADCC with Env binding and neutralization of viral infectivity. The majority of these antibodies were isolated from rhesus macaques vaccinated with SIV Env immunogens and subsequently challenged with SIVmac239, SIVmac251 or SIVsmE660 [21,34]. Exceptions were 5L7, which was reconstructed from Fab fragments cloned from an animal infected with a gp120 glycosylation mutant of SIVmac239 [25,52], and 1.4H, which was isolated from an HIV-2-infected individual [53,54]. Thus, all of the antibodies have the same rhesus macaque IgG1 Fc domain, except 1.4H, which is human IgG1. Since we previously found that human and rhesus macaque IgG1 mediate similar signaling through FcγR3A (CD16a) [51], the human Fc domain of 1.4H is expected to have a minimal effect on the ADCC activity of this antibody. These antibodies bind to different surfaces of Env, including epitopes in the CD4 binding site (CD4bs), V1/V2 loop, V3 loop, high mannose patch (HMP), gp120-gp41 interface, and the membrane proximal external region (MPER) of gp41 [34,55,56]. Each of the antibodies was tested against four different SIV infectious molecular clones (IMCs): SIVmac239, SIVsmE543-3, SIVmac316 and SIVsmE660-FL14. SIVmac239 and SIVsmE543-3 are independently isolated, neutralization-resistant viruses that differ from one another to a similar extent as unrelated HIV-1 field isolates and share only 83% amino acid identity in Env [38,39]. SIVmac316 and SIVsmE660-FL14 are neutralization-sensitive viruses related to SIVmac239 and SIVsmE543-3, respectively [57-59]. SIVsmE543-3 and SIVsmE660-FL14 share 93% amino acid identity in Env, whereas SIVmac239 and SIVmac316 differ by only nine amino acids (99% identity) in Env [38,39,57,58]. Hence, this study was designed to compare Env binding, ADCC and neutralization across pairs of genetically distinct SIVs that differ in their sensitivity to antibodies.

Binding to Env on the surface of SIV-infected cells

Antibody binding to Env on the surface of virus-infected cells is a prerequisite for ADCC [60]. We therefore measured the binding of each of the Env-specific antibodies to SIV-infected cells. Env staining was assessed on the surface of the CD4+ T cell line (CEM.NKR-CCR5-sLTR-Luc) used as target cells for measuring ADCC (Fig 1) and on activated rhesus macaque CD4+ lymphocytes (S1 and S2 Figs). Although Env staining was generally lower on primary CD4+ T cells than on CEM.NKR-CCR5-sLTR-Luc cells, the levels of Env staining on these cells strongly correlated (Fig 2 and S3 Fig). Thus, antibody binding to SIV-infected CEM.NKR-CCR5-sLTR-Luc cells reflects antibody binding to virus-infected primary CD4+ T cells.

Fig 2. Comparison of Env staining on SIV-infected CEM.NKR-CCR5-sLTR-Luc cells and primary rhesus macaque CD4+ T cells. Antibody binding to the surface of SIV-infected cells was quantified as the ratio of the geometric mean fluorescence intensity (gMFI) of staining with each of the Env-specific antibodies to staining with the DEN3 control antibody. The relationship between the gMFI ratios of antibody binding to CEM.NKR-CCR5-sLTR-Luc cells and to rhesus macaque CD4+ T cells was determined by calculating the Spearman's rank order correlation. https://doi.org/10.1371/journal.ppat.1011407.g002
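To make this quantification concrete, the minimal Python sketch below computes the gMFI ratio used throughout the paper (Env-specific staining over the DEN3 background). The staining values and antibody labels are hypothetical placeholders, not measured data from this study.

```python
import numpy as np

def gmfi(per_cell_intensities):
    """Geometric mean fluorescence intensity of per-cell values."""
    x = np.asarray(per_cell_intensities, float)
    return float(np.exp(np.log(x).mean()))

# Hypothetical gMFI values on SIV-infected (Gag+ CD4low) cells
den3_background = 120.0  # non-specific staining with the DEN3 control
env_staining = {"strong binder": 9500.0, "weak binder": 150.0, "non-binder": 115.0}

for label, value in env_staining.items():
    ratio = value / den3_background  # gMFI ratio; ~1 indicates no Env-specific binding
    print(f"{label}: gMFI ratio = {ratio:.2f}")
```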
ADCC responses to SIV-infected cells

ADCC was assessed using an assay designed to measure the ability of the antibodies to direct NK cell killing of SIV-infected cells expressing physiologically relevant conformations of the viral envelope glycoprotein [63]. CEM.NKR-CCR5-sLTR-Luc cells, which contain an LTR-driven luciferase reporter gene that is inducible by the viral Tat protein, were infected with each of the SIVs and incubated in the presence of serial dilutions of antibody with an NK cell line that constitutively expresses rhesus macaque FcɣR3A (CD16a). ADCC was measured as the dose-dependent loss of luciferase activity after an eight-hour incubation. The rhesus macaque FcɣR3A allele expressed by the NK cell line contains the I158 polymorphism in the extracellular domain, which represents approximately 94% of haplotypes in Indian-origin rhesus macaques [51]. Functional comparisons of the IgG interactions of FcɣR3A I158 with the next most frequent allotype of rhesus macaque FcɣR3A (FcɣR3A V158), which represents approximately 4% of haplotypes, indicated that the I158 and V158 polymorphisms exhibit very similar interactions with the human and rhesus macaque IgG subclasses [51].

The ADCC responses of the antibodies generally corresponded to their ability to bind to Env on the surface of SIV-infected cells. ITS103 mediated potent ADCC against cells infected with all four SIVs, with lower responses to SIVmac316-infected cells in accordance with lower levels of Env staining (Fig 3). The ADCC responses of ITS03, ITS06.02, ITS52, ITS61.01/.02, ITS70.1, ITS104.01 and 5L7 also reflected differences in Env staining (Fig 3). In each case where Env staining was detectable on cells infected with neutralization-resistant SIVmac239 and SIVsmE543-3, ADCC responses were also observed against these viruses. However, the levels of Env staining were not always commensurate with ADCC. Despite marginal levels of Env staining for the CD4bs antibody ITS102.03 and the HMP-specific antibodies ITS54 and ITS55, these antibodies mediated detectable ADCC (Fig 3). ITS102.03 directed similar ADCC responses to cells infected with all four SIVs, whereas ITS54 and ITS55 preferentially killed target cells infected with SIVsmE543-3 and SIVsmE660-FL14. In contrast, the V2-specific antibodies NCI05 and NCI09 did not exhibit detectable ADCC (Fig 3). Since these antibodies were recently reported to mediate ADCC against gp120-coated cells and to compete with vaccine-induced antibodies associated with a reduced risk of mucosal SIVmac251 acquisition [21], they were also tested for binding to gp120-coated cells. NCI05 and NCI09 readily bound to cells coated with SIVmac239 gp120 or with wild-type or V1-deleted SIVmac251-M766 gp120 (S4 Fig). However, with the exception of a slight shift in NCI05 staining on SIVsmE660-FL14-infected cells, these antibodies did not bind to virus-infected cells (Fig 1). These results imply that there are conformational differences in the exposure of the V2 epitopes for NCI05 and NCI09 on monomeric gp120 compared to Env trimers on the surface of SIV-infected cells.

Fig 1. CEM.NKR-CCR5-sLTR-Luc cells were infected with SIVmac239, SIVmac316, SIVsmE543-3, SIVsmE660-FL14 and SHIV-AD8-EO. After 3-5 days, the cells were stained with the indicated SIV Env-specific antibodies and with a dengue virus-specific antibody (DEN3) as a negative control, followed by AF647-conjugated anti-human IgG F(ab′)2. The cells were also stained for surface expression of CD4, intracellular expression of the SIV Gag protein and for viability. The histogram plots depict Env staining in comparison to non-specific staining with DEN3 (shaded) on virus-infected (Gag+ CD4low) cells. Representative data is shown from two independent experiments. https://doi.org/10.1371/journal.ppat.1011407.g001

To further investigate the relationship between Env binding and ADCC, we compared ADCC responses to Env staining on virus-infected cells.
For ADCC, area above the curve (AAC) values were calculated to provide a consistent measure of responses for all antibodies, whether or not they reached an arbitrary threshold of killing (e.g. 50% RLU). AAC values were calculated from the differences between the theoretical maximal luciferase induction (100% RLU) and the percent reductions in luciferase activity at each antibody concentration. For Env binding, the intensity of infected-cell staining with each of the Env-specific antibodies was divided by the non-specific staining with the DEN3 control antibody to calculate relative Env binding ratios. To avoid biasing comparisons with repeated measures of closely related antibodies, data from the four ITS103 and two ITS61 siblings were averaged and treated as single antibodies. ADCC generally correlated with Env binding (p<0.0001, R = 0.795). However, this relationship was complicated by a number of antibodies with little or no Env binding or ADCC (Fig 4 and S1 File). We therefore established a threshold for detectable ADCC at one standard deviation above the absolute value of the mean of the negative values, and a threshold for detectable Env binding at one standard deviation above the mean of gMFI ratios less than one. By these criteria, most of the antibodies that fell below the threshold of detectable binding also fell below the threshold of detectable ADCC. This suggests that a certain level of Env binding is necessary for antibodies to reliably mediate ADCC. With the exclusion of antibodies that lack detectable Env staining or ADCC, the relationship between Env binding and ADCC remained strong (p<0.0001, R = 0.567) (Fig 4). However, a wide range of ADCC responses was observed within a similar range of Env binding, indicating a non-linear relationship between these activities. These results suggest that while a certain threshold of antibody binding is required for ADCC, other factors also determine the efficiency of ADCC.
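The AAC bookkeeping described above can be sketched in a few lines of Python. The titration values below are placeholders, and the trapezoidal rule over log-spaced concentrations is one reasonable implementation choice rather than the authors' exact script.

```python
import numpy as np
from scipy.integrate import trapezoid

def area_above_curve(log_conc, pct_rlu):
    """Area between the 100% RLU line (no killing) and the measured curve;
    positive values indicate net antibody-dependent killing."""
    return float(trapezoid(100.0 - np.asarray(pct_rlu, float), x=log_conc))

# Placeholder titrations: log10 antibody concentration vs. % RLU remaining
log_conc = np.log10([0.01, 0.1, 1.0, 10.0])
curves = {
    "strong killer": [95, 60, 20, 5],        # dose-dependent loss of luciferase
    "negative control": [103, 101, 102, 100] # hovers around 100% (no killing)
}
aac = {name: area_above_curve(log_conc, y) for name, y in curves.items()}

# Detection threshold: one SD above the absolute value of the mean of the
# negative AAC values (with a single negative value, the SD is zero).
neg = np.array([v for v in aac.values() if v < 0])
threshold = abs(neg.mean()) + neg.std() if neg.size else 0.0
print(aac, "detection threshold:", round(threshold, 2))
```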
Instances of neutralization in the absence of ADCC and ADCC in the absence of neutralization were also observed. Although the CD4bs antibody ITS01 potently neutralized SIVmac316 and SIVsmE660-FL14, this antibody did not mediate detectable ADCC against cells infected with these viruses (Fig 3). Likewise, ITS112.01 and ITS113.01, which bind to the MPER of gp41, neutralized one or more of the SIVs (Fig 5), but mediated little or no ADCC against cells infected with these viruses (Fig 3). Conversely, the HMP-specific antibodies ITS54 and ITS55 did not neutralize any of the SIV IMCs (Fig 5), but exhibited potent ADCC against cells infected with SIVmac316 and SIVsmE660-FL14 as well as detectable ADCC against cells infected with SIVmac239 and SIVsmE543-3 (Fig 3). Similarly, ITS61.01/.02, ITS70.01, ITS104.01 and 5L7 did not neutralize SIVmac239 (Fig 5), but mediated potent ADCC against SIVmac239-infected cells (Fig 3). These results illustrate that neutralization and ADCC do not always correspond and reveal differences in some of the antibody-Env interactions that result in these responses.

Fig 3. ADCC responses were calculated as the remaining luciferase activity (% RLU) by dividing the difference in RLU between SIV-infected cells in the presence of antibody and uninfected cells without antibody (experimental − background) by the difference in RLU between SIV-infected cells and uninfected cells in the absence of antibody (maximal − background) and multiplying by 100. The values indicate the mean and standard deviation (error bars) for triplicate wells at each antibody concentration, and the dotted line indicates half-maximal killing of SIV-infected cells. Representative data is shown from two independent experiments. https://doi.org/10.1371/journal.ppat.1011407.g003

To better understand the uncoupling of these antiviral functions, several of the antibodies that mediate ADCC without detectable neutralization were titrated for Env binding to the surface of SIV-infected cells. SIVmac239-infected cells were stained with ITS61.01, ITS70.01, ITS55 and 5L7 at concentrations ranging from 0.2 to 100 μg/ml. For comparison, these cells were also stained over the same range with ITS103.01 and with PGT145, which mediates ADCC against SIV-infected cells but does not neutralize SIV infectivity [67]. ITS103.01 exhibited 7-fold higher binding to Env than PGT145, as reflected by differences in area under the curve values (S5 Fig). The level of Env staining for ITS61.01, ITS70.01, ITS55 and 5L7 was in the same range as or lower than that of PGT145, indicating that these antibodies bind to Env with similar efficiency as PGT145 (S5 Fig). We previously demonstrated that the affinity of PGT145 for SIV Env is sufficient to trigger ADCC by cross-linking Env trimers on the surface of SIV-infected cells to FcɣR3A receptors on NK cells, but is not high enough to occupy enough Env trimers on virions to block SIV infectivity [67]. Thus, the uncoupling of ADCC from neutralization by ITS61.01, ITS70.01, ITS55 and 5L7 may be explained by the low affinity of these antibodies for Env in comparison to antibodies such as ITS103.01 that are capable of efficiently neutralizing virus infectivity.

Fig 4. Comparison of ADCC and antibody binding to SIV-infected cells. Area above the curve (AAC) values for ADCC were calculated from differences between the theoretical maximal luciferase induction and the reduction in luciferase activity at each antibody concentration. Antibody binding to SIV-infected CEM.NKR-CCR5-sLTR-Luc cells was quantified by calculating the gMFI ratios of staining with each of the Env-specific antibodies to non-specific staining with the DEN3 control antibody. The dotted lines indicate thresholds for detectable ADCC and antibody binding, which were set at one standard deviation above the absolute value of the mean of negative ADCC (AAC) values and one standard deviation above the mean of gMFI ratios of antibody binding less than one.
Spearman's rank order correlation coefficients were calculated for comparisons of ADCC and antibody binding for all antibodies and for those antibodies that were above the thresholds for detectable ADCC and binding. https://doi.org/10.1371/journal.ppat.1011407.g004

Neutralization and ADCC were compared to investigate the relationship between these antiviral activities. As for ADCC, AAC values were calculated from the neutralization curves to capture responses that did not reach an arbitrary threshold of neutralization and to provide a consistent metric for comparison with ADCC. Data for the ITS103 and ITS61 siblings were again averaged and treated as single antibodies to avoid biases from repeated measures of closely related antibodies. ADCC generally correlated with neutralization; however, this relationship was not especially strong (p = 0.0014, R = 0.309) (Fig 6). A closer inspection of the data suggested this correlation might be skewed by a large number of antibodies that appear to lack one or both activities. Thresholds for detectable neutralization and ADCC were therefore established at one standard deviation above the absolute value of the mean of negative AAC values for neutralization and ADCC. Limiting the comparison to antibodies that were above these thresholds improved the strength but not the overall significance of the correlation (p = 0.0021, R = 0.503) (Fig 6). These analyses reveal a significant albeit imperfect correlation between ADCC and neutralization.

Fig 6. Comparison of ADCC and neutralization of viral infectivity. Area above the curve (AAC) values for ADCC and neutralization were calculated from the differences between the theoretical maximal luciferase induction and the reduction in luciferase activity at each antibody concentration. The dotted lines indicate thresholds for detectable ADCC and neutralization, which were set at one standard deviation above the absolute value of the mean negative ADCC and neutralization AAC values. Spearman's rank order correlation coefficients were calculated for comparisons of ADCC and neutralization for all antibodies and for those antibodies that were above the thresholds for detectable responses. https://doi.org/10.1371/journal.ppat.1011407.g006
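The pattern of this correlation analysis (all antibodies versus only those above the detection thresholds) can be reproduced with a few lines of scipy. The AAC values and thresholds below are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Simulated paired AAC values (neutralization, ADCC) for ~30 antibodies
neut = rng.normal(50, 40, 30).clip(min=-10)
adcc = 0.5 * neut + rng.normal(0, 20, 30)

rho_all, p_all = spearmanr(neut, adcc)

# Restrict to antibodies above both detection thresholds (illustrative values)
t_neut, t_adcc = 10.0, 10.0
mask = (neut > t_neut) & (adcc > t_adcc)
rho_sub, p_sub = spearmanr(neut[mask], adcc[mask])

print(f"all: R={rho_all:.2f} (p={p_all:.3g}); "
      f"above threshold: R={rho_sub:.2f} (p={p_sub:.3g})")
```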
Neutralization and ADCC activity of antibodies to CD4-inducible epitopes

Exposure of HIV-1 virions or infected cells to CD4 triggers Env to adopt an "open" CD4-bound conformation in which surfaces that are normally concealed in the "closed" pre-liganded conformation of the Env trimer become exposed to antibodies [68,69]. Antibodies to these surfaces, known as CD4-inducible (CD4i) epitopes, are thus dependent on CD4 for the ability to neutralize viral infectivity and to sensitize infected cells to ADCC. To determine the extent to which SIV-infected cells are susceptible to this class of antibodies, Env binding, ADCC, and neutralization were compared for two CD4i antibodies (ITS99 and 1.4H) in the presence and absence of soluble CD4 (sCD4). ITS99 and 1.4H bound to SIV-infected CEM.NKR-CCR5-sLTR-Luc cells and to primary rhesus macaque CD4+ T cells in the absence of sCD4. Env staining was markedly higher for 1.4H than for ITS99. Env staining was also higher for both antibodies on cells infected with neutralization-sensitive SIVmac316 and SIVsmE660-FL14 than with neutralization-resistant SIVmac239 or SIVsmE543-3 (Fig 7A and S1 and S2 Figs). Treatment with sCD4 dramatically increased the intensity of staining by both antibodies for all four SIVs, consistent with the binding of these antibodies to conserved surfaces of Env exposed upon CD4 engagement (Fig 7A and S1 and S2 Figs). Surprisingly, CD4-induced Env staining by 1.4H was even detectable on cells infected with SHIV-AD8-EO, indicating that this antibody binds to an epitope that is at least partially conserved in HIV-1 Env (Fig 7A). The ADCC responses of ITS99 and 1.4H closely corresponded to differences in their ability to bind to Env on the surface of virus-infected cells. In the absence of sCD4, ITS99 mediated low but detectable ADCC, and 1.4H directed efficient killing of cells infected with all four SIVs (Fig 7B). The addition of sCD4 greatly increased the susceptibility of virus-infected cells to both antibodies, as reflected by reductions in the antibody concentrations required for half-maximal killing (Fig 7B). Thus, CD4-induced conformational changes in Env significantly enhance the breadth and potency of ITS99- and 1.4H-mediated ADCC responses. ITS99 and 1.4H were also tested for the ability to neutralize each of the SIVs in the presence and absence of sCD4. In the absence of sCD4, ITS99 and 1.4H efficiently neutralized SIVmac316 and SIVsmE660-FL14 but were unable to neutralize SIVmac239 or SIVsmE543-3, consistent with differences in the sensitivity of these viruses to antibodies (Fig 7C). However, similar to ADCC, treatment with sCD4 dramatically increased the susceptibility of all four SIVs to neutralization (Fig 7C). These results confirm CD4-induced changes in the sensitivity of SIV to neutralization by ITS99 and 1.4H and reveal parallels between neutralization and ADCC by CD4i antibodies. Additional characterization of ITS104.01 revealed that this antibody also recognizes a CD4-inducible epitope. ITS104.01 mediates detectable ADCC against cells infected with SIVmac239 and SIVsmE543-3 but does not neutralize these viruses (Figs 3 and 5). Similar to ITS99, sCD4 increased Env staining and the sensitivity of SIVmac239-infected cells to ADCC in the presence of ITS104.01 (S6 Fig). Thus, limited Env binding and ADCC may occur in the absence of detectable neutralization for certain antibodies that target CD4-inducible epitopes.

Soluble CD4 facilitates Env binding and ADCC by the V2-specific antibodies NCI05 and NCI09

In light of evidence that CD4 binding induces conformational rearrangement of the V1/V2 loops [70-72], we postulated that differences in the binding of NCI05 and NCI09 to monomeric gp120 versus Env on the surface of SIV-infected cells may reflect CD4-inducible changes in V2. These antibodies were therefore tested for Env binding and ADCC against SIV-infected cells in the presence and absence of soluble CD4. With the exception of a low level of NCI05 staining on SIVsmE660-FL14-infected cells as noted above, NCI05 and NCI09 did not bind or mediate detectable ADCC without sCD4 (S7A Fig). In the presence of sCD4, however, Env binding and ADCC became detectable, except against SIVmac316-infected cells, indicating that sensitivity to sCD4 is strain-dependent.
These findings nevertheless demonstrate that CD4 engagement can expose the V2 epitopes for NCI05 and NCI09, which may in turn explain the inability of these antibodies to bind to the pre-liganded conformations of Env necessary for ADCC against SIV-infected cells in the absence of sCD4.

Discussion

The isolation of monoclonal antibodies to the SIV envelope glycoprotein directly from SIV-infected or vaccinated rhesus macaques provides a valuable resource for investigating antibody-mediated protection in this species as a nonhuman primate model for HIV/AIDS [34,35]. In contrast to HIV-1 Env-specific antibodies, which require the use of recombinant SHIVs to assess protection, the protection afforded by these antibodies can be evaluated under more physiological conditions using SIV challenge strains that are well-adapted for replication in macaques. Rhesus macaque antibodies also retain natural interactions with macaque Fc receptors [51] and minimize ADA that can complicate repeated or sustained antibody delivery to animals [35]. Env expression levels for HIV-1 and SIV are very similar, reflecting conserved endocytosis motifs in the gp41 cytoplasmic domain and the incorporation of comparable numbers of Env trimers into virions [73-77]. With increasing interest in the development of vaccines and immunotherapies to take advantage of Fc-dependent effector functions such as ADCC, it is important to understand the capacity of Env-specific antibodies to mediate these activities and their relationship to virus neutralization. Using an assay specifically designed to measure the ability of antibodies to direct NK cell killing of virus-infected cells expressing physiologically relevant conformations of Env [63], we measured the ADCC responses of a diverse collection of Env-specific antibodies against cells infected with neutralization-resistant and neutralization-sensitive SIV isolates. ADCC responses were compared to Env binding on the surface of SIV-infected cells and to the neutralization of SIV infectivity. Overall, we observed a good correlation between neutralization and ADCC. These results are consistent with previous analyses of HIV-1 Env-specific antibodies showing that most antibodies that are capable of binding to Env on virions to block infectivity are also capable of binding to Env on the surface of virus-infected cells to mediate ADCC [78-80]. The correspondence between these antiviral activities is best illustrated by ITS103, which exhibited the most potent neutralization and the highest ADCC responses of all of the antibodies tested. Nevertheless, this relationship was not absolute, as instances of ADCC in the absence of neutralization and neutralization in the absence of ADCC were also observed. Several antibodies were found to mediate ADCC against SIVmac239- or SIVsmE543-3-infected cells without detectable neutralization of these viruses. This pattern was most common among the HMP-specific antibodies such as ITS54, ITS55, ITS61.01/.02 and ITS70.01, but was also observed for 5L7 and a couple of the antibodies to the V1/V2 region (ITS03 and ITS06.02). Each of these antibodies exhibited ADCC responses to cells infected with SIVmac239 or SIVsmE543-3 as well as SIVmac316 and/or SIVsmE660-FL14, but they did not neutralize either SIVmac239 or SIVsmE543-3.
Neutralization was detectable, however, against SIVmac316 and SIVsmE660-FL14, which suggests that the epitopes for these antibodies are present on both infected cells and virions but are inaccessible on neutralization-resistant viruses. One interpretation of these observations is that virus-infected cells express more heterogeneous conformations of Env than virions and that some of these conformations render infected cells more sensitive to antibodies. This appears to be the case for lab-adapted HIV-1 [78,81]. However, the primate lentiviruses have evolved molecular mechanisms to tightly control both the amount and the conformation of Env expressed on the cell surface [67,73,82], which greatly reduces the susceptibility of primary HIV-1 isolates to ADCC [78-80]. An alternative explanation may reflect differences in the binding affinity of antibodies required to mediate ADCC versus neutralization. This possibility was recently illustrated by the cross-reactivity of PGT145 with SIV Env [67]. PGT145 binds to a conserved epitope at the V2 apex of SIV Env in the same way that it binds to its HIV-1 Env epitope, but with lower affinity. Although the low affinity of PGT145 for SIV Env was sufficient to cross-link enough Env trimers on the surface of virus-infected cells with Fcɣ receptors on NK cells to trigger ADCC, it was not sufficient to bind enough Env trimers on virions to neutralize infectivity [67]. Similar to PGT145, we found that several other antibodies that mediate ADCC without detectable neutralization (ITS55, ITS61.01, ITS70.01 and 5L7) also bind to Env with lower affinity than an antibody that can efficiently neutralize virus infectivity. Thus, differences in the exposure of epitopes on virus-infected cells and virions and/or a lower binding affinity for Env could account for the ability of certain antibodies to mediate ADCC without detectable neutralization. A few instances of neutralization in the absence of ADCC were also observed. These include the gp41 MPER antibodies ITS112.01 and ITS113.01 and the gp120 CD4bs antibody ITS01. In each case, these antibodies exhibited little if any Env staining, strongly suggesting that their failure to mediate ADCC reflects their inability to bind to Env on the surface of virus-infected cells. For ITS112.01 and ITS113.01, these results are consistent with exposure of the MPER epitope on a transient gp41 intermediate during the process of viral fusion that is not typically exposed on virus-infected cells [83,84], and with previous studies by our group and others showing that the HIV-1 MPER antibodies (2F5, 4E10 and 10E8) do not mediate ADCC against virus-infected cells [78,79]. The inability of ITS01 to bind or direct ADCC against cells infected with any of the SIV isolates is more difficult to understand in light of its neutralization of SIVmac316 and SIVsmE660-FL14. It is possible that exposure of the epitope for this antibody is dependent on conformational changes in Env that are completed after the release of infectious virus particles. Alternatively, this antibody may bind to Env in an orientation that is sterically hindered on the surface of infected cells, but not on virions. In this regard it is perhaps relevant that unlike more potent antibodies to the CD4bs such as ITS103 and ITS102.03, ITS01 only blocks infection by neutralization-sensitive viruses.
A more precise molecular explanation for the uncoupling of neutralization and ADCC by ITS01 may require further biophysical or structural characterization of antibody-Env complexes. CD4 engagement induces Env trimers to transition from a "closed" pre-liganded conformation to an "open" CD4-bound state that is susceptible to recognition by CD4-inducible antibodies. CD4i antibodies are abundant in most HIV-1-infected individuals as a consequence of B cell responses to viral debris, such as monomeric gp120. These antibodies are considered non-neutralizing since primary HIV-1 isolates are highly resistant to CD4i antibodies [78,85,86]. However, treatment with soluble CD4 or CD4-mimetic compounds can sensitize cell-free HIV-1 and virus-infected cells to CD4i antibodies [68,85,87]. Cells infected with vpu- or nef-deficient HIV-1 strains that are impaired for CD4 downmodulation are also susceptible to ADCC as a result of the formation of CD4-Env complexes on the cell surface that expose CD4i epitopes [68,85,88]. These properties have led to a great deal of confusion regarding the importance of CD4i antibodies, since ADCC assays that use CD4+ target cells coated with recombinant gp120/gp140 [89,90], target cells infected with vpu- or nef-deficient viruses [91], or that do not differentiate HIV-infected cells from uninfected bystander cells [60,92,93] greatly overestimate the contribution of CD4i antibodies to ADCC responses [94]. Two antibodies targeting CD4i epitopes of SIV Env (ITS99 and 1.4H) were evaluated for Env binding, ADCC and neutralization in the presence and absence of soluble CD4. Similar to HIV-1 CD4i antibodies, ITS99 and 1.4H failed to block the infectivity of neutralization-resistant primary SIV isolates (SIVmac239 and SIVsmE543-3) in the absence of sCD4. Although Env binding and ADCC were detectable without sCD4, the addition of sCD4 potently enhanced ADCC and neutralization by both antibodies. CD4-induced increases in sensitivity to ADCC corresponded to increases in Env binding. Increases in susceptibility to neutralization also reflected increases in Env binding and ADCC. These results are consistent with CD4i responses to HIV-1 and reinforce the general correlation between neutralization and ADCC. NCI05 and NCI09 bind to non-overlapping epitopes in V2, and the specificity of these antibodies has been associated with a decreased risk of mucosal SIV transmission in vaccinated macaques [21,95,96]. We confirmed that NCI09 and NCI05 bind to gp120-coated cells in accordance with their previously reported ADCC activity against gp120-coated target cells [21]. However, we were unable to detect Env binding or ADCC with these antibodies in assays using SIV-infected cells. These discrepancies likely reflect conformational differences in the exposure of the V2 epitopes for NCI05 and NCI09 on monomeric gp120 versus Env trimers on SIV-infected cells. In support of this, we found that treatment with soluble CD4 increased antibody binding and rendered SIV-infected cells susceptible to ADCC. These results are consistent with studies showing that CD4 binding to HIV-1 Env induces conformational rearrangements of the V1/V2 loops [70-72] and with similar CD4-dependent ADCC activity of non-neutralizing antibodies to the inner domain and co-receptor binding sites of HIV-1 gp120 [68,94]. Structural features of Env that confer resistance to neutralizing antibodies [3] also protect virus-infected cells from ADCC [78-80,97].
Thus, antibodies to surfaces of gp120 that are normally concealed in Env trimers and are unable to mediate ADCC against virus-infected cells may still bind to monomeric gp120 and direct ADCC in assays using gp120-coated cells [94]. The present study reveals fundamental insights into the relationship between ADCC, Env binding, and neutralization for a diverse collection of SIV Env-specific antibodies. Neutralizing antibodies to the CD4bs and non-neutralizing antibodies to CD4i epitopes were identified with especially broad and potent ADCC against genetically distinct SIVs. The overall correlation between ADCC and neutralization implies, perhaps not surprisingly, that most antibodies that are capable of binding to functional Env trimers on virions to block infectivity are also capable of binding to Env on virus-infected cells to mediate ADCC. Nevertheless, several antibodies were found to mediate ADCC without detectable neutralization, or neutralization without detectable ADCC, pointing to key differences in the way that certain antibodies interact with Env on virus-infected cells and virions that can uncouple these activities. These findings provide a valuable dataset and conceptual framework for investigating antiviral effector functions of antibodies in macaques as a preclinical model for HIV/AIDS.

Materials and Methods

Antibodies

SIV Env-specific antibodies were isolated and produced as previously described [34,35]. Briefly, immunoglobulin variable region gene sequences were determined from SIV Env-specific B cells, which were individually sorted by flow cytometry using a fluorescently labelled SIV gp140 competitive probe binding strategy and 1JO8-scaffolded SIVsmE660 V1/V2 probes. These sequences were cloned into rhesus Igɣ, IgLκ and IgLλ expression vectors. Full-length IgG (rhesus IgG1) was expressed by co-transfection of heavy and light chain plasmids into 293Freestyle cells and purified using Protein A Sepharose beads (GE Healthcare) according to the manufacturer's instructions. 5L7 was expressed in Expi293 cells from a recombinant AAV vector very similar to the 'ssAAV (H+L)' plasmid [25] with the following exceptions: the signal peptide on both IgG chains corresponded to the sequence from human VH4 (UniProt entry O95973), the skip peptide separating heavy and light chains was P2A instead of F2A, and the heavy chain of 5L7 was C-terminally tagged with the rhodopsin-derived peptide corresponding to the epitope for the 1D4 mouse monoclonal antibody (TETSQVAPA). The plasmid backbone was a gift from Dr. Matthew Gardner, Emory University. 5L7 was purified from cell culture supernatant on rProtein A GraviTrap columns (GE Healthcare) according to the manufacturer's recommendations, except that the bound antibody was eluted with 3 ml of 25 mM sodium phosphate/citrate buffer pH 3.0 into 0.2 ml of 1.0 M sodium carbonate buffer pH 9.3. 1.4H antibody sequences were originally isolated from Epstein-Barr virus-transformed B cells from an HIV-2-infected patient as previously described [53,54,98]. Plasmids encoding 1.4H human IgG1 heavy and light chain sequences were generously provided by James Robinson (Tulane University). 1.4H antibody was produced by co-transfection of heavy and light chain plasmids into Expi293 cells and purified from culture supernatant using rProtein A GraviTrap columns according to the manufacturer's instructions.
Envelope staining on the surface of infected cells

CEM.NKR-CCR5-sLTR-Luc cells were infected with vif-deleted SIVmac239, SIVmac316 and SIVsmE543-3 pseudotyped with VSV G, or with wild-type SIVsmE660-FL14, in the presence of 40 μg/ml Polybrene and centrifuged for 1 hour at 1200 g. Antibody binding to Env was assessed by staining the cells 3-5 days post-infection. Cells were treated with a Live/Dead NEAR IR viability dye (Invitrogen), washed in PBS containing 1% FBS (staining buffer), and stained on ice for 30 minutes with 10 μg/ml of Env-specific antibody or with a dengue virus-specific antibody (DEN3) as a negative control. For soluble CD4 (sCD4) experiments, cells were stained on ice for 30 minutes with 10 μg/ml of anti-SIV Env antibody and 10 μg/ml sCD4-183 (NIH AIDS Reagent Program). Cells were then washed and stained on ice with an AF647-conjugated goat anti-human F(ab′)2 (Jackson Immunoresearch), followed by PE-Cy7-conjugated anti-CD4 (Clone OKT4, Biolegend). For intracellular p27 staining, cells were fixed in PBS with 2% paraformaldehyde, washed in staining buffer and stained with FITC-conjugated anti-SIV Gag antibody (clone 552F12) in Perm/Fix Medium B (Invitrogen). Cells were washed, fixed in PBS with 2% paraformaldehyde and analyzed on a BD LSRII flow cytometer. Antibody binding to Env was assessed by Env staining on the surface of SIV-infected (CD4low Gag+) cells.

Primary rhesus macaque CD4+ T cells were infected with SIV and stained for surface expression of Env. Peripheral blood mononuclear cells (PBMCs) were isolated from whole blood on Ficoll-Paque PLUS gradients. CD8+ T cells and NK cells were depleted using 0.7 μg/ml mouse anti-CD8 antibody (clone SK11), followed by sheep anti-mouse magnetic beads (Dynabeads) [99]. After CD8 depletion, cells were activated with 5 μg/ml concanavalin A for 3 days, washed, and cultured for 3 days in medium with 20 U/ml IL-2 (R&D Systems). Tissue culture-treated flasks were laid on their side to deplete adherent cells such as monocytes and macrophages [100]. CD4+ T cells were then infected with SIV by spinoculation at 1200 g for 2 hours in the presence of Polybrene (40 μg/ml): SIVmac239 (500 ng p27/10⁶ cells), SIVmac316 (375 ng p27/10⁶ cells), SIVsmE543-3 (500 ng/10⁶ cells) and SIVsmE660-FL14 (340 ng/10⁶ cells). Five days post-infection, cells were stained with a viability dye, followed by incubation with 10 μg/ml anti-SIV Env antibody (with or without 10 μg/ml sCD4-183) for 30 minutes on ice and an AF647-conjugated goat anti-human F(ab′)2 as described above. Cells were then surface stained on ice with PE-Cy7-conjugated anti-CD4 and BV421-conjugated anti-CD8 (Clone SK11, Biolegend). For intracellular p27 staining, cells were fixed in PBS plus 2% paraformaldehyde, washed and stained with AF488-conjugated anti-SIV Gag antibody (clone 552F12 for SIVmac strains) or with AF488-conjugated anti-HIV Gag antibody (clone 183-H2 for SIVsm strains) in Perm/Fix Medium B. Cells were then washed and fixed in PBS with 2% paraformaldehyde before analysis on a BD LSRII flow cytometer. Antibody binding to Env was assessed by Env staining on the surface of SIV-infected (CD4low Gag+) cells.

Antibodies NCI05 and NCI09 were tested for binding to cells coated with monomeric SIV gp120. CEM.NKR-CCR5-sLTR-Luc cells were incubated with 50 μg/ml of SIVmac251-M766 gp120, SIVmac251-M766 ΔV1 gp120, or SIVmac239 gp120 for 2 hours at 37 °C.
SIVmac251-M766 ΔV1 gp120 contains a 45-amino-acid deletion in the V1 region (Env residues 119-163), which was shown to retain an α-helical conformation of V2 [21]. Coated cells were washed and treated with a Live/Dead NEAR IR viability dye before staining with 10 μg/ml of the Env-specific antibodies NCI05 or NCI09. Cells were then washed and stained on ice with an AF647-conjugated goat anti-human F(ab′)2 followed by PE-Cy7-conjugated anti-CD4 as described above. Uncoated cells and the non-specific DEN3 antibody were used as negative controls. Antibody binding to monomeric gp120 was assessed by staining on the surface of live CD4+ gp120-coated cells.

ADCC assay

ADCC was measured as previously described [63,94]. CEM.NKR-CCR5-sLTR-Luc cells were infected with VSV G-trans-complemented, vif-deleted SIVmac239, SIVmac316 or SIVsmE543-3, or with wild-type SIVsmE660-FL14, in the presence of 40 μg/ml Polybrene and centrifuged for 1 hour at 1200 g. After 2-4 days of infection, the CEM.NKR-CCR5-sLTR-Luc cells were incubated for 8 hours with an NK cell line (KHYG-1 cells) transduced with rhesus macaque CD16 at a 10:1 effector-to-target (E:T) ratio in the presence of antibodies. CD16+ KHYG-1 cells (10⁵ cells/well) were incubated with CEM.NKR-CCR5-sLTR-Luc cells (10⁴ cells/well) in triplicate wells of 96-well plates (0.2 ml/well). For antibodies to CD4-inducible epitopes, the assay was performed with or without 10 μg/ml sCD4-183 (NIH AIDS Reagent Program). ADCC was determined from the dose-dependent loss of luciferase activity. ADCC responses were calculated as the remaining luciferase activity (% RLU) by dividing the difference in RLU between SIV-infected cells in the presence of antibody and uninfected cells without antibody (experimental − background) by the difference in RLU between SIV-infected cells and uninfected cells in the absence of antibody (maximal − background) and multiplying by 100.

Neutralization assay

Virus neutralization was measured using a standard TZM-bl reporter assay as previously described [65,66]. Serial dilutions of antibodies were incubated for 1 hour at 37 °C with viral supernatants at the following SIV p27 concentrations: 2 ng/well SIVmac239, 20 ng/well SIVmac316, 15 ng/well SIVsmE543-3, or 6 ng/well SIVsmE660-FL14. For antibodies to CD4-inducible epitopes, the assay was performed with or without 10 μg/ml sCD4-183. TZM-bl cells (10⁴ cells/well) were added after a 1-hour incubation, and luciferase activity was measured after a 3-day incubation at 37 °C. Virus neutralization was calculated as the dose-dependent inhibition of luciferase induction (RLU) by dividing the difference in RLU between wells containing virus in the presence of antibody and uninfected cells in the absence of antibody (experimental − background) by the difference in RLU between SIV-infected cells and uninfected cells in the absence of antibody (maximal − background) and multiplying by 100.
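Both readouts reduce to the same background-subtracted normalization, which can be expressed compactly as below. The RLU values are placeholders for illustration.

```python
def percent_of_maximal(experimental, maximal, background):
    """Background-subtracted luciferase signal as % of the no-antibody maximum.
    Used both for the ADCC readout (% RLU remaining) and for neutralization."""
    return (experimental - background) / (maximal - background) * 100.0

# Placeholder plate values (RLU)
background = 500.0      # uninfected cells, no antibody
maximal = 120000.0      # infected cells, no antibody
experimental = 30000.0  # infected cells plus antibody

pct_rlu = percent_of_maximal(experimental, maximal, background)
print(f"% RLU remaining = {pct_rlu:.1f} (ADCC killing = {100 - pct_rlu:.1f}%)")
```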
To avoid biasing the correlations with negative data points, gMFI ratios with values less than one were interpreted as noise and used to define an arbitrary threshold of detection. This threshold of detectable Env binding was calculated as one standard deviation above the mean of gMFI ratios less than one. Similarly, negative area above the curve (AAC) values for the ADCC and neutralization assays were interpreted as noise and used to define arbitrary thresholds for detectable responses as one standard deviation above the absolute value of the mean of negative values. For all correlations, gMFI ratios and AAC values for ITS103 and ITS61 siblings were averaged and plotted as a single point to avoid biasing results with repeated measures of closely related antibodies. Relationships were evaluated using Spearman's rank-order correlation. Correlations were performed both with all data points and with only data points above threshold values for comparison. A complete data set of the gMFI ratios for antibody binding to SIV-infected target cells (CEM.NKR-CCR5-sLTR-Luc cells) and primary CD4+ T cells, and of the AAC values for ADCC and neutralization used for these analyses, is provided (S1 File). A short computational sketch of this thresholding appears after the supporting-figure legends below.

S1 Fig. Env staining on the surface of SIVmac-infected rhesus macaque CD4+ T cells. Rhesus macaque PBMCs were CD8-depleted, activated with concanavalin A (5 μg/ml) and CD4+ T cells were expanded in medium with IL-2 (20 U/ml). Activated CD4+ T cells were infected with (A) SIVmac239 (blue) or (B) SIVmac316 (magenta). After 3-5 days, the cells were stained with each of the SIV Env-specific antibodies and with DEN3 as a control. Antibody binding to Env was detected by staining with AF647-conjugated anti-human IgG F(ab')2. The lymphocytes were also stained for surface expression of CD4 and CD8, intracellular expression of the SIV Gag protein and for cell viability. The histograms depict Env staining (color) relative to non-specific DEN3 staining (shaded) on virus-infected (Gag+ CD4-low) cells. (TIF)

S2 Fig. Env staining on the surface of SIVsm-infected rhesus macaque CD4+ T cells. Rhesus macaque PBMCs were CD8-depleted, activated with concanavalin A (5 μg/ml) and CD4+ T cells were expanded in medium with IL-2 (20 U/ml). Activated CD4+ T cells were infected with (A) SIVsmE543-3 (green) or (B) SIVsmE660-FL14 (orange). After 3-5 days, the cells were stained with each of the SIV Env-specific antibodies and with DEN3 as a control. Antibody binding to Env was detected by staining with AF647-conjugated anti-human IgG F(ab')2. The lymphocytes were also stained for surface expression of CD4 and CD8, intracellular expression of the SIV Gag protein and for cell viability. The histograms depict Env staining (color) relative to non-specific DEN3 staining (shaded) on virus-infected (Gag+ CD4-low) cells. (TIF)
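The following minimal Python sketch illustrates the thresholding and correlation logic described under Statistical analyses. It is an illustration only: the variable names and example arrays are hypothetical, and the original analyses were performed in GraphPad Prism and FlowJo rather than in code.

import numpy as np
from scipy.stats import spearmanr

def detection_threshold(gmfi_ratios):
    # Threshold = 1 SD above the mean of sub-unity (noise) gMFI ratios,
    # mirroring the definition in the Statistical analyses section.
    noise = gmfi_ratios[gmfi_ratios < 1.0]
    return noise.mean() + noise.std(ddof=1)

def aac_threshold(aac_values):
    # Threshold = 1 SD above the absolute mean of negative (noise) AAC values.
    noise = aac_values[aac_values < 0.0]
    return abs(noise.mean()) + noise.std(ddof=1)

# Hypothetical example data: one gMFI ratio and one AAC value per antibody.
gmfi = np.array([0.80, 0.90, 1.50, 3.20, 7.40, 0.95, 12.10])
aac = np.array([-0.02, -0.01, 0.10, 0.35, 0.60, -0.03, 0.80])

gmfi_thr = detection_threshold(gmfi)
aac_thr = aac_threshold(aac)

# Spearman rank-order correlation, computed with all points and with
# above-threshold points only, as described in the text.
rho_all, p_all = spearmanr(gmfi, aac)
above = (gmfi > gmfi_thr) & (aac > aac_thr)
rho_above, p_above = spearmanr(gmfi[above], aac[above])
print(f"thresholds: gMFI {gmfi_thr:.2f}, AAC {aac_thr:.2f}")
print(f"all points: rho = {rho_all:.2f}; above threshold: rho = {rho_above:.2f}")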
At the Heart of the Diagnosis: A Case of Systemic Lupus Erythematosus Presenting as Cardiac Tamponade

Systemic lupus erythematosus (SLE) is a heterogeneous, systemic disease characterized by the production of pathogenic autoantibodies against nuclear antigens. Although the most common cardiac manifestation of SLE is pericardial effusion, progression to cardiac tamponade is rare, with an incidence of 1-3%. We describe a case of a 42-year-old Hispanic woman who presented with severe shortness of breath, vague chest pain, and hemodynamic compromise secondary to cardiac tamponade. The patient's underlying etiology of cardiac tamponade was attributed to a new diagnosis of SLE based on the 2019 European Alliance of Associations for Rheumatology/American College of Rheumatology (EULAR/ACR) classification criteria for SLE. The patient's treatment consisted of a pericardial window and immunosuppressive therapy with corticosteroids, mycophenolate, and hydroxychloroquine. This case aims to increase awareness of SLE as a possible differential diagnosis of cardiac tamponade in the appropriate clinical setting.

Introduction

Systemic lupus erythematosus (SLE) is a heterogeneous, systemic disease characterized by the production of pathogenic autoantibodies against nuclear antigens, generating multiorgan inflammation [1]. Although SLE can affect the pericardium, myocardium, valvular structures, and the conduction system, pericardial involvement is the most common [2]. Pericardial effusions are the most frequent manifestation and are typically treated conservatively with immunosuppressive therapy; however, large pericardial effusions resulting in tamponade and hemodynamic compromise can rarely occur [3]. We describe a case of a 42-year-old Hispanic woman who presented with severe shortness of breath, vague chest pain, and hemodynamic compromise secondary to cardiac tamponade. The underlying etiology of the patient's presentation was attributed to SLE. This article discusses the pathophysiology of cardiac tamponade, describes the rarity of this presentation as a manifestation of SLE, and aims to increase awareness of SLE as a possible differential diagnosis of cardiac tamponade in the appropriate clinical setting.

Case Presentation

A 42-year-old Hispanic woman presented to the hospital with progressively worsening shortness of breath over two months. The patient's shortness of breath was exacerbated by exertion and associated with vague chest tightness and infrequent episodes of lightheadedness. Upon further questioning, the patient had noticed raised, non-pruritic lesions on the anterior aspect of her chest, face, nasal bridge, hairline, and posterior neck, aggravated by exposure to sunlight, that had started about two months prior to presentation. Over the previous month, the patient had developed swelling of her lower extremities, pain and swelling of the small joints of her hands with associated morning stiffness, fatigue, and a dry cough. She denied alopecia, weight loss, fever, oral or nasal ulcers, Raynaud's phenomenon, or hemoptysis. Her known past medical history consisted of diabetes, hypertension, and dyslipidemia. On examination, the patient's vital signs showed a blood pressure of 98/68 mmHg, a heart rate of 104 beats per minute, a temperature of 99.3 °F, and an oxygen saturation of 92% on room air. The patient was ill-appearing and in moderate distress.
She was found to have mild periorbital edema, an erythematous rash over the anterior nasal bridge and cheeks in a butterfly distribution sparing the nasolabial folds, and several raised, hyperpigmented round lesions on the hairline, eyebrows, anterior aspect of the chest, posterior neck, and scalp (Figures 1-2). The patient had an elevated jugular venous pressure of 5 cm above the sternal angle, was tachycardic with a loud midsternal pericardial friction rub, had diminished breath sounds at the lung bases bilaterally, and had 2+ pitting edema of the bilateral lower extremities. An electrocardiogram revealed sinus tachycardia with low-voltage QRS complexes and a few premature ventricular complexes (Figure 3). An arterial blood gas was obtained while the patient was breathing ambient air and showed a pH of 7.34, pCO2 of 28, pO2 of 62, and a bicarbonate level of 18.9.

FIGURE 3: EKG demonstrating sinus tachycardia with low-voltage QRS complexes and a few premature ventricular complexes

Her laboratory studies revealed a white blood cell count of 2.6 × 10^3 cells/μL, hemoglobin of 10 gm/dL with a mean corpuscular volume of 77 fL, and a platelet count of 186 × 10^3 cells/μL. The sodium level was 130 mmol/L, potassium 5.2 mmol/L, bicarbonate 16 mmol/L, blood urea nitrogen (BUN) 44 mg/dL, creatinine 1.1 mg/dL, blood glucose 126 mg/dL, thyroid-stimulating hormone (TSH) 1.5 IU/mL, erythrocyte sedimentation rate (ESR) 62 mm/hr, C-reactive protein (CRP) 0.4 mg/dL, and albumin 2.9 g/dL. Her brain natriuretic peptide (BNP) level was 224 pg/mL, and her urinalysis showed hematuria, pyuria, and proteinuria. A 24-hour urine protein-to-creatinine ratio demonstrated 2.7 mg/gm. Given the patient's clinical presentation and concern for cardiac tamponade, a bedside point-of-care ultrasound was performed, which revealed a large pericardial effusion, bilateral pleural effusions, and a hyperdynamic heart with an enlarged right ventricle. Subsequently, a formal cardiac echocardiogram was performed, which showed a normal left ventricular (LV) cavity with an ejection fraction of around 55% to 60%, mild LV hypertrophy, moderate to severe tricuspid regurgitation, a right ventricular systolic pressure of 85-90 mmHg, and a large pericardial effusion with respiratory variation of mitral inflow and compression of the right atrium, suggesting tamponade physiology (Figures 4-5).

FIGURE 4: Apical view of the heart by 2D echocardiogram with the presence of a moderate to large pericardial effusion (PE) and underfilled left (LV) and right ventricle (RV). The left atrium is not shown because it collapsed.

The patient underwent left anterolateral thoracotomy with pericardial window and biopsy, and drainage of 700 mL of pericardial fluid that was sent for analysis. Pericardial fluid cultures revealed no bacterial, acid-fast bacilli (AFB), or fungal growth. Fluid cytology revealed a few reactive mesothelial cells but was negative for malignant cells. The pericardial biopsy revealed fibro-collagenous tissue with attenuated lining epithelium, capillaries, and inflammation without any atypia or malignancy. Because of her initial laboratory studies and clinical presentation, an autoimmune workup was performed and revealed an antinuclear antibody (ANA) by ELISA ratio of >32, ANA by immunofluorescence of 1:1280, double-stranded DNA antibody of 706 IU/mL, anti-SS-A antibody >240 U/mL, anti-SS-B antibody 19 U/mL, complement C3 of 23 mg/dL, and complement C4 of <5 mg/dL.
The rest of her autoimmune workup was negative and is shown in the Table.

In addition to the pericardial window, the patient was treated with pulse methylprednisolone 500 mg daily for three days, followed by a transition to oral prednisone 40 mg three times daily (TID), mycophenolate 500 mg twice daily (BID), and hydroxychloroquine 200 mg daily. Despite drainage of the pericardial effusion and improvement of the pleural effusions with immunosuppressive therapy, the patient continued to have shortness of breath. Due to the finding of elevated right ventricular systolic pressure, the diagnosis of pulmonary hypertension was entertained. The patient underwent right heart catheterization, which revealed a wedge pressure of 25 mmHg, a mean pulmonary arterial pressure of 25 mmHg, and a pulmonary vascular resistance of 3.22 Wood units, confirming the diagnosis of pulmonary hypertension with World Health Organization (WHO) Group I and II overlap. The patient was started on sildenafil 40 mg daily, titrated up to 80 mg TID, and furosemide 40 mg daily, with moderate improvement of symptoms. Given the complexity of her initial presentation, the patient's kidney biopsy was deferred and scheduled to be performed in the outpatient setting; she continued mycophenolate 1.0 gm BID along with hydroxychloroquine 200 mg daily. The patient was discharged with close follow-up with the rheumatology, cardiology, nephrology, and pulmonology teams.

Discussion

Cardiac tamponade is a medical emergency characterized by fluid accumulation in the pericardium, which restricts filling of the cardiac chambers and, if not treated promptly, can lead to cardiac arrest [4]. Infectious, inflammatory, and neoplastic processes can all result in pericardial effusions. The rate and acuity at which fluid accumulates within the pericardial space are the primary drivers of the development of tamponade physiology. The pericardium is a membranous tissue surrounding the heart, and its primary function is to protect the heart and reduce mechanical friction. The pericardial cavity contains approximately 15-50 mL of ultrafiltered plasma between its visceral and parietal layers. Pericardial inflammation, or pericarditis, can present as acute or chronic disease or as a pericardial effusion. Although the pericardium has some degree of elasticity, sudden or continually increasing accumulation of fluid in the pericardial space may compromise the heart's ability to distend and can lead to external compression of the heart. As a result of this compression, hemodynamics are impaired: ventricular compliance decreases, and venous return and cardiac output fall in turn. Although pericardial effusions frequently occur in patients with SLE and typically present clinically as pericarditis, cardiac tamponade is rare and occurs in only 1-3% of patients [3]. The low incidence of cardiac tamponade in SLE may be related to the fact that patients typically present with other overt manifestations, such as arthritis, rashes, or cytopenias, and smaller effusions are detected incidentally earlier in the disease course [5]. One observational study found that more than 50% of patients with a pericardial effusion had concomitant mucocutaneous manifestations [6]. The presence of a rash or oral ulcers would likely prompt patients to seek medical care sooner, expediting testing and diagnosis and resulting in earlier treatment and fewer complications such as cardiac tamponade.
According to the literature, patients with SLE who present with cardiac tamponade typically complain of shortness of breath and chest pain and present with hypotension and elevated jugular venous pressure [6]. Moreover, several studies have attempted to determine which patients are at risk of progression to cardiac tamponade. One retrospective study found that patients with pericardial effusions who developed tamponade had a significantly lower complement C4 level than those who did not develop tamponade physiology [7]. Furthermore, a more recent retrospective study found that the presence of pleuritis and anti-nucleosome antibody positivity are significant predictors of progression to cardiac tamponade in patients with SLE. Regardless of risk factors, patients who develop tamponade physiology have a poor prognosis, with one study noting 46% survival at five-year follow-up [8]. Since cardiac tamponade is a medical emergency, treatment aims to promptly evacuate pericardial fluid in order to improve cardiac function. After removal of fluid by either pericardiocentesis or pericardiotomy, prevention of re-accumulation is best achieved with intravenous glucocorticosteroids, with or without the addition of an immunosuppressive agent such as cyclophosphamide [6,9].

Conclusions

Cardiac tamponade is a rare complication of SLE, occurring in 1-3% of patients. A high index of suspicion for underlying autoimmune disease as the etiology of cardiac tamponade is warranted in young women of reproductive age with systemic symptoms, including serositis, rashes, arthralgias, and fatigue. Our patient's gender, age, clinical presentation, and laboratory findings suggested that her underlying etiology was likely an autoimmune disease. Typically, patients with pericardial effusions can be managed conservatively with immunosuppressive therapy; however, because cardiac tamponade is a medical emergency, prompt treatment with a pericardial window or pericardiocentesis, regardless of etiology, is essential to decrease disease-related complications.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Case report: Mexiletine suppresses ventricular arrhythmias in Andersen-Tawil syndrome

It is difficult to determine clinical solutions for Andersen-Tawil syndrome (ATS) in patients intolerant of β-blockers. Here, we present the case of a 7-year-old boy with periodic paralysis and dysmorphic features who experienced syncope four times during exercise. His ECG revealed enlarged U waves and QU prolongation with ATS-specific U-wave patterns, frequent PVCs, and non-sustained bidirectional or polymorphic ventricular tachycardia. Genetic testing showed a de novo missense R218W mutation of KCNJ2. With the diagnosis of ATS and intolerance of β-blockers, the patient was prescribed oral mexiletine 450 mg/day without severe adverse effects. The repeat ECG showed a decreased PVC burden, from 38% to 3%, and absence of ventricular tachycardia. He remained symptom-free during over 2 years of outpatient follow-up. This case demonstrates a new anti-arrhythmic therapy with mexiletine for prevention of life-threatening cardiac events in patients with ATS who are intolerant of β-blocker treatment.

Introduction

Andersen-Tawil syndrome (ATS) is a rare hereditary arrhythmia disease, also classified as long QT syndrome type 7 (LQT7), that manifests as ventricular arrhythmias (VAs), periodic paralysis, and dysmorphic features. In patients diagnosed with ATS, β-blockers and/or flecainide are primarily recommended according to a Heart Rhythm Society (HRS) expert consensus statement (1). However, it is difficult to determine clinical solutions for patients intolerant of β-blockers, considering the lack of access to flecainide in China. Recently, several studies reported that patients with LQT types 1-3 were responsive to the anti-arrhythmic agent mexiletine. However, the efficacy of mexiletine in patients with ATS (LQT7) carrying a KCNJ2 mutation has rarely been reported. Here, we present the effective treatment with oral mexiletine of a boy who suffered from ATS, recurrent syncope, and complex ventricular arrhythmias.

Case description

A 7-year-old boy was brought to the local hospital with a chief complaint of four episodes of syncope. The first episode happened during a tug-of-war at the kindergarten, about 2 years before the visit to our hospital; the boy reported palpitations with amaurosis followed by loss of consciousness without limb convulsions, tongue biting, or incontinence. The boy regained consciousness spontaneously after 20 s and experienced palpitations for the following 10 min. The patient experienced three additional episodes while exercising, climbing stairs, and lifting heavy objects. A 24-h ambulatory electrocardiogram (ECG) demonstrated frequent premature ventricular contractions (PVCs) in bigeminy and non-sustained bidirectional ventricular tachycardia (VT). His serum potassium was 3.9 mmol/L, and echocardiography revealed normal cardiac structure and function. Both head computed tomography and electroencephalography (EEG) were normal. The local doctors then considered a diagnosis of catecholaminergic polymorphic ventricular tachycardia (CPVT) and prescribed metoprolol 11.25 mg twice per day. He developed marked fatigue and cold extremities after taking the medication for only 2 days. Successive electrocardiograms showed sinus bradycardia, and he stopped taking the medication. Two months later, the boy lost consciousness repeatedly after climbing stairs. Therefore, he was
transferred to our institution, a university-affiliated teaching hospital, for further evaluation and treatment.

FIGURE 1: Heterozygous missense mutation c.652C>T (p.R218W) in the coding region of KCNJ2. His parents were tested as wild types without mutations, so our patient carried a de novo mutation.

Upon admission, his neurological examination was unremarkable. The physical examination revealed dysmorphology, including mandibular hypoplasia, a single palmar crease, and long-bone hyperextension (Figure 1A). His medical history revealed lower-limb myasthenia with hypokalemia, which recovered after potassium supplementation. The patient was of normal stature (height of 125 cm) with no family history of sudden cardiac death (SCD). The patient's surface electrocardiograms (ECGs) taken in the local hospital demonstrated sinus rhythm with frequent PVCs. Enlarged U waves (a U wave is defined as an early diastolic deflection after the end of the T wave and is considered enlarged if its amplitude is ≥ 0.15 mV and its duration is ≥ 210 ms; indicated by red arrows in Figure 2) (2) in leads II, III, aVF, and V1-V2, and a wide T-U junction (Tpeak-Upeak 240 ms) were seen on the ECGs. He had a QTc interval of 380 ms and a QUc interval of 671 ms (the QU interval is measured from the onset of the QRS to the end of the U wave; QT and QU intervals were corrected [QTc and QUc] using Bazett's formula to allow comparison at different heart rates). A series of electrocardiograms obtained in our hospital revealed frequent PVCs with right bundle branch block (RBBB) morphology and bidirectional PVCs, with a QTc of 420 ms and a QUc of 680 ms (Supplementary Figure 1). Furthermore, frequent PVCs (38% of total beats) in bigeminy and recurrent asymptomatic non-sustained bidirectional or polymorphic ventricular tachycardia (bVT/pVT; 2,659 episodes of VT) were recorded on 24-h Holter monitoring (GE MARS) (Supplementary Figure 2). Atrial electrical abnormalities were not detected during careful rhythm monitoring. The U wave was more prominent at slower heart rates. The clinical diagnosis of congenital ATS was made based on the typical prolonged QUc interval and enlarged U waves, dysmorphology, and history of paroxysmal hypokalemia. The recurrent syncope was most likely due to sustained VT or ventricular fibrillation (VF). Given the diagnosis of ATS and intolerance to β-blockers, the patient was prescribed oral mexiletine. Considering the high risk of gastrointestinal side effects, a low dose of 300 mg/day (10 mg/kg/day, divided every 8 hours) was initially prescribed. The dosage was then gradually increased to 450 mg/day (15 mg/kg/day, divided every 8 hours), on which the patient reported mild side effects including nausea and loss of appetite. Three days after starting mexiletine, continuous electrocardiographic monitoring showed suppressed ventricular arrhythmias. A 24-h Holter performed after treatment showed a decreased PVC burden, from 38% to 3% of total beats (Figure 3), and absence of ventricular tachycardia. We also measured the patient's electrocardiographic indexes of SCD prediction, including heart rate turbulence (turbulence onset [TO] and turbulence slope [TS]) and T-wave alternans (TWA). The results showed a significant difference before and after treatment (TO 1.84% vs. −1.24%, TS 3.44 vs. 14.17 ms/RR, and TWA 98 μV vs. 77 μV in lead V1). Additionally, the patient was given an exercise stress test (Bruce protocol).
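As a brief aside, the Bazett correction used for the QTc and QUc values above normalizes a measured interval to heart rate:

\[
\mathrm{QTc} = \frac{\mathrm{QT}}{\sqrt{\mathrm{RR}}}, \qquad \mathrm{QUc} = \frac{\mathrm{QU}}{\sqrt{\mathrm{RR}}} \qquad (\mathrm{RR\ in\ seconds})
\]

For example, with illustrative values (the patient's uncorrected intervals are not reported here), an uncorrected QU of 600 ms at a heart rate of 78 bpm (RR = 60/78 ≈ 0.77 s) gives QUc ≈ 600/√0.77 ≈ 684 ms.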
The test results revealed a QUc of 730 ms at the baseline heart rate of 78 bpm and 590 ms during exercise at a heart rate of 121 bpm, with fusion of the U wave with the P wave, a "U on P" sign (the U wave masquerading as a P wave), at the peak heart rate of 144 bpm (Supplementary Figure 3). Neither ventricular nor atrial arrhythmia was induced during exercise after taking mexiletine. Considering the genetic characteristics, we screened the ECGs of his first-degree blood relatives. The QT interval was completely normal, without a U wave, in his parents. Genetic testing of our patient identified a heterozygous missense mutation, c.652C>T (p.R218W, NM_000891; Figure 1B), a hotspot mutation that causes loss of function and results in a dominant-negative effect on Kir2.1 (3). His parents were tested as wild types without mutations (Figure 1B); therefore, our patient carried a de novo mutation. The patient was discharged from our hospital with a prescription of mexiletine (15 mg/kg/day, divided every 8 hours). The patient reported no further severe adverse effects. Over 2 years of outpatient follow-up, he remained symptom-free. A repeat 24-h Holter showed a PVC burden of <0.1%. The QUc interval was shortened from 680 to 610 ms with no change in U-wave amplitude (Supplementary Figure 4).

Discussion

Andersen-Tawil syndrome (ATS) is a rare inherited disease characterized by specific ventricular arrhythmias, including bidirectional or polymorphic ventricular tachycardia (bVT/pVT), periodic paralysis, and dysmorphic features (2). bVT/pVT has been widely described in ATS, after excluding digitalis toxicity and catecholaminergic polymorphic ventricular tachycardia (CPVT). Since both ATS and CPVT are rare inherited arrhythmic disorders, there are some clinical similarities between patients with ATS and those with CPVT. Careful differential diagnosis between ATS and CPVT is important before treatment. First, typical QUc prolongation is obvious in widespread limb and/or precordial leads of baseline electrocardiograms in ATS. Second, frequent premature ventricular contractions (PVCs) with right bundle branch block (RBBB) morphology are common in patients with ATS, whereas PVCs with left bundle branch block (LBBB) morphology are dominant in patients with CPVT. Third, ventricular arrhythmias are induced during an exercise stress test, even on effective medication, in patients with CPVT, while they are suppressed at peak exercise in patients with ATS (4). There are two types of ATS based on gene mutation. Type 1 ATS, in which a mutation in the KCNJ2 gene can be identified, accounts for about 60-70% of all patients with ATS, while type 2 (ATS2), of which the genetic cause is still unknown, accounts for the remaining 30-40% of ATS cases. The KCNJ2 gene encodes the Kir2.1 inward-rectifier potassium channel protein; KCNJ2 mutations reduce the inward-rectifier potassium current (IK1), mainly prolonging phase 3 of the action potential (3). Prolonged cardiac repolarization leads to the distinctive T-U wave morphology and a variety of ventricular arrhythmias. According to recent work in ATS induced pluripotent stem cell-derived cardiomyocytes, the mechanism of ventricular arrhythmia is associated with intracellular calcium overload and sodium/calcium exchanger (NCX)-mediated triggered activity (5). In the expert consensus on the treatment of ATS, a β-adrenergic blocker is a primary option; in China, the available options include metoprolol and propranolol. However, treatment response varies among patients and with the type of arrhythmia.
A proportion of LQT2 patients and a majority of LQT3 patients showed QT-interval prolongation at slow heart rates after taking β-blockers, and the efficacy of β-blockers in counteracting VAs in patients with ATS is controversial (6). In our case, the boy experienced marked fatigue and bradycardia after receiving metoprolol 11.25 mg twice a day for only 2 days. Flecainide is an alternative anti-arrhythmic agent for patients intolerant of β-blockers. It can significantly suppress ventricular arrhythmia by directly blocking NCX and irregular calcium release, in addition to blocking the fast inward sodium channel. Additionally, flecainide may activate IK1 in ventricular myocytes (7). Unfortunately, flecainide is not yet available in China. Therefore, the class Ib anti-arrhythmic drug mexiletine was proposed as an alternative therapy, as it has been for some patients with LQT2, LQT3, and LQT8 (Timothy syndrome). To date, the efficacy of mexiletine treatment in ATS (LQT7) with KCNJ2 mutations has rarely been reported. The mechanism underlying the suppression of VAs by mexiletine in ATS is not fully understood. One possible explanation is that mexiletine inhibits the late sodium current (INa-L), thereby reducing NCX activity and intracellular calcium during the repolarization phase. The physiologic INa-L is intensified by loss of potassium channel function, including IKs and IK1. Additionally, the increase of INa-L at lower heart rates is known as reverse use-dependence (8), which may explain the failure of β-blocker treatment in patients with ATS. It is important to note that the half-maximal inhibitory concentration (IC50) for INa-L block by mexiletine falls within the therapeutic concentration range (9). There are no studies on the effect of mexiletine on the IK1 current. Another interpretation is that mexiletine shortens the action potential duration (APD) of ventricular muscle by activating the ATP-sensitive potassium channel (KATP) (10), as the QUc interval was shortened by 70 ms in our patient. The most common adverse effects associated with mexiletine are minor gastrointestinal or neurological effects, which can be tolerated. In summary, our case demonstrates a new anti-arrhythmic therapy with mexiletine for the prevention of life-threatening cardiac events in patients with ATS who are intolerant of β-blockers. Future studies are needed to provide empirical support for this treatment and to further examine the underlying mechanism of its efficacy.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary materials; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of Beijing Tsinghua Changgung Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

Author contributions

The study was designed by JY and PZ. ECG data collection was performed by KL. The KCNJ2 gene mutation was identified by TL by Sanger sequencing. The clinical care and treatment follow-up of the patient were performed by YX and FL. All authors contributed to the article and conception of the study and approved the submitted version.